Autumn ships a pluggable file-storage abstraction so apps that accept user-uploaded files (avatars, attachments, generated reports) don't have to pick an SDK, design a key scheme, or hand-roll URL signing every time.
This is the layer that turns the `Multipart` extractor's "stream-to-disk" primitive into something that survives container restarts and works across replicas.
## When you need it

Reach for `BlobStore` if you answer "yes" to any of:
- Does my app accept user-uploaded files?
- Will I run more than one web replica?
- Do I redeploy on a schedule, where local disk is wiped between containers?
If "no" to all three, the existing MultipartField::save_to(path)
primitive is fine.
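For the single-host case, here is a sketch of that primitive in a handler. It assumes `save_to` is async and takes a filesystem path, per the signature named above; the route and field names are illustrative:

```rust
use autumn_web::extract::Multipart;
use autumn_web::prelude::*;

// Single-host only: bytes land on this container's local disk, so they
// vanish on redeploy and are invisible to other replicas.
#[post("/report")]
async fn upload_report(mut form: Multipart) -> AutumnResult<String> {
    while let Some(field) = form.next_field().await? {
        if field.name() == Some("report") {
            field.save_to("target/uploads/report.pdf").await?;
            return Ok("saved".into());
        }
    }
    Err(AutumnError::bad_request_msg("missing 'report' field"))
}
```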
## Quick start

Enable the `storage` feature on `autumn-web` (for the Local backend and the `BlobStore` trait). For S3-compatible production storage, also add `autumn-storage-s3`:
```toml
[dependencies]
autumn-web = { version = "0.4", features = ["storage", "multipart"] }
autumn-storage-s3 = "0.4" # only needed when storage.backend = "s3"
```
In dev, setting `backend = "local"` gives you a working Local backend with no further configuration: bytes land under `target/blobs/` and signed URLs are served from `/_blobs/...`.
```rust
use autumn_web::extract::{Multipart, State};
use autumn_web::prelude::*;
use autumn_web::storage::BlobStoreState;

#[post("/avatar")]
async fn upload(
    State(state): State<AppState>,
    mut form: Multipart,
) -> AutumnResult<String> {
    let blobs = state
        .extension::<BlobStoreState>()
        .ok_or_else(|| AutumnError::internal_server_error_msg("storage not configured"))?;
    let store = blobs.store().clone();

    while let Some(field) = form.next_field().await? {
        if field.name() == Some("avatar") {
            let blob = field
                .save_to_blob_store(&*store, "avatars/me.png")
                .await?;
            return Ok(blob.key);
        }
    }
    Err(AutumnError::bad_request_msg("missing 'avatar' field"))
}
```
The full working version (Maud-rendered upload form, presigned-URL `<img>` rendering, and a `Blob` column on a `#[model]` user) lives in `examples/reddit-clone`. See `src/routes/avatars.rs` for the upload handler, `src/routes/auth.rs::profile` for the rendering, and `migrations/20260427000000_add_user_avatar/up.sql` for the migration generated by `add_blob_column!`.
## Configuration
```toml
[storage]
backend = "local"            # "local" | "s3" | "disabled"
default_provider = "default"
allow_local_in_production = false

[storage.local]
root = "target/blobs"
mount_path = "/_blobs"
default_url_expiry_secs = 900
# signing_key = "..."        # optional; falls back to AUTUMN_STORAGE__LOCAL__SIGNING_KEY

[storage.s3]
bucket = "my-app-uploads"
region = "us-east-1"
endpoint = "https://s3.amazonaws.com" # optional; required for R2/MinIO/Spaces/Wasabi
access_key_id_env = "AWS_ACCESS_KEY_ID"
secret_access_key_env = "AWS_SECRET_ACCESS_KEY"
force_path_style = false
```
Every field is overridable via `AUTUMN_STORAGE__*` env vars (see the in-source config docs for the canonical list).
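Following the `AUTUMN_STORAGE__LOCAL__SIGNING_KEY` example above, `AUTUMN_STORAGE__BACKEND=s3` and `AUTUMN_STORAGE__S3__BUCKET=my-app-uploads` would be the expected spellings for `[storage].backend` and `[storage.s3].bucket` (double underscores separate nesting levels); confirm against the in-source docs.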
### Profile-aware defaults
| Profile | `[storage].backend` | Notes |
|---|---|---|
| `dev` | `disabled` | Opt in by setting `backend = "local"` |
| `prod` | `disabled` | `backend = "local"` fails fast unless `storage.allow_local_in_production = true` |
The fail-fast in prod is intentional: a single-replica Local deployment is fine, but it has to be explicitly acknowledged. Apps that scale beyond one replica should select `s3`.
## The Blob column story

Apps store `Blob` columns; the `BlobStore` owns the bytes; the database owns lifecycle.
```rust
use autumn_web::model;
use autumn_web::storage::Blob;

#[model]
pub struct User {
    pub id: i64,
    pub name: String,
    pub avatar: Option<Blob>,
}
```
`Blob` is `Serialize + Deserialize` and (when the `db` feature is on) implements `AsExpression` / `FromSqlRow` for Postgres `JSONB`, so the default `#[model]` derives Just Work.
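Putting the pieces together, a handler can persist the returned handle straight onto the model. A minimal sketch, assuming `MultipartField` lives in `autumn_web::extract`, that `save_to_blob_store` returns the `Blob` value, and that `#[model]` types expose a `save()` persistence method (none of these are confirmed above):

```rust
use autumn_web::extract::MultipartField; // assumed module path
use autumn_web::prelude::*;
use autumn_web::storage::BlobStore;

async fn set_avatar(
    mut user: User,
    field: MultipartField,
    store: &dyn BlobStore,
) -> AutumnResult<()> {
    // The store owns the bytes; the row owns the lifecycle.
    let blob = field
        .save_to_blob_store(store, &format!("avatars/{}.png", user.id))
        .await?;
    user.avatar = Some(blob);
    user.save().await?; // hypothetical #[model] persistence method
    Ok(())
}
```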
### Blob column migrations

Adding a blob column to an existing table is one macro call:
```rust
use autumn_web::storage::migrations::add_blob_column;

let (up, down) = add_blob_column!("users", "avatar");
// up   = "ALTER TABLE users ADD COLUMN avatar JSONB NULL"
// down = "ALTER TABLE users DROP COLUMN avatar"
```
For Diesel file-based migrations, paste the strings into `migrations/<name>/{up,down}.sql`; for runtime migrations, hand them to `diesel::sql_query(...)`. The macro deliberately accepts only string literals: runtime-derived identifiers can't be safely interpolated without a quoting layer, so passing them through here would be an injection footgun.
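For the runtime path, a sketch (assuming Diesel 2.x and a `PgConnection`; `diesel::sql_query` and `execute` are real Diesel APIs, the surrounding function is illustrative):

```rust
use autumn_web::storage::migrations::add_blob_column;
use diesel::prelude::*;

fn add_avatar_column(conn: &mut PgConnection) -> QueryResult<()> {
    // Expands at compile time; only string literals are accepted.
    let (up, _down) = add_blob_column!("users", "avatar");
    // Executes: ALTER TABLE users ADD COLUMN avatar JSONB NULL
    diesel::sql_query(up).execute(conn)?;
    Ok(())
}
```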
Reddit-clone's `migrations/20260427000000_add_user_avatar/` is the worked example.
## Presigned-URL semantics
| Backend | URL shape | Signing |
|---|---|---|
| Local | `/{mount_path}/{key}?exp=…&sig=…` | HMAC-SHA256 over `{key}:{exp}`, verified by the mounted serving route |
| S3 | Real S3 presigned URL | AWS SigV4 (or your provider's equivalent) |
Both expire. Both are tamper-resistant. Both are safe to embed in templates and emails.
For the local backend, set `[storage.local].signing_key` (or the `AUTUMN_STORAGE__LOCAL__SIGNING_KEY` env var) so URLs survive a process restart and replicas agree on signatures. Without it the framework generates a random key per process: fine for dev, never for prod.
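Conceptually, the Local scheme is plain HMAC over the key/expiry pair from the table above. A minimal sketch using the `hmac`, `sha2`, and `hex` crates; the framework's actual tag encoding and comparison are not specified here, so treat those details as assumptions:

```rust
use hmac::{Hmac, Mac};
use sha2::Sha256;

type HmacSha256 = Hmac<Sha256>;

// Sign `{key}:{exp}` as the table describes. Hex-encoding the tag is an
// illustrative choice, not a documented one.
fn sign(signing_key: &[u8], blob_key: &str, exp: u64) -> String {
    let mut mac = HmacSha256::new_from_slice(signing_key).expect("HMAC accepts any key length");
    mac.update(format!("{blob_key}:{exp}").as_bytes());
    hex::encode(mac.finalize().into_bytes())
}

// What the mounted /_blobs route has to check: the link is unexpired and
// the signature matches. A production check would compare in constant time.
fn verify(signing_key: &[u8], blob_key: &str, exp: u64, sig: &str, now: u64) -> bool {
    now <= exp && sign(signing_key, blob_key, exp) == sig
}
```

This is also why the shared `signing_key` matters: every replica must derive the same tag for the same `{key}:{exp}` pair.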
## Multi-replica safety

The local backend writes to a single host's disk. That's broken across replicas:
- replica A serves the upload, the bytes land on A's disk
- replica B serves the next request, can't see A's bytes
The framework doesn't try to paper over this. It surfaces the constraint:
- `prod` + `local` without `allow_local_in_production` fails fast at startup.
- `prod` + `local` + acknowledgement logs a warning explaining that replicas can't see each other's bytes.
Multi-replica production should choose `backend = "s3"`.
## Production checklist

Before flipping a real app to `backend = "s3"`:
- [ ] Bucket exists and is private (no public-read policy unless you really mean it).
- [ ] Bucket policy permits `PutObject`, `GetObject`, `DeleteObject`, and (for `head`) `HeadObject` from the credentials your app uses.
- [ ] CORS is configured if you'll ever generate browser-served presigned URLs across origins.
- [ ] Lifecycle rules are in place to expire orphaned blobs (the framework's first slice deliberately does not garbage-collect for you: delete the row and delete the blob in a transaction-bracketed pattern, as sketched after this list).
- [ ] Credentials come from your secrets manager via `access_key_id_env` / `secret_access_key_env`, not a committed `autumn.toml`.
- [ ] You're using a region that's geographically near your app tier (latency on every `put`/`get`).
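That delete pattern, sketched below; the `save` and `delete` methods here are assumptions about the model and `BlobStore` APIs, not confirmed by this page:

```rust
use autumn_web::prelude::*;
use autumn_web::storage::BlobStore;

// Clear the column first, then delete the bytes. Failing after the row
// update leaves at worst an orphaned blob (caught by the lifecycle rule),
// never a live row pointing at deleted bytes.
async fn remove_avatar(mut user: User, store: &dyn BlobStore) -> AutumnResult<()> {
    if let Some(blob) = user.avatar.take() {
        user.save().await?;             // hypothetical #[model] persistence method
        store.delete(&blob.key).await?; // hypothetical BlobStore method
    }
    Ok(())
}
```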
## Using the S3 backend

Add `autumn-storage-s3` to your `Cargo.toml` and wire it up in `main`:
```toml
[dependencies]
autumn-web = { version = "0.4", features = ["storage", "multipart"] }
autumn-storage-s3 = "0.4"
```
```rust
use autumn_storage_s3::S3BlobStore;

#[tokio::main]
async fn main() {
    let config = autumn_web::config::TomlEnvConfigLoader::new()
        .load()
        .await
        .expect("config");

    let store = S3BlobStore::from_config(&config.storage.s3)
        .await
        .expect("S3 store");

    autumn_web::app()
        .routes(routes![...])
        .with_blob_store(store)
        .run()
        .await;
}
```
And in `autumn.toml`:
```toml
[storage]
backend = "s3"

[storage.s3]
bucket = "my-app-uploads"
region = "us-east-1"
access_key_id_env = "AWS_ACCESS_KEY_ID"
secret_access_key_env = "AWS_SECRET_ACCESS_KEY"
```
`S3BlobStore::from_config` resolves credentials from the named environment variables, or falls back to the AWS default chain (`~/.aws/credentials`, instance metadata, ECS task role, etc.) when neither `access_key_id_env` nor `secret_access_key_env` is set.
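For example, on a host with an instance or task role you can omit both env-var fields and let the default chain supply credentials; a sketch:

```toml
[storage]
backend = "s3"

[storage.s3]
bucket = "my-app-uploads"
region = "us-east-1"
# access_key_id_env / secret_access_key_env omitted:
# S3BlobStore::from_config falls back to the AWS default chain.
```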
The S3 plugin lives in its own crate (`autumn-storage-s3`) so apps that don't need S3 don't pull in the AWS SDK tree. Peer plugins for other providers (GCS, Azure, B2) follow the same pattern.
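A peer plugin is simply another implementor of the trait. The real `BlobStore` signatures aren't shown on this page, so the shape below is a guess inferred from the operations described (put, presigned GET, delete), using the `async-trait` and `anyhow` crates for brevity:

```rust
use std::time::Duration;

// Hypothetical trait shape for illustration only; the real
// autumn_web::storage::BlobStore trait may differ.
#[async_trait::async_trait]
pub trait BlobStoreShape: Send + Sync {
    async fn put(&self, key: &str, bytes: Vec<u8>) -> anyhow::Result<()>;
    async fn presigned_get_url(&self, key: &str, expiry: Duration) -> anyhow::Result<String>;
    async fn delete(&self, key: &str) -> anyhow::Result<()>;
}

// A hypothetical autumn-storage-gcs crate would expose `GcsBlobStore`
// implementing this trait against the GCS API, just as autumn-storage-s3
// does against S3 SigV4.
```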
## What's out of scope (for now)
- Image processing / resizing. Track separately; `image` and `imageproc` have their own dependency surfaces.
- Direct-to-S3 browser uploads (presigned PUT). Useful eventually; the first slice keeps bytes flowing through the autumn process so the multipart MIME / size-cap policies still apply.
- Native non-S3 backends (GCS, Azure Blob, B2 native). Anyone whose object store speaks S3 is covered by `autumn-storage-s3`; native backends are a future plugin-crate extension.
- Antivirus / content moderation. Compose a Tower middleware on top of `BlobStore` for this.
- Orphan-blob garbage collection. Lifecycle is documented as the application's job (delete the row, then delete the blob); a harvest-backed sweeper can come later.
- Migration tooling for moving data between backends. Not the framework's job today.