
Feature request: Streaming & Chunked File Uploads (Memory Optimization & Large File Support) #196

@everyx

Description

Currently, the file upload implementation appears to buffer the entire file content into memory before processing, and it does not support chunked uploads. This approach has two critical limitations:

  1. High Memory Usage (OOM Risk): Uploading large files (e.g., 500MB+) consumes an equivalent amount of RAM. Concurrent uploads can easily crash the server with an Out-Of-Memory error, especially on low-resource instances.
  2. CDN/Proxy Limits: Many reverse proxies and CDNs (e.g., Cloudflare free tier) impose strict request body size limits (e.g., 100MB). Without support for chunked uploads, users cannot upload files exceeding these limits.
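To illustrate the first point, here is a minimal sketch of streaming an upload body through a fixed-size buffer instead of buffering it whole, so peak memory stays constant regardless of file size. This is not TrailBase's actual code; `stream_copy` and the buffer size are hypothetical:

```rust
use std::io::{Read, Write};

/// Fixed buffer size: peak memory per upload stays at 64 KiB
/// even for a multi-gigabyte body (hypothetical value).
const BUF_SIZE: usize = 64 * 1024;

/// Copy `reader` to `writer` in BUF_SIZE chunks, returning bytes copied.
fn stream_copy<R: Read, W: Write>(reader: &mut R, writer: &mut W) -> std::io::Result<u64> {
    let mut buf = [0u8; BUF_SIZE];
    let mut total = 0u64;
    loop {
        let n = reader.read(&mut buf)?;
        if n == 0 {
            break; // end of body
        }
        writer.write_all(&buf[..n])?;
        total += n as u64;
    }
    Ok(total)
}

fn main() -> std::io::Result<()> {
    // Simulate a 1 MiB upload body with an in-memory reader.
    let body = vec![0xABu8; 1024 * 1024];
    let mut sink = Vec::new();
    let written = stream_copy(&mut body.as_slice(), &mut sink)?;
    println!("streamed {written} bytes with a {} KiB buffer", BUF_SIZE / 1024);
    Ok(())
}
```

In a real handler the reader would be the request body stream and the writer the storage backend (local FS or S3 multipart), but the memory-bounding idea is the same.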

Additional context:

  • TUS Protocol: While robust, implementing the full TUS specification might be overkill and introduce too much complexity for TrailBase's "single binary" philosophy.
  • S3 Presigned URLs: For S3 backends, we could offload uploads to the client. However, TrailBase prioritizes being self-contained with Local FS support, so a native chunking implementation is still required.
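A native chunking scheme simpler than full TUS could look like the sketch below: the client splits the file into chunks under the proxy limit and sends them sequentially; the server appends each chunk to a per-upload temp file and renames it on completion. All names (`ChunkedUpload`, `append_chunk`, `finish`) are hypothetical, not TrailBase APIs:

```rust
use std::fs::{self, File, OpenOptions};
use std::io::Write;
use std::path::{Path, PathBuf};

/// Hypothetical server-side chunk assembly: each chunk request appends
/// to a per-upload temp file, so no single request body has to exceed
/// a CDN/proxy size limit.
struct ChunkedUpload {
    path: PathBuf,
}

impl ChunkedUpload {
    fn new(upload_id: &str) -> std::io::Result<Self> {
        let path = std::env::temp_dir().join(format!("upload-{upload_id}.part"));
        File::create(&path)?; // truncate any stale partial upload
        Ok(Self { path })
    }

    /// Append one chunk; the client sends chunks in order.
    fn append_chunk(&self, data: &[u8]) -> std::io::Result<()> {
        let mut f = OpenOptions::new().append(true).open(&self.path)?;
        f.write_all(data)
    }

    /// Finalize: atomically move the assembled file into place.
    fn finish(self, dest: &Path) -> std::io::Result<()> {
        fs::rename(&self.path, dest)
    }
}

fn main() -> std::io::Result<()> {
    let upload = ChunkedUpload::new("demo")?;
    // Simulate a client splitting a payload into three small chunks.
    for chunk in [b"aaa".as_slice(), b"bbb", b"ccc"] {
        upload.append_chunk(chunk)?;
    }
    let dest = std::env::temp_dir().join("upload-demo.bin");
    upload.finish(&dest)?;
    println!("assembled {} bytes", fs::read(&dest)?.len());
    Ok(())
}
```

A production version would also need an upload-session table, chunk-offset validation, and garbage collection of abandoned `.part` files, but this shows the core append-then-rename flow works with plain local FS, keeping the single-binary philosophy intact.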
