Currently, the file upload implementation appears to buffer the entire file content into memory before processing, and it does not support chunked uploads. This approach has two critical limitations:
- High Memory Usage (OOM Risk): Uploading large files (e.g., 500MB+) consumes an equivalent amount of RAM. Concurrent uploads can easily crash the server with an Out-Of-Memory error, especially on low-resource instances (see the streaming sketch after this list).
- CDN/Proxy Limits: Many reverse proxies and CDNs (e.g., Cloudflare free tier) impose strict request body size limits (e.g., 100MB). Without support for chunked uploads, users cannot upload files exceeding these limits.
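For reference, here is a minimal sketch of what streaming uploads to disk could look like, assuming an axum-style multipart handler; the handler name and destination path are illustrative, not TrailBase's actual API:

```rust
use axum::{extract::Multipart, http::StatusCode};
use tokio::{fs::File, io::AsyncWriteExt};

// Hypothetical handler: writes each multipart field to disk frame-by-frame,
// so peak memory is one network frame rather than the whole file.
async fn upload(mut multipart: Multipart) -> Result<(), StatusCode> {
    while let Some(mut field) = multipart
        .next_field()
        .await
        .map_err(|_| StatusCode::BAD_REQUEST)?
    {
        // Illustrative destination only; real code must sanitize the
        // client-supplied filename to prevent path traversal.
        let name = field.file_name().unwrap_or("upload.bin").to_owned();
        let mut file = File::create(format!("/tmp/{name}"))
            .await
            .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;

        // `chunk()` yields bytes as they arrive on the socket, unlike
        // `field.bytes()`, which would buffer the entire field in memory.
        while let Some(chunk) = field.chunk().await.map_err(|_| StatusCode::BAD_REQUEST)? {
            file.write_all(&chunk)
                .await
                .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
        }
    }
    Ok(())
}
```

Note that streaming alone only fixes the OOM risk: the request body still passes through the proxy in one piece, so chunked uploads are needed to get past body-size limits.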
Additional context:
- TUS Protocol: While robust, implementing the full TUS specification is likely overkill and would introduce too much complexity for TrailBase's "single binary" philosophy.
- S3 Presigned URLs: For S3 backends, we could offload uploads to the client via presigned URLs. However, TrailBase prioritizes being self-contained with Local FS support, so a native chunking implementation is still required (see the sketch after this list).
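To make the scope concrete, here is one possible shape for a native chunking API that is much lighter than full TUS. All routes, names, and the staging directory are hypothetical; a real implementation would also need auth, chunk-offset validation (TUS handles this with its Upload-Offset header), and cleanup of abandoned uploads:

```rust
use axum::{
    body::Bytes,
    extract::Path,
    http::StatusCode,
    routing::{post, put},
    Router,
};
use tokio::{fs, io::AsyncWriteExt};
use uuid::Uuid;

// Illustrative staging directory; TrailBase would use its own data dir.
const STAGING_DIR: &str = "/tmp/trailbase-uploads";

fn router() -> Router {
    Router::new()
        .route("/uploads", post(create_upload))
        .route("/uploads/:id/chunks", put(append_chunk))
        .route("/uploads/:id/complete", post(complete_upload))
}

// Step 1: the client requests an upload id; the server creates an
// empty staging file that chunks will be appended to.
async fn create_upload() -> Result<String, StatusCode> {
    let id = Uuid::new_v4().to_string();
    fs::create_dir_all(STAGING_DIR)
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
    fs::File::create(format!("{STAGING_DIR}/{id}.part"))
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
    Ok(id)
}

// Step 2: the client PUTs chunks sequentially (this sketch assumes
// in-order delivery); each request body is one small chunk, so it
// stays under proxy limits and bounded in memory.
async fn append_chunk(Path(id): Path<String>, body: Bytes) -> Result<StatusCode, StatusCode> {
    let mut file = fs::OpenOptions::new()
        .append(true)
        .open(format!("{STAGING_DIR}/{id}.part"))
        .await
        .map_err(|_| StatusCode::NOT_FOUND)?;
    file.write_all(&body)
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
    Ok(StatusCode::NO_CONTENT)
}

// Step 3: the client signals completion; the server promotes the
// staged file to its final name (or pushes it to the S3 backend).
async fn complete_upload(Path(id): Path<String>) -> Result<StatusCode, StatusCode> {
    fs::rename(
        format!("{STAGING_DIR}/{id}.part"),
        format!("{STAGING_DIR}/{id}"),
    )
    .await
    .map_err(|_| StatusCode::NOT_FOUND)?;
    Ok(StatusCode::NO_CONTENT)
}
```

With a scheme like this, the client splits a file into, say, 8MB parts and uploads them one request at a time, so each request clears a 100MB proxy limit regardless of total file size; on completion the server can keep the file on the local FS or forward it to the configured S3 bucket.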