Summary
Appending to large S3 objects via `UploadPartCopy` can schedule more than 10,000 parts, which exceeds Amazon S3's hard multipart-upload limit and causes the append to fail with `InvalidPartNumber`.
Affected code
`fusio/src/impls/remotes/aws/writer.rs` (the `copy_existing_object` helper)
Steps to reproduce
- Store an object larger than ~160 GiB in the target bucket.
- Open it through Fusio with `OpenOptions::default().write(true)` and append any payload.
- Observe the multipart append failing once `copy_part` or `complete_part` hits the 10,000-part ceiling.
Expected behavior
Either:
- Dynamically increase the copy part size so that `ceil(object_size / part_size) <= 10_000` (still respecting the 5 MiB minimum and 5 GiB maximum), or
- Detect that even the maximum part size would exceed the limit and return a clear, early error to the caller.
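A minimal sketch of the first option, assuming only S3's documented limits (at most 10,000 parts, 5 MiB–5 GiB per copied part). The names `part_size_for` and `ObjectTooLarge` are hypothetical, not fusio's actual API:

```rust
/// S3 multipart limits (see the UploadPartCopy documentation).
const MAX_PART_SIZE: u64 = 5 * 1024 * 1024 * 1024; // 5 GiB
const MAX_PARTS: u64 = 10_000;
/// Part size currently hardcoded by the writer; already above the 5 MiB minimum.
const DEFAULT_PART_SIZE: u64 = 16 * 1024 * 1024; // 16 MiB

/// Hypothetical error for objects that no legal part size can cover
/// (larger than 10,000 parts of 5 GiB each, i.e. ~48.8 TiB).
#[derive(Debug, PartialEq)]
struct ObjectTooLarge(u64);

/// Smallest part size >= the current default that keeps the copy
/// within 10,000 parts, or an early error if none exists.
fn part_size_for(object_size: u64) -> Result<u64, ObjectTooLarge> {
    if object_size > MAX_PARTS * MAX_PART_SIZE {
        return Err(ObjectTooLarge(object_size));
    }
    // ceil(object_size / MAX_PARTS); no overflow since object_size <= 50,000 GiB here.
    let needed = (object_size + MAX_PARTS - 1) / MAX_PARTS;
    // Never shrink below the 16 MiB default, never exceed the 5 GiB per-part cap.
    Ok(needed.clamp(DEFAULT_PART_SIZE, MAX_PART_SIZE))
}
```

Small objects keep the current 16 MiB behavior; anything above the 10,000 × 16 MiB threshold gets a proportionally larger part size, and objects beyond the absolute S3 ceiling fail before any copy request is issued.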
Notes / context
- Current implementation hardcodes 16 MiB copy parts, so any object larger than roughly 160 GiB will overflow the allowed part count.
- AWS reference: https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html (10 000-part limit)
- Needed changes: compute an adaptive part size or proactively surface an `Error::Unsupported` with guidance before issuing any copy requests.
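For reference, the "roughly 160 GiB" threshold falls straight out of the hardcoded part size; the exact ceiling is 156.25 GiB (constants taken from the issue text, function name hypothetical):

```rust
/// Largest object coverable with 10,000 copy parts of the hardcoded 16 MiB.
const PART_SIZE: u64 = 16 * 1024 * 1024; // 16 MiB
const MAX_PARTS: u64 = 10_000;

fn max_appendable_bytes() -> u64 {
    // 16 MiB * 10,000 = 167,772,160,000 bytes = 156.25 GiB
    PART_SIZE * MAX_PARTS
}
```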