Guard multipart copy against 10 000-part overflow #238

@ethe

Description

Summary

Appending to large S3 objects via UploadPartCopy can schedule more than 10 000 parts, which exceeds Amazon S3’s hard multipart limit and causes the append to fail with InvalidPartNumber.

Affected code

fusio/src/impls/remotes/aws/writer.rs (copy_existing_object helper)

Steps to reproduce

  1. Store an object larger than 156.25 GiB (16 MiB × 10 000 parts) in the target bucket.
  2. Open it through Fusio with OpenOptions::default().write(true) and append any payload.
  3. Observe the multipart append fail once copy_part or complete_part hits the 10 000-part ceiling (the threshold is worked out in the sketch below).
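
For reference, the threshold implied by the hardcoded 16 MiB part size can be worked out with a small standalone calculation. This is plain Rust arithmetic, not fusio code, and the object sizes are only illustrative:

```rust
// Standalone illustration (plain Rust, not fusio code): how many 16 MiB
// copy parts an object of a given size needs versus S3's 10 000-part limit.
const PART_SIZE: u64 = 16 * 1024 * 1024; // current hardcoded copy part size
const MAX_PARTS: u64 = 10_000;

fn parts_needed(object_size: u64) -> u64 {
    object_size.div_ceil(PART_SIZE)
}

fn main() {
    let gib: u64 = 1024 * 1024 * 1024;
    for size in [150 * gib, 156 * gib, 157 * gib, 200 * gib] {
        let parts = parts_needed(size);
        let status = if parts > MAX_PARTS { "overflows" } else { "ok" };
        println!("{:>4} GiB -> {:>6} parts ({status})", size / gib, parts);
    }
    // Break-even point: 16 MiB * 10 000 = 160 000 MiB = 156.25 GiB.
}
```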

Expected behavior

Either:

  • Dynamically increase the copy part size so that ceil(object_size / part_size) <= 10_000 (still respecting the 5 MiB minimum and 5 GiB maximum), or
  • Detect that even the maximum part size would exceed the limit and return a clear, early error to the caller (see the sketch after this list).
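
A minimal sketch of both options, under some assumptions: `copy_part_size` is a hypothetical helper rather than fusio's actual API, and a plain `String` stands in for the `Error::Unsupported` mentioned in the notes below. The constants follow S3's documented 5 MiB minimum, 5 GiB maximum, and 10 000-part limit:

```rust
// Sketch only: pick a copy part size such that ceil(object_size / part_size)
// stays within S3's 10 000-part limit, or fail early if that is impossible.
const MIN_PART_SIZE: u64 = 5 * 1024 * 1024;         // S3 minimum (except last part)
const MAX_PART_SIZE: u64 = 5 * 1024 * 1024 * 1024;  // S3 maximum per part
const MAX_PARTS: u64 = 10_000;
const DEFAULT_PART_SIZE: u64 = 16 * 1024 * 1024;    // today's hardcoded value

// Hypothetical helper; in fusio this would presumably return Error::Unsupported
// instead of a String.
fn copy_part_size(object_size: u64) -> Result<u64, String> {
    // Smallest part size that covers the object in at most 10 000 parts.
    let required = object_size.div_ceil(MAX_PARTS);
    let part_size = required.max(DEFAULT_PART_SIZE).max(MIN_PART_SIZE);
    if part_size > MAX_PART_SIZE {
        // Even 5 GiB parts cannot cover the object; surface the error before
        // issuing any UploadPartCopy requests.
        return Err(format!(
            "object of {object_size} bytes cannot be appended: it would need \
             more than 10 000 copy parts even at the 5 GiB maximum part size"
        ));
    }
    Ok(part_size)
}

fn main() {
    let gib: u64 = 1024 * 1024 * 1024;
    // Objects up to 156.25 GiB keep the current 16 MiB parts.
    assert_eq!(copy_part_size(100 * gib).unwrap(), 16 * 1024 * 1024);
    // A 200 GiB object grows the part size to ~20.5 MiB instead of failing.
    assert_eq!(copy_part_size(200 * gib).unwrap(), (200 * gib).div_ceil(MAX_PARTS));
}
```

In practice the computed size would probably also be rounded up to a convenient boundary (for example a whole MiB) before being used for the UploadPartCopy byte ranges, but the core constraint is just the ceiling shown above.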

Notes / context

  • The current implementation hardcodes 16 MiB copy parts, so any object larger than 156.25 GiB (16 MiB × 10 000) will overflow the allowed part count.
  • AWS reference: https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html (10 000-part limit)
  • Needed changes: compute an adaptive part size or proactively surface an Error::Unsupported with guidance before issuing any copy requests.

Metadata

Labels: enhancement (New feature or request)
