[WIP] Update snapshot expiration to reclaim orphan files that are part of snapshots being expired #447
maluchari wants to merge 1 commit into linkedin:main
Previously, snapshot expiration only removed metadata while leaving orphaned
data files on storage until a separate Orphan File Deletion (OFD) job ran.
This caused:
- Delayed storage reclamation (days/weeks)
- Increased storage costs
- Operational overhead of coordinating two jobs
This change enables immediate file deletion during snapshot expiration,
providing faster storage reclamation and cost savings.
**Note:** OFD is still required to clean up orphan files from failed jobs or
other edge cases. This optimization addresses the common case of normal
snapshot expiration.
**Changes:**
- Added a --deleteFiles command-line flag to JobsScheduler
- Updated OperationTaskFactory and TableSnapshotsExpirationTask to propagate
the deleteFiles parameter
- Modified Operations.expireSnapshots() to conditionally delete files based
on the flag (default: metadata-only for backward compatibility)
- Added metrics tracking for snapshot expiration with deleteFiles flag
- Fixed jobs-scheduler.Dockerfile ENTRYPOINT for proper argument passing
- Updated tests with deleteFiles parameter and added test coverage for both
deletion modes
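The flag propagation described above can be sketched as follows. This is a simplified, self-contained illustration, not the actual OpenHouse code: the class and method names below are hypothetical, and in the real `Operations.expireSnapshots()` the decision would presumably translate into a call such as Iceberg's `expireSnapshots().cleanExpiredFiles(deleteFiles)`.

```java
import java.util.Arrays;

// Hypothetical sketch of how --deleteFiles could flow from the CLI into the
// expiration call. FlagSketch and expirationMode are illustrative names only.
public class FlagSketch {

  // A boolean CLI switch: present => true, absent => false.
  // Absent-by-default preserves the backward-compatible metadata-only behavior.
  static boolean parseDeleteFiles(String[] args) {
    return Arrays.asList(args).contains("--deleteFiles");
  }

  // Decide the expiration behavior from the flag. In the real task this choice
  // would drive whether expired data files are physically removed.
  static String expirationMode(boolean deleteFiles) {
    return deleteFiles ? "expire-metadata-and-delete-files" : "expire-metadata-only";
  }

  public static void main(String[] args) {
    System.out.println(expirationMode(parseDeleteFiles(args)));
  }
}
```

Keeping the default as metadata-only means existing schedules see no behavior change until an operator opts in with the flag.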
**Usage:**
Metadata-only expiration (default):
```bash
--type SNAPSHOTS_EXPIRATION --cluster local \
--tablesURL http://openhouse-tables:8080 \
--jobsURL http://openhouse-jobs:8080
```
With file deletion:
```bash
--type SNAPSHOTS_EXPIRATION --cluster local \
--tablesURL http://openhouse-tables:8080 \
--jobsURL http://openhouse-jobs:8080 \
--deleteFiles
```
**Testing:** Verified with the local Docker environment (oh-hadoop-spark):
- Created table with versions=2 policy, generated 5 snapshots via overwrites
- Without --deleteFiles: 5→2 snapshots, 5→5 files (no deletion)
- With --deleteFiles: 5→2 snapshots, 5→1 file (3 files deleted)
- Job logs and HDFS verification confirmed expected behavior
[Screenshots to be added]
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
