
[WIP] Update snapshot expiration to reclaim orphan files that are part of snapshots being expired #447

Open

maluchari wants to merge 1 commit into linkedin:main from maluchari:malini/optimize_se_with_files_deletion

Conversation

@maluchari
Collaborator

Previously, snapshot expiration only removed metadata, leaving orphaned data files on storage until a separate Orphan File Deletion (OFD) job ran. This caused delayed storage reclamation (days to weeks), increased storage costs, a larger workload for OFD, and the operational overhead of coordinating two jobs.

This change enables immediate file deletion during snapshot expiration, providing faster storage reclamation and cost savings.

Note: OFD is still required to clean up orphan files from failed jobs or other edge cases. This optimization addresses the common case of normal snapshot expiration.

  • Added a --deleteFiles command-line flag to JobsScheduler
  • Updated OperationTaskFactory and TableSnapshotsExpirationTask to propagate the deleteFiles parameter
  • Modified Operations.expireSnapshots() to conditionally delete files based on the flag (default: metadata-only for backward compatibility; see the sketch after this list)
  • Added metrics tracking for snapshot expiration with the deleteFiles flag
  • Fixed the jobs-scheduler.Dockerfile ENTRYPOINT so arguments are passed through correctly
  • Updated tests with the deleteFiles parameter and added coverage for both deletion modes
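A minimal sketch of the conditional deletion, assuming Apache Iceberg's ExpireSnapshots API; the Operations class shape, method signature, and retention handling here are illustrative, not the actual OpenHouse implementation:

```java
import java.util.concurrent.TimeUnit;
import org.apache.iceberg.Table;

public class Operations {
  /**
   * Expires snapshots older than the retention window. With deleteFiles set to
   * false (the default), only table metadata is rewritten and orphaned data
   * files are left for the OFD job; with true, Iceberg also deletes the data
   * and manifest files no longer reachable from any live snapshot.
   */
  public static void expireSnapshots(Table table, int maxAgeDays, boolean deleteFiles) {
    long expireBeforeMs = System.currentTimeMillis() - TimeUnit.DAYS.toMillis(maxAgeDays);
    table
        .expireSnapshots()
        .expireOlderThan(expireBeforeMs)
        // cleanExpiredFiles is the switch between metadata-only expiration
        // and immediate file deletion.
        .cleanExpiredFiles(deleteFiles)
        .commit();
  }
}
```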

Metadata-only expiration (default):

```bash
--type SNAPSHOTS_EXPIRATION --cluster local \
    --tablesURL http://openhouse-tables:8080 \
    --jobsURL http://openhouse-jobs:8080
```

With file deletion:

```bash
--type SNAPSHOTS_EXPIRATION --cluster local \
    --tablesURL http://openhouse-tables:8080 \
    --jobsURL http://openhouse-jobs:8080 \
    --deleteFiles
```
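For illustration, a boolean switch like --deleteFiles is typically wired into argument parsing along these lines; the Apache Commons CLI usage below is an assumption, not the actual JobsScheduler code:

```java
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Options;

public class JobsSchedulerArgs {
  public static boolean parseDeleteFiles(String[] args) throws Exception {
    Options options = new Options();
    // A flag with no argument: present means true, absent preserves the
    // metadata-only default for backward compatibility.
    options.addOption(null, "deleteFiles", false,
        "Delete orphaned data files during snapshot expiration");
    CommandLine cmd = new DefaultParser().parse(options, args, true);
    return cmd.hasOption("deleteFiles");
  }
}
```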

Verified with the local Docker environment (oh-hadoop-spark); see the file-counting sketch after this list:

  • Created table with versions=2 policy, generated 5 snapshots via overwrites
  • Without --deleteFiles: 5→2 snapshots, 5→5 files (no deletion)
  • With --deleteFiles: 5→2 snapshots, 5→1 file (3 files deleted)
  • Job logs and HDFS verification confirmed expected behavior
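The HDFS check can also be scripted; below is a hedged sketch using the Hadoop FileSystem API to count data files under a table location before and after the job. The path and the .parquet suffix are placeholder assumptions:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class CountDataFiles {
  public static void main(String[] args) throws Exception {
    // Placeholder location; substitute the table's actual data directory.
    Path dataDir = new Path(args.length > 0 ? args[0] : "/data/openhouse/db/table/data");
    FileSystem fs = dataDir.getFileSystem(new Configuration());
    long count = 0;
    RemoteIterator<LocatedFileStatus> it = fs.listFiles(dataDir, true);
    while (it.hasNext()) {
      // Assumes Parquet data files; adjust the suffix for other formats.
      if (it.next().getPath().getName().endsWith(".parquet")) {
        count++;
      }
    }
    System.out.println("Data files under " + dataDir + ": " + count);
  }
}
```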

[Screenshots: HDFS listings before and after snapshot expiration with --deleteFiles set]

Summary

Snapshot expiration can now delete the data files orphaned by the snapshots being expired in the same pass, gated behind a new --deleteFiles flag (default: off). This reclaims storage immediately instead of waiting days for the separate Orphan File Deletion job.

Changes

  • Client-facing API Changes
  • Internal API Changes
  • Bug Fixes
  • New Features
  • Performance Improvements
  • Code Style
  • Refactoring
  • Documentation
  • Tests

For all the boxes checked, please include additional details of the changes made in this pull request.

Testing Done

  • Manually tested on local Docker setup. Please include the commands run and their output.
  • Added new tests for the changes made.
  • Updated existing tests to reflect the changes made.
  • No tests added or updated. Please explain why. If unsure, please feel free to ask for help.
  • Some other form of testing like staging or soak time in production. Please explain.

For all the boxes checked, include a detailed description of the testing done for the changes made in this pull request.

Additional Information

  • Breaking Changes
  • Deprecations
  • Large PR broken into smaller PRs, and PR plan linked in the description.

For all the boxes checked, include additional details of the changes made in this pull request.

maluchari changed the title Update snapshot expiration to reclaim orphan files that are part of snapshots being expired → [WIP] Update snapshot expiration to reclaim orphan files that are part of snapshots being expired on Feb 7, 2026
