Make GCS filesystem lookup lazy to match S3 behavior #37488
Conversation
Ensure `FileSystems.get_filesystem('gs://...')` does not require the GCP extra at lookup time, aligning behavior with `S3FileSystem`. Import errors are raised only when GCS operations are used.
Align `GCSFileSystem` behavior with S3 by deferring GCP dependency validation until filesystem usage instead of lookup time. Adds a regression test for lazy lookup.
Summary of Changes (Gemini Code Assist)

This pull request standardizes the dependency-loading mechanism for cloud storage filesystems within Apache Beam. By making the GCS filesystem lookup lazy, it ensures that the necessary GCP dependencies are validated and loaded only when GCS operations are actually performed, rather than at initial filesystem retrieval. This improves consistency with the existing S3 filesystem behavior and provides a more robust, user-friendly experience by deferring dependency-related errors to the point of usage.
Activity
Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment
Pull request overview
This PR makes GCS filesystem lookup lazy to align with S3 filesystem behavior, addressing issue #37445. The change allows FileSystems.get_filesystem('gs://...') to return a GCSFileSystem instance without requiring apache-beam[gcp] dependencies to be installed, deferring dependency validation until the filesystem is actually used.
Changes:
- Modified `GCSFileSystem` to use lazy loading for the `gcsio` module via a new `_get_gcsio()` helper method (a minimal sketch of this pattern follows the list)
- Converted `CHUNK_SIZE` from a class variable to a lazily evaluated property
- Updated all test methods to mock `_get_gcsio()` instead of the module-level `gcsio` import
- Added a test verifying that filesystem lookup succeeds without GCP extras installed
- (Unrelated) Added a write disposition to the BigQuery write operation in the Java `TriggerExample`
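For reference, a lazy-import helper of roughly this shape is the usual way to implement the pattern described above. This is a minimal sketch, not the exact code from the PR; the error message and caching behavior are assumptions:

```python
def _get_gcsio(self):
  """Import gcsio on first use so a missing GCP extra fails at usage time."""
  try:
    from apache_beam.io.gcp import gcsio  # deferred import
  except ImportError as e:
    raise ImportError(
        'Google Cloud Storage I/O requires the GCP extras to be installed. '
        'Run: pip install apache-beam[gcp]') from e
  return gcsio
```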
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| sdks/python/apache_beam/io/gcp/gcsfilesystem.py | Implements lazy loading of gcsio module and converts CHUNK_SIZE to a property |
| sdks/python/apache_beam/io/gcp/gcsfilesystem_test.py | Adds lazy loading test and updates existing tests to mock the new _get_gcsio method |
| examples/java/src/main/java/org/apache/beam/examples/cookbook/TriggerExample.java | Adds write disposition to BigQuery operation (appears unrelated to PR objective) |
Comments suppressed due to low confidence (1)
sdks/python/apache_beam/io/gcp/gcsfilesystem.py:392
- The exception handler only catches `ValueError`, but `_get_gcsio()` can raise `ImportError` when GCP dependencies are missing. Since the comment indicates "report lineage is fail-safe", the exception handler should also catch `ImportError` to prevent failures when GCP extras are not installed.
```python
try:
  gcsio = self._get_gcsio()
  components = gcsio.parse_gcs_path(path, object_optional=True)
except ValueError:
  # report lineage is fail-safe
  traceback.print_exc()
```
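A minimal sketch of the change this comment suggests, with the handler widened to cover the lazy import (the surrounding code is assumed unchanged):

```python
try:
  gcsio = self._get_gcsio()
  components = gcsio.parse_gcs_path(path, object_optional=True)
except (ValueError, ImportError):
  # report lineage is fail-safe
  traceback.print_exc()
```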
```python
_CHUNK_SIZE = None

@property
def CHUNK_SIZE(self):
  if self._CHUNK_SIZE is None:
    self._CHUNK_SIZE = self._get_gcsio().MAX_BATCH_OPERATION_SIZE
  return self._CHUNK_SIZE  # Chuck size in batch operations
```
Copilot AI (Feb 4, 2026)
`_CHUNK_SIZE` is defined as a class variable (shared across all instances), but it's being used as an instance variable in the property getter. This creates a subtle bug where if one instance initializes `_CHUNK_SIZE`, all other instances will see it as initialized, even if they haven't accessed the `gcsio` module yet. Consider making this a true instance variable by initializing it in `__init__` as `self._chunk_size = None` and updating the property accordingly.
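A sketch of the instance-variable shape this comment suggests; the `__init__` signature here is assumed from the `FileSystem` base class and may not match the actual constructor:

```python
def __init__(self, pipeline_options):
  super().__init__(pipeline_options)
  self._chunk_size = None  # per-instance cache of the batch chunk size

@property
def CHUNK_SIZE(self):
  if self._chunk_size is None:
    self._chunk_size = self._get_gcsio().MAX_BATCH_OPERATION_SIZE
  return self._chunk_size  # Chunk size in batch operations
```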
```python
def CHUNK_SIZE(self):
  if self._CHUNK_SIZE is None:
    self._CHUNK_SIZE = self._get_gcsio().MAX_BATCH_OPERATION_SIZE
  return self._CHUNK_SIZE  # Chuck size in batch operations
```
Copilot AI (Feb 4, 2026)
Typo in comment: "Chuck size" should be "Chunk size".
```diff
- return self._CHUNK_SIZE  # Chuck size in batch operations
+ return self._CHUNK_SIZE  # Chunk size in batch operations
```
```java
BigQueryIO.writeTableRows()
    .to(tableRef)
    .withSchema(getSchema())
    .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
```
Copilot AI (Feb 4, 2026)
This Java file change appears unrelated to the Python GCS filesystem lazy loading feature described in the PR. The change adds a write disposition to a BigQuery write operation in a Java example, which has no connection to making GCS filesystem lookup lazy in Python. This should likely be in a separate PR.
```python
def test_get_filesystem_does_not_require_gcp_extra(self):
  fs = FileSystems.get_filesystem('gs://test-bucket/path')
  self.assertEqual(fs.scheme(), 'gs')
```
Copilot AI (Feb 4, 2026)
The test validates that `get_filesystem()` succeeds without GCP extras, but it doesn't verify the second part of the lazy-loading behavior: that actual filesystem operations (like `open()`, `match()`, etc.) properly raise an `ImportError` when GCP dependencies are missing. Consider adding a test that verifies an `ImportError` is raised when attempting to use the filesystem without GCP dependencies installed, to ensure the lazy loading works as intended.
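A hedged sketch of such a test. It simulates the missing extra by patching the `_get_gcsio()` helper to raise, and assumes `open()` reaches that helper without the error being wrapped along the way; the real code paths may propagate exceptions differently:

```python
import unittest
from unittest import mock

from apache_beam.io.filesystems import FileSystems


class LazyGcsUsageTest(unittest.TestCase):

  def test_open_raises_import_error_without_gcp_extra(self):
    # Lookup succeeds even without the GCP extra.
    fs = FileSystems.get_filesystem('gs://test-bucket/path')
    # Simulate a missing extra by forcing the lazy import to fail.
    with mock.patch.object(
        type(fs), '_get_gcsio',
        side_effect=ImportError('apache-beam[gcp] not installed')):
      with self.assertRaises(ImportError):
        fs.open('gs://test-bucket/path/object')
```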
Resolves #37445
What changes were proposed in this pull request?
`FileSystems.get_filesystem()` previously raised a `ValueError` for `gs://` paths when GCP dependencies were not installed, while `s3://` paths returned a filesystem object and deferred dependency validation until usage time.

This pull request aligns GCS behavior with S3 by making GCS filesystem lookup lazy. `get_filesystem()` now returns a `GCSFileSystem` instance without requiring `apache-beam[gcp]`, deferring dependency validation until the filesystem is actually used (for example, on `open`, `match`, etc.).

A regression test has been added to ensure that GCS filesystem lookup does not require GCP extras and that dependency errors are raised only at usage time.
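Illustratively, the intended behavior in an environment without the GCP extra looks like this (a sketch of expected behavior, not captured output):

```python
from apache_beam.io.filesystems import FileSystems

# Lookup now succeeds even without apache-beam[gcp] installed.
fs = FileSystems.get_filesystem('gs://my-bucket/path')
print(fs.scheme())  # 'gs'

# Dependency errors surface only when the filesystem is used.
fs.open('gs://my-bucket/path/object')  # raises ImportError here
```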
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

- Update `CHANGES.md` with noteworthy changes (not required; no user-facing API change)

How was this tested?
FileSystems.get_filesystem("gs://...")succeeds without
apache-beam[gcp]installed.filesystem operations (e.g.,
open) and not during lookup.