
Conversation

@RushabhRatnaparkhi

Resolves #37445

What changes were proposed in this pull request?

FileSystems.get_filesystem() previously raised a ValueError for gs://
paths when GCP dependencies were not installed, while s3:// paths returned
a filesystem object and deferred dependency validation until usage time.

This pull request aligns GCS behavior with S3 by making GCS filesystem lookup
lazy. get_filesystem() now returns a GCSFileSystem instance without
requiring apache-beam[gcp], deferring dependency validation until the
filesystem is actually used (for example, on open or match).
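The deferred-import pattern described above can be sketched as follows. This is an illustrative, self-contained example, not the Beam source: `LazyGCSFileSystem` and the placeholder module name are hypothetical.

```python
# Illustrative sketch of lazy dependency validation (hypothetical names,
# not the actual Beam implementation).

class LazyGCSFileSystem:
  """Stand-in filesystem that defers its optional import to usage time."""

  def __init__(self):
    self._gcsio = None  # nothing imported at lookup time

  def _get_gcsio(self):
    # Import the optional dependency on first use only.
    if self._gcsio is None:
      try:
        import gcsio_placeholder as gcsio  # hypothetical module name
      except ImportError as e:
        raise ImportError(
            'Missing GCP dependencies: install apache-beam[gcp]') from e
      self._gcsio = gcsio
    return self._gcsio

  def scheme(self):
    return 'gs'  # cheap metadata; no dependency required

  def open(self, path):
    return self._get_gcsio().open(path)  # dependency required here
```

With this shape, a registry-style get_filesystem() can construct the object unconditionally; the ImportError surfaces only when open() or another operation reaches _get_gcsio().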

A regression test has been added to ensure that GCS filesystem lookup does not
require GCP extras and that dependency errors are raised only at usage time.



How was this tested?

  • Added a unit test verifying that FileSystems.get_filesystem("gs://...")
    succeeds without apache-beam[gcp] installed.
  • Manually verified that GCP dependency errors are raised only when invoking
    filesystem operations (e.g., open) and not during lookup.

RushabhRatnaparkhi and others added 5 commits January 24, 2026 00:46

  • Ensure FileSystems.get_filesystem(gs://) does not require the GCP extra
    at lookup time, aligning behavior with S3FileSystem. Import errors are
    raised only when GCS operations are used.
  • Align GCSFileSystem behavior with S3 by deferring GCP dependency
    validation until filesystem usage instead of lookup time.
    Adds a regression test for lazy lookup.
@gemini-code-assist
Contributor

Summary of Changes

Hello @RushabhRatnaparkhi, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request standardizes the dependency loading mechanism for cloud storage filesystems within Apache Beam. By making the GCS filesystem lookup lazy, it ensures that the necessary GCP dependencies are only validated and loaded when GCS operations are actually performed, rather than at the initial filesystem object retrieval. This change improves consistency with the existing S3 filesystem behavior and provides a more robust and user-friendly experience by deferring dependency-related errors to the point of usage.

Highlights

  • Lazy GCS Dependency Loading: The GCS filesystem now defers the import of GCP dependencies (apache-beam[gcp]) until an actual GCS operation is attempted, aligning its behavior with S3.
  • Consistent Error Handling: FileSystems.get_filesystem() for gs:// paths will no longer raise a ValueError immediately if GCP dependencies are missing; instead, ImportError will be raised only when GCS functionality is used.
  • New Regression Test: A unit test has been added to specifically verify that FileSystems.get_filesystem("gs://...") succeeds without requiring apache-beam[gcp] to be installed.


Changelog
  • examples/java/src/main/java/org/apache/beam/examples/cookbook/TriggerExample.java
    • Modified the BigQueryIO.writeTableRows() configuration to explicitly use withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND).
  • sdks/python/apache_beam/io/gcp/gcsfilesystem.py
    • Implemented a _get_gcsio method to encapsulate the gcsio import, making it lazy and raising an ImportError if apache-beam[gcp] is not present.
    • Converted CHUNK_SIZE into a property that dynamically retrieves its value via _get_gcsio, ensuring lazy initialization.
    • Updated all internal methods (_gcsIO, report_lineage) to utilize the new lazy _get_gcsio helper.
  • sdks/python/apache_beam/io/gcp/gcsfilesystem_test.py
    • Introduced GCSFileSystemLazyLoadTest to specifically test the lazy loading behavior of get_filesystem for GCS paths.
    • Refactored existing GCS filesystem tests to mock the _get_gcsio method, adapting them to the new lazy dependency loading architecture.
Activity
  • The pull request was created by RushabhRatnaparkhi.
  • It addresses issue GCSFileSystem requires gcp extra at lookup time while S3FileSystem does not #37445.
  • A unit test was added to confirm that FileSystems.get_filesystem("gs://...") works without apache-beam[gcp] installed.
  • Manual verification was performed to ensure GCP dependency errors are raised only when GCS operations (like open) are invoked, not during initial lookup.

@github-actions
Contributor

github-actions bot commented Feb 3, 2026

Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment "assign set of reviewers".

Copilot AI review requested due to automatic review settings February 4, 2026 08:21

Copilot AI left a comment


Pull request overview

This PR makes GCS filesystem lookup lazy to align with S3 filesystem behavior, addressing issue #37445. The change allows FileSystems.get_filesystem('gs://...') to return a GCSFileSystem instance without requiring apache-beam[gcp] dependencies to be installed, deferring dependency validation until the filesystem is actually used.

Changes:

  • Modified GCSFileSystem to use lazy loading for the gcsio module via a new _get_gcsio() helper method
  • Converted CHUNK_SIZE from a class variable to a lazily-evaluated property
  • Updated all test methods to mock _get_gcsio() instead of the module-level gcsio import
  • Added a test verifying that filesystem lookup succeeds without GCP extras installed
  • (Unrelated) Added write disposition to BigQuery write operation in Java TriggerExample

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.

File Description
sdks/python/apache_beam/io/gcp/gcsfilesystem.py Implements lazy loading of gcsio module and converts CHUNK_SIZE to a property
sdks/python/apache_beam/io/gcp/gcsfilesystem_test.py Adds lazy loading test and updates existing tests to mock the new _get_gcsio method
examples/java/src/main/java/org/apache/beam/examples/cookbook/TriggerExample.java Adds write disposition to BigQuery operation (appears unrelated to PR objective)
Comments suppressed due to low confidence (1)

sdks/python/apache_beam/io/gcp/gcsfilesystem.py:392

  • The exception handler only catches ValueError, but _get_gcsio() can raise ImportError when GCP dependencies are missing. Since the comment indicates "report lineage is fail-safe", the exception handler should also catch ImportError to prevent failures when GCP extras are not installed.
    try:
      gcsio = self._get_gcsio()
      components = gcsio.parse_gcs_path(path, object_optional=True)
    except ValueError:
      # report lineage is fail-safe
      traceback.print_exc()
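The widened handler the reviewer suggests could look like the following self-contained sketch. FailSafeLineage and its stubbed _get_gcsio are hypothetical stand-ins for illustration, not the Beam source.

```python
# Sketch of catching ImportError as well as ValueError so that lineage
# reporting stays fail-safe when apache-beam[gcp] is absent.
import traceback

class FailSafeLineage:
  def _get_gcsio(self):
    # Simulate the lazy import failing because the GCP extra is missing.
    raise ImportError('Missing GCP dependencies: install apache-beam[gcp]')

  def report_lineage(self, path):
    try:
      gcsio = self._get_gcsio()
      components = gcsio.parse_gcs_path(path, object_optional=True)
    except (ValueError, ImportError):
      # report lineage is fail-safe: log and continue instead of raising
      traceback.print_exc()
      return None
    return components
```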


Comment on lines +53 to +59

    _CHUNK_SIZE = None

    @property
    def CHUNK_SIZE(self):
      if self._CHUNK_SIZE is None:
        self._CHUNK_SIZE = self._get_gcsio().MAX_BATCH_OPERATION_SIZE
      return self._CHUNK_SIZE  # Chuck size in batch operations

Copilot AI Feb 4, 2026


_CHUNK_SIZE is defined as a class variable (shared across all instances), but it's being used as an instance variable in the property getter. This creates a subtle bug where if one instance initializes _CHUNK_SIZE, all other instances will see it as initialized, even if they haven't accessed the gcsio module yet. Consider making this a true instance variable by initializing it in __init__ as self._chunk_size = None and updating the property accordingly.
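The per-instance cache this comment asks for could be sketched like this. It is a hypothetical stand-alone class; _FakeGcsio stands in for the lazily imported gcsio module, and the size value is illustrative.

```python
# Sketch of caching CHUNK_SIZE per instance instead of on the class, so one
# instance's initialization does not leak to others.

class GCSFileSystemSketch:
  def __init__(self):
    self._chunk_size = None  # instance-level cache, not a class variable

  def _get_gcsio(self):
    class _FakeGcsio:  # stand-in for the real lazily imported module
      MAX_BATCH_OPERATION_SIZE = 100  # illustrative value
    return _FakeGcsio

  @property
  def CHUNK_SIZE(self):
    if self._chunk_size is None:
      self._chunk_size = self._get_gcsio().MAX_BATCH_OPERATION_SIZE
    return self._chunk_size  # chunk size used in batch operations
```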

Copilot uses AI. Check for mistakes.
    def CHUNK_SIZE(self):
      if self._CHUNK_SIZE is None:
        self._CHUNK_SIZE = self._get_gcsio().MAX_BATCH_OPERATION_SIZE
      return self._CHUNK_SIZE  # Chuck size in batch operations

Copilot AI Feb 4, 2026


Typo in comment: "Chuck size" should be "Chunk size".

Suggested change:
  - return self._CHUNK_SIZE  # Chuck size in batch operations
  + return self._CHUNK_SIZE  # Chunk size in batch operations

Comment on lines +489 to +492
BigQueryIO.writeTableRows()
.to(tableRef)
.withSchema(getSchema())
.withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));

Copilot AI Feb 4, 2026


This Java file change appears unrelated to the Python GCS filesystem lazy loading feature described in the PR. The change adds a write disposition to a BigQuery write operation in a Java example, which has no connection to making GCS filesystem lookup lazy in Python. This should likely be in a separate PR.

Comment on lines +43 to +45

    def test_get_filesystem_does_not_require_gcp_extra(self):
      fs = FileSystems.get_filesystem('gs://test-bucket/path')
      self.assertEqual(fs.scheme(), 'gs')

Copilot AI Feb 4, 2026


The test validates that get_filesystem() succeeds without GCP extras, but it doesn't verify the second part of the lazy loading behavior: that actual filesystem operations (like open(), match(), etc.) properly raise an ImportError when GCP dependencies are missing. Consider adding a test that verifies an ImportError is raised when attempting to use the filesystem without GCP dependencies installed, to ensure the lazy loading works as intended.
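A follow-up test along those lines might look like this sketch; LazyFS simulates a filesystem whose operations fail without the GCP extra. The names are hypothetical, not the Beam test suite.

```python
# Sketch of a test asserting that lookup succeeds but usage raises
# ImportError when the GCP extra is missing. LazyFS is a stand-in.
import unittest

class LazyFS:
  def scheme(self):
    return 'gs'

  def open(self, path):
    # Simulates the lazy import failing at usage time.
    raise ImportError('Missing GCP dependencies: install apache-beam[gcp]')

class LazyLookupUsageTest(unittest.TestCase):
  def test_operation_requires_gcp_extra(self):
    fs = LazyFS()  # lookup itself needs no GCP extras
    self.assertEqual(fs.scheme(), 'gs')
    with self.assertRaises(ImportError):
      fs.open('gs://test-bucket/path')
```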
