Description
When writing functions that use s3fs, I find myself passing `skip_instance_cache=True` a lot, which noticeably worsens performance (as expected). I skip instance caching because, with the cache enabled, once I hit certain errors (like `ProfileNotFound` from `~/.aws/credentials`, or `PermissionError`), s3fs keeps throwing the same error even after I subsequently generate valid credentials. Ultimately I have to restart my kernel (I believe that is the fix) in order to reset things.
It would be nice if s3fs automatically evicted an instance from the cache when it hits an access/permission error like those above.
Alternatively, or perhaps in addition, it would be helpful if `s3fs.S3FileSystem()` had a `check` argument that could (optionally) validate the credentials during instantiation, for example by running the equivalent of `aws sts get-caller-identity`, per: https://stackoverflow.com/a/66874437/9244371
More context on the problem I'm trying to solve:
In my environment, I have to generate new keys on a regular basis that grant access to a particular bucket under a known profile name. I write functions that other users run to interact with certain buckets (so I know the profile). I skip instance caching because I don't know in advance whether they have generated valid keys; that way I can catch the exception, generate keys, and then try again.
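The catch-and-retry pattern described above can be sketched generically; here `make_fs` would be something like `lambda: s3fs.S3FileSystem(profile=..., skip_instance_cache=True)` and `refresh` is a hypothetical key-generation helper:

```python
def with_retry_on_auth_error(make_fs, action, refresh):
    """Run action(fs) on a freshly constructed filesystem; if it
    fails with PermissionError, regenerate credentials via refresh()
    and retry once with another fresh (uncached) instance."""
    try:
        return action(make_fs())
    except PermissionError:
        refresh()  # e.g. generate new keys into ~/.aws/credentials
        return action(make_fs())
```

The whole reason `skip_instance_cache=True` is needed here is the second `make_fs()` call: with caching on, it would return the same broken instance that just failed.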