Conversation

@kashifkhan0771 (Contributor) commented Feb 3, 2026

Description:

Earlier, we used a map to temporarily store GitLab project metadata. While maps work well for small datasets, they don’t scale efficiently for larger ones. There was also a bug in the caching logic: when storing entries, we used the GitLab HTTPURLToRepo field as the cache key, but when retrieving entries, we used the normalized URL. As a result, cache lookups almost never succeeded, and the cache kept growing without being effectively used.

With this fix, we’ve replaced the map with an LRU cache, which is better suited for this use case. The cache now holds up to 15,000 entries, each with a one-hour TTL; expired entries are cleaned up automatically, and once the size limit is reached the LRU mechanism evicts the least recently used items, keeping memory usage under control. We also now use the normalized URL consistently for both setting and fetching cache entries.

I also added some comments to improve readability :)

Checklist:

  • Tests passing (make test-community)?
  • Lint passing (make lint; requires golangci-lint)?

@kashifkhan0771 requested a review from a team (as a code owner) February 3, 2026 12:10
@@ -1022,77 +1028,6 @@ func (s *Source) WithScanOptions(scanOptions *git.ScanOptions) {
s.scanOptions = scanOptions
}

@kashifkhan0771 (Contributor, Author):

These funcs were moved to the end of the file.


@cursor bot left a comment:


Cursor Bugbot has reviewed your changes and found 1 potential issue.


Comment on lines +26 to +31
return &projectMetadataCache{
	cache: expirable.NewLRU[string, *project](
		15000,          // up to 15,000 entries
		nil,            // no eviction callback
		60*time.Minute, // time-based expiration: 1 hour
	),
Contributor:

I'm interested to know about the thought process that went into deciding these numbers. Is that based on our past experience about the rate at which we scan gitlab projects?

Can there be a possibility of an entry getting expired before we might want to use it?

@kashifkhan0771 (Contributor, Author):

There isn’t any deep science behind these numbers. They’re rough, initial choices. The 15K limit is something most organizations won’t hit at all. For organizations with more than 15K projects, I think we should be able to process roughly 15K repositories per hour. In practice, it’s very unlikely that a repository would be enumerated and not scanned within an hour. We need to start with baseline numbers, and if we run into issues, we can always adjust them based on observed behavior.

If someone has a strong alternative (though I don’t think we do) for how many repositories we should process per hour, we can start with that number instead.

We’re using an expirable LRU cache so that entries are automatically cleaned up after their TTL, via lazy deletion and a background cleanup routine. Additionally, if the cache reaches the 15K-entry limit, it evicts the least recently used entries by design, so new inserts are never blocked.
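For anyone curious, the capacity-based LRU eviction described here can be sketched with the stdlib alone. Names are illustrative; the real implementation is hashicorp/golang-lru’s `expirable.LRU`, which layers the TTL on top of the same recency-ordered structure:

```go
package main

import (
	"container/list"
	"fmt"
)

type kv struct {
	key string
	val string
}

// lruCache is a minimal sketch of capacity-based LRU eviction: when the
// cache is full, the least recently used key is dropped so inserts never
// block.
type lruCache struct {
	cap   int
	ll    *list.List               // front = most recently used
	items map[string]*list.Element // key -> list element
}

func newLRU(capacity int) *lruCache {
	return &lruCache{cap: capacity, ll: list.New(), items: make(map[string]*list.Element)}
}

func (c *lruCache) Get(key string) (string, bool) {
	if el, ok := c.items[key]; ok {
		c.ll.MoveToFront(el) // touching a key makes it recently used
		return el.Value.(kv).val, true
	}
	return "", false
}

func (c *lruCache) Add(key, val string) {
	if el, ok := c.items[key]; ok {
		el.Value = kv{key, val}
		c.ll.MoveToFront(el)
		return
	}
	if c.ll.Len() >= c.cap {
		// Evict the least recently used entry (back of the list).
		back := c.ll.Back()
		c.ll.Remove(back)
		delete(c.items, back.Value.(kv).key)
	}
	c.items[key] = c.ll.PushFront(kv{key, val})
}

func main() {
	c := newLRU(2)
	c.Add("a", "1")
	c.Add("b", "2")
	c.Get("a")      // "a" becomes most recently used
	c.Add("c", "3") // evicts "b", the LRU entry
	_, okA := c.Get("a")
	_, okB := c.Get("b")
	fmt.Println(okA, okB) // prints: true false
}
```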

Contributor:

I see. Thanks for the explanation 👍

@mustansir14 (Contributor) left a comment:

I have some questions
