added rate limiting to address biased trie cache preloading #2029
pratikspatil024 wants to merge 6 commits into develop from
Conversation
Codecov Report ✅ All modified and coverable lines are covered by tests.

@@ Coverage Diff @@
## develop #2029 +/- ##
===========================================
+ Coverage 49.57% 49.61% +0.04%
===========================================
Files 873 873
Lines 150507 150615 +108
===========================================
+ Hits 74615 74730 +115
+ Misses 70860 70854 -6
+ Partials 5032 5031 -1
... and 22 files with indirect coverage changes
cffls
left a comment
Looks good overall! How long does it take to preload mainnet with the default throttling setting?
// Apply rate limiting after reading, based on actual bytes read
if limiter != nil {
    if err := limiter.WaitN(c.ctx, len(nodeData)); err != nil {
        log.Info("Preload interrupted during rate limit wait",
Would this line flood the log?
This log will be printed only once (per address) during shutdown.
From the API doc, it looks like it will also return an error if n exceeds the limiter's burst size?
    "max depth", maxDepthReached,
    "size", common.StorageSize(totalBytesLoaded).String(),
    "elapsed", time.Since(startTime))
return
Not sure why the function returns here. If we return here it means the cache won't be warmed.
Correct, this is expected as this will be triggered during shutdown.
@cffls - I think this
72cb861 to faf031c (Compare)
Code Review

Issue found: documentation discrepancy in the default rate limit. The comments state one value while the actual default differs, a 10x difference that could cause confusion for users configuring the node.

Suggested fix: update the comments to match the actual default.

Checked for bugs and CLAUDE.md compliance.
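One way to keep this kind of comment/default drift from recurring is to derive the comment, the flag help text, and the value itself from a single constant. The flag name and identifiers below are hypothetical, not the PR's actual wiring:

```go
package main

import (
	"flag"
	"fmt"
)

// defaultPreloadBytesPerSec is the single source of truth: the comment,
// the help text, and the registered default all reference it, so a
// future change cannot reintroduce a 10x documentation mismatch.
const defaultPreloadBytesPerSec = 1 << 20 // 1MB/s

var trieCachePreloadRate = flag.Int64(
	"trie.preload.ratelimit", // hypothetical flag name
	defaultPreloadBytesPerSec,
	fmt.Sprintf("trie cache preload rate limit in bytes/sec (default %d, i.e. 1MB/s)",
		defaultPreloadBytesPerSec),
)
```
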
@claude addressed your comments.
Claude finished @pratikspatil024's task in 1m 6s. Re-reviewing PR.
Review Summary

The previous documentation discrepancy has been fixed. The PR now consistently uses 1MB/s as the default rate limit across all files.

Code Quality Assessment

Strengths:

Implementation Details:

No issues found in this re-review.
lucca30
left a comment
Looks good to me. I just proposed a small change, if it makes sense, about accounting for the key size alongside nodeData, whose typical size I'm not sure about.
// Apply rate limiting after reading, based on actual bytes read
if limiter != nil {
    if err := limiter.WaitN(c.ctx, len(nodeData)); err != nil {
Regarding len(nodeData) to measure the rate:

- The cache entry is composed of key + nodeData:

  key := append(accountHash.Bytes(), item.path...)
  addrCache.Set(key, nodeData)

- The key is basically accountHash + path.

I guess the overhead is super low, but the hash is 32 bytes, so if nodeData is something like 100 bytes it would be good to account for the key as well.
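A hypothetical helper sketching the suggested accounting, charging the limiter for the full cache entry (key + value) rather than nodeData alone; entrySize is an illustrative name, not from the PR:

```go
package main

// entrySize returns the number of bytes a cache entry actually occupies:
// the key (32-byte account hash plus the trie path) and the node data.
// With a small nodeData of ~100 bytes, the key adds roughly a third more,
// so charging the limiter only len(nodeData) undercounts the real rate.
func entrySize(accountHash [32]byte, path, nodeData []byte) int {
	// Mirrors the cache key construction:
	//   key := append(accountHash.Bytes(), item.path...)
	key := append(accountHash[:], path...)
	return len(key) + len(nodeData)
}
```

Usage in the loop under review would then be limiter.WaitN(c.ctx, entrySize(accountHash, item.path, nodeData)) instead of passing len(nodeData) alone.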
Description
Please provide a detailed description of what was done in this PR
Changes
Breaking changes
Please complete this section if any breaking changes have been made, otherwise delete it
Nodes audience
In case this PR includes changes that must be applied only to a subset of nodes, please specify how you handled it (e.g. by adding a flag with a default value...)
Checklist
Cross repository changes
Testing
Manual tests
Please complete this section with the steps you performed if you ran manual tests for this functionality, otherwise delete it
Additional comments
Please post additional comments in this section if you have them, otherwise delete it