feat: query multiple beacon nodes in parallel #751
base: unstable
```diff
@@ -520,4 +520,11 @@ pub struct Node {
     #[clap(flatten)]
     pub logging_flags: FileLoggingFlags,
+
+    #[clap(
+        long,
+        help = "Enable attestation data scoring across multiple beacon nodes.",
+        display_order = 0
+    )]
+    pub with_weighted_attestation_data: bool,
```
Member
Is parallel querying with weighted selection strictly better when we have multiple beacon nodes? If so, it should just be the implementation, not a user choice. We already have 54 parameters.
Member
Author
Fair point about the number of parameters... I wouldn't say it's strictly better. SSV describes it as:
So it's a trade-off between accuracy and latency, so it should probably stay an option.
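To make the trade-off concrete, the parallel-query-with-scoring idea might be sketched roughly as below. Everything here is hypothetical: the type and field names are not Anchor's actual API, each "node" is a canned response instead of an HTTP call, and the `score` function is only a placeholder for whatever weighting SSV actually applies.

```rust
use std::thread;

// Hypothetical stand-in for one beacon node's attestation data response.
// Field names are illustrative, not the repo's real types.
#[derive(Clone, Debug, PartialEq)]
struct AttestationData {
    slot: u64,
    source_epoch: u64,
    target_epoch: u64,
}

// Placeholder scoring: prefer responses whose source/target epochs and
// head slot are more recent. The real weighting may differ.
fn score(data: &AttestationData) -> u64 {
    data.source_epoch + data.target_epoch + data.slot
}

// Query every "node" in parallel and keep the highest-scoring response.
// The thread spawn stands in for a real network request per beacon node;
// waiting on all of them is where the extra latency comes from.
fn fetch_weighted(nodes: Vec<AttestationData>) -> Option<AttestationData> {
    let handles: Vec<_> = nodes
        .into_iter()
        .map(|resp| thread::spawn(move || resp))
        .collect();

    handles
        .into_iter()
        .filter_map(|h| h.join().ok())
        .max_by_key(score)
}
```

The latency cost is visible in the structure: the first-response strategy could return as soon as one `join` succeeds, whereas weighted selection must wait for every handle before it can compare scores.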
Member
Thanks, that makes sense. But then the question isn't whether a trade-off exists; it's whether operators should be required to understand it and make a choice.
Member
What we could do is collect data on mainnet that will help us make an informed decision. A practical, measure-first plan (no new operator config required) could look like this: we ship it as "shadow mode" for one release. That gives us mainnet distributions with near-zero functional risk, and we can decide later whether it should become the default implementation. What do you think?
Member
cc @dknopik
Member
Author
I like the idea of collecting metrics first and maybe hiding the flag for the first release, but I would still leave the functionality accessible if the users (staking pools) who requested it want to use it right away. For the general public we track metrics and see whether it should become the default implementation.
Member
Makes sense that some pools want to try this ASAP, but after thinking more about it, I'm worried we're framing it as a trade-off when it's really an unvalidated hypothesis. The real claim here is: waiting and doing more work to pick "better" attestation data will improve outcomes (head correctness / inclusion) more than it harms them via added delay and load. That needs evidence. Adding a new public flag effectively ships a production-path behavior change and asks operators to run the experiment for us.

I'd strongly prefer we measure first: ship instrumentation and run the weighted selection in shadow mode when 2+ beacon nodes are configured (compute scores and timings and export metrics, but keep the current selection for duties). Then we can decide default-or-kill based on real mainnet distributions, without permanently growing the CLI surface.

If we must unblock specific pools immediately, I'd rather keep it clearly experimental and temporary (e.g. hidden-from-help / config-only) plus mandatory metrics, with an explicit plan to revisit and remove or make default after N releases. Also, make it very clear that they are using it at their own risk.
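The shadow-mode proposal ("compute scores and timings, export metrics, keep the current selection for duties") could look something like the following sketch. Every name here is made up for illustration, the "current" behavior is modeled as taking the first response, and a real implementation would report through the node's metrics exporter rather than a plain struct.

```rust
// Hypothetical attestation data response; fields are illustrative only.
#[derive(Clone, Debug, PartialEq)]
struct AttestationData {
    slot: u64,
    source_epoch: u64,
    target_epoch: u64,
}

// Counters a real implementation would export as metrics.
#[derive(Default, Debug)]
struct ShadowMetrics {
    comparisons: u64,
    disagreements: u64,
}

// Placeholder weighting, standing in for the real scoring function.
fn score(d: &AttestationData) -> u64 {
    d.source_epoch + d.target_epoch + d.slot
}

// Shadow-mode selection: duties always use today's behavior (modeled here
// as "first response"), while the weighted choice is computed on the side
// purely to record how often the two strategies would disagree.
fn select_with_shadow(
    responses: &[AttestationData],
    metrics: &mut ShadowMetrics,
) -> Option<AttestationData> {
    let current = responses.first().cloned();
    if responses.len() >= 2 {
        // Shadow path: score everything, update metrics, change nothing.
        let weighted = responses.iter().max_by_key(|d| score(d)).cloned();
        metrics.comparisons += 1;
        if weighted != current {
            metrics.disagreements += 1;
        }
    }
    current // the unchanged selection is what actually gets used
}
```

The design point the reviewer is making falls out directly: because `current` is returned unconditionally, shipping this carries near-zero functional risk, and the `disagreements / comparisons` ratio observed on mainnet is exactly the evidence needed to decide whether weighted selection should become the default.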
```diff
 }
```