11 changes: 11 additions & 0 deletions anchor/client/src/cli.rs
@@ -532,4 +532,15 @@ pub struct Node {

#[clap(flatten)]
pub logging_flags: FileLoggingFlags,

#[clap(
long,
help = "Enable parallel querying and scoring of attestation data across multiple beacon nodes. \
When enabled, Anchor queries all configured beacon nodes simultaneously and selects \
the attestation data with the highest score based on checkpoint epochs and head block \
proximity. Only useful when multiple beacon nodes are configured via --beacon-nodes. \
Disabled by default.",
display_order = 0
)]
pub with_weighted_attestation_data: bool,
**@diegomrsantos** (Member) commented on Dec 23, 2025:
Is parallel querying with weighted selection strictly better when we have multiple beacon nodes? If so, then it should just be the implementation, not a user choice. We already have 54 parameters.

**Member Author** replied:

Fair point about the number of parameters... I wouldn't say it's strictly better. SSV describes it as:

> Improves attestation accuracy by scoring responses from multiple Beacon nodes based on epoch and slot proximity. Adds slight latency to duties but includes safeguards (timeouts, retries).

So it's a tradeoff between accuracy and latency, and it should probably stay as an option.
More about the trade-offs here

**Member** replied:

Thanks, that makes sense. But then the question isn't whether a tradeoff exists; it's whether operators should be required to understand it and make a choice.

  1. The latency concern is already addressed by design; the timeouts enforce bounded latency regardless of beacon node response times.
  2. The tradeoff isn't operator-dependent: Unlike some configuration choices (e.g., "how much disk space to allocate"), there's no operator-specific context that changes the optimal decision here:
    • 1 beacon node → WAD provides no benefit (nothing to compare)
    • Multiple beacon nodes → WAD provides better accuracy with bounded latency
  3. Operators shouldn't need to read SSV WAD documentation: To make an informed choice about this flag, an operator would need to understand attestation scoring algorithms, epoch proximity, timeout implications, etc. That's an unreasonable cognitive load for what should be "run my validator correctly."

**Member** replied:

What we could do is collect data on mainnet to help us make an informed decision.

A practical measure-first plan (no new operator config required) could look like this:

We ship it as “shadow mode” for one release:
• When --beacon-nodes has 2+ entries, run the parallel fetch/scoring in the background, but still use the current behavior for the actual duty result.
• Record metrics about what would have been selected and how long it took.

That gives us mainnet distributions with near-zero functional risk, and we can decide later whether it should become the default implementation. What do you think?
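A rough sketch of the shape shadow mode could take. Everything here is hypothetical (names, types, and the stand-in scoring function); a real version would hang off the duty pipeline and export Prometheus metrics instead of logging:

```rust
// Hypothetical shadow-mode sketch: the weighted selection runs only for
// metrics, while duties keep using the current first-response behavior.
#[derive(Debug, Clone)]
struct AttestationData {
    slot: u64,
}

// Stand-in scoring function; a real one would weigh checkpoint epochs too.
fn score(d: &AttestationData, current_slot: u64) -> u64 {
    100u64.saturating_sub(current_slot.saturating_sub(d.slot))
}

fn fetch_for_duty(responses: &[AttestationData], current_slot: u64) -> Option<AttestationData> {
    // Current behavior: the first successful response is what the duty uses.
    let chosen = responses.first().cloned();

    // Shadow path (only when 2+ beacon nodes responded): compute what the
    // weighted selection *would* have picked and record whether it differs.
    if responses.len() >= 2 {
        if let (Some(current), Some(best)) = (
            chosen.as_ref(),
            responses.iter().max_by_key(|d| score(d, current_slot)),
        ) {
            let differs = current.slot != best.slot;
            // A real implementation would increment a counter here instead.
            eprintln!("shadow selection differs from current: {differs}");
        }
    }
    chosen
}

fn main() {
    let responses = vec![
        AttestationData { slot: 98 },  // current behavior picks this
        AttestationData { slot: 100 }, // weighted selection would pick this
    ];
    let used = fetch_for_duty(&responses, 100).unwrap();
    assert_eq!(used.slot, 98); // duties unchanged; only the metric differs
}
```

Because the duty result is unchanged, the functional risk is limited to the extra queries and scoring work.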


**Member Author** replied:

I like the idea of collecting metrics first and maybe hiding the flag for the first release, but I would still leave the functionality accessible in case the users (staking pools) who requested it want to use it right away. For the general public, we track metrics and see whether it should become the default implementation.

**@diegomrsantos** (Member) replied on Dec 23, 2025:

Makes sense that some pools want to try this ASAP - but after thinking more about it, I’m worried we’re framing it as a trade-off when it’s really an unvalidated hypothesis. So the real claim here is: waiting/doing more work to pick "better" attestation data will improve outcomes (head correctness / inclusion) more than it harms them via added delay / load. That needs evidence. Adding a new public flag effectively ships a production-path behavior change and asks operators to run the experiment for us.

I’d strongly prefer we measure first: ship instrumentation + run the weighted selection in shadow mode when 2+ beacon nodes are configured (compute scores/timings and export metrics, but keep the current selection for duties). Then we can decide default/kill based on real mainnet distributions - without permanently growing the CLI surface.

If we must unblock specific pools immediately, I'd rather keep it clearly experimental/temporary (e.g. hidden-from-help / config-only) plus mandatory metrics, with an explicit plan to revisit and either remove it or make it the default after N releases. We should also make very clear that they are using it at their own risk.

}
5 changes: 5 additions & 0 deletions anchor/client/src/config.rs
@@ -76,6 +76,8 @@ pub struct Config {
pub operator_dg: bool,
/// Number of epochs to monitor for twins after grace period
pub operator_dg_wait_epochs: u64,
/// Enable attestation data scoring across multiple beacon nodes
pub with_weighted_attestation_data: bool,
/// Whether to check for matching checkpoint roots in QBFT.
pub strict_mfp: bool,
}
@@ -123,6 +125,7 @@ impl Config {
disable_latency_measurement_service: false,
operator_dg: false,
operator_dg_wait_epochs: 2,
with_weighted_attestation_data: false,
strict_mfp: false,
}
}
@@ -279,6 +282,8 @@ pub fn from_cli(cli_args: &Node, global_config: GlobalConfig) -> Result<Config,
config.processor.queue_size.insert(queue, size);
}

config.with_weighted_attestation_data = cli_args.with_weighted_attestation_data;

Ok(config)
}

1 change: 1 addition & 0 deletions anchor/client/src/lib.rs
@@ -662,6 +662,7 @@ impl Client {
beacon_nodes.clone(),
executor.clone(),
spec.clone(),
config.with_weighted_attestation_data,
);

// We use `SLOTS_PER_EPOCH` as the capacity of the block notification channel, because