Changes from 6 commits
1 change: 1 addition & 0 deletions Cargo.lock

Some generated files are not rendered by default.

8 changes: 8 additions & 0 deletions anchor/client/src/lib.rs
@@ -378,6 +378,12 @@ impl Client {
        // Create fork phase channel for fork transition events
        let (fork_phase_tx, fork_phase_rx) = async_broadcast::broadcast(16);

        // Create shared fork lifecycle state for cross-component fork awareness
        let fork_lifecycle = fork::SharedForkLifecycle::new(fork::ForkLifecycle::Normal {
            current: initial_fork_config.fork,
            domain_type: initial_fork_config.domain_type,
        });

        // Start fork monitor to log fork transitions and send ForkPhase events
        fork::monitor::spawn(
            fork_schedule.clone(),
@@ -386,6 +392,7 @@ impl Client {
            spec.seconds_per_slot,
            executor.clone(),
            fork_phase_tx,
            fork_lifecycle.clone(),
        );

        // Start validator index syncer
@@ -552,6 +559,7 @@ impl Client {
            executor.clone(),
            spec.clone(),
            fork_phase_rx,
            fork_lifecycle,
        )
        .await
        .map_err(|e| format!("Unable to start network: {e}"))?;
1 change: 1 addition & 0 deletions anchor/common/fork/Cargo.toml
@@ -6,6 +6,7 @@ edition = { workspace = true }

[dependencies]
async-broadcast = { workspace = true }
parking_lot = { workspace = true }
serde = { workspace = true }
slot_clock = { workspace = true }
ssv_types = { workspace = true }
2 changes: 2 additions & 0 deletions anchor/common/fork/src/lib.rs
@@ -20,9 +20,11 @@
//! - **"What"**: Each subsystem queries the active fork to determine behavior

mod fork;
mod lifecycle;
pub mod monitor;
mod schedule;

pub use fork::{ALAN_TOPIC_PREFIX, Fork};
pub use lifecycle::{ForkLifecycle, SharedForkLifecycle};
pub use monitor::{ForkPhase, ForkPhaseSender};
pub use schedule::{FORK_PREPARATION_EPOCHS, ForkConfig, ForkSchedule, SUBSEQUENT_WINDOW_SLOTS};
200 changes: 200 additions & 0 deletions anchor/common/fork/src/lifecycle.rs
@@ -0,0 +1,200 @@
//! Fork lifecycle state management.
//!
//! Provides [`ForkLifecycle`] and [`SharedForkLifecycle`] for tracking the current
//! fork transition state across all components. The [`ForkMonitor`](crate::monitor)
//! is the sole writer; all other components read via [`SharedForkLifecycle`].

use std::sync::Arc;

use parking_lot::RwLock;
use ssv_types::domain_type::DomainType;

use crate::Fork;

/// Fork lifecycle state. Updated only by ForkMonitor.
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum ForkLifecycle {
Member:

Thinking of ways we could simplify this further @dknopik. One way would be to convert this enum to a flat struct:

    pub struct ForkState {
        pub domain_type: DomainType,
        pub current_fork: Fork,
        pub in_transition: bool,
    }

since network components seem to care only about the current domain type and whether the network is in a fork transition period (warm-up/grace period) vs. normal operation.

Another idea would be to extend ForkSchedule with a current field:

    pub struct ForkSchedule {
        configs: BTreeMap<Fork, ForkConfig>,
        network_name: String,
        // Runtime: updated by ForkMonitor, read by network components
        current: RwLock<ActiveFork>,
    }

    struct ActiveFork {
        fork: Fork,
        in_transition: bool,
    }

The benefit of this is killing the need for a new ForkState module; the con is that it mixes config (the current ForkSchedule struct) with runtime characteristics (the new current field).

Or perhaps you wanted to take a look and come up with something yourself. If so, feel free to let me know if you want me to try out whatever you're thinking.

@diegomrsantos (Member, Author) commented on Feb 23, 2026:

Good call on simplification. I'd keep runtime state separate from ForkSchedule (it's config-only today), so I don't think adding a mutable current field there is the right direction.
I'm fine simplifying the consumer API with an is_transition() helper, but I'd keep ForkLifecycle as an enum for now so we retain WarmUp vs GracePeriod semantics if/when we need them.

    /// Operating on a single fork. No transition in progress.
    ///
    /// Used in two scenarios:
    /// - Pre-fork: only one fork exists (e.g., Alan at genesis).
    /// - Post-grace-period: the fork transition is complete and only the current fork's context is
    ///   relevant.
    Normal {
        current: Fork,
        domain_type: DomainType,
    },

    /// Preparing for an upcoming fork. Dual-subscribing to new topics.
    ///
    /// Both forks' contexts are relevant — peers subscribed to either
    /// the current or upcoming fork are useful.
    WarmUp {
Member:

What about collapsing the WarmUp and GracePeriod variants into something like

    Transitioning {
        current,
        domain_type,
    }

Just an idea, since consumers don't seem to care whether they're in WarmUp or GracePeriod, only that current and domain_type are correct; and the upcoming/previous fields seem unused.

        current: Fork,
        upcoming: Fork,
        domain_type: DomainType,
    },

    /// Fork activated but grace period still active. Keeping old subscriptions
    /// to catch late messages from the previous fork.
    ///
    /// Both forks' contexts are relevant — peers subscribed to either
    /// the current or previous fork are still useful.
    GracePeriod {
        current: Fork,
        previous: Fork,
        domain_type: DomainType,
    },
}

impl ForkLifecycle {
    /// Returns the current active fork.
    pub fn current_fork(&self) -> Fork {
        match self {
            Self::Normal { current, .. }
            | Self::WarmUp { current, .. }
            | Self::GracePeriod { current, .. } => *current,
        }
    }

    /// Returns the domain type for the current fork.
    pub fn domain_type(&self) -> DomainType {
        match self {
            Self::Normal { domain_type, .. }
            | Self::WarmUp { domain_type, .. }
            | Self::GracePeriod { domain_type, .. } => *domain_type,
        }
    }
}

/// Shared fork lifecycle state, readable by all components.
///
/// Updated only by [`ForkMonitor`](crate::monitor). Uses [`parking_lot::RwLock`]
/// for interior mutability. Writes are brief and rare (only on fork transitions),
/// so contention is negligible.
#[derive(Clone, Debug)]
pub struct SharedForkLifecycle(Arc<RwLock<ForkLifecycle>>);

impl SharedForkLifecycle {
    /// Create a new shared lifecycle with the given initial state.
    pub fn new(lifecycle: ForkLifecycle) -> Self {
        Self(Arc::new(RwLock::new(lifecycle)))
    }

    /// Read the full lifecycle state.
    pub fn get(&self) -> ForkLifecycle {
        self.0.read().clone()
    }

    /// Update the lifecycle state. Called only by ForkMonitor.
    pub fn set(&self, lifecycle: ForkLifecycle) {
        *self.0.write() = lifecycle;
    }

    /// Convenience: get the current domain type without cloning the full enum.
    pub fn domain_type(&self) -> DomainType {
        self.0.read().domain_type()
    }

    /// Convenience: get the current fork without cloning the full enum.
    pub fn current_fork(&self) -> Fork {
        self.0.read().current_fork()
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    const ALAN_DOMAIN: DomainType = DomainType([0, 0, 0, 1]);
    const BOOLE_DOMAIN: DomainType = DomainType([0, 0, 0, 2]);

    #[test]
    fn shared_fork_lifecycle_get_set_roundtrip() {
        let shared = SharedForkLifecycle::new(ForkLifecycle::Normal {
            current: Fork::Alan,
            domain_type: ALAN_DOMAIN,
        });

        assert_eq!(
            shared.get(),
            ForkLifecycle::Normal {
                current: Fork::Alan,
                domain_type: ALAN_DOMAIN,
            }
        );

        shared.set(ForkLifecycle::GracePeriod {
            current: Fork::Boole,
            previous: Fork::Alan,
            domain_type: BOOLE_DOMAIN,
        });

        assert_eq!(
            shared.get(),
            ForkLifecycle::GracePeriod {
                current: Fork::Boole,
                previous: Fork::Alan,
                domain_type: BOOLE_DOMAIN,
            }
        );
    }

    #[test]
    fn current_fork_returns_correct_fork_for_each_variant() {
        let normal = ForkLifecycle::Normal {
            current: Fork::Alan,
            domain_type: ALAN_DOMAIN,
        };
        assert_eq!(normal.current_fork(), Fork::Alan);

        let warmup = ForkLifecycle::WarmUp {
            current: Fork::Alan,
            upcoming: Fork::Boole,
            domain_type: ALAN_DOMAIN,
        };
        assert_eq!(warmup.current_fork(), Fork::Alan);

        let grace = ForkLifecycle::GracePeriod {
            current: Fork::Boole,
            previous: Fork::Alan,
            domain_type: BOOLE_DOMAIN,
        };
        assert_eq!(grace.current_fork(), Fork::Boole);
    }

    #[test]
    fn domain_type_returns_correct_type_for_each_variant() {
        let normal = ForkLifecycle::Normal {
            current: Fork::Alan,
            domain_type: ALAN_DOMAIN,
        };
        assert_eq!(normal.domain_type(), ALAN_DOMAIN);

        let warmup = ForkLifecycle::WarmUp {
            current: Fork::Alan,
            upcoming: Fork::Boole,
            domain_type: ALAN_DOMAIN,
        };
        assert_eq!(warmup.domain_type(), ALAN_DOMAIN);

        let grace = ForkLifecycle::GracePeriod {
            current: Fork::Boole,
            previous: Fork::Alan,
            domain_type: BOOLE_DOMAIN,
        };
        assert_eq!(grace.domain_type(), BOOLE_DOMAIN);
    }

    #[test]
    fn shared_convenience_methods_match_full_get() {
        let shared = SharedForkLifecycle::new(ForkLifecycle::WarmUp {
            current: Fork::Alan,
            upcoming: Fork::Boole,
            domain_type: ALAN_DOMAIN,
        });

        assert_eq!(shared.current_fork(), Fork::Alan);
        assert_eq!(shared.domain_type(), ALAN_DOMAIN);
    }
}