Conversation
**Motivation** - the function `isValidBlsToExecutionChange()` requires a state, which it only uses to get the config and validator - but we will not have `CachedBeaconStateAllForks` after #8650 **Description** - pass config and validator to this function instead. Part of #8657 --------- Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
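For illustration, a minimal sketch of the reshaped signature (the parameter list and local types below are stand-ins, not the exact lodestar types):

```typescript
// Local stand-in types; the real ones live in @lodestar/config and @lodestar/state-transition.
type BeaconConfig = Record<string, unknown>;
type Validator = {withdrawalCredentials: Uint8Array};
type SignedBLSToExecutionChange = {message: {validatorIndex: number}; signature: Uint8Array};

// before: the function took the whole CachedBeaconStateAllForks just to read config + validator
// after: only the pieces it actually needs are passed in
declare function isValidBlsToExecutionChange(
  config: BeaconConfig,
  validator: Validator,
  change: SignedBLSToExecutionChange
): boolean;
```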
…lashing() (#8744) **Motivation** - the 2 functions `assertValidAttesterSlashing()` and `assertValidProposerSlashing()` require the full `CachedBeaconStateAllForks` but they don't need to - this PR simplifies them so that they will work with the future BeaconStateView once we integrate the native state-transition, see #8650 **Description** pass the required properties from the state instead - `assertValidAttesterSlashing()`: pass config, stateSlot, and validators length instead of the whole `CachedBeaconStateAllForks` - `assertValidProposerSlashing()`: pass config, index2pubkey, stateSlot, and proposer instead of the whole `CachedBeaconStateAllForks`. Part of #8657 --------- Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation** - the code to get attestations to include in a pre-electra block is complex and makes a migration to `IBeaconStateView` hard, so we need to remove it - it's also a good chance to upgrade some e2e tests to run from `electra` to `fulu`, which is good preparation for `gloas` **Description** - throw an error for `getAttestationsForBlockPreElectra()` in `AggregatedAttestationPool` - e2e tests start from electra - dev command starts from electra. Part of #8658 --------- Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com> Co-authored-by: Nico Flaig <nflaig@protonmail.com>
…8742) **Motivation** Enables tests that need DB persistence and state resumption across node restarts. **Description** - Use `options.db?.name ?? tmpDir.name` instead of hardcoded `tmpDir.name` in `getDevBeaconNode()`. - Added `resumeFromDb` option that loads anchor state from existing DB via `initStateFromDb()` instead of creating fresh genesis. This preserves the finalized epoch from previous runs. **Use case**: Backfill sync tests that restart nodes mid-sync need to resume from persisted state, not start fresh from epoch 0.
**Motivation** - When `lodestar-z` happens, `BeaconStateAllForks` will be a blocker, and the `build()` method in `ShufflingCache` depends on it. - Post-Fulu, the proposer lookahead is stored in `BeaconState`, requiring shufflings synchronously during epoch transitions, making the async `build()` pattern no longer viable. **Description** - Remove the `build()` method from the `IShufflingCache` interface and the `ShufflingCache` class - Add `set()` to the `IShufflingCache` interface to add shufflings - Remove `asyncShufflingCalculation` Closes #8653 **AI Assistance Disclosure** - [x] External Contributors: I have read the [contributor guidelines](https://github.com/ChainSafe/lodestar/blob/unstable/CONTRIBUTING.md#ai-assistance-notice) and disclosed my usage of AI below. Used Claude to understand how ShufflingCache avoids recomputation. --------- Co-authored-by: matthewkeil <me@matthewkeil.com>
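A rough sketch of the interface change (the shapes below are illustrative stand-ins, not the exact lodestar types):

```typescript
// Stand-in for the real EpochShuffling type in @lodestar/state-transition.
type EpochShuffling = {epoch: number; activeIndices: Uint32Array; committees: Uint32Array[][]};

interface IShufflingCache {
  // build() is removed: shufflings are now computed synchronously during the epoch
  // transition, since the post-Fulu proposer lookahead in BeaconState needs them immediately
  get(epoch: number, decisionRoot: string): EpochShuffling | null;
  // new: callers insert fully computed shufflings into the cache
  set(shuffling: EpochShuffling, decisionRoot: string): void;
}
```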
**Motivation** - we never mutate state inside beacon-node, so we should not do the `clone()` there; the same goes for `commit()` - it's not a problem for the ts BeaconStateView, and not even a big performance issue for the native BeaconStateView; it's simply unnecessary, and it's a principle not to mutate any BeaconStates in beacon-node - this means we do not have to implement `clone()` and `commit()` in the BeaconStateView interface **Description** - remove `clone()` in state caches and the `regen.getState()` api - remove `commit()` - remove unused functions - move the `state.clone()` of the rewards api inside its implementation - simplify `computeBlockRewards()` Closes #8725 --------- Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
## Summary Backport of #8781 to unstable branch. - Fix JavaScript falsy check bug where `!0 === true` caused slot 0 to incorrectly fail validation - Fix inverted blobsCount logic in dataColumnResponseValidation.ts ## Problem During custody backfill for epoch 0 (slots 0-7), the `data_column_sidecars_by_range` RPC handler was throwing: ``` Can not parse the slot from block bytes ``` This was caused by the slot parsing check using JavaScript's falsy check: ```typescript if (!slot) throw new Error("Can not parse the slot from block bytes"); ``` Since `!0 === true` in JavaScript, slot 0 (genesis block) incorrectly triggered this error. ## Changes 1. **Root cause fix** (`sszBytes.ts`): Changed `if (!slot)` to `if (slot === null)` for explicit null check 2. **Logic fix** (`dataColumnResponseValidation.ts`): Changed `blobsCount > 0` to `blobsCount === 0` 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-authored-by: Ubuntu <ubuntu@ethereum-node1.local> Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
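For illustration, a minimal contrast of the two checks (self-contained, not the actual `sszBytes.ts` code):

```typescript
// Buggy: slot 0 is falsy, so the genesis block is rejected even though parsing succeeded.
function checkSlotFalsy(slot: number | null): void {
  if (!slot) throw new Error("Can not parse the slot from block bytes");
}

// Fixed: only throw when parsing actually failed.
function checkSlotNull(slot: number | null): void {
  if (slot === null) throw new Error("Can not parse the slot from block bytes");
}
```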
Since we switched to pnpm, all the files inside `node_modules/.bin` are shell scripts, which can't be executed via `node`.
**Motivation** Support deleting and putting in a single db operation within a single repository. This partially covers #8244 and enables the next step of atomic writes across db repositories. **Description** - Allow bundling `put` and `delete` in a single batch **Steps to test or reproduce** - Run all tests --------- Co-authored-by: Nico Flaig <nflaig@protonmail.com>
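A hypothetical sketch of what a mixed batch could look like (the operation shape and `batch()` method below are illustrative, not the exact lodestar db API):

```typescript
type DbOperation<K, V> = {type: "put"; key: K; value: V} | {type: "delete"; key: K};

// Puts and deletes are bundled into one write batch so they are applied atomically.
async function applyBatch<K, V>(
  repo: {batch(ops: DbOperation<K, V>[]): Promise<void>},
  ops: DbOperation<K, V>[]
): Promise<void> {
  await repo.batch(ops);
}
```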
**Motivation** - after we migrate to the native state-transition, we will no longer be able to query EpochCache methods **Description** - use our ShufflingCache to serve these methods instead; the list includes: - getIndexedAttestation() - getAttestingIndices() - getBeaconCommittee() - getBeaconCommittees() Closes #8655 blocked by #8721 --------- Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation** - our benchmark is super flaky and CI is rarely green - our snappy benchmark was only useful in the scope of PR #6483; it's not worth keeping it running on CI - we want to only run benchmarks for functions developed by us to save CI time, see #8664 **Description** - skip snappy benchmark Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation** - found a failing benchmark in [CI](https://github.com/ChainSafe/lodestar/actions/runs/21348436170/job/61440256289?pr=8732) but it's not lodestar code - see also #8786 **Description** - skip Map benchmark Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation** This PR adds a directory with specification references. These are used to map specification items (configs, presets, functions, etc.) to client implementations (code in Lodestar). These specification references are meant to (1) help developers keep track of specification changes and (2) make it easier for third parties (e.g. EF Protocol Security) to verify clients adhere to the specifications. Our team is working to do this for all clients. * Consensys/teku#9731 * OffchainLabs/prysm#15592 * sigp/lighthouse#8549 *Note*: The function mappings are the only weak spot. It's quite difficult to map some of these because of implementation differences and the fact that not everything is implemented (e.g. Gloas functions). The specref functions will most likely require some additional work, but this PR does identify most functions. **AI Assistance Disclosure** - [x] External Contributors: I have read the [contributor guidelines](https://github.com/ChainSafe/lodestar/blob/unstable/CONTRIBUTING.md#ai-assistance-notice) and disclosed my usage of AI below. Yes, I used Claude Code to identify/map most of these. Fixes: #7477 --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> Co-authored-by: Nico Flaig <nflaig@protonmail.com>
**Motivation** This PR is follow up from: * #8778 **Description** This PR enables a few optional features: * `auto_add_missing_entries`: Add missing spec items to the relevant mapping files. It _will not_ add missing spec items if there's an exception for that spec item. * `auto_standardize_names`: Automatically add `#fork` tags to specref names, for explicitness. * `require_exceptions_have_fork`: Require exceptions include a `#fork` tag. It also removes the KZG functions (which clients do not implement) from the `functions.yml` file. @ensi321 could you give me a list of other items we want to remove? And it also [fixes](da91b29) the search query for `get_committee_assignment` which was moved before the first PR was merged. **AI Assistance Disclosure** - [x] External Contributors: I have read the [contributor guidelines](https://github.com/ChainSafe/lodestar/blob/unstable/CONTRIBUTING.md#ai-assistance-notice) and disclosed my usage of AI below. I used AI to remove the KZG functions.
**Motivation** - currently the state repositories (StateArchiveRepository + CheckpointStateRepository) are tightly coupled with `BeaconStateAllForks`, which makes it hard to migrate to a generic BeaconStateView (to come later, see #8650) - so we need these repos to work with Uint8Array and let consumers decide how to create a type/view of BeaconState from there **Description** - extract a newly created `BinaryRepository` that the abstract `Repository` now extends - then let `StateArchiveRepository + CheckpointStateRepository` extend from `BinaryRepository` - the benefit is that consumers can only use methods in `BinaryRepository`; it's a compile-time check, vs calling methods in `Repository` and throwing a runtime error, which makes errors harder to detect - so we only need to validate this PR via compilation instead of having to launch a node to confirm, where errors are tricky to detect - update consumers accordingly Closes #8729 --------- Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
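A minimal sketch of the split described above (hypothetical shapes, not the actual lodestar db classes):

```typescript
// Binary layer: consumers of the state repos only ever see raw bytes.
abstract class BinaryRepository<K> {
  abstract getBinary(key: K): Promise<Uint8Array | null>;
  abstract putBinary(key: K, value: Uint8Array): Promise<void>;
}

// Typed layer: other repositories keep encode/decode on top of the binary layer.
abstract class Repository<K, T> extends BinaryRepository<K> {
  abstract encodeValue(value: T): Uint8Array;
  abstract decodeValue(bytes: Uint8Array): T;

  async get(key: K): Promise<T | null> {
    const bytes = await this.getBinary(key);
    return bytes === null ? null : this.decodeValue(bytes);
  }

  async put(key: K, value: T): Promise<void> {
    await this.putBinary(key, this.encodeValue(value));
  }
}
```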
**Motivation** - it's not easy to find logs of `archiveBlocks` for a specified current epoch or finalized epoch **Description** - add log context to it --------- Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com> Co-authored-by: Nico Flaig <nflaig@protonmail.com>
- Introduce builder entity to beacon state - add builder deposit, withdrawal and withdrawal sweep - bump spec test version to `v1.7.0-alpha.1` - skip certain fork choice spec tests as there are retroactive changes to the proposer boost spec. Basically covers the gloas beacon-chain spec changes from v1.6.1 to v1.7.0-alpha.1 https://github.com/ethereum/consensus-specs/compare/v1.6.0...v1.7.0-alpha.1?path=specs/gloas/beacon-chain.md Spec ref: ethereum/consensus-specs#4788 --------- Co-authored-by: Nico Flaig <nflaig@protonmail.com>
This just adds assertions to check prior withdrawals against the limit, which is done in the spec [here](https://github.com/ethereum/consensus-specs/blob/ee5d067abf6486b77753e7c2928a81cf50972c75/specs/gloas/beacon-chain.md?plain=1#L862). This shouldn't happen unless there is a bug in our implementation, but it's good to have these as sanity checks.
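A minimal sketch of the kind of sanity check added (the names and limit parameter are illustrative, not the actual spec constants):

```typescript
function assertWithdrawalsWithinLimit(priorWithdrawalsCount: number, maxWithdrawalsPerPayload: number): void {
  // Should never trigger unless there is a bug upstream in withdrawal processing.
  if (priorWithdrawalsCount > maxWithdrawalsPerPayload) {
    throw new Error(`Prior withdrawals ${priorWithdrawalsCount} exceed limit ${maxWithdrawalsPerPayload}`);
  }
}
```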
**Motivation** During the v1.39.0 release, we decided the release process should make it clear that release notes must also be added to the GitHub release page. **Description** This PR more clearly outlines the required release step of publishing the release notes to the GitHub release page. --------- Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com> Co-authored-by: wemeetagain <1348242+wemeetagain@users.noreply.github.com>
Closes #8698 **Motivation** Infrastructure operators want to easily see at a glance which validator indices are connected to their beacon node. **Description** - Add `GET /eth/v1/lodestar/monitored_validators` API endpoint that returns an array of the validator indices currently being monitored - Add info-level logs when validators register/unregister from the monitor, including the full list of monitored indices: - `Validator registered to monitor index=X, total=Y, indices=0,1,2,...` - `Validator removed from monitor index=X, total=Y, indices=0,1,3,...` **Usage** API endpoint: ```bash curl http://localhost:9596/eth/v1/lodestar/monitored_validators # Response: {"data":[0,1,2,3,4,5,6,7]} ``` For dashboard integration (e.g. Grafana), you can use the JSON API datasource plugin to poll this endpoint and display the validator indices. **Design decisions** - Used the debug namespace instead of lodestar because it's enabled by default (no need for --rest.namespace all) - Used an API + logs approach instead of metrics to avoid cardinality issues with validator index labels - Logs include the full indices list so operators can see the complete state at each change without calling the API - Add a static metric with validator indices as label <img width="810" height="134" alt="image" src="https://github.com/user-attachments/assets/3f5e3ed6-2e23-470f-bf4e-7692cc744cc2" /> <img width="696" height="31" alt="image" src="https://github.com/user-attachments/assets/cf0addfc-58cf-4efb-8638-6d94df61948e" /> **AI Assistance Disclosure** - [x] External Contributors: I have read the [contributor guidelines](https://github.com/ChainSafe/lodestar/blob/unstable/CONTRIBUTING.md#ai-assistance-notice) and disclosed my usage of AI below. Used Claude Code to assist with implementation and code exploration. --------- Co-authored-by: Nico Flaig <nflaig@protonmail.com>
**Motivation** - towards #8408 **Description** - Make `writeBlockInputToDb` async with block import/head update - Add a job queue to trigger writes (one at a time) - For serving unfinalized blocks, check the block input cache first, before checking the hot db - For serving unfinalized block blob sidecars, check the block input cache first, before checking the hot db - see the new `chain.getBlobSidecars` and `chain.getSerializedBlobSidecars` -- note: only serves all or none - new chain methods used in API and reqresp - note: the old db method is still used in by_range - For serving unfinalized block data column sidecars, check the block input cache first, before checking the hot db - see the new `chain.getDataColumnSidecars` and `chain.getSerializedDataColumnSidecars` - Let the `writeBlockInputToDb` process prune the block input cache after its run - Remove the `eagerPersistBlock` option, since it's now irrelevant
## Description
Fixes the failing lightclient e2e test after upgrading to electra.
### Root Cause
The test incorrectly assumed sync committees would have alternating
pubkeys `[pk0, pk1, pk0, pk1, ...]`:
```typescript
const committeePubkeys = Array.from({length: SYNC_COMMITTEE_SIZE}, (_, i) =>
  i % 2 === 0 ? pubkeys[0] : pubkeys[1]
);
```
However, sync committees are computed using a **weighted random
shuffle** based on:
- A seed derived from the state
- Validator effective balances
### Why it broke post-electra
In `getNextSyncCommitteeIndices()`, the shuffle parameters changed for
electra:
```typescript
if (fork >= ForkSeq.electra) {
maxEffectiveBalance = MAX_EFFECTIVE_BALANCE_ELECTRA; // Different!
randByteCount = 2; // Different! (was 1)
}
```
The shuffle algorithm now uses 2 random bytes instead of 1, producing a
completely different committee distribution even with the same
validators.
### Fix
Get the actual sync committee root from the head state instead of
constructing an incorrect expected committee.
Closes #8723
---
> [!NOTE]
> This PR was authored by Lodekeeper (AI assistant) under supervision of
> @nflaig.
---------
Co-authored-by: lodekeeper <lodekeeper@users.noreply.github.com>
…#8841) ## Description Fixes flaky lightclient e2e test failures introduced in #8825. ### Root Cause The `waitForBestUpdate()` function can take up to **7000ms**: - 5 slots waiting for `lightClientOptimisticUpdate` event (5000ms) - 2 slots sleep (2000ms) But the default vitest timeout is **5000ms**, causing race condition failures on slower CI machines. ### Evidence Local test run showed `getLightClientUpdatesByRange()` taking **7384ms**: ``` ✓ getLightClientUpdatesByRange() 7384ms ✓ getLightClientOptimisticUpdate() 5982ms ✓ getLightClientCommitteeRoot() for the 1st period 6003ms ``` ### Fix Increases timeout to 10s for tests using `waitForBestUpdate()`. --- > [!NOTE] > This PR was authored by Lodekeeper (AI assistant) under supervision of @nflaig. --------- Co-authored-by: lodekeeper <lodekeeper@users.noreply.github.com> Co-authored-by: Nico Flaig <nflaig@protonmail.com>
…8829) ## Description Fixes an uncaughtException crash when connecting to a peer with a malformed public key. ### Error ``` uncaughtException: Point of length 294 was invalid. Expected 33 compressed bytes or 65 uncompressed bytes at fromBytes (node_modules/@noble/curves/esm/abstract/weierstrass.js:594:23) at uncompressPublicKey (node_modules/@chainsafe/enr/lib/defaultCrypto.js:17:38) at computeNodeId (packages/beacon-node/lib/network/subnets/interface.js:12:37) at PeerManager.onLibp2pPeerConnect (packages/beacon-node/lib/network/peers/peerManager.js:117:28) ``` ### Fix Wrap `computeNodeId(remotePeer)` in a try-catch. If computing the node ID fails (due to invalid public key), log at debug level and disconnect the peer gracefully with a GOODBYE. ### Notes This is a defensive fix - we shouldn't crash the node because one peer has malformed crypto data. The peer is simply disconnected. Closes #8302 --- *This PR was authored with AI assistance (lodekeeper using Claude Opus 4).* --------- Co-authored-by: lodekeeper <lodekeeper@users.noreply.github.com>
**Motivation** - it takes effort to maintain the obsolete unused state caches, because we would need to enhance them for both BeaconStateView and glamsterdam **Description** - remove the in-memory state caches - remove the nHistoricalState flag; its only usage is to instantiate the correct state caches, and it has defaulted to true for a long time anyway. --------- Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation** - Resolves #8404 - Replaces #8408 **Description** - Add condition to `blockInput.hasAllData` to trigger if the number of columns is enough to reconstruct (gte `NUMBER_OF_COLUMNS / 2`) - Add `blockInputColumns.hasComputedAllData`, used to await full reconstruction during `writeBlockInputToDb`
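A minimal sketch of the new threshold condition (the `receivedColumns` count is illustrative; `NUMBER_OF_COLUMNS` is 128 on mainnet):

```typescript
const NUMBER_OF_COLUMNS = 128;

// Half of the data columns is enough to reconstruct the full set via erasure coding,
// so the block input can be considered to have all data once this threshold is met.
function hasEnoughColumnsToReconstruct(receivedColumns: number): boolean {
  return receivedColumns >= NUMBER_OF_COLUMNS / 2;
}
```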
**Motivation** Discussed in a Bun discord thread: since we are no longer targeting Bun in our main development path, we can disable unit tests until there's a bit more maturity upstream. This was also discussed on standup Feb 3, 2026: #8843 **Description** This PR disables `test-bun.yml` by commenting out the yaml.
Adds local development artifacts to .gitignore: - `.venv/` - Python virtual environments (used for spec tools like ethspecify) - `checkpoint_states/` - Checkpoint state files from local testing Co-authored-by: lodekeeper <lodekeeper@users.noreply.github.com>
Add support for GossipSub direct peers, allowing nodes to maintain permanent mesh connections without GRAFT/PRUNE negotiation. This enables proper peering agreements with other clients like Nimbus. Direct peers can be configured via CLI: --directPeers /ip4/192.168.1.1/tcp/9000/p2p/16Uiu2HAm... Both peers must configure each other as direct peers for the feature to work properly (reciprocal configuration). **Motivation** Direct peers are not supposed to send GRAFT/PRUNE messages. Pointing other CLs to Lodestar makes those CLs receive these messages. Nimbus, in response, spams its logs with a warning on each GRAFT/PRUNE message. **Description** The underlying p2p library already supports direct peers. This PR plumbs the feature through to the cmdline. **AI Assistance Disclosure** - [x] External Contributors: I have read the [contributor guidelines](https://github.com/ChainSafe/lodestar/blob/unstable/CONTRIBUTING.md#ai-assistance-notice) and disclosed my usage of AI below. This code was generated by Claude as disclosed in the Co-Author line. Frankly, given that this is mostly plumbing there wasn't much to object to. More importantly, I pushed a modified binary to my production environment and verified that: * Lodestar launches with the new cmdline * Lodestar prints "Adding direct peer" for each peer given. * Lodestar syncs and produces attestations. * Nimbus stops complaining about Lodestar GRAFT/PRUNE messages. --------- Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com> Co-authored-by: Nico Flaig <nflaig@protonmail.com>
## Description Fix two lint warnings in the codebase: 1. **packages/beacon-node/src/chain/chain.ts:343** - `let` → `const` for `checkpointStateCache` since it's only assigned once 2. **packages/light-client/test/unit/webEsmBundle.browser.test.ts:3** - Remove unused `biome-ignore` comment (the rule it was suppressing is no longer triggered) --- *This PR was authored with AI assistance (Lodekeeper 🔥)* Co-authored-by: lodekeeper <lodekeeper@users.noreply.github.com>
## Description Addresses multiple flaky test failures observed in CI runs. ### Changes 1. **Health check timeout (30s -> 60s)** in crucible runners - `childProcessRunner.ts` - `dockerRunner.ts` 2. **Test timeouts increased:** - `cachedBeaconState.test.ts`: 20s -> 30s - `unknownBlockSync.test.ts`: 40s -> 60s (also added retry for `db.block.get`) - `syncInMemory.test.ts`: 20s -> 30s - `sync.node.test.ts`: 30s -> 45s 3. **Prover e2e tests:** - `minFinalizedTimeMs`: 64s -> 90s (hookTimeout) - `start.test.ts` beforeAll: 50s -> 80s ### Motivation These tests have been failing intermittently in CI due to: - Slow CI runner startup (health check timeout exceeded) - Slow execution environment causing test timeouts - Race conditions in database persistence (unknownBlockSync) ### Testing - Build passes locally - Lint passes locally The increased timeouts help tests pass on slow CI runners without affecting correctness. --- *This PR was authored with AI assistance. The changes were reviewed before submission.* --------- Co-authored-by: lodekeeper <lodekeeper@users.noreply.github.com> Co-authored-by: Nico Flaig <nflaig@protonmail.com>
…setups (#8849) ## Description Fixes #8848 In multi-beacon-node setups (DVT, fallback configurations, high-availability validators), the same block may be published to multiple beacon nodes simultaneously. When this happens, a race condition can occur: 1. Validator client (e.g., Vero) requests an unsigned block from Node A 2. Validator signs the block 3. Validator publishes the signed block to **multiple nodes** (Node A and Node B) 4. Node A receives the publish first and starts gossiping the block + data columns 5. Node B receives columns via gossip from Node A 6. Node B's API publish handler tries to add columns that already exist in the cache 7. `publishBlockV2` throws `BLOCK_INPUT_ERROR_INVALID_CONSTRUCTION` with message "Cannot addColumn to BlockInputColumns with duplicate column index" The same issue can occur with blobs in pre-Fulu forks. ## Solution Pass `{throwOnDuplicateAdd: false}` to `addColumn()` and `addBlob()` in `publishBlock`. This option already exists and is used correctly in `seenGossipBlockInput.ts` when handling gossip-received data. Receiving the same columns/blobs from both gossip and API is valid behavior in multi-BN setups and should not throw an error. ## Testing The fix uses an existing, well-tested code path. The `throwOnDuplicateAdd: false` option causes `addColumn`/`addBlob` to silently return when a duplicate is detected, which is the correct behavior when the same data arrives from multiple sources. --- *This PR was developed with AI assistance (disclosure per project contribution guidelines).* --------- Co-authored-by: lodekeeper <lodekeeper@users.noreply.github.com> Co-authored-by: Nico Flaig <nflaig@protonmail.com>
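A sketch of the change in `publishBlock` (the `addColumn` shape is simplified; the option name is taken from this PR's description):

```typescript
type AddOptions = {throwOnDuplicateAdd?: boolean};

function addColumnsFromApiPublish(
  blockInput: {addColumn(column: unknown, opts?: AddOptions): void},
  columns: unknown[]
): void {
  for (const column of columns) {
    // In multi-BN setups the same column may already be in the cache via gossip;
    // silently skip duplicates instead of throwing BLOCK_INPUT_ERROR_INVALID_CONSTRUCTION.
    blockInput.addColumn(column, {throwOnDuplicateAdd: false});
  }
}
```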
## Description Update `@vekexasia/bigint-buffer2` to v1.0.5 and re-enable the Bun CI workflow. ### Changes 1. **Update bigint-buffer2 to v1.0.5** - Fixes native bindings issue in Bun runtime (vekexasia/bigint-swissknife#3) - The maintainer fixed unsafe code that was causing buffer mutations in Bun 2. **Add minimumReleaseAgeExclude** - v1.0.5 was released today, so we need to exclude it from the 48h release age check 3. **Re-enable Bun CI workflow** - Uncommented `.github/workflows/test-bun.yml` - Runs unit tests for compatible packages under Bun runtime ### Local Testing Ran `bun run --bun test:unit` on `@lodestar/utils` package - all 225 tests pass ✅ ### Related - Fixes: vekexasia/bigint-swissknife#3 - Related to: #8789 --- *This PR was authored with AI assistance (Lodekeeper 🔥)* --------- Signed-off-by: lodekeeper <lodekeeper@users.noreply.github.com> Co-authored-by: lodekeeper <lodekeeper@users.noreply.github.com>
#8840) ## Motivation Follow-up from #8821 — adds a manually triggered workflow for ad-hoc Docker image tagging. ## Description Adds a `workflow_dispatch` workflow that allows building and tagging Docker images from any git ref with a custom tag name. ### Features - **Git ref input**: Build from any commit SHA, branch, or tag - **Custom tag**: Apply any custom Docker tag (with validation to prevent overwriting `latest`, `next`, etc.) - **Push toggle**: Option to push to Docker Hub or just build locally for testing - **Multi-platform**: Builds for amd64/arm64 when pushing - **Sanity checks**: Runs `--help` and displays image history - **Job summary**: Shows pull command after successful push ### Usage 1. Go to Actions → "Publish ad-hoc Docker image" 2. Click "Run workflow" 3. Enter: - Git ref (e.g., `abc1234`, `unstable`, `v1.24.0`) - Docker tag (e.g., `test-feature`, `debug-issue-123`) - Whether to push to Docker Hub 4. Pull with `docker pull chainsafe/lodestar:<your-tag>` --- *This PR was authored by an AI contributor (Lodekeeper) with human supervision.* Co-authored-by: lodekeeper <lodekeeper@users.noreply.github.com>
…8862) ## Summary Follow-up to #8840 per [review feedback](#8840 (comment)). ### Changes - **Renamed** `publish-adhoc.yml` → `publish-manual.yml` - **Added** optional `ref` input to `docker.yml` for building from specific commits - **Refactored** manual workflow to call `docker.yml` instead of duplicating build logic - **Removed** `push` option - now always pushes (consistent with other publish workflows) - **Builds** all images (lodestar, grafana, prometheus) via docker.yml ### Why Keeps the manual workflow consistent with other publish workflows and reduces code duplication. --- *Generated with AI assistance (Claude/OpenClaw). Human-supervised and tested.* Co-authored-by: lodekeeper <lodekeeper@users.noreply.github.com>
## Description Add HTTP API endpoints to manage GossipSub direct peers at runtime, allowing operators to dynamically add/remove trusted peers without requiring a node restart. ### New Endpoints | Method | Endpoint | Description | |--------|----------|-------------| | POST | `/eth/v1/lodestar/direct_peers?peer=<multiaddr\|enr>` | Add a direct peer | | DELETE | `/eth/v1/lodestar/direct_peers?peerId=<peer_id>` | Remove a direct peer | | GET | `/eth/v1/lodestar/direct_peers` | List current direct peer IDs | ### Motivation Direct peers maintain permanent mesh connections without GRAFT/PRUNE negotiation. Currently, they can only be configured via the `--directPeers` CLI flag at startup. This PR enables runtime management which is useful for: - Adding trusted peers discovered during operation - Removing misbehaving peers from the direct list - Temporary mesh connections for debugging/testing - Hot-adding bootstrap peers without downtime ### Implementation - Adds `addDirectPeer`, `removeDirectPeer`, `getDirectPeers` methods to `Eth2Gossipsub` class - Reuses existing `parseDirectPeers()` function for multiaddr/ENR parsing - Exposes through NetworkCore → Network → API layer - Follows existing patterns for Lodestar-specific debug endpoints - **Throws `ApiError(400)` on invalid peer input** (instead of returning null) - **Prevents adding self as a direct peer** with appropriate warning log ### Error Handling - Invalid multiaddr/ENR format → `ApiError(400)` with descriptive message - Missing peer addresses → `ApiError(400)` - Adding self as direct peer → `ApiError(400)` ### Testing - Existing `directPeers.test.ts` covers the parsing logic (11 tests passing) - Build and lint pass Closes #7559 --- *This PR was authored with AI assistance (Lodekeeper 🌟)* --------- Co-authored-by: lodekeeper <lodekeeper@users.noreply.github.com> Co-authored-by: Nico Flaig <nflaig@protonmail.com>
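A hypothetical usage sketch of the new endpoints (assumes a local beacon API on port 9596; the peer values are placeholders):

```typescript
const base = "http://localhost:9596/eth/v1/lodestar/direct_peers";
const peerMultiaddr = "/ip4/192.168.1.1/tcp/9000/p2p/<peer-id>";
const peerId = "<peer-id>";

// add a direct peer
await fetch(`${base}?peer=${encodeURIComponent(peerMultiaddr)}`, {method: "POST"});
// list current direct peer ids
console.log(await (await fetch(base)).json());
// remove a direct peer
await fetch(`${base}?peerId=${peerId}`, {method: "DELETE"});
```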
## Summary Follow-up to #8855 per [review feedback](#8855 (review)). ### Changes 1. **Remove `bigIntToBytesInto` function** - Not used in production code, can be re-added when needed 2. **Remove unused exports** - `getBigIntBufferImplementation` and `initBigIntBufferNative` were not used outside the module 3. **Remove `bigIntToBytes` benchmark file** - No longer needed without `bigIntToBytesInto` comparison 4. **Remove tests for `bigIntToBytesInto`** - Function no longer exists 5. **Remove `minimumReleaseAgeExclude`** - `@vekexasia/bigint-buffer2` v1.1.0 is now older than 48 hours --- *Generated with AI assistance (Claude/OpenClaw). Human-supervised and tested.* --------- Co-authored-by: lodekeeper <lodekeeper@users.noreply.github.com> Co-authored-by: Nico Flaig <nflaig@protonmail.com>
…#8859) ## Description Similar to `packages/params/test/e2e/ensure-config-is-synced.test.ts` which validates preset values, this test validates chainConfig values against the consensus-specs `configs/*.yaml` files. This would have caught the `MIN_BUILDER_WITHDRAWABILITY_DELAY` mismatch that was discovered manually in #8839. ### What it does - Downloads `configs/mainnet.yaml` and `configs/minimal.yaml` from consensus-specs - Compares values against Lodestar's chainConfig - Excludes network-specific values (fork epochs, genesis params, etc.) since those intentionally differ ### Testing ```bash cd packages/config pnpm vitest run test/e2e/ensure-config-is-synced.test.ts ``` --- *This PR was created by @lodekeeper (AI contributor) based on feedback from @nflaig* --------- Co-authored-by: lodekeeper <lodekeeper@users.noreply.github.com>
## Description Upgrade fastify to 5.7.3 to address [CVE-2026-25224](GHSA-mrq3-vjjr-p77c) (DoS via Unbounded Memory Allocation in sendWebStream). ### Security Analysis **Lodestar is NOT affected by this vulnerability** because: 1. **SSE endpoint** (`/eth/v1/events`): Uses `res.raw.write()` directly on the Node.js HTTP response object, not Web Streams 2. **All other endpoints**: Send JSON payloads via `reply.send()`, not `ReadableStream` or `Response` with Web Stream body 3. **ReadableStream usage**: Only exists in CLI for file downloads, not in Fastify responses This upgrade is a proactive security measure following best practices. ### Changes - Update `fastify` from `^5.2.1` to `^5.7.3` in: - `@lodestar/api` - `@lodestar/beacon-node` - `@chainsafe/lodestar` (cli) - `@lodestar/light-client` - Add `fastify` to `minimumReleaseAgeExclude` in `pnpm-workspace.yaml` to allow the new security release (bypasses the 48-hour policy for security updates) - Fix TypeScript error in `setErrorHandler` due to stricter typing in fastify 5.7.3 (`err` is now `unknown` by default) ### References - [Fastify v5.7.3 Release](https://github.com/fastify/fastify/releases/tag/v5.7.3) - [GHSA-mrq3-vjjr-p77c](GHSA-mrq3-vjjr-p77c) - [HackerOne Report](https://hackerone.com/reports/3524779) --- *AI-assisted development: This PR was developed with assistance from an AI coding agent.* --------- Signed-off-by: lodekeeper <lodekeeper@users.noreply.github.com> Co-authored-by: lodekeeper <lodekeeper@users.noreply.github.com> Co-authored-by: Nico Flaig <nflaig@protonmail.com> Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
## Description Use a hybrid approach for `byteArrayEquals` in `@lodestar/utils` that selects the optimal comparison method based on array size: - **Loop for ≤48 bytes**: V8 JIT optimizations in Node v24 make loops faster for small arrays - **Buffer.compare for >48 bytes**: Native code is significantly faster for larger arrays This ensures optimal performance for the most common use cases (32-byte roots, 48-byte pubkeys) while still benefiting from native code for larger comparisons (signatures). **Also replaces all direct `Buffer.compare` calls** across the codebase with `byteArrayEquals` for consistency. ## Node v24.13.0 Benchmarks | Size | Loop | Buffer.compare | Winner | |------|------|----------------|--------| | 32 bytes | **14.7 ns/op** | 49.7 ns/op | Loop 3.4x faster | | 48 bytes | **36 ns/op** | 56 ns/op | Loop 1.5x faster | | 96 bytes | 130 ns/op | **50 ns/op** | Buffer 2.6x faster | | 1024 bytes | 940 ns/op | **55 ns/op** | Buffer 17x faster | | 131072 bytes | 14.8 μs/op | **270 ns/op** | Buffer 55x faster | ## Usage Analysis | Size | Count | % | Examples | |------|-------|---|----------| | **32 bytes** | 59 | **94%** | roots, hashes, stateRoot, blockHash, parentRoot, randao, credentials | | 48 bytes | 2 | 3% | pubkeys | | 96 bytes | 2 | 3% | signatures (G2_POINT_AT_INFINITY comparisons) | ## Changes - Added `byteArrayEquals` to `@lodestar/utils` with hybrid implementation - Uses loop for small arrays (≤48 bytes) where V8 JIT is faster - Uses `Buffer.compare` for larger arrays where native code wins - Updated all imports across the codebase to use the new implementation - **Replaced 14 direct `Buffer.compare` calls** with `byteArrayEquals`: - beacon-node: 7 files (block validation, sync, state) - state-transition: 4 files (consolidation, load state utils) - era: 2 files (reader, e2s) - Added benchmark results as comments in test files ## Note The `@chainsafe/ssz` library also has a `byteArrayEquals` implementation that could benefit from a similar change, but that would need to be addressed upstream. Closes #5955 --- *This PR was created with AI assistance (Lodekeeper 🌟)* --------- Co-authored-by: lodekeeper <lodekeeper@users.noreply.github.com> Co-authored-by: Nico Flaig <nflaig@protonmail.com>
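A sketch of the hybrid comparison described above (the 48-byte threshold follows the benchmarks in this PR; the actual implementation in `@lodestar/utils` may differ in details):

```typescript
const LOOP_THRESHOLD = 48;

export function byteArrayEquals(a: Uint8Array, b: Uint8Array): boolean {
  if (a.length !== b.length) return false;
  if (a.length <= LOOP_THRESHOLD) {
    // V8 JIT makes a plain loop faster for small arrays (32-byte roots, 48-byte pubkeys)
    for (let i = 0; i < a.length; i++) {
      if (a[i] !== b[i]) return false;
    }
    return true;
  }
  // Native Buffer.compare wins for larger arrays (signatures and up)
  return Buffer.compare(a, b) === 0;
}
```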
## Description This PR adds an `AGENTS.md` file to provide context for AI coding assistants (Claude Code, Codex, GitHub Copilot, etc.) working with the Lodestar codebase. Inspired by [ethereum/consensus-specs#4894](ethereum/consensus-specs#4894) which adds a similar file to the consensus specs repo. ## Contents The file includes: - **Project overview**: What Lodestar is and its role in the ecosystem - **Directory structure**: Layout of packages and their purposes - **Build commands**: Essential `pnpm` commands for building, testing, linting - **Code style**: Conventions from biome.jsonc and CONTRIBUTING.md - **Testing guidelines**: How to run and write tests - **PR guidelines**: Branch naming, commit conventions, AI disclosure requirements - **Common tasks**: Step-by-step guides for typical contributions - **Style learnings**: Specific preferences learned from PR reviews ## Why? AI assistants often struggle with project-specific conventions, test commands, and style requirements. This file serves as a concise reference that: 1. Helps AI assistants produce higher-quality contributions 2. Reduces review friction from style/convention violations 3. Also serves as a quick reference for human contributors The file intentionally stays concise while covering the most important aspects for day-to-day contributions. --- 🤖 This PR was authored by AI (Lodekeeper/Claude) with supervision. --------- Co-authored-by: lodekeeper <lodekeeper@users.noreply.github.com>
Summary of Changes (Gemini Code Assist): This pull request focuses on preparing the codebase for the v1.40.0 release by integrating the upcoming Gloas fork, enhancing API capabilities, and optimizing internal state management. It introduces new features for builder operations and peer management, alongside significant refactoring to improve performance and maintainability. The changes also include comprehensive updates to development documentation and monitoring metrics.
Code Review
This pull request prepares the v1.40.0 release. It includes version bumps across packages, dependency updates, and a significant number of feature additions and refactorings. Key changes include the implementation of features for the upcoming Gloas fork, a major refactoring of state and shuffling caches for better performance and memory management, and improvements to the BLS signature verification process to use validator indices instead of full public keys. I've identified a few important bug fixes in the caching and data validation logic. Overall, the changes are extensive but appear well-structured and move the codebase forward significantly.
Performance Report: ✔️ no performance regression detected
## Motivation Fixes a race condition that can cause state corruption and `First offset must equal to fixedEnd` errors on restart. See discussion: https://discord.com/channels/593655374469660673/1469368525180113078 ## Description The `using` keyword in `serializeState.ts` releases the buffer back to the pool when the block exits. Since `processFn` is async (returns a Promise), the buffer was being released before the DB write completed. If another serialization (checkpoint state or archive state) happened before the write finished, it would: 1. Get the same buffer from the pool 2. Call `fill(0)` on it (per BufferPool.alloc behavior) 3. Corrupt the data being written by the first serialization This could cause `First offset must equal to fixedEnd 0 != <large number>` errors on restart when the corrupted state is read. ## Fix Add `await` before `processFn(stateBytes)` to ensure the buffer is not released until the async operation completes. --- **AI Disclosure:** This PR was authored with AI assistance (Lodekeeper/Claude). Co-authored-by: lodekeeper <lodekeeper@users.noreply.github.com>
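A minimal sketch of the race and the fix (the pool and serialization below are stand-ins, not the actual `serializeState.ts` code):

```typescript
class FakeBufferPool {
  private readonly buf = new Uint8Array(1024);
  alloc(): {buffer: Uint8Array} & Disposable {
    this.buf.fill(0); // a later alloc zero-fills, corrupting any write still using the buffer
    return {buffer: this.buf, [Symbol.dispose]: () => {/* mark buffer as free */}};
  }
}

async function serializeAndPersist(
  pool: FakeBufferPool,
  serializeInto: (buf: Uint8Array) => Uint8Array,
  processFn: (bytes: Uint8Array) => Promise<void>
): Promise<void> {
  using alloc = pool.alloc(); // released when this block exits
  const stateBytes = serializeInto(alloc.buffer);
  // The fix: awaiting keeps the buffer alive until the db write completes, so a
  // concurrent serialization cannot grab and zero it mid-write.
  await processFn(stateBytes);
}
```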
No description provided.