feat: add ETHEREUM_HTTP_TIMEOUT_MS env var for viem HTTP transport (#20919)

## Summary

- Adds `ETHEREUM_HTTP_TIMEOUT_MS` env var to configure the HTTP timeout on viem's `http()` transport (default is viem's 10s)
- Introduces a `makeL1HttpTransport` helper in `ethereum/src/client.ts` to centralize the repeated `fallback(urls.map(url => http(url, { batch: false })))` pattern
- Updates all non-test `createPublicClient` call sites (archiver, aztec-node, sequencer, prover-node, epoch-cache, blob-client) to use the helper with the configurable timeout

## Motivation

Users hit `TimeoutError: The request took too long to respond` on archiver `eth_getLogs` calls when querying slow or public L1 RPCs. Viem's default 10s timeout is too short for large log queries, and there was no way to configure it.

## Test plan

- `yarn build` passes
- `yarn format` and `yarn lint` pass
- Set `ETHEREUM_HTTP_TIMEOUT_MS=60000` and confirm the archiver no longer times out on large `eth_getLogs` ranges

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
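The env-var handling can be sketched as follows. This is a minimal illustration, not the actual implementation: the helper name and parsing details are assumptions; what is grounded in the PR is that viem's `http()` transport accepts a `timeout` option in milliseconds and defaults to 10s.

```typescript
// Hypothetical sketch: resolve the HTTP timeout from the environment,
// falling back to viem's 10s default when unset or invalid.
function resolveHttpTimeoutMs(
  env: Record<string, string | undefined> = process.env,
): number {
  const raw = env['ETHEREUM_HTTP_TIMEOUT_MS'];
  const parsed = raw === undefined ? NaN : Number(raw);
  // Guard against empty, non-numeric, or non-positive values.
  return Number.isFinite(parsed) && parsed > 0 ? parsed : 10_000;
}

// The centralized transport helper would then pass this to viem, e.g.:
// fallback(urls.map(url => http(url, { batch: false, timeout: resolveHttpTimeoutMs() })))
```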
Resolves the referenceBlock hash to a block number in the AztecNode and passes it down as upToBlockNumber so the LogStore stops returning logs from blocks beyond the client's sync point. Also adds an ordering check on log insertion to guard against out-of-order appends. Fixes F-417 Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
If there was a block in L2 slot zero, then `getL2ToL1Messages` returned an incorrect response, since the `slotNumber !== previousSlotNumber` check failed in the first iteration of the loop.
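The failure mode can be sketched with a toy loop. The assumption here (labeled as such) is that `previousSlotNumber` was initialized to zero, so a block in slot 0 failed the inequality check on the first iteration; the sentinel value below is one way to fix it.

```typescript
// Toy model: count how many distinct slot groups appear in a sorted
// list of per-message slot numbers. Initializing previousSlotNumber
// to 0 would silently swallow the first group when a block actually
// sits in slot 0; an undefined sentinel avoids that.
function countSlotGroups(slotNumbers: number[]): number {
  let groups = 0;
  let previousSlotNumber: number | undefined = undefined; // fix: sentinel, not 0
  for (const slotNumber of slotNumbers) {
    if (slotNumber !== previousSlotNumber) {
      groups++; // a new slot number starts a new group
    }
    previousSlotNumber = slotNumber;
  }
  return groups;
}
```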
feat(sequencer): redistribute checkpoint budget evenly across remaining blocks (#21378)

Update the per-block budgets so that, on every block, the limits are further adjusted to `remainingCheckpointBudget / remainingBlocks * multiplier`. This prevents the last blocks from starving. Also adjusts the multiplier from 2x to 1.2x.

## Visualization

Using the https://github.com/AztecProtocol/explorations/tree/main/block-distribution-simulator

### Before this PR

No redistribution, 2x multiplier

<img width="1544" height="737" alt="Screenshot From 2026-03-11 15-50-38" src="https://github.com/user-attachments/assets/fda36d04-5d9e-456a-9ced-4649fa58d724" />

### After this PR

Redistribution enabled, 1.2x multiplier

<img width="1544" height="737" alt="Screenshot From 2026-03-11 15-50-49" src="https://github.com/user-attachments/assets/2bc196f3-77fa-47bf-9294-4eb4199f8f93" />

### With no multiplier

For comparison purposes only; note the lower gas utilization

<img width="1544" height="737" alt="Screenshot From 2026-03-11 15-50-59" src="https://github.com/user-attachments/assets/0facbc36-65e3-446e-abaf-eb7f637b87c9" />

## Summary

- Adds `SEQ_REDISTRIBUTE_CHECKPOINT_BUDGET` (default: true) to distribute the remaining checkpoint budget evenly across remaining blocks instead of letting one block consume it all. The fair share per block is `ceil(remainingBudget / remainingBlocks * multiplier)`, applied to all four dimensions (L2 gas, DA gas, blob fields, tx count).
- Changes the default `perBlockAllocationMultiplier` from 2 to 1.2 for smoother distribution.
- Wires `maxBlocksPerCheckpoint` from the timetable through to the checkpoint builder config.

## Test plan

- Existing `capLimitsByCheckpointBudgets` tests pass with `redistributeCheckpointBudget: false` (old behavior)
- New tests cover: even split with multiplier=1, fair share with multiplier=1.2, last block gets all remaining, disabled flag falls back to old behavior, DA gas and tx count redistribution
- `computeBlockLimits` tests updated for the new default multiplier and `maxBlocksPerCheckpoint` return value

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
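The fair-share formula from the summary can be written as a standalone function with a small worked example (the function name is illustrative; the formula itself is taken from the PR description):

```typescript
// Fair share per block: ceil(remainingBudget / remainingBlocks * multiplier),
// re-derived after every block so late blocks are not starved.
function fairSharePerBlock(
  remainingBudget: number,
  remainingBlocks: number,
  multiplier = 1.2,
): number {
  return Math.ceil((remainingBudget / remainingBlocks) * multiplier);
}

// e.g. with 4 blocks left and a budget of 100 units, each block may
// spend up to ceil(100 / 4 * 1.2) = 30 units; with multiplier 1 the
// split is perfectly even at 25.
```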
## Summary

Fixes `aztec --version` returning `unknown` when installed via `install.aztec-labs.com`.

Closes https://linear.app/aztec-labs/issue/A-642/aztec-version-returns-unkonwn

ClaudeBox log: https://claudebox.work/s/9b17d34db367f45c?run=1
Removes the default multiplier setting in our deployment scripts.
Our `network_defaults.yml` hardcoded a single snapshots provider for mainnet. This removes it in favour of using the network config JSON file.
This PR adds transaction filestore config to the network config schema.
When requesting world state at a given block hash, double check that the returned world state is actually at that same block hash. Also check that world state is synced to the requested block if using block hash.
Simplifies the issue of summing two different rates over two different magnitudes. Fixes A-655
feat(p2p): reject and evict txs with insufficient max fee per gas (#21281)

## Summary

- Previously, `GasTxValidator` returned `skipped` when a tx's `maxFeesPerGas` was below the current block fees, allowing it to wait for lower fees. This changes it to `invalid`, rejecting the tx outright.
- Extracts `MaxFeePerGasValidator` as a standalone generic validator (like `GasLimitsValidator`) so it can be used in pool migration alongside full `Tx` validation.
- Adds `InsufficientFeePerGasEvictionRule` that evicts pending txs after a new block is mined if their `maxFeesPerGas` no longer meets the block's gas fees.
- Adds `maxFeesPerGas` to `TxMetaValidationData` so the eviction rule and pool migration validator can access it from metadata.

**Caveat**: This may evict transactions that would become valid if block fees later drop. A more nuanced approach would define a threshold (e.g. 50% of current fees) and only reject/evict below that. The current approach is simpler and ensures the pool doesn't accumulate low-fee txs unlikely to be mined soon.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
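The eviction rule can be sketched as a predicate over both fee dimensions. The `GasFees` shape and function names below are illustrative assumptions, not the actual p2p code; what is grounded in the PR is the rule itself: evict a pending tx after a block is mined if its `maxFeesPerGas` no longer covers the block's gas fees.

```typescript
// Assumed shape: a gas fee has a DA component and an L2 component.
interface GasFees {
  feePerDaGas: bigint;
  feePerL2Gas: bigint;
}

// A tx's fee cap is sufficient only if it covers both dimensions.
function meetsMaxFeePerGas(maxFeesPerGas: GasFees, blockFees: GasFees): boolean {
  return (
    maxFeesPerGas.feePerDaGas >= blockFees.feePerDaGas &&
    maxFeesPerGas.feePerL2Gas >= blockFees.feePerL2Gas
  );
}

// On BLOCK_MINED, keep only the txs that still meet the mined block's fees.
function evictUnderpricedTxs<T extends { maxFeesPerGas: GasFees }>(
  pending: T[],
  blockFees: GasFees,
): T[] {
  return pending.filter(tx => meetsMaxFeePerGas(tx.maxFeesPerGas, blockFees));
}
```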
Reduces the severity of a log line.
## Summary

When a peer prunes a block proposal, it can no longer serve tx requests by index. Previously, if such a peer was already marked "smart", it stayed smart indefinitely, causing repeated failed queries and eventual penalty escalation to "bad" status. This PR demotes smart peers back to dumb when they can no longer serve index-based requests, and distinguishes legitimate pruning (`NOT_FOUND`) from actual failures.

- Add `markPeerDumb()` to `IPeerCollection`/`PeerCollection`: removes the peer from the smart set
- Add `clearPeerData()` to `ITxMetadataCollection`/`MissingTxMetadataCollection`: removes stale "peer has tx X" associations on demotion
- In `handleFailResponseFromPeer()`: demote on both `FAILURE`/`UNKNOWN` (with penalty) and `NOT_FOUND` (without penalty, since pruning is a legitimate state)
- In `decideIfPeerIsSmart()`: replace `isBlockResponseValid()` with an explicit archive root mismatch check and `handleArchiveRootMismatch()`, which distinguishes three cases:
  - Archive roots match → continue to the smart/dumb decision
  - `archiveRoot` is `Fr.ZERO` (peer pruned the proposal, legitimate) → demote to dumb, no penalty
  - `archiveRoot` is non-zero but mismatches ours (malicious response) → penalise (`LowToleranceError`) + demote to dumb
- Three new tests covering the demotion paths

Fixes A-512
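The three-way archive root check above can be sketched as a small decision function. This is a toy model: the type alias and outcome names are illustrative, not the actual p2p code; the three cases are taken directly from the PR description.

```typescript
// Field elements modeled as bigints for illustration; Fr.ZERO ≙ 0n.
type Fr = bigint;
const ZERO: Fr = 0n;

type Outcome = 'continue' | 'demote' | 'penalise-and-demote';

function handleArchiveRootMismatch(ourRoot: Fr, peerRoot: Fr): Outcome {
  if (peerRoot === ourRoot) {
    return 'continue'; // roots match: proceed to the smart/dumb decision
  }
  if (peerRoot === ZERO) {
    return 'demote'; // peer pruned the proposal: legitimate, no penalty
  }
  return 'penalise-and-demote'; // non-zero mismatch: treat as malicious
}
```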
fix(kv-store): make LMDB clear and drop operations atomic across sub-databases (#21539)

Part of #21514

## Summary

- **Problem**: `AztecLmdbStore.clear()` and `drop()` called their respective operations on each sub-database (`#data`, `#multiMapData`, `#rootDb`) sequentially without a wrapping transaction. A crash between operations could leave the store in an inconsistent state (some sub-DBs cleared, others not).
- **Fix**: Wrap all sub-database operations within a single `this.#rootDb.transaction()` call using synchronous variants (`clearSync()` / `dropSync()`) so they execute atomically.
- **Tests**: Added a comprehensive test suite covering clear (maps, multimaps, singletons, counters, sets), drop, and delete operations.

## Changes

- `yarn-project/kv-store/src/lmdb/store.ts`: `clear()` now uses `clearSync()` inside a transaction; `drop()` now uses `dropSync()` inside a transaction.
- `yarn-project/kv-store/src/lmdb/store.test.ts`: New test file with 7 test cases.

## Test plan

- [x] New unit tests for `clear()`, `drop()`, and `delete()` operations
- [ ] Existing kv-store tests pass: `yarn workspace @aztec/kv-store test`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
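Why the wrapping transaction matters can be illustrated with a toy in-memory store (this is not lmdb-js; it only models the all-or-nothing property the fix provides): if a crash interrupts the per-sub-DB clears, either every sub-DB ends up cleared or none does.

```typescript
// Toy store with named sub-databases and a snapshot-based transaction.
class ToyStore {
  subDbs = new Map<string, Map<string, string>>();

  transaction(fn: () => void): void {
    // Snapshot every sub-DB so a mid-operation failure can be rolled back.
    const snapshot = new Map(
      [...this.subDbs].map(([name, db]) => [name, new Map(db)] as const),
    );
    try {
      fn();
    } catch (err) {
      this.subDbs = new Map(snapshot); // restore: no partial clears survive
      throw err;
    }
  }

  clearAll(): void {
    // All sub-DB clears happen inside one transaction, so they are atomic.
    this.transaction(() => {
      for (const db of this.subDbs.values()) db.clear();
    });
  }
}
```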
## Motivation

`syncImmediate` syncs world state to a target block number but does not verify block identity. If a reorg occurred and the world state is at the same height but on a different fork, the method returns early without detecting the mismatch. Additionally, `skipThrowIfTargetNotReached` was dead code, with no caller ever passing `true`.

## Approach

Added an optional typed `BlockHash` parameter to `syncImmediate`. When at or past the target height, the implementation checks the hash via `getL2BlockHash` before returning early. On mismatch it falls through to trigger a resync. After syncing, if the hash still doesn't match, it throws a `WorldStateSynchronizerError` with reason `block_hash_mismatch`. Removed the unused `skipThrowIfTargetNotReached` parameter entirely.

## Changes

- **stdlib**: Updated the `WorldStateSynchronizer` interface signature, replacing `skipThrowIfTargetNotReached?: boolean` with `blockHash?: BlockHash`
- **world-state**: Implemented block hash verification in `ServerWorldStateSynchronizer.syncImmediate` with pre-sync and post-sync checks
- **p2p**: Both tx pool v1 and v2 `FeePayerBalanceEvictionRule` now pass `BlockHash` from `context.block.hash()` on BLOCK_MINED events
- **prover-node**: Passes the last block's header hash when syncing world state before creating proving jobs
- **txe**: Updated the mock synchronizer parameter name and type
- **world-state (tests)**: Added tests for hash match (early return), hash mismatch (triggers resync), and hash mismatch after sync (throws)
- **p2p (tests)**: Updated mock expectations to verify blockHash is passed in BLOCK_MINED handlers
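The pre-sync and post-sync checks can be sketched as follows. The interfaces are heavily simplified stand-ins for the real `WorldStateSynchronizer` (illustrative assumptions), but the control flow matches the approach described above: hash check before the early return, resync on mismatch, throw if the hash still disagrees after syncing.

```typescript
type BlockHash = string;

// Simplified stand-in for the world state synchronizer internals.
interface WorldState {
  blockNumber: number;
  getL2BlockHash(blockNumber: number): BlockHash | undefined;
  syncTo(target: number): void;
}

function syncImmediate(ws: WorldState, target: number, blockHash?: BlockHash): void {
  if (ws.blockNumber >= target) {
    // Pre-sync check: return early only if the hash at the target matches.
    if (blockHash === undefined || ws.getL2BlockHash(target) === blockHash) {
      return;
    }
    // Same height, different fork: fall through and resync.
  }
  ws.syncTo(target);
  // Post-sync check: if the hash still disagrees, the fork was not resolved.
  if (blockHash !== undefined && ws.getL2BlockHash(target) !== blockHash) {
    throw new Error('block_hash_mismatch'); // stands in for WorldStateSynchronizerError
  }
}
```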
Helps debug fee issues in tests. Also includes a README on how fees are computed in L2.
```
[16:51:01.188] INFO: e2e:e2e_epochs:epochs_mbps L1 block 25 mined at 16:58:17 {"currentTimestamp":1773431897,"l1Timestamp":"1773431897","l1BlockNumber":25,"l2SlotNumber":13,"l2Epoch":3,"checkpointNumber":2,"provenCheckpointNumber":0,"totalL2Messages":0,"sequencerCost":"214068600000","proverCost":"642215700000","congestionCost":"0","congestionMultiplier":"1000000000","minFeePerMana":"856284300000","l1BaseFee":"713561860","l1BlobFee":"1","ethPerFeeAsset":"10000000","manaTarget":"100000000"}
[16:51:05.194] INFO: e2e:e2e_epochs:epochs_mbps L1 block 26 mined at 16:58:21 {"currentTimestamp":1773431901,"l1Timestamp":"1773431901","l1BlockNumber":26,"l2SlotNumber":13,"l2Epoch":3,"checkpointNumber":2,"provenCheckpointNumber":0,"totalL2Messages":0,"sequencerCost":"214068600000","proverCost":"642215700000","congestionCost":"0","congestionMultiplier":"1000000000","minFeePerMana":"856284300000","l1BaseFee":"713561860","l1BlobFee":"1","ethPerFeeAsset":"10000000","manaTarget":"100000000"}
[16:51:09.208] INFO: e2e:e2e_epochs:epochs_mbps L1 block 27 mined at 16:58:25 with minFee=248383500000 {"currentTimestamp":1773431905,"l1Timestamp":"1773431905","l1BlockNumber":27,"l2SlotNumber":14,"l2Epoch":3,"checkpointNumber":2,"provenCheckpointNumber":0,"totalL2Messages":0,"sequencerCost":"62093400000","proverCost":"186290100000","congestionCost":"0","congestionMultiplier":"1000000000","minFeePerMana":"248383500000","l1BaseFee":"206977871","l1BlobFee":"1","ethPerFeeAsset":"10000000","manaTarget":"100000000"}
```
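From the sample above, the logged minimum fee appears to be the sum of the cost components. This relationship is inferred from the log values themselves, not quoted from the README, so treat it as an observation:

```typescript
// Values taken from the first log line above.
const sequencerCost = 214_068_600_000n;
const proverCost = 642_215_700_000n;
const congestionCost = 0n;

// Inferred relationship:
// minFeePerMana = sequencerCost + proverCost + congestionCost
const minFeePerMana = sequencerCost + proverCost + congestionCost;
// matches the logged "minFeePerMana":"856284300000"
```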
feat: Implement commit all and revert all for world state checkpoints (#21532)

## Summary

- Adds depth-aware `commitAllCheckpointsTo(depth)` and `revertAllCheckpointsTo(depth)` to the world state checkpoint system. These revert/commit all checkpoints at or above the given depth (inclusive), preserving any checkpoints created by callers below that depth.
- `createCheckpoint()` now returns the depth of the newly created checkpoint, threading it through the full C++ async callback chain (cache → tree store → append-only tree → world state → NAPI → TypeScript).
- `ForkCheckpoint` stores its depth and exposes `revertToCheckpoint()`, which encapsulates the revert-to-depth pattern, replacing the previous `revertAllCheckpoints()` + `markCompleted()` two-step.
- The public processor uses `revertToCheckpoint()` on tx timeout/panic, so per-tx reverts no longer destroy checkpoints created by callers (e.g. `CheckpointBuilder`).

## Changes

**C++ (barretenberg)**

- `ContentAddressedCache`: `checkpoint()` returns depth; new `commit_to_depth()`/`revert_to_depth()` methods
- `CachedContentAddressedTreeStore`: passes through depth-aware operations
- `ContentAddressedAppendOnlyTree`: `CheckpointCallback` now receives `TypedResponse<CheckpointResponse>` with depth
- `WorldState`: `checkpoint()` returns depth; `commit_all_checkpoints_to`/`revert_all_checkpoints_to` take a required depth
- NAPI layer: new `ForkIdWithDepthRequest`/`CheckpointDepthResponse` message types

**TypeScript**

- `MerkleTreeCheckpointOperations` interface: `createCheckpoint()` returns `Promise<number>`; depth is required on `commitAllCheckpointsTo`/`revertAllCheckpointsTo`
- `MerkleTreesFacade`: passes depth through the native message channel
- `ForkCheckpoint`: stores depth; new `revertToCheckpoint()` method
- `PublicProcessor`: uses `checkpoint.revertToCheckpoint()` on error paths

**Tests**

- C++ cache tests: depth return, `commit_to_depth`, `revert_to_depth`, edge cases
- C++ append-only tree tests: depth return, commit/revert to depth
- TypeScript native world state tests: depth return, commit/revert to depth, backward compat
- TypeScript fork checkpoint unit tests
- TypeScript public processor tests: verifies depth passed on revert

## Test plan

- C++ cache tests pass (`crypto_content_addressed_cache_tests`)
- C++ append-only tree tests pass (`crypto_content_addressed_append_only_tree_tests`)
- TypeScript `native_world_state.test.ts` passes
- TypeScript `fork_checkpoint.test.ts` passes
- TypeScript `public_processor.test.ts` passes
- TypeScript `timeout_race.test.ts` passes
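The depth-aware semantics can be modeled with a toy checkpoint stack. This is not the barretenberg implementation (which is a content-addressed cache); it only demonstrates the contract described above: `createCheckpoint()` returns the new depth, and revert/commit to a depth unwinds checkpoints at or above that depth (inclusive) while preserving shallower ones.

```typescript
// Toy snapshot stack modeling the depth-aware checkpoint contract.
class CheckpointStack<T> {
  private checkpoints: T[] = [];
  constructor(private state: T, private clone: (s: T) => T) {}

  // Returns the depth of the newly created checkpoint (1-based).
  createCheckpoint(): number {
    this.checkpoints.push(this.clone(this.state));
    return this.checkpoints.length;
  }

  // Restore the state saved when checkpoint `depth` was created,
  // discarding that checkpoint and all deeper ones.
  revertAllCheckpointsTo(depth: number): void {
    this.state = this.clone(this.checkpoints[depth - 1]);
    this.checkpoints.length = depth - 1;
  }

  // Keep the current state; discard snapshots at or above `depth`.
  commitAllCheckpointsTo(depth: number): void {
    this.checkpoints.length = depth - 1;
  }

  get(): T {
    return this.state;
  }
  set(value: T): void {
    this.state = value;
  }
}
```

With this shape, a per-tx revert to its own depth leaves a caller's shallower checkpoint (e.g. one held by a checkpoint builder) untouched.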
Temporarily skips the `acir_tests/browser-test-app` browser prove tests (`verify_honk_proof` and `a_1_mul`) which are failing with "Failed to fetch" errors in CI, blocking the v4 merge train. This unblocks #21595 and transitively #21592 and #21443. ClaudeBox log: https://claudebox.work/s/8663550bd346778b?run=1 --------- Co-authored-by: Santiago Palladino <santiago@aztec-labs.com>
The `P2PClient` used the check `newCheckpointNumber !== oldCheckpointNumber` to detect whether a prune is a checkpoint prune. A better check is that the new checkpoint number is less than the old one. For example, in HA setups, one node will frequently experience a prune where `newCheckpointNumber === oldCheckpointNumber + 1`, replacing the losing HA node's view of the pending chain with the one from L1.
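The revised check reduces to a one-line predicate (function name illustrative): a checkpoint prune is detected only when the checkpoint number actually decreases, so the HA case above, where the number advances by one, is no longer misclassified.

```typescript
// Checkpoint prune only when the checkpoint number goes backwards;
// an advancing number (e.g. old + 1 in the HA case) is not one.
function isCheckpointPrune(
  oldCheckpointNumber: number,
  newCheckpointNumber: number,
): boolean {
  return newCheckpointNumber < oldCheckpointNumber;
}
```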
Fixes A-630
fix: Don't update state if we failed to execute sufficient transactions (#21443)

This PR fixes a bug in block building. The reproduction was as follows:

1. The sequencer would fail to process sufficient transactions in the first block of the checkpoint (e.g. timeout).
2. `CheckpointBuilder.buildBlock()` would update the state regardless.
3. `CheckpointProposalJob.buildSingleBlock()` would detect an insufficient number of transactions and discard the block, leaving the state invalid for further block building.

This PR moves the detection of sufficient transaction count into `CheckpointBuilder.buildBlock()` so the state is left unmodified. It also introduces an additional fork checkpoint and reverts all changes made for blocks that are discarded.
BEGIN_COMMIT_OVERRIDE
feat: add ETHEREUM_HTTP_TIMEOUT_MS env var for viem HTTP transport (#20919)
fix(archiver): filter tagged log queries by block number (#21388)
fix(node): handle slot zero in getL2ToL1Messages (#21386)
feat(sequencer): redistribute checkpoint budget evenly across remaining blocks (#21378)
fix: fall back to package.json for CLI version detection (#21382)
chore: Removed multiplier config (#21412)
chore: Removed default snapshot url config (#21413)
chore: Read tx filestores from network config (#21416)
fix(node): check world state against requested block hash (#21385)
feat(p2p): use l2 priority fee only for tx priority (#21420)
feat(p2p): reject and evict txs with insufficient max fee per gas (#21281)
revert "feat(p2p): reject and evict txs with insufficient max fee per gas (#21281)" (#21432)
chore: Reduce log spam (#21436)
fix(tx): reject txs with invalid setup when unprotecting (#21224)
fix: orchestrator enqueue yield (#21286)
chore(builder): check archive tree next leaf index during block building (#21457)
fix: scenario deployment (#21428)
chore: add claude skill to read network-logs (#21495)
chore: update claude network-logs skill (#21523)
feat(rpc): add package version to RPC response headers (#21526)
chore(prover): silence "epoch to prove" debug logs (#21527)
chore(sequencer): do not log blob data (#21530)
fix: dependabot alerts (#21531)
docs(p2p): nicer READMEs (#21456)
fix(archiver): guard getL1ToL2Messages against incomplete message sync (#21494)
fix(sequencer): await syncing proposed block to archiver (#21554)
feat(ethereum): check VK tree root and protocol contracts hash in rollup compatibility (#21537)
fix: marking peer as dumb on failed responses (#21316)
fix(kv-store): make LMDB clear and drop operations atomic across sub-databases (#21539)
feat(world-state): add blockHash verification to syncImmediate (#21556)
chore(monitor): print out l2 fees components (#21559)
chore: rm faucet (#21538)
chore: remove old merkle trees (#21577)
feat: Implement commit all and revert all for world state checkpoints (#21532)
chore: skip flaky browser acir tests in CI (#21596)
fix: Better detection for epoch prune (#21478)
chore: logging (#21604)
fix: Don't update state if we failed to execute sufficient transactions (#21443)
END_COMMIT_OVERRIDE