* update libp2p and address compiler errors
* remove bandwidth logging from transport
* use libp2p registry
* make clippy happy
* use rust 1.73
* correct rpc keep alive
* remove comments and obsolete code
* remove libp2p prefix
* make clippy happy
* use quic under facade
* remove fast msg id
* bubble up close statements
* fix wrong comment
* Changes to use required Endpoint
* Format
* fixed doppelganger service
* minor fix
* efficiency changes
* fixed tests
* remove commented line
---------
Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
* rebase and add comment
* conditional test
* test
* optimistic should be working now
* finality should be working now
* try again
* try again
* clippy fix
* add lc bootstrap beacon api
* add lc optimistic/finality update to events
* fmt
* That error isn't occurring on my computer but I think this should fix it
* Add missing test file
* Update light client types to comply with Altair light client spec.
* Fix test compilation
* Support deserializing light client structures for the Bellatrix fork
* Move `get_light_client_bootstrap` logic to `BeaconChain`. `LightClientBootstrap` API to return `ForkVersionedResponse`.
* Misc fixes.
- log cleanup
- move http_api config mutation to `config::get_config` for consistency
- fix light client API responses
* Add light client bootstrap API test and fix existing ones.
* Fix test for `light-client-server` http api config.
* Appease clippy
* Efficiency improvement when retrieving beacon state.
---------
Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
* Added in process_justification_and_finalization
Added in process_justification_and_finalization to compute_attestation_rewards_altair to take into account justified attestations when coming out of inactivity leak. Also added in test to check for this edge case.
* Added in justification and finalization for compute_attestation_rewards_base
* Added in test for altair rewards without inactivity leak
* Delete BN spec flag and VC beacon-node flag
* Remove warn
* slog
* add warn
* delete eth1-endpoint
* delete server from vc cli.rs
* delete server flag in config.rs
* delete delete-lockfiles in vc
* delete allow-unsynced flag in VC
* delete strict-fee-recipient in VC and warn log
* delete merge flag in bn (hidden)
* delete count-unrealized and count-unrealized-full in bn (hidden)
* delete http-disable-legacy-spec in bn (hidden)
* delete eth1-endpoint in lcli
* delete warn message lcli
* delete eth1-endpoints
* delete minify in slashing protection
* delete minify related
* Remove mut
* add back warn! log
* Indentation
* Delete count-unrealized
* Delete eth1-endpoints
* Delete eth1-endpoint test
* delete eth1-endpints test
* delete allow-unsynced test
* Add back lcli eth1-endpoint
---------
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
## Issue Addressed
- downgrades `Missing components over rpc` to debug because this isn't unusual and just results in a re-try
- removes the result from `Block component processed for lookup` because this prints the full block on an unknown parent error
Co-authored-by: realbigsean <seananderson33@gmail.com>
## Issue Addressed
The comment implies that observations for the given slot would be retained but they are not.
## Proposed Changes
I'm pretty sure the functionality is correct and the comment is slightly incorrect, so just update the comment. The comment needs to say something along the lines of "less than or equal to" rather than just "less than."
## Additional Info
It doesn't make sense to keep finalized observations since those are no longer accepted.
## Issue Addressed
Closes #4481.
(Continuation of #4648)
## Proposed Changes
- [x] Add `lighthouse db prune-states`
- [x] Make it work
- [x] Ensure block roots are handled correctly (to be addressed in 4735)
- [x] Check perf on mainnet/Goerli/Gnosis (takes a few seconds max)
- [x] Run block root healing logic (#4875 ) at the beginning
- [x] Add some tests
- [x] Update docs
- [x] Add `--freezer` flag and other improvements to `lighthouse db inspect`
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: Jimmy Chen <jimmy@sigmaprime.io>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
## Issue Addressed
#4582
## Proposed Changes
Add a new v3 block fetching flow that can decide to return a Full OR Blinded payload (sketched below).
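A minimal sketch of the "one endpoint decides" idea, with entirely hypothetical types and signatures rather than the actual Lighthouse API:

```rust
// Hypothetical stand-ins: the v3 flow makes the full-vs-blinded decision in one
// place based on whether the builder bid is used, instead of having separate
// full and blinded code paths.
enum PayloadResponse {
    Full(Vec<u8>),
    Blinded([u8; 32]),
}

fn fetch_payload_v3(
    use_builder_bid: bool,
    local_payload: Vec<u8>,
    header_root: [u8; 32],
) -> PayloadResponse {
    if use_builder_bid {
        PayloadResponse::Blinded(header_root)
    } else {
        PayloadResponse::Full(local_payload)
    }
}

fn main() {
    match fetch_payload_v3(true, vec![0u8; 8], [0u8; 32]) {
        PayloadResponse::Blinded(_) => println!("builder bid wins: returning a blinded payload"),
        PayloadResponse::Full(_) => println!("local payload wins: returning a full payload"),
    }
}
```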
## Additional Info
Co-authored-by: Michael Sproul <micsproul@gmail.com>
## Issue Addressed
I observed that our forward sync on devnet 9 would stall when we hit this log:
```
250425:Oct 19 00:54:17.133 WARN Blocks and blobs request for range received invalid data, error: KzgCommitmentMismatch, batch_id: 4338, peer_id: 16Uiu2HAmHbmkEQFDrJfNuy1aYyAfHkNUwSD9FN7EVAqGJ8YTF9Mh, service: sync, module: network::sync::manager:1036
```
## Proposed Changes
`range_sync_block_and_blob_response` [here](1cb02a13a5/beacon_node/network/src/sync/manager.rs (L1013)) removes the request from the sync manager. Later, however, if there's an error, `inject_error` [here](1cb02a13a5/beacon_node/network/src/sync/manager.rs (L1055)) expects the request to exist so we can handle retry logic. So this PR just re-inserts the request (without any accumulated blobs or blocks) when we hit an error here.
The issue is unique to block+blob sync because the error here is only possible from mismatches between blocks and blobs after we've downloaded both; there's no equivalent error in block sync.
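A minimal sketch of the re-insertion idea, using hypothetical types rather than the real sync manager structures:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the manager's per-request state.
#[derive(Default)]
struct RangeRequest {
    blocks: Vec<u64>,
    blobs: Vec<u64>,
}

// On a coupling error, put an empty request back into the map so that the
// subsequent `inject_error`-style retry path can still find it by id.
fn handle_coupling_error(requests: &mut HashMap<u64, RangeRequest>, request_id: u64) {
    requests.insert(request_id, RangeRequest::default());
}

fn main() {
    let mut requests: HashMap<u64, RangeRequest> = HashMap::new();
    handle_coupling_error(&mut requests, 4338);
    let request = &requests[&4338];
    assert!(request.blocks.is_empty() && request.blobs.is_empty());
}
```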
Co-authored-by: realbigsean <seananderson33@gmail.com>
## Issue Addressed
Closes #4817.
## Proposed Changes
- Fill in the linear block roots array between 0 and the slot of the first block (e.g. slots 0 and 1 on Holesky).
- Backport the `--freezer`, `--skip` and `--limit` options for `lighthouse db inspect` from tree-states. This allows us to easily view the database corruption of 4817 using `lighthouse db inspect --network holesky --freezer --column bbr --output values --limit 2`.
- Backport the `iter_column_from` change and `MemoryStore` overhaul from tree-states. These are required to enable `lighthouse db inspect`.
- Rework `freezer_upper_limit` to allow state lookups for slots below the `state_lower_limit`. Currently state lookups will fail until state reconstruction completes entirely.
There is a new regression test for the main bug, but no test for the `freezer_upper_limit` fix because we don't currently support running state reconstruction partially (see #3026). This will be fixed once we merge `tree-states`! In lieu of an automated test, I've tested manually on a Holesky node while it was reconstructing.
## Additional Info
Users who backfilled Holesky to slot 0 (e.g. using `--reconstruct-historic-states`) need to either:
- Re-sync from genesis.
- Re-sync using checkpoint sync and the changes from this PR.
Due to the recency of the Holesky genesis, writing a custom pass to fix up broken databases (which would require its own thorough testing) was deemed unnecessary. This is the primary reason for this PR being marked `backwards-incompat`.
This will create a few conflicts with Deneb, which I've already resolved on `tree-states-deneb` and will be happy to backport to Deneb once this PR is merged to unstable.
1. Add commitments to logs and update the `Display` implementation of `KzgCommitment` to become truncated, similarly to block roots (see the sketch after this list).
I've been finding it difficult to debug scenarios involving multiple blobs for the same `(index, block_root)`. Logging the commitment will help with this, as we can match it to what exists in the block.
Example output:
```
Oct 20 21:13:36.700 DEBG Successfully verified gossip blob commitment: 0xa3c1…1cd8, index: 0, root: 0xf31e…f9de, slot: 154568
Oct 20 21:13:36.785 DEBG Successfully verified gossip block commitments: [0xa3c1…1cd8, 0x8655…02ff, 0x8d6a…955a, 0x84ac…3a1b, 0x9752…629b, 0xb9fc…20fb], root: 0xf31eeb732702e429e89057b15e1c0c631e8452e09e03cb1924353f536ef4f9de, slot: 154568, graffiti: teku/besu, service: beacon
```
Example output in a block with no blobs (this will show up pre-deneb):
```
426734:Oct 20 21:15:24.113 DEBG Successfully verified gossip block, commitments: [], root: 0x619db1360ba0e8d44ae2a0f2450ebca47e167191feecffcfac0e8d7b6c39623c, slot: 154577, graffiti: teku/nethermind, service: beacon, module: beacon_chain::beacon_chain:2765
```
2. Remove `strum::IntoStaticStr` from `AvailabilityCheckError`. This is because `IntoStaticStr` ends up dropping information inside the enum, so the KZG commitments in this error are dropped, making it more difficult to debug:
```
AvailabilityCheckError::KzgCommitmentMismatch {
blob_commitment: KzgCommitment,
block_commitment: KzgCommitment,
},
```
which is output as just `AvailabilityCheckError`
3. Some additional misc sync logs I found useful in debugging https://github.com/sigp/lighthouse/pull/4869
4. This downgrades `Block returned for single block lookup not present` to debug because I don't think we can fix the scenario that causes this unless we can cancel in-flight RPC requests.
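For item 1, a minimal sketch of the truncated `Display` idea, using a hypothetical type rather than the real `KzgCommitment`:

```rust
use std::fmt;

// A 48-byte commitment that renders as `0xa3c1…1cd8` instead of the full hex string.
struct Commitment([u8; 48]);

impl fmt::Display for Commitment {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Print the first and last two bytes, eliding the middle.
        write!(
            f,
            "0x{:02x}{:02x}…{:02x}{:02x}",
            self.0[0], self.0[1], self.0[46], self.0[47]
        )
    }
}

fn main() {
    let commitment = Commitment([0xa3; 48]);
    println!("commitment: {commitment}"); // e.g. `commitment: 0xa3a3…a3a3`
}
```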
Co-authored-by: realbigsean <seananderson33@gmail.com>
## Issue Addressed
Fixes#4697.
This also unblocks the state pruning PR (#4835), because self-healing breaks if state pruning is applied to a database with missing block roots.
## Proposed Changes
- Fill in the missing block roots between last restore point slot and split slot when upgrading to latest database version.
## Issue Addressed
Makes Lighthouse compliant with the new KZG changes in https://github.com/ethereum/consensus-specs/releases/tag/v1.4.0-beta.3
## Proposed Changes
1. Adds new official trusted setup
2. Refactors kzg to match upstream changes in https://github.com/ethereum/c-kzg-4844/pull/377
3. Updates pre-generated `BlobBundle` to work with the official trusted setup. ~~Using json here instead of ssz to account for the different value of `MaxBlobCommitmentsPerBlock` in minimal and mainnet. By using json, we can just use one pre-generated bundle for both minimal and mainnet. The size of 2 separate ssz bundles is approximately equal to one json bundle cc @jimmygchen~~
Dunno what I was doing, ssz works without any issues
4. Stores `trusted_setup` as just bytes in `eth2_network_config` so that we don't have a kzg dependency in that lib or in lcli.
Co-authored-by: realbigsean <seananderson33@gmail.com>
Co-authored-by: realbigsean <seananderson33@GMAIL.com>
## Issue Addressed
N/A
## Proposed Changes
Sends blocks and blobs from `http_api` to the network channel for publishing in a single network channel send. This avoids the overhead of multiple sends.
Also adds a metric for rpc blob retrieval duration.
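A minimal sketch of the single-send idea, using a std channel and a hypothetical message type in place of the real network channel:

```rust
use std::sync::mpsc::channel;

// Hypothetical message type: the block and all of its blobs travel together.
enum NetworkMessage {
    PublishBlockAndBlobs { block: Vec<u8>, blobs: Vec<Vec<u8>> },
}

fn main() {
    let (tx, rx) = channel();
    tx.send(NetworkMessage::PublishBlockAndBlobs {
        block: vec![0u8; 4],
        blobs: vec![vec![1u8; 4], vec![2u8; 4]],
    })
    .unwrap();

    // The network side receives everything it needs to publish in one recv.
    match rx.recv().unwrap() {
        NetworkMessage::PublishBlockAndBlobs { block, blobs } => {
            assert_eq!((block.len(), blobs.len()), (4, 2));
        }
    }
}
```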
## Issue Addressed
Addresses the recent CI failures caused by caching `blst` for the wrong CPU type.
## Proposed Changes
- Use `FEATURES: jemalloc,portable` when building Lighthouse & `lcli` in tests
- Add a new `TEST_FEATURES` and set to `portable` for all CI test jobs.
- Updated Makefiles to read the `TEST_FEATURES` environment variable, and default to none.
## Issue Addressed
#4512
## Proposed Changes
Add inactivity calculation for Altair
## Additional Info
Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
## Issue Addressed
Updates libp2p to the latest version and uses the new `SwarmBuilder`. Supersedes https://github.com/sigp/lighthouse/pull/4695/
CC @mxinden, I don't think we can use both `bandwidth_loggers` with the new syntax, right?
## Issue Addressed
Following the conversation on https://github.com/libp2p/rust-libp2p/pull/3666, the changes introduced in this PR will allow us to give more insight into whether the bandwidth limitations happen at the transport level, namely whether QUIC helps vs yamux and its [window size limitation](https://github.com/libp2p/rust-yamux/issues/162), or whether the bottleneck is at the gossipsub level.
## Proposed Changes
Introduce new QUIC and TCP bandwidth metric gauges (sketched below).
cc @mxinden (turned out to be easier, Thomas gave me a hint)
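A minimal sketch of the kind of per-transport gauges this adds, assuming the `prometheus` crate and hypothetical metric names:

```rust
use prometheus::{IntGauge, Registry};

fn main() -> prometheus::Result<()> {
    let registry = Registry::new();

    // Separate gauges per transport, so a bottleneck at the QUIC or TCP level
    // is visible independently of gossipsub-level metrics.
    let quic_bytes_sent = IntGauge::new("libp2p_quic_bytes_sent", "Bytes sent over QUIC")?;
    let tcp_bytes_sent = IntGauge::new("libp2p_tcp_bytes_sent", "Bytes sent over TCP")?;
    registry.register(Box::new(quic_bytes_sent.clone()))?;
    registry.register(Box::new(tcp_bytes_sent.clone()))?;

    quic_bytes_sent.add(4096);
    tcp_bytes_sent.add(1024);
    Ok(())
}
```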
## Issue Addressed
Fix a deadlock in the tests that was causing tests on tree-states to run for hours without finishing: https://github.com/sigp/lighthouse/actions/runs/6491194654/job/17628138360.
## Proposed Changes
Avoid using a Mutex under the Rayon `par_iter`. Instead, use an `AtomicUsize`. I've run the new version several times in a loop and it hasn't deadlocked (it was deadlocking consistently on tree-states).
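A minimal sketch of the change, assuming the `rayon` crate that the affected test code already uses:

```rust
use rayon::prelude::*;
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    // Count completed work with an atomic instead of locking a Mutex from every
    // rayon worker, so no worker can block another (or deadlock) on a lock.
    let processed = AtomicUsize::new(0);
    (0..1_000u64).into_par_iter().for_each(|_| {
        processed.fetch_add(1, Ordering::Relaxed);
    });
    assert_eq!(processed.load(Ordering::Relaxed), 1_000);
}
```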
## Additional Info
The same bug exists in unstable and tree-states, but I'm not sure why it was triggering so consistently on the tree-states branch.
## Proposed Changes
Fix the misplacement of the total block production time metric, which occurred during a previous refactor.
Total block production times are no longer skewed low (data from Holesky + blockdreamer):
```
# HELP beacon_block_production_seconds Full runtime of block production
# TYPE beacon_block_production_seconds histogram
beacon_block_production_seconds_bucket{le="0.005"} 0
beacon_block_production_seconds_bucket{le="0.01"} 0
beacon_block_production_seconds_bucket{le="0.025"} 0
beacon_block_production_seconds_bucket{le="0.05"} 0
beacon_block_production_seconds_bucket{le="0.1"} 0
beacon_block_production_seconds_bucket{le="0.25"} 0
beacon_block_production_seconds_bucket{le="0.5"} 37
beacon_block_production_seconds_bucket{le="1"} 65
beacon_block_production_seconds_bucket{le="2.5"} 66
beacon_block_production_seconds_bucket{le="5"} 66
beacon_block_production_seconds_bucket{le="10"} 66
beacon_block_production_seconds_bucket{le="+Inf"} 66
beacon_block_production_seconds_sum 34.225780452
beacon_block_production_seconds_count 66
```
## Additional Info
Cheers to @jimmygchen for helping spot this.
## Issue Addressed
Addresses #4778, and potentially fixes the flaky deneb builder test `builder_works_post_deneb`.
The [deneb builder test](c5c84f1213/beacon_node/http_api/tests/tests.rs (L5371)) has been quite flaky on our CI (`release-tests`) since it was introduced. I'm guessing that it might be timing out on the builder `get_header` call (1 second), and therefore the local payload is used, while the test expects builder payload to be used.
On my machine the [`get_header` ](c5c84f1213/beacon_node/execution_layer/src/test_utils/mock_builder.rs (L367)) call takes about 550ms, which could easily go over 1s on slower environments (our windows CI runner is much slower than the ubuntu one).
I profiled the test and it showed that `blob_to_kzg_commitment` and `compute_kzg_proof` were taking a large chunk of time, so perhaps pre-generating the blobs could help stabilise this test.
## Proposed Changes
Pre-generate blobs bundle for Mainnet and Minimal presets.
Before the change `get_header` took about **550ms**, and it's now reduced to **50-55ms** after the change. If the timeout was indeed the cause of the flaky test, this fix should stabilise it. This also brings the flaky `builder_works_post_deneb` test time from 50s to 10s (8s if we only use a single blob).
* use workspace deps in kzg crate
* delete unused blobs dp path field
* full match on fork name in engine api get payload v3
* only accept v3 payloads on get payload v3 endpoint in mock el
* remove FIXMEs related to merge transition tests
* move static tx to test utils
* default max_per_epoch_activation_churn_limit to mainnet value
* remove unnecessary async
* remove comment
* use task executor in `blob_sidecars` endpoint
* Add `blob_sidecar` event to SSE.
* Return 202 if a block is published but failed blob validation when validation level is `Gossip`.
* Move `BlobSidecar` event to `process_gossip_blob` and add test.
* Emit `BlobSidecar` event when blobs are received over rpc.
* Improve test assertions on `SseBlobSidecar`s.
* Add quotes to blob index serialization in `SseBlobSidecar`
Co-authored-by: realbigsean <seananderson33@GMAIL.com>
---------
Co-authored-by: realbigsean <seananderson33@GMAIL.com>
* Initial Commit of State LRU Cache
* Build State Caches After Reconstruction
* Cleanup Duplicated Code in OverflowLRUCache Tests
* Added Test for State LRU Cache
* Prune Cache of Old States During Maintenance
* Address Michael's Comments
* Few More Comments
* Removed Unused impl
* Last touch up
* Fix Clippy
## Issue Addressed
This PR closes https://github.com/sigp/lighthouse/issues/3237
## Proposed Changes
Remove topic weight of old topics when the fork happens.
## Additional Info
- Divided `NetworkService::start()` into `NetworkService::build()` and `NetworkService::start()` for ease of testing.
## Issue Addressed
We've had a report of sync committee performance suffering with the beacon processor HTTP API prioritisations.
## Proposed Changes
Increase the priority of `/eth/v1/beacon/blocks/head/root` requests, which are used by the validator client to form sync committee messages, here:
441fc1691b/validator_client/src/sync_committee_service.rs (L181-L188)
Additionally, avoid loading the blinded block in all but the `block_id=block_root` case. I'm not sure why we were doing this previously, I suspect it was just an oversight during the implementation of the `finalized` status on API requests.
## Additional Info
I think this change should have minimal negative impact as:
- The block root endpoint is quick to compute (a few ms max).
- Only the priority of `head` requests is increased. Analytical processes that are making lots of block root requests for past slots are unable to DoS the beacon processor, as their requests will still be processed after attestations.
## Issue Addressed
N/A
## Proposed Changes
We currently downscore a peer for sending us a block that we already have in fork choice. This is unnecessary, as we only get duplicates in Lighthouse when:
1. We published the block, so the block is already in fork choice
2. We imported the same block over rpc
In both scenarios, the peer who sent us the block over gossip is not at fault.
This isn't exploitable, as valid duplicates will get dropped by the gossipsub duplicate filter.
## Issue Addressed
Right now Lighthouse accepts zero as an ENR port. Since ENR ports should be reachable, zero ports should be rejected here.
## Proposed Changes
- update the config to use `NonZeroU16` as the ENR port for all ENR-related fields (see the sketch after this list).
- the ENR builder from config now sets the ENR port to the listening port only if the ENR port is not already set (previous behaviour) and the listening port is not zero (new behaviour)
- reject zero listening ports when used with `enr-match`.
- the boot node now rejects a zero listening port, since those are advertised.
- generate-bootnode-enr also rejects zero listening ports for the same reason.
- update local network scripts
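A minimal sketch of the `NonZeroU16` idea from the first bullet, with a hypothetical helper rather than the actual config code:

```rust
use std::num::NonZeroU16;

// Zero is not a reachable port, so it simply cannot be represented once parsed.
fn parse_enr_port(raw: u16) -> Option<NonZeroU16> {
    NonZeroU16::new(raw) // `None` for 0, `Some(port)` otherwise
}

fn main() {
    assert!(parse_enr_port(0).is_none());
    assert_eq!(parse_enr_port(9000).map(NonZeroU16::get), Some(9000));
}
```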
## Additional Info
Unrelated, but why do we overwrite `enr-x-port` values with listening ports if `enr-match` is present? We probably should only do this for ENR values that are not already set.
## Issue Addressed
https://github.com/sigp/lighthouse/issues/4543
## Proposed Changes
- Removes `NotBanned` from `BanResult`, implements `Display` and `std::error::Error` for `BanResult`, and changes the `ban_result` return type to `Option<BanResult>`, which helps returning `BanResult` on `handle_established_inbound_connection` (see the sketch after this list).
- Moves the check for banned peers from `on_connection_established` to `handle_established_inbound_connection` to start addressing #4543.
- Removes `allow_block_list` as it's now redundant? Not sure about this one, but if `PeerManager` keeps track of the banned peers, there's no need to send a `Swarm` event for `allow_block_list` to also keep that list, right?
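A minimal sketch of the reworked error type described in the first bullet, with hypothetical variants:

```rust
use std::error::Error;
use std::fmt;

// Only constructed when a peer *is* banned, so callers receive `Option<BanResult>`.
#[derive(Debug)]
enum BanResult {
    BadScore,
    BannedIp,
}

impl fmt::Display for BanResult {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            BanResult::BadScore => write!(f, "peer has a bad score"),
            BanResult::BannedIp => write!(f, "peer is connected to a banned IP"),
        }
    }
}

impl Error for BanResult {}

fn ban_result(has_bad_score: bool) -> Option<BanResult> {
    has_bad_score.then_some(BanResult::BadScore)
}

fn main() {
    assert!(ban_result(false).is_none());
    println!("{}", ban_result(true).unwrap());
    println!("{}", BanResult::BannedIp);
}
```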
## Questions
- #4543 refers:
> More specifically, implement the connection limit behaviour inside the peer manager.
@AgeManning do you mean copying `libp2p::connection_limits::Behaviour`'s code into `PeerManager`, having it as an inner `NetworkBehaviour` of `PeerManager`, or something else? If it's one of the first two, I think it probably makes more sense to keep it as it is, as that's less code to maintain.
> Also implement the banning of peers inside the behaviour, rather than passing messages back up to the swarm.
I tried to achieve this, but we still need to pass the `PeerManagerEvent::Banned` swarm event, as `DiscV5` handles its node and IP management internally and I did not find a method to query if a peer is banned. Is there anything else we can do from here?
3397612160/beacon_node/lighthouse_network/src/discovery/mod.rs (L931-L940)
Same as the question above, I did not find a way to check if `DiscV5` has the peer banned, so that we could check here and avoid sending `Swarm` events
3397612160/beacon_node/lighthouse_network/src/peer_manager/network_behaviour.rs (L168-L178)
Is there a chance we try to dial a peer that has been banned previously?
Thanks!
## Proposed Changes
- only use LH types to avoid build issues
- use warp instead of axum for the server to avoid importing the dep
## Additional Info
- wondering if we can move the `execution_layer/test_utils` to its own crate and import it as a dev dependency
- this would be made easier by separating out our engine API types into their own crate so we can use them in the test crate
- or maybe we can look into using reth types for the engine api if they are in their own crate
Co-authored-by: realbigsean <seananderson33@gmail.com>
* add processing and processed caching to the DA checker
* move processing cache out of critical cache
* get it compiling
* fix lints
* add docs to `AvailabilityView`
* some self review
* fix lints
* fix beacon chain tests
* cargo fmt
* make availability view easier to implement, start on testing
* move child component cache and finish test
* cargo fix
* cargo fix
* cargo fix
* fmt and lint
* make blob commitments not optional, rename some caches, add missing blobs struct
* Update beacon_node/beacon_chain/src/data_availability_checker/processing_cache.rs
Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>
* marks review feedback and other general cleanup
* cargo fix
* improve availability view docs
* some renames
* some renames and docs
* fix should delay lookup logic
* get rid of some wrapper methods
* fix up single lookup changes
* add a couple docs
* add single blob merge method and improve process_... docs
* update some names
* lints
* fix merge
* remove blob indices from lookup creation log
* remove blob indices from lookup creation log
* delayed lookup logging improvement
* check fork choice before doing any blob processing
* remove unused dep
* Update beacon_node/beacon_chain/src/data_availability_checker/availability_view.rs
Co-authored-by: Michael Sproul <micsproul@gmail.com>
* Update beacon_node/beacon_chain/src/data_availability_checker/availability_view.rs
Co-authored-by: Michael Sproul <micsproul@gmail.com>
* Update beacon_node/beacon_chain/src/data_availability_checker/availability_view.rs
Co-authored-by: Michael Sproul <micsproul@gmail.com>
* Update beacon_node/beacon_chain/src/data_availability_checker/availability_view.rs
Co-authored-by: Michael Sproul <micsproul@gmail.com>
* Update beacon_node/network/src/sync/block_lookups/delayed_lookup.rs
Co-authored-by: Michael Sproul <micsproul@gmail.com>
* remove duplicate deps
* use gen range in random blobs generator
* rename processing cache fields
* require block root in rpc block construction and check block root consistency
* send peers as vec in single message
* spawn delayed lookup service from network beacon processor
* fix tests
---------
Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
## Issue Addressed
#4675
## Proposed Changes
- Update local ENR (**only port numbers**) with local addresses received from libp2p (via `SwarmEvent::NewListenAddr`)
- Only use the zero port for CLI tests
## Additional Info
### See Also ###
- #4705
- #4402
- #4745
* Start testing blob pruning
* Get rid of unnecessary orphaned blob column
* Make random blob tests deterministic
* Test for pruning being blocked by finality
* Fix bugs and test fork boundary
* A few more tweaks to pruning conditions
* Tweak oldest_blob_slot semantics
* Test margin pruning
* Clean up some terminology and lints
* Schema migrations for v18
* Remove FIXME
* Prune blobs on finalization not every slot
* Fix more bugs + tests
* Address review comments
## Issue Addressed
Synchronize dependencies and edition on the workspace `Cargo.toml`
## Proposed Changes
With https://github.com/rust-lang/cargo/issues/8415 merged, it's now possible to synchronize details on the workspace `Cargo.toml`, like the metadata and dependencies.
By aligning dependencies that are shared between multiple crates on the workspace `Cargo.toml`, it's easier to avoid duplicate versions of the same dependency and therefore improve compile times.
## Additional Info
This PR also removes the no-longer-required direct dependency on the `serde_derive` crate.
Should be reviewed after https://github.com/sigp/lighthouse/pull/4639 gets merged.
closes https://github.com/sigp/lighthouse/issues/4651
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
## Issue Addressed
Closes #4751
## Proposed Changes
Prevent `state_root_at_slot` and `block_root_at_slot` from erroring out due to a call to `self.slot()?` that fails before genesis. This fixes pre-genesis queries for:
- block at slot 0
- block by genesis block root
- state at slot 0
- state by genesis state root
- state at `finalized` tag
- state at `justified` tag
## Issue Addressed
#4531
## Proposed Changes
Add SSZ support to the following block production endpoints:
GET /eth/v2/validator/blocks/{slot}
GET /eth/v1/validator/blinded_blocks/{slot}
## Additional Info
I updated a few existing tests to use SSZ instead of writing completely new tests.
## Issue Addressed
#4738
## Proposed Changes
See the above issue for details. Went with option #2 to use the async reqwest client in `Eth2NetworkConfig` and propagate the async-ness.
## Proposed Changes
This PR adds more logging prior to genesis, particularly on networks that start with execution enabled.
There are new checks using `eth_getBlockByHash/Number` to verify that the genesis state's `latest_execution_payload_header` matches the execution node's genesis block.
The first commit also runs the merge-readiness/Capella-readiness checks prior to genesis. This has two effects:
- Give more information on the execution node's status and its readiness for genesis.
- Prevent the `el_offline` status from being set on `/eth/v1/node/syncing`, which previously caused the VC to complain loudly.
I would like to include this for the Holesky reboot. It would have caught the misconfig that doomed the first Holesky.
## Additional Info
- Geth doesn't serve payload bodies prior to genesis, which is why we use the legacy methods. I haven't checked with other ELs yet.
- Currently this is logging errors with _Capella_ genesis states generated by `ethereum-genesis-generator` because the `withdrawals_root` is not set correctly (it is 0x0). This is not a blocker for Holesky, as it starts from Bellatrix (Pari is investigating).
## Issue Addressed
I went through the code base and looked for places where we acquire fork choice locks (after the deadlock bug was found and fixed in #4687), and discovered an instance where we re-acquire a lock immediately after dropping it. This shouldn't cause a deadlock like the other issue, but it is slightly less efficient.
## Issue Addressed
CI is plagued by `AddrAlreadyInUse` failures, which are caused by race conditions in allocating free ports.
This PR removes all usages of the `unused_port` crate for Lighthouse's HTTP API, in favour of passing `:0` as the listen address. As a result, the listen address isn't known ahead of time and must be read from the listening socket after it binds. This requires tying some self-referential knots, which is a little disruptive, but hopefully doesn't clash too much with Deneb 🤞
There are still a few usages of `unused_tcp4_port` left in cases where we start external processes, like the `watch` Postgres DB, Anvil, Geth, Nethermind, etc. Removing these usages is non-trivial because it's hard to read the port back from an external process after starting it with `--port 0`. We might be able to do something on Linux where we read from `/proc/`, but I'll leave that for future work.
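A minimal sketch of the `:0` pattern described above, using only the standard library:

```rust
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Ask the OS for any free port instead of pre-allocating an "unused" one...
    let listener = TcpListener::bind("127.0.0.1:0")?;
    // ...then read the actual port back from the bound socket.
    let port = listener.local_addr()?.port();
    println!("HTTP API listening on port {port}");
    Ok(())
}
```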
* move length update outside of if let in LRU cache
* add comment and use hex for G1_POINT_AT_INFINITY
* remove some misleading comments from `ssz_snappy`
* make sure we can't overflow on blobs by range requests with large counts
* downgrade gossip verification internal availability check error
* change blob rpc responses from BlockingFnWithManualSendOnIdle to BlockingFn
* remove unnecessary collect in blobs by range response
* add a comment to blobs by range response start slot logic
* typo persist_data_availabilty_checker -> persist_data_availability_checker
* unify cheap_state_advance_to_obtain_committees
## Issue Addressed
#4402
## Proposed Changes
This PR adds QUIC support to Lighthouse. As this is not officially spec'd this will only work between lighthouse <-> lighthouse connections. We attempt a QUIC connection (if the node advertises it) and if it fails we fallback to TCP.
This should be a backwards compatible modification. We want to test this functionality on live networks to observe any improvements in bandwidth/latency.
NOTE: This also removes the websockets transport as I believe no one is really using it. It should be mentioned in our release however.
Co-authored-by: João Oliveira <hello@jxs.pt>
* increase the max topic subscriptions #4581
* make the max_subscription limitation based off constants / configuration
* format
* wording & add deneb topic array
* reduce max_subscriptions_per_request to 2x
* format
* update comment
* Update comments and small cleanup.
* Deserialize into `SsePayloadAttributesV3` for Deneb fork. Update `SignedBlockContents::blobs_cloned` to return blobs for `BlindedBlockAndBlobSidecars`.
* Improve code readability and error handling when converting blinded block into full block.
## Proposed Changes
New release to replace the cancelled v4.4.0 release.
This release includes the bugfix #4687 which avoids a deadlock that was present in v4.4.0.
## Additional Info
Awaiting testing over the weekend this will be merged Monday September 4th.
## Issue Addressed
Fix a deadlock introduced in #4236 which was caught during the v4.4.0 release testing cycle (with thanks to @paulhauner and `gdb`).
## Proposed Changes
Avoid re-locking the fork choice read lock when querying a state by root in the HTTP API. This avoids a deadlock due to the lock already being held.
## Additional Info
The [RwLock docs](https://docs.rs/lock_api/latest/lock_api/struct.RwLock.html#method.read) explicitly advise against re-locking:
> Note that attempts to recursively acquire a read lock on a RwLock when the current thread already holds one may result in a deadlock.
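A minimal sketch of the fix pattern, with hypothetical types: take what you need out of the guard you already hold instead of locking again further down the call stack.

```rust
use std::sync::RwLock;

struct ForkChoice {
    head_root: u64,
}

// The helper receives plain data rather than the lock, so it can never
// attempt a second (recursive) read-lock of `fork_choice`.
fn lookup_state(head_root: u64) -> u64 {
    head_root
}

fn query_state(fork_choice: &RwLock<ForkChoice>) -> u64 {
    let guard = fork_choice.read().unwrap();
    lookup_state(guard.head_root)
}

fn main() {
    let fork_choice = RwLock::new(ForkChoice { head_root: 42 });
    assert_eq!(query_state(&fork_choice), 42);
}
```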
## Issue Addressed
Fix a bug in the storage of the linear block roots array in the freezer DB. Previously this array was always written as part of state storage (or block backfill). With state pruning enabled by #4610, these states were no longer being written and as a result neither were the block roots.
The impact is quite low, we would just log an error when trying to forwards-iterate the block roots, which for validating nodes only happens when they try to look up blocks for peers:
> Aug 25 03:42:36.980 ERRO Missing chunk in forwards iterator chunk index: 49726, service: freezer_db
Any node checkpoint synced off `unstable` is affected and has a corrupt database. If you see the log above, you need to re-sync with the fix. Nodes that haven't checkpoint synced recently should _not_ be corrupted, even if they ran the buggy version.
## Proposed Changes
- Use a `ChunkWriter` to write the block roots when states are not being stored.
- Tweak the usage of `get_latest_restore_point` so that it doesn't return a nonsense value when state pruning is enabled.
- Tweak the guarantee on the block roots array so that block roots are assumed available up to the split slot (exclusive). This is a bit nicer than relying on anything to do with the latest restore point, which is a nonsensical concept when there aren't any restore points.
## Additional Info
I'm looking forward to deleting the chunked vector code for good when we merge tree-states 😁
## Issue Addressed
NA
## Proposed Changes
Add the Holesky network config as per 36e4ff2d51/custom_config_data.
Since the genesis state is ~190MB, I've opted to *not* include it in the binary and instead download it at runtime (see #4564 for context). To download this file we have:
- A hard-coded URL for a SigP-hosted S3 bucket with the Holesky genesis state. Assuming this download works correctly, users will be none the wiser that the state wasn't included in the binary (apart from some additional logs)
- If the user provides a `--checkpoint-sync-url` flag, then LH will download the genesis state from that server rather than our S3 bucket.
- If the user provides a `--genesis-state-url` flag, then LH will download the genesis state from that server regardless of the S3 bucket or `--checkpoint-sync-url` flag.
- Whenever a genesis state is downloaded it is checked against a checksum baked into the binary.
- A genesis state will never be downloaded if it's already included in the binary.
- There is a `--genesis-state-url-timeout` flag to tweak the timeout for downloading the genesis state file.
## Log Output
Example of log output when a state is downloaded:
```bash
Aug 23 05:40:13.424 INFO Logging to file path: "/Users/paul/.lighthouse/holesky/beacon/logs/beacon.log"
Aug 23 05:40:13.425 INFO Lighthouse started version: Lighthouse/v4.3.0-bd9931f+
Aug 23 05:40:13.425 INFO Configured for network name: holesky
Aug 23 05:40:13.426 INFO Data directory initialised datadir: /Users/paul/.lighthouse/holesky
Aug 23 05:40:13.427 INFO Deposit contract address: 0x4242424242424242424242424242424242424242, deploy_block: 0
Aug 23 05:40:13.427 INFO Downloading genesis state info: this may take some time on testnets with large validator counts, timeout: 60s, server: https://sigp-public-genesis-states.s3.ap-southeast-2.amazonaws.com/
Aug 23 05:40:29.895 INFO Starting from known genesis state service: beacon
```
Example of log output when there are no URLs specified:
```
Aug 23 06:29:51.645 INFO Logging to file path: "/Users/paul/.lighthouse/goerli/beacon/logs/beacon.log"
Aug 23 06:29:51.646 INFO Lighthouse started version: Lighthouse/v4.3.0-666a39c+
Aug 23 06:29:51.646 INFO Configured for network name: goerli
Aug 23 06:29:51.647 INFO Data directory initialised datadir: /Users/paul/.lighthouse/goerli
Aug 23 06:29:51.647 INFO Deposit contract address: 0xff50ed3d0ec03ac01d4c79aad74928bff48a7b2b, deploy_block: 4367322
The genesis state is not present in the binary and there are no known download URLs. Please use --checkpoint-sync-url or --genesis-state-url.
```
## Additional Info
I tested the `--genesis-state-url` flag with all 9 Goerli checkpoint sync servers on https://eth-clients.github.io/checkpoint-sync-endpoints/ and they all worked 🎉
My IDE eagerly formatted some `Cargo.toml`. I've disabled it but I don't see the value in spending time reverting the changes that are already there.
I also added the `GenesisStateBytes` enum to avoid an unnecessary clone on the genesis state bytes baked into the binary. This is not a huge deal on Mainnet, but will become more relevant when testing with big genesis states.
When we do a fresh checkpoint sync we're downloading the genesis state to check the `genesis_validators_root` against the finalised state we receive. This is not *entirely* pointless, since we verify the checksum when we download the genesis state so we are actually guaranteeing that the finalised state is on the same network. There might be a smarter/less-download-y way to go about this, but I've run out of cycles to figure that out. Perhaps we can grab it in the next release?
## Issue Addressed
N/A
## Proposed Changes
Remove the `hidden(true)` modifier on the `--gui` flag so it shows up when running `lighthouse bn --help`
## Additional Info
We need to include this now that Siren has had its first stable release.
## Issue Addressed
#4654
## Proposed Changes
Only log error if we're unable to read slot clock after genesis.
I thought about simply downgrading the `error` to a `warn`, but it feels like it's still unnecessary noise before genesis, and it would be good to retain the error log if we're past genesis. But I'd be ok with just downgrading the log level, too.
## Issue Addressed
Closes #4473 (take 3)
## Proposed Changes
- Send a 202 status code by default for duplicate blocks, instead of 400. This conveys to the caller that the block was published, but makes no guarantees about its validity. Block relays can count this as a success or a failure as they wish.
- For users wanting finer-grained control over which status is returned for duplicates, a flag `--http-duplicate-block-status` can be used to adjust the behaviour (see the sketch after this list). A 400 status can be supplied to restore the old (spec-compliant) behaviour, or a 200 status can be used to silence VCs that warn loudly for non-200 codes (e.g. Lighthouse prior to v4.4.0).
- Update the Lighthouse VC to gracefully handle success codes other than 200. The info message isn't the nicest thing to read, but it covers all bases and isn't a nasty `ERRO`/`CRIT` that will wake anyone up.
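A minimal sketch (hypothetical names) of the status selection described in the first two bullets:

```rust
// The operator-supplied `--http-duplicate-block-status` value decides what a
// duplicate publish returns; 202 Accepted is the default.
enum PublishOutcome {
    Imported,
    Duplicate,
}

fn publish_status(outcome: &PublishOutcome, duplicate_block_status: u16) -> u16 {
    match outcome {
        PublishOutcome::Imported => 200,
        PublishOutcome::Duplicate => duplicate_block_status,
    }
}

fn main() {
    assert_eq!(publish_status(&PublishOutcome::Imported, 202), 200);
    assert_eq!(publish_status(&PublishOutcome::Duplicate, 202), 202);
    // With `--http-duplicate-block-status 400` the old behaviour is restored.
    assert_eq!(publish_status(&PublishOutcome::Duplicate, 400), 400);
}
```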
## Additional Info
I'm planning to raise a PR to `beacon-APIs` to specify that clients may return 202 for duplicate blocks. Really it would be nice to use some 2xx code that _isn't_ the same as the code for "published but invalid". I think unfortunately there aren't any suitable codes, and maybe the best fit is `409 CONFLICT`. Given that we need to fix this promptly for our release, I think using the 202 code temporarily with configuration strikes a nice compromise.
## Issue Addressed
Updates underlying dependencies and removes the ignored `RUSTSEC`s for `cargo audit`.
Also switches `procinfo` to `procfs` in `eth2` to remove the `nom` warning; `procinfo` is unmaintained, see [here](https://github.com/danburkert/procinfo-rs/issues/46).
## Issue Addressed
Fixes a bug in the handling of `--beacon-process-max-workers` which caused it to have no effect.
## Proposed Changes
For this PR I channeled @ethDreamer and saw deep into the faulty CLI config -- this bug is almost identical to the one Mark found and fixed in #4622.
## Issue Addressed
Closes #3210, closes #3211
## Proposed Changes
- Checkpoint sync from the latest finalized state regardless of its alignment.
- Add the `block_root` to the database's split point. This is _only_ added to the in-memory split in order to avoid a schema migration. See `load_split`.
- Add a new method to the DB called `get_advanced_state`, which looks up a state _by block root_, with a `state_root` as fallback (see the sketch after this list). Using this method prevents accidental accesses of the split's unadvanced state, which does not exist in the hot DB and is not guaranteed to exist in the freezer DB at all. Previously Lighthouse would look up this state _from the freezer DB_, even if it was required for block/attestation processing, which was suboptimal.
- Replace several state look-ups in block and attestation processing with `get_advanced_state` so that they can't hit the split block's unadvanced state.
- Do not store any states in the freezer database by default. All states will be deleted upon being evicted from the hot database unless `--reconstruct-historic-states` is set. The anchor info which was previously used for checkpoint sync is used to implement this, including when syncing from genesis.
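A minimal sketch of the `get_advanced_state` lookup order, with hypothetical in-memory types standing in for the real store:

```rust
use std::collections::HashMap;

struct Store {
    states_by_block_root: HashMap<u64, String>,
    states_by_state_root: HashMap<u64, String>,
}

impl Store {
    // Prefer the state keyed by block root (the advanced split state), and only
    // fall back to a plain state-root lookup.
    fn get_advanced_state(&self, block_root: u64, state_root: u64) -> Option<&String> {
        self.states_by_block_root
            .get(&block_root)
            .or_else(|| self.states_by_state_root.get(&state_root))
    }
}

fn main() {
    let mut store = Store {
        states_by_block_root: HashMap::new(),
        states_by_state_root: HashMap::new(),
    };
    store
        .states_by_block_root
        .insert(1, "advanced split state".to_string());
    assert!(store.get_advanced_state(1, 99).is_some());
}
```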
## Additional Info
Needs further testing. I want to stress-test the pruned database under Hydra.
The `get_advanced_state` method is intended to become more relevant over time: `tree-states` includes an identically named method that returns advanced states from its in-memory cache.
Co-authored-by: realbigsean <seananderson33@gmail.com>
* remove protoc and token from network tests github action
* delete unused beacon chain methods
* downgrade writing blobs to store log
* reduce diff in block import logic
* remove some todo's and deneb built in network
* remove unnecessary error, actually use some added metrics
* remove some metrics, fix missing components on publish functionality
* fix status tests
* rename sidecar by root to blobs by root
* clean up some metrics
* remove unnecessary feature gate from attestation subnet tests, clean up blobs by range response code
* pawan's suggestion in `protocol_info`, peer score in matching up batch sync block and blobs
* fix range tests for deneb
* pub block and blob db cache behind the same mutex
* remove unused errs and an empty file
* move sidecar trait to new file
* move types from payload to eth2 crate
* update comment and add flag value name
* make function private again, remove allow unused
* use reth rlp for tx decoding
* fix compile after merge
* rename kzg commitments
* cargo fmt
* remove unused dep
* Update beacon_node/execution_layer/src/lib.rs
Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>
* Update beacon_node/beacon_processor/src/lib.rs
Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>
* pawan's suggestion for vec capacity
* cargo fmt
* Revert "use reth rlp for tx decoding"
This reverts commit 5181837d81c66dcca4c960a85989ac30c7f806e2.
* remove reth rlp
---------
Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>
* Update mock builder, mev-rs dependencies, eth2 lib to support deneb builder flow
* Replace `shardingForkTime` with `cancunTime`
* Patch `ethereum-consensus` to include some deneb-devnet-8 changes
* Add deneb builder test and fix block contents deserialization
* Fix builder bid encoding issue and passing deneb builder test \o/
* Fix test compilation
* Revert `cancunTime` change in genesis to pass doppelganger tests
Since `tolerant_current_epoch` is expected to be either `current_epoch` or `current_epoch+1`, we can eliminate a case here.
Also added a comment about `compute_historic_attester_duties`, since `RelativeEpoch::from_epoch` will only allow `request_epoch == current_epoch-1` when `request_epoch < current_epoch`.
## Issue Addressed
Closes #4245
## Proposed Changes
- If an SSE channel fills up, send a comment instead of terminating the stream (see the sketch after this list).
- Add a CLI flag for scaling up the SSE buffer: `--http-sse-capacity-multiplier N`.
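A minimal sketch of the full-channel behaviour from the first bullet, using a std bounded channel in place of the real SSE plumbing:

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // Capacity 1 so the second send observes a full channel.
    let (tx, _rx) = sync_channel::<String>(1);
    tx.try_send("event: block".into()).unwrap();

    match tx.try_send("event: attestation".into()) {
        Ok(()) => {}
        Err(TrySendError::Full(_)) => {
            // Channel full: emit an SSE comment line (prefixed with ':') so the
            // stream stays open instead of being terminated.
            println!(": sse channel full, event dropped");
        }
        Err(TrySendError::Disconnected(_)) => {}
    }
}
```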
## Additional Info
~~Blocked on #4462. I haven't rebased on that PR yet for initial testing, because it still needs some more work to handle long-running HTTP threads.~~
- [x] Add CLI flag tests.
## Issue Addressed
The feature flag used to control this feature is `disable_backfill` instead of `disable-backfill`.
kudos to @michaelsproul for discovering this bug!
## Issue Addressed
Closes #3404 (mostly)
## Proposed Changes
- Remove all uses of Warp's `and_then` (which backtracks) in favour of `then` (which doesn't).
- Bump the priority of the `POST` method for `v2/blocks` to `P0`. Publishing a block needs to happen quickly.
- Run the new SSZ POST endpoints on the beacon processor. I think this was missed in between merging #4462 and #4504/#4479.
- Fix a minor issue in the validator registrations endpoint whereby an error from spawning the task on the beacon processor would be dropped.
## Additional Info
I've tested this manually and can confirm that we no longer get the dreaded `Unsupported endpoint version` errors for queries like:
```
$ curl -X POST -H "Content-Type: application/json" --data @block.json "http://localhost:5052/eth/v2/beacon/blocks" | jq
{
"code": 400,
"message": "BAD_REQUEST: WeakSubjectivityConflict",
"stacktraces": []
}
```
```
$ curl -X POST -H "Content-Type: application/octet-stream" --data @block.json "http://localhost:5052/eth/v2/beacon/blocks" | jq
{
"code": 400,
"message": "BAD_REQUEST: invalid SSZ: OffsetOutOfBounds(572530811)",
"stacktraces": []
}
```
```
$ curl "http://localhost:5052/eth/v2/validator/blocks/7067595"
{"code":400,"message":"BAD_REQUEST: invalid query: Invalid query string","stacktraces":[]}
```
However, I can still trigger it by leaving off the `Content-Type`. We can re-test this aspect with #4575.
## Issue Addressed
#4538
## Proposed Changes
Add a newtype wrapper around `DialError` that extracts error messages and logs them in a more readable format.
## Additional Info
I was able to test Transport Dial Errors in the situation where a libp2p instance attempts to ping a nonexistent peer. That error message should look something like
`A transport level error has ocurred: Connection refused (os error 61)`
AgeManning mentioned we should try fetching only the innermost error (in situations where there's a nested error). I took a stab at implementing that.
For non-transport `DialError`s, I wrote out the error messages explicitly (as per the docs). Could potentially clean things up here if that's not necessary.
Co-authored-by: Age Manning <Age@AgeManning.com>
## Issue Addressed
NA
## Proposed Changes
Rather than spawning new tasks on the tokio executor to process each HTTP API request, send the tasks to the `BeaconProcessor`. This achieves:
1. Places a bound on how many concurrent requests are being served (i.e., how many we are actually trying to compute at one time).
1. Places a bound on how many requests can be awaiting a response at one time (i.e., starts dropping requests when we have too many queued).
1. Allows the BN to prioritise HTTP requests with respect to messages coming from the P2P network (i.e., prioritise importing gossip blocks rather than serving API requests).
Presently there are two levels of priorities (sketched after this list):
- `Priority::P0`
- The beacon processor will prioritise these above everything other than importing new blocks.
- Roughly all validator-sensitive endpoints.
- `Priority::P1`
- The beacon processor will prioritise practically all other P2P messages over these, except for historical backfill things.
- Everything that's not `Priority::P0`
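A minimal sketch (hypothetical names, not the real `BeaconProcessor` queues) of how two priority levels behave:

```rust
use std::collections::VecDeque;

enum Priority {
    P0,
    P1,
}

struct HttpQueues<T> {
    p0: VecDeque<T>,
    p1: VecDeque<T>,
}

impl<T> HttpQueues<T> {
    fn new() -> Self {
        Self { p0: VecDeque::new(), p1: VecDeque::new() }
    }

    fn push(&mut self, priority: Priority, task: T) {
        match priority {
            Priority::P0 => self.p0.push_back(task),
            Priority::P1 => self.p1.push_back(task),
        }
    }

    // P0 work is always drained before any P1 work is touched.
    fn next_task(&mut self) -> Option<T> {
        self.p0.pop_front().or_else(|| self.p1.pop_front())
    }
}

fn main() {
    let mut queues = HttpQueues::new();
    queues.push(Priority::P1, "historical state query");
    queues.push(Priority::P0, "validator duties request");
    assert_eq!(queues.next_task(), Some("validator duties request"));
}
```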
The `--http-enable-beacon-processor false` flag can be supplied to revert back to the old behaviour of spawning new `tokio` tasks for each request:
```
--http-enable-beacon-processor <BOOLEAN>
The beacon processor is a scheduler which provides quality-of-service and DoS protection. When set to
"true", HTTP API requests will queued and scheduled alongside other tasks. When set to "false", HTTP API
responses will be executed immediately. [default: true]
```
## New CLI Flags
I added some other new CLI flags:
```
--beacon-processor-aggregate-batch-size <INTEGER>
Specifies the number of gossip aggregate attestations in a signature verification batch. Higher values may
reduce CPU usage in a healthy network while lower values may increase CPU usage in an unhealthy or hostile
network. [default: 64]
--beacon-processor-attestation-batch-size <INTEGER>
Specifies the number of gossip attestations in a signature verification batch. Higher values may reduce CPU
usage in a healthy network whilst lower values may increase CPU usage in an unhealthy or hostile network.
[default: 64]
--beacon-processor-max-workers <INTEGER>
Specifies the maximum concurrent tasks for the task scheduler. Increasing this value may increase resource
consumption. Reducing the value may result in decreased resource usage and diminished performance. The
default value is the number of logical CPU cores on the host.
--beacon-processor-reprocess-queue-len <INTEGER>
Specifies the length of the queue for messages requiring delayed processing. Higher values may prevent
messages from being dropped while lower values may help protect the node from becoming overwhelmed.
[default: 12288]
```
I needed to add the max-workers flag since the "simulator" flavor tests started failing with HTTP timeouts on the test assertions. I believe they were failing because the Github runners only have 2 cores and there just weren't enough workers available to process our requests in time. I added the other flags since they seem fun to fiddle with.
## Additional Info
I bumped the timeouts on the "simulator" flavor test from 4s to 8s. The prioritisation of consensus messages seems to be causing slower responses, I guess this is what we signed up for 🤷
The `validator/register` endpoint has some special handling because the relays have a bad habit of timing out on these calls. It seems like a waste of a `BeaconProcessor` worker to just wait for the builder API HTTP response, so we spawn a new `tokio` task to wait for a builder response.
I've added an optimisation for the `GET beacon/states/{state_id}/validators/{validator_id}` endpoint in [efbabe3](efbabe3252). That's the endpoint the VC uses to resolve pubkeys to validator indices, and it's the endpoint that was causing us grief. Perhaps I should move that into a new PR, not sure.
* remove closure from `check_availability_mayb_import`
* improve logging, add wrapper struct to requested ids
* improve logging
* only log if we're in deneb. Only delay lookup if we're in deneb
* fix bug in missing components check