* Most of the files changed in this PR are extra, slightly more complicated
unit tests.
* Fixed Eth1DataVotes not inheriting genesis
* Fixed Attestations simulation using the wrong slot when reconstructing
participation
* Fixed the Copy() operation on BeaconState for Eth1DataVotes
* Used correct ListSSZ type for Eth1DataVotes and HistoricalSummaries
* Fixed wrong []uint64 deltas on empty slots
What does this PR do:
* Optional Backfilling and Caplin Archive Node
* Create antiquary for historical states
* Fixed chain gaps related to the Head of the chain and the anchor of
the chain.
* Added a basic reader object to read the historical state
Adds two indexes to the validators cache.
Creates a beaconhttp package with many utilities for beacon HTTP endpoints
(future support for SSZ is baked in); a hedged sketch of the idea is below.
Started on some validator endpoints.
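For illustration only, here is a minimal sketch of the kind of utility such a package could provide: a handler wrapper that writes JSON today and leaves room for SSZ via content negotiation. The names (`endpointFunc`, `handleEndpoint`) are assumptions, not the package's actual API.

```go
package beaconhttp // sketch only; not the actual package contents

import (
	"encoding/json"
	"net/http"
)

// endpointFunc produces the response object for a beacon API endpoint.
type endpointFunc func(r *http.Request) (any, error)

// handleEndpoint wraps an endpointFunc with common encoding logic.
func handleEndpoint(fn endpointFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		data, err := fn(r)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		if r.Header.Get("Accept") == "application/octet-stream" {
			// Placeholder: SSZ encoding would be wired in here later.
			http.Error(w, "ssz not implemented", http.StatusNotAcceptable)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		_ = json.NewEncoder(w).Encode(map[string]any{"data": data})
	}
}
```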
This PR adds a request rate limiter.
The approach is to count the number of requests from each peer per minute;
if a peer exceeds the limit, its requests are blocked for a specified time
(a sketch of the idea follows the limits below).
Current limits:
- Requests limited to `5000` per minute for each handler.
- Penalty block time: `1 minute`
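A minimal sketch of the per-peer counting idea, assuming illustrative names (`peerRateLimiter`, `allow`) rather than the PR's actual identifiers:

```go
package ratelimit // illustrative package name

import (
	"sync"
	"time"
)

const (
	requestsPerMinute = 5000        // per-handler limit from the description above
	penaltyDuration   = time.Minute // blockage time once the limit is exceeded
)

type peerCounter struct {
	windowStart  time.Time
	count        int
	blockedUntil time.Time
}

// peerRateLimiter tracks request counts per peer in one-minute windows.
type peerRateLimiter struct {
	mu    sync.Mutex
	peers map[string]*peerCounter
}

func newPeerRateLimiter() *peerRateLimiter {
	return &peerRateLimiter{peers: make(map[string]*peerCounter)}
}

// allow reports whether a request from peerID may be handled now.
func (l *peerRateLimiter) allow(peerID string) bool {
	l.mu.Lock()
	defer l.mu.Unlock()

	now := time.Now()
	c, ok := l.peers[peerID]
	if !ok {
		c = &peerCounter{windowStart: now}
		l.peers[peerID] = c
	}
	// Still serving a penalty: reject outright.
	if now.Before(c.blockedUntil) {
		return false
	}
	// Start a fresh one-minute window once the previous one expires.
	if now.Sub(c.windowStart) >= time.Minute {
		c.windowStart, c.count = now, 0
	}
	c.count++
	if c.count > requestsPerMinute {
		// Over the limit: block this peer for the penalty period.
		c.blockedUntil = now.Add(penaltyDuration)
		return false
	}
	return true
}
```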
This fixes an issue where the Mumbai testnet node struggles to find
peers. Before this fix, peer counts in general testing were typically around
20 in total across eth66, eth67 and eth68, and some new nodes could
struggle to find even a single peer after days of operation.
These are the numbers after 12 hours of running on a node which
previously could not find any peers: eth66=13, eth67=76, eth68=91.
The root cause of this issue is the following:
- A significant number of Mumbai peers around the boot node return
network ids which are different from those currently available in the
DHT
- The available nodes are all consequently busy and return 'too many
peers' for long periods
These issues cause a significant number of discovery timeouts; some of
the queries never receive a response.
This causes the discovery read loop to enter a channel deadlock, which
means that no responses are processed and no timeouts fire. This causes
the discovery process in the node to stop. From then on it just
re-requests handshakes from a relatively small number of peers.
This check-in fixes the situation with the following changes:
- Remove the deadlock by running the timer in a separate goroutine so
it can run independently of the main request processing (a sketch of this
idea follows the list).
- Allow the discovery process matcher to match on port if no id match
can be established on the initial ping. This allows subsequent node
validation to proceed, and if the node proves to be valid via the
remainder of the look-up and handshake process it is used as a valid
peer.
- Completely unsolicited responses, i.e. those which come from an
unknown ip:port combination, continue to be ignored.
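A minimal sketch, under assumed names, of the first change: each pending request's timeout runs in its own goroutine and reports expiry over a channel, so timeouts keep firing even when the main response-processing loop is busy. This illustrates the approach, not the actual discovery code.

```go
package discover // illustrative package name

import "time"

// pendingRequest is a stand-in for a discovery query awaiting a reply.
type pendingRequest struct {
	id       string
	done     chan struct{} // closed when a matching reply arrives
	timeouts chan<- string // shared channel drained by the main loop
}

// armTimeout starts the timer in its own goroutine so it runs independently
// of the main request-processing loop and cannot deadlock it.
func armTimeout(req pendingRequest, d time.Duration) {
	go func() {
		t := time.NewTimer(d)
		defer t.Stop()
		select {
		case <-req.done:
			// Reply arrived in time; nothing to report.
		case <-t.C:
			// Report the expiry; the main loop drops the pending request.
			req.timeouts <- req.id
		}
	}()
}
```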
The Consensus Specification Tests take less than 8 minutes, so I think they
can run in a PR's own CI job whenever it is ready. For reference, that is
less than `make test`.
This is a non-functional change which consolidates the various packages
under metrics into the top-level package now that the dead code is
removed.
It is a precursor to the removal of VictoriaMetrics, after which all
erigon metrics code will be contained in this single package.
This is an update of:
https://github.com/ledgerwatch/erigon/pull/7846
which uses a local fork of VictoriaMetrics to include the changes that
https://github.com/anshalshukla added to the original fork we were
using.
It also includes code to address the duplicate metrics issue identified
here:
https://github.com/ledgerwatch/erigon/issues/8053
It has one more associated fix, which is to correctly add a metadata
label to counters; these were previously labelled as gauges.
e.g.
```
# TYPE p2p_peers counter
p2p_peers 0
```
rather than
```
# TYPE p2p_peers gauge
p2p_peers 0
```
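For illustration only, using the standard Prometheus Go client rather than the VictoriaMetrics fork this PR touches, the declared metric kind is what drives the `# TYPE` line in the exposition output:

```go
package metricsdemo // sketch only

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// Declared as a counter, this is exported with "# TYPE ... counter".
var requestsTotal = promauto.NewCounter(prometheus.CounterOpts{
	Name: "demo_requests_total",
	Help: "Total requests handled.",
})

// Declared as a gauge, this is exported with "# TYPE ... gauge".
var connectedPeers = promauto.NewGauge(prometheus.GaugeOpts{
	Name: "demo_peers",
	Help: "Number of currently connected peers.",
})
```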
---------
Co-authored-by: Anshal Shukla <53994948+anshalshukla@users.noreply.github.com>
Co-authored-by: Anshal Shukla <shukla.anshal85@gmail.com>
Basically, pruning is specified by the user; by default it is 1 million
(set to 100 in this PR for pruning purposes). The pruning setting for the
database is stored inside the db.
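A hedged sketch of the idea that the pruning setting lives in the database itself, so the node keeps honouring it across restarts; the table and key names here are hypothetical, not Erigon's actual schema.

```go
package prunesketch // illustrative only

import "encoding/binary"

// kvStore is a stand-in for the node's key-value database interface.
type kvStore interface {
	Put(table string, key, value []byte) error
	Get(table string, key []byte) ([]byte, error)
}

const (
	configTable      = "Config"      // hypothetical table name
	pruneDistanceKey = "pruneBlocks" // hypothetical key
	defaultDistance  = 1_000_000     // user default from the description
)

// savePruneDistance persists the user-supplied pruning distance.
func savePruneDistance(db kvStore, distance uint64) error {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, distance)
	return db.Put(configTable, []byte(pruneDistanceKey), buf)
}

// loadPruneDistance reads the stored distance, falling back to the default.
func loadPruneDistance(db kvStore) uint64 {
	v, err := db.Get(configTable, []byte(pruneDistanceKey))
	if err != nil || len(v) != 8 {
		return defaultDistance
	}
	return binary.BigEndian.Uint64(v)
}
```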
I'm not sure what caused this failure (for instance, CI for PR #7888 was
green), but currently we have the following test failure in `devel`:
```
--- FAIL: TestGnosisForkDigest (0.00s)
fork_test.go:79:
Error Trace: github.com/ledgerwatch/erigon/cl/fork/fork_test.go:79
Error: Not equal:
expected: [4]uint8{0x82, 0x4b, 0xe4, 0x31}
actual : [4]uint8{0x21, 0xa6, 0xf8, 0x36}
```
Miraculously, hive tests pass first try. YIPPIE.
Also, for the future, I added `--experimental.modular`, which enables a
secondary engine API for consensus separation.
Block building is now the responsibility of the execution module.