Pruning is specified by the user and defaults to 1 million (in the PR it is set
to 100 for pruning purposes). The pruning settings for the database are stored
inside the db itself.
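As a rough illustration only (hypothetical names and key, not Erigon's actual prune code), this is the kind of flow being described: a user-supplied prune distance with a 1 million default, overridden to 100 here, and persisted in the DB so later runs read it back:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// defaultPruneDistance mirrors the documented default of 1 million blocks.
const defaultPruneDistance uint64 = 1_000_000

// kvStore is a stand-in for the database that persists the prune settings.
type kvStore map[string][]byte

func savePruneDistance(db kvStore, distance uint64) {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, distance)
	db["prune.distance"] = buf
}

func loadPruneDistance(db kvStore) uint64 {
	if v, ok := db["prune.distance"]; ok {
		return binary.BigEndian.Uint64(v)
	}
	return defaultPruneDistance // nothing stored yet: fall back to the default
}

func main() {
	db := kvStore{}
	savePruneDistance(db, 100)         // the PR sets it to 100 for pruning purposes
	fmt.Println(loadPruneDistance(db)) // 100, read back from the db on later runs
}
```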
The current logic is flawed because it drops all peers that are less synced.
Returning empty responses is valid according to the eth spec.
A proper approach would penalize peers from the context of the sync process,
once enough "reputation" data has been collected about a peer.
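A hedged sketch of that suggested direction (illustrative names, not an Erigon API): accumulate per-peer reputation during sync and only penalize once enough observations exist, instead of dropping a peer over a single empty reply:

```go
package sentry

// peerStats accumulates "reputation" data about a peer during the sync process.
type peerStats struct {
	usefulReplies int
	emptyReplies  int
}

// minObservations is an illustrative threshold: don't judge a peer on too little data.
const minObservations = 20

// shouldPenalize returns true only when enough data has been collected and the
// peer has been almost entirely useless; a less-synced peer with a few empty
// replies is left alone.
func shouldPenalize(s peerStats) bool {
	total := s.usefulReplies + s.emptyReplies
	if total < minObservations {
		return false
	}
	return float64(s.usefulReplies)/float64(total) < 0.1
}
```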
In order to be able to connect to erigon 2.48 peers that have
--sentry.drop-useless-peers enabled,
this adds a check to avoid replying with an empty headers list.
If we reply with an empty list, we're going to be considered useless and
kicked.
Once enough erigon nodes in the network are updated past this commit,
this check should be removed,
because returning an empty list is perfectly acceptable per the eth spec.
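A minimal sketch of the compatibility check (identifiers are illustrative, not the actual sentry code): when the headers answer is empty, skip sending the reply rather than returning an empty list:

```go
package sentry

// header is a placeholder for the decoded block header type.
type header struct{}

// headersSender is a stand-in for whatever sends the BlockHeaders reply to a peer.
type headersSender interface {
	sendBlockHeaders(requestID uint64, headers []header) error
}

func replyBlockHeaders(s headersSender, requestID uint64, headers []header) error {
	if len(headers) == 0 {
		// An empty response is valid per the eth spec, but erigon 2.48 peers
		// running --sentry.drop-useless-peers would count us as useless and
		// disconnect, so don't reply at all until such peers have upgraded.
		return nil
	}
	return s.sendBlockHeaders(requestID, headers)
}
```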
The disconnect message could either be a plain integer or a list with one
integer element. We were encoding it as a plain integer, but decoding it as a
list. Change this to be able to decode either format.
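As a sketch of the fix (using go-ethereum's rlp package here; erigon vendors an equivalent one, and the function name is hypothetical), the decoder can accept both encodings:

```go
package sentry

import (
	"fmt"

	"github.com/ethereum/go-ethereum/rlp"
)

// decodeDisconnectReason accepts a disconnect payload encoded either as a
// one-element list ([reason]) or as a plain integer (reason).
func decodeDisconnectReason(payload []byte) (uint64, error) {
	// Try the list form first: [reason]
	var asList []uint64
	if err := rlp.DecodeBytes(payload, &asList); err == nil && len(asList) > 0 {
		return asList[0], nil
	}
	// Fall back to the plain integer form: reason
	var asUint uint64
	if err := rlp.DecodeBytes(payload, &asUint); err != nil {
		return 0, fmt.Errorf("unrecognized disconnect payload: %w", err)
	}
	return asUint, nil
}
```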
Currently PropagateNewBlockHashes and BroadcastNewBlock
select a subset of all sentries by taking a `Sqrt(len(sentries))`,
and then for each sentry SendMessageToRandomPeers
selects a subset of its peers by taking `Sqrt(len(peerInfos))`.
This behaviour limits the broadcast scope when there are a lot of peers, e.g.
100 becomes 10,
but is not great with very few peers, or when the message is very important
to broadcast to everyone, which is the case for bor validator/proposer nodes.
* send to all sentries in both BroadcastNewBlock and PropagateNewBlockHashes
* remove peerCountConstrained sqrt logic in SendMessageToRandomPeers
* add maxPeers provider func as a parameter to MultiClient
* default it to 10 for eth and 0 (unlimited) for bor validators (see the sketch below)
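A hedged sketch of the resulting peer selection (illustrative names, not the exact MultiClient code): the sqrt throttling is gone and the limit comes from a maxPeers provider, where 0 means unlimited:

```go
package sentry

import "math/rand"

// maxPeersProvider supplies the broadcast limit, e.g. 10 for eth and 0
// (unlimited) for bor validators.
type maxPeersProvider func() uint64

// pickBroadcastPeers replaces the old Sqrt(len(peerInfos)) subset with a
// shuffle capped at maxPeers (or all peers when the limit is 0).
func pickBroadcastPeers(peerIDs []string, maxPeers maxPeersProvider) []string {
	limit := maxPeers()
	if limit == 0 || uint64(len(peerIDs)) <= limit {
		return peerIDs // broadcast to every connected peer
	}
	shuffled := append([]string(nil), peerIDs...)
	rand.Shuffle(len(shuffled), func(i, j int) {
		shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
	})
	return shuffled[:limit]
}
```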
---------
Co-authored-by: Mark Holt <mark@distributed.vision>
Because bor doesn't have headers,
ReadCurrentHeader returns nil as expected.
Refactor isChainPoS to only call ReadCurrentHeader/ReadTd if needed, and use
chain config IDs.
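A rough sketch of the refactored check (illustrative, not the actual isChainPoS code): decide by chain config ID first, and only read the current header/TD from the DB when the ID alone is not conclusive:

```go
package sentry

import "math/big"

// isChainPoS returns true for chains known to be PoS by chain ID alone, and
// otherwise falls back to comparing the current total difficulty (read lazily
// from the DB) against the configured terminal total difficulty.
func isChainPoS(chainID uint64, terminalTD *big.Int, readCurrentTD func() *big.Int) bool {
	switch chainID {
	case 1, 5, 11155111: // mainnet, goerli, sepolia
		return true
	}
	if terminalTD == nil {
		return false
	}
	// Only now touch the DB (ReadCurrentHeader/ReadTd in the real code).
	td := readCurrentTD()
	return td != nil && td.Cmp(terminalTD) >= 0
}
```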
Problem: it was possible to call startSync
and start sending messages before our Status was sent.
Solution: wait for the sender goroutine to finish
before calling startSync.
Also refactor handShake parameters to not require peerID and a startSync
callback.
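A minimal sketch of the ordering fix, assuming a sendStatus function that the sender goroutine runs (names are illustrative): handShake blocks until our Status has gone out, so the caller can only start syncing afterwards:

```go
package sentry

import "context"

// handShake launches the sender goroutine for our Status message and waits for
// it, so that by the time the caller invokes startSync, Status has been sent.
func handShake(ctx context.Context, sendStatus func(context.Context) error) error {
	errc := make(chan error, 1)
	go func() {
		errc <- sendStatus(ctx) // sender goroutine: write our Status to the peer
	}()
	select {
	case err := <-errc:
		return err
	case <-ctx.Done():
		return ctx.Err()
	}
}
```

With this shape, the caller runs startSync itself after handShake returns without error, instead of passing a startSync callback (and the peerID) into the handshake.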
Co-authored-by: Mark Holt <mark@distributed.vision>
[txpool](https://github.com/ledgerwatch/erigon-lib/blob/main/txpool/pool.go)
expects an `OnNewBlock` update only after the DB transaction is
committed.
This fixes, for example, a nonce gap mis-detection in Hive test
"engine-cancun/Blob Transactions On Block 1, Cancun Genesis".