Impacted APIs:
- eth_getTransactionByHash()
- eth_getTransactionByNumber()
- eth_getTransactionByBlockHashAndIndex()
- eth_getTransactionByBlockNumberAndIndex()
- eth_getBlockByHash()
- eth_getBlockByNumber()
1) In case of legacy transactions the chainId field should be inserted only if V is not 27/28.
This also seems to be the Geth/v1.10.23 behaviour via Infura.
2) In case of dynamicFee/AccessList transactions the access_list should be included in the response even if it is empty.
This is done correctly in cmd/rpcdaemon/commands/eth_api.go and NOT in internal/ethapi/api.go.
This also seems to be the Geth/v1.10.23 behaviour via Infura.
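As a rough illustration of the two rules above (not the actual Erigon marshalling code; this sketch uses go-ethereum types and a hypothetical `marshalTxFields` helper):

```go
package main

import (
	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/ethereum/go-ethereum/core/types"
)

// marshalTxFields sketches the rule: for legacy transactions, V == 27 or 28
// means the tx is pre-EIP-155, so no chainId field is emitted. Typed
// (dynamic-fee / access-list) transactions always carry chainId and an
// accessList, even when the list is empty.
func marshalTxFields(tx *types.Transaction) map[string]interface{} {
	fields := map[string]interface{}{
		"hash":  tx.Hash(),
		"nonce": hexutil.Uint64(tx.Nonce()),
	}
	if tx.Type() == types.LegacyTxType {
		v, _, _ := tx.RawSignatureValues()
		if v.Uint64() != 27 && v.Uint64() != 28 { // EIP-155 protected
			fields["chainId"] = (*hexutil.Big)(tx.ChainId())
		}
		return fields
	}
	// Dynamic-fee / access-list transactions.
	fields["chainId"] = (*hexutil.Big)(tx.ChainId())
	al := tx.AccessList()
	if al == nil {
		al = types.AccessList{} // serialise as [] instead of omitting the field
	}
	fields["accessList"] = al
	return fields
}
```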
eth_gasPrice(): a maximum of 3 samples should be taken per block (see the sample const).
The getBlockPrices() function takes 3 samples on the first block but only one on each of the others:
in the check `s.Len() >= limit`, `s.Len()` already holds 3 samples after the first block, so for every subsequent block the loop body runs only once, NOT three times.
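A minimal sketch of the intended per-block sampling, assuming a per-block slice is used as the counter instead of the length of a global accumulator (names are illustrative, not the actual Erigon code):

```go
package main

import (
	"math/big"
	"sort"
)

const sampleNumber = 3 // max samples taken per block

// blockPrices returns up to sampleNumber of the lowest tip values from one
// block. Counting with a per-block slice (rather than checking the length of
// a global accumulator) is what guarantees three samples per block instead of
// three samples in total.
func blockPrices(txTips []*big.Int, ignoreUnder *big.Int) []*big.Int {
	sorted := make([]*big.Int, len(txTips))
	copy(sorted, txTips)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].Cmp(sorted[j]) < 0 })

	prices := make([]*big.Int, 0, sampleNumber)
	for _, tip := range sorted {
		if ignoreUnder != nil && tip.Cmp(ignoreUnder) < 0 {
			continue
		}
		prices = append(prices, tip)
		if len(prices) >= sampleNumber { // per-block limit, not a global one
			break
		}
	}
	return prices
}
```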
Fixes an issue where, if the peer count didn't change much, we'd send the message to the same peers nearly every time. We now pick new random peers on every call to SendMessageToRandomPeers().
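A minimal sketch of the idea, shuffling a fresh copy of the peer list on every call (the `Peer` type and `maxPeers` parameter are placeholders, not Erigon's actual types):

```go
package main

import "math/rand"

type Peer struct{ ID string }

// randomPeers re-shuffles the full peer set on every call, so repeated calls
// with an unchanged peer list still target different random subsets instead
// of the same leading peers each time.
func randomPeers(all []*Peer, maxPeers int) []*Peer {
	shuffled := make([]*Peer, len(all))
	copy(shuffled, all)
	rand.Shuffle(len(shuffled), func(i, j int) {
		shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
	})
	if maxPeers > len(shuffled) {
		maxPeers = len(shuffled)
	}
	return shuffled[:maxPeers]
}
```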
In the context of https://github.com/ledgerwatch/erigon/issues/5694, this PR adds some fixes and improvements to the mining flow. A related change in the txpool (in erigon-lib) is made here:
https://github.com/ledgerwatch/erigon-lib/pull/737
#### Changes in triggering mining in `startMining()`
The mining module didn't honour the block time: a simple 3-second timer and a notifier from the txpool were used to trigger mining. This caused inconsistencies, at least with the bor consensus. Hence, a Geth-like approach is used instead for simplicity. A new-head channel subscription is added in the `startMining()` loop, which notifies on the addition of a new block and thus makes sure the block time is honoured. Moreover, the fixed 3-second timer is replaced by the `miner.recommit` value set via flags.
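A rough sketch of the resulting loop shape (channel and parameter names are illustrative, not the actual Erigon code):

```go
package main

import "time"

// miningLoop sketches the trigger logic described above: a new-head
// notification or the miner.recommit interval starts the next mining round.
func miningLoop(newHeadCh <-chan struct{}, recommit time.Duration,
	quit <-chan struct{}, startMining func()) {

	recommitTimer := time.NewTimer(recommit)
	defer recommitTimer.Stop()

	for {
		select {
		case <-newHeadCh:
			// A new block was added to the chain, so the block time has been
			// honoured; kick off the next round and restart the timer.
			startMining()
			recommitTimer.Reset(recommit)
		case <-recommitTimer.C:
			// The miner.recommit interval elapsed without a new head.
			startMining()
			recommitTimer.Reset(recommit)
		case <-quit:
			return
		}
	}
}
```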
#### Changes in the arrangement of calls made post mining
When all the mining stages are completed, erigon writes all the data to a cache. It then processes the block through all the stages, as it would process a block received from P2P. In this case, some of the stages aren't really required. For example, the block header and body download stages are not required because the block was mined locally. Even the execution stage is not required, as the block already went through it in the mining stages.
Now, we encountered an issue where the chain was halted and kept mining the same block again and again (a liveness issue). The root cause was an error in a stage for the block's parent. This turned out to be the 4th stage, the "Block body download" stage, which tries to download block bodies from peers using the headers. As we mined this block locally, we don't really need to download anything (or process anything again), so the stage reaches out to the cache where we store the block body.
Interestingly, that cache turned out to be empty for some blocks. This was because, post mining, before adding the block header and body to the cache, we call the broadcast method which starts the staged sync. So, technically it's a bit uncertain at any stage whether the block header and body have been written or not (see
[this](https://github.com/ledgerwatch/erigon/blob/devel/eth/backend.go#L553-L572)).
To achieve complete certainty, we rearranged the calls so that the write to the cache happens first and the broadcast next. This pretty much solves the issue, as now we're sure that we'll always have a block body in the cache when we reach the body download stage.
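In sketch form, the reordering boils down to something like this (the two functions are placeholders for the actual calls in eth/backend.go):

```go
package main

import "github.com/ethereum/go-ethereum/core/types"

// Placeholders for the real cache-write and broadcast calls in eth/backend.go.
func writeMinedBlockToCache(*types.Block) {}
func broadcastMinedBlock(*types.Block)    {}

// onBlockMined shows the fixed ordering: cache the header and body first,
// then broadcast (which kicks off the staged sync), so the body-download
// stage always finds the locally mined block body in the cache.
func onBlockMined(block *types.Block) {
	writeMinedBlockToCache(block)
	broadcastMinedBlock(block)
}
```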
#### Misc changes
This PR also adds some logs to the bor consensus.
For example, see https://gnosisscan.io/block/24938312.
This fixes the following errors:
```
t=2022-11-11T12:39:01+0000 lvl=dbug msg="Handling incoming message" stream=RecvMessage err="newBlock66: too large block difficulty: bitlen 128"
t=2022-11-11T12:39:01+0000 lvl=dbug msg="Handling incoming message" stream=RecvMessage err="newBlock66: too large block difficulty: bitlen 128"
t=2022-11-11T12:39:01+0000 lvl=dbug msg="Handling incoming message" stream=RecvMessage err="newBlock66: too large block difficulty: bitlen 128"
```
The code created a new oracle for every call, so the existing cache checks always came back as 0. Moved the cache up a layer and pass it in via the new `gasprice.Cache` interface. I looked at putting the oracle instance onto the ethApi itself to reuse it that way, but the backend transaction made that a little hard, as we can't reuse it. This seemed cleaner, but happy to take feedback.
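A rough sketch of the kind of cache interface involved (method signatures here are illustrative, not necessarily the exact `gasprice.Cache` definition):

```go
package gasprice

import (
	"math/big"

	"github.com/ethereum/go-ethereum/common"
)

// Cache lets the computed gas-price suggestion outlive a single RPC call.
// The cache is owned by a longer-lived layer and passed into the oracle, so
// constructing a fresh oracle per call no longer throws the result away.
type Cache interface {
	GetLatest() (common.Hash, *big.Int)         // last head hash and suggested price
	SetLatest(hash common.Hash, price *big.Int) // store the suggestion for that head
}
```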
Locally this takes me from ~2.5k rps to ~43k rps, so quite a difference there (k6 with 1000 virtual users).