Embedded CL is not supported for Gnosis Chain, so it makes sense to set
`externalcl` to true by default for it.
Also, this PR sets `terminalTotalDifficultyPassed` for Gnosis Chain &
Chiado (see https://docs.gnosischain.com/updates/20221210-merge).
Regarding https://github.com/ledgerwatch/erigon/issues/6260, this adds
the flag `--p2p.allowed-ports=<porta>,<portb>` to restrict which ports
are used for sentries for different protocol versions.
The default for this flag is `30303,30304` (the first port is inherited
from the `--port` flag default).
If `--port` is changed and its new value is not present in the allowed
port list, the provided port is allowed as well, in addition to the list
provided via `--p2p.allowed-ports`.
Port picking is straightforward: we create a sentry gRPC server for each
protocol version on the first allowed port that is not already taken.
If no allowed ports are left, Erigon exits with a hint.
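A minimal sketch of the picking logic (hypothetical names, not Erigon's actual code):
```
package sentry

import (
	"fmt"
	"net"
)

// pickPort returns the first allowed port that is not already taken.
func pickPort(allowed []uint) (uint, error) {
	for _, port := range allowed {
		ln, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
		if err != nil {
			continue // port already taken by another sentry, try the next one
		}
		ln.Close() // port is free; the sentry gRPC server can bind it
		return port, nil
	}
	return 0, fmt.Errorf("no allowed ports left, extend --p2p.allowed-ports")
}
```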
This ensures that the bad-transactions filter can use the same context
each time, so an account nonce increase in the first batch of
transactions is committed to the simulation, in case the same account
appears in the next batch of transactions.
Also makes use of the new txpool candidate functionality to ensure we're
getting fresh transactions each time.
Change from:
```
begin(TxNoSync)
exec new block, index new data
do limited pruning
commit()
send notifications about new data arrival to other apps, they may start reading new data at this time (by new read transactions)
```
Change to:
```
begin(TxNoSync)
exec new block, index new data
commit() // no fsync here
send notifications about new data arrival to other apps, they may start reading new data at this time (by new read transactions)
begin()
do pruning
commit() // fsync here
```
This allows notifying earlier. Fsync (of all changes) on modern drives is
fast, but on cloud drives it takes about 1 second in the worst cases.
So there is an issue with tracing certain blocks/transactions on
Polygon, for example:
```
> '{"method": "trace_transaction","params":["0xb198d93f640343a98f90d93aa2b74b4fc5c64f3a649f1608d2bfd1004f9dee0e"],"id":1,"jsonrpc":"2.0"}'
```
gives the error `first run for txIndex 1 error: insufficient funds for
gas * price + value: address 0x10AD27A96CDBffC90ab3b83bF695911426A69f5E
have 16927727762862809 want 17594166808296934`
The reason is that this transaction is from the author of the block,
who doesn't have enough funds to pay for gas * price + value unless
they are credited the transaction fees as the block author.
The issue is that currently the APIs use the `ethash.NewFaker()`
Engine for running traces, etc., which doesn't know how to get the
author of a specific block (which is consensus dependent), as was noted
in several TODO comments.
The fix is to pass the Engine to the BaseAPI, which can then be used to
create the right Block Context. I chose to split the current Engine
interface into two, Reader and Writer, so that the BaseAPI only
receives the Reader one, which might be safer (even though it's only
used for getting the block Author).
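A minimal sketch of what such a split could look like, using simplified stand-in types rather than Erigon's actual interfaces:
```
package consensus

// Hypothetical, simplified stand-ins for Erigon's actual types.
type Address [20]byte
type Header struct{ Coinbase Address }

// EngineReader is the read-only half: enough for the RPC layer (BaseAPI)
// to build a correct block context, e.g. resolving the block author,
// which is consensus dependent.
type EngineReader interface {
	Author(header *Header) (Address, error)
}

// EngineWriter is the block-producing half used by sync/mining.
type EngineWriter interface {
	Seal(header *Header) error
}

// Engine composes both halves; BaseAPI only needs EngineReader.
type Engine interface {
	EngineReader
	EngineWriter
}
```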
Moving the txpool transaction pull into the execution phase and looping
until the gas is used up or the txpool runs dry.
Removed the concept of local vs remote transactions as comments/code
showed this split was no longer in use.
Created a `PreparedTxs` collection to satisfy the use case in the
integration tool when mining is active.
Untested locally as I have no way of mining/validating currently.
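A minimal sketch of the pull-and-fill loop described above; `Pool.Best`, `Tx`, and the batch size are assumptions, not Erigon's actual API:
```
package mining

// Hypothetical minimal types; Erigon's actual txpool/mining APIs differ.
type Tx struct{ GasLimit uint64 }

type Pool interface {
	// Best returns up to n best pending candidates (assumed behaviour).
	Best(n int) []Tx
}

// fillBlock pulls candidates in a loop until the block gas is used up
// or the pool runs dry.
func fillBlock(pool Pool, gasLeft uint64, apply func(Tx) error) {
	for gasLeft > 0 {
		txs := pool.Best(16)
		if len(txs) == 0 {
			return // pool ran dry
		}
		progressed := false
		for _, tx := range txs {
			if tx.GasLimit > gasLeft {
				continue // doesn't fit into the remaining block gas
			}
			if err := apply(tx); err != nil {
				continue // skip a failing tx, keep filling
			}
			gasLeft -= tx.GasLimit
			progressed = true
		}
		if !progressed {
			return // nothing fit or applied; avoid spinning forever
		}
	}
}
```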
Co-authored-by: Alex Sharov <AskAlexSharov@gmail.com>
feat:
1. `erigon_getLatestLogs` doesn't have to match the exact position of
the topics. It will match logs that contain the topics regardless of the
topics' positions, using the original bloom filter. It also accepts
`blockCount` & `crit.ToBlock` params for better pagination; a
hypothetical request shape is sketched below.
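A hypothetical request shape based on the description (the exact parameter names and placement are assumptions, not the confirmed API):
```
> '{"method": "erigon_getLatestLogs","params":[{"topics":[["0x..."]],"toBlock":"latest"},{"blockCount":100}],"id":1,"jsonrpc":"2.0"}'
```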
This PR includes changes required for the Delhi hard fork scheduled at
block `29638656` on the Mumbai testnet. It changes a few major parameters.
1. Sprint length - the number of bor blocks after which a new validator
mines has been reduced from 64 to 16.
2. Block time - the block time, which was increased earlier to 5 seconds
for some experiments, has been reduced to 2 seconds (along with the
backup multiplier and producer delay).
3. Base fee denominator - this field has been increased from 8 to 16 to
smoothen the effect of EIP-1559; since the maximum per-block base fee
change is the reciprocal of this denominator, this halves it from 12.5%
to 6.25%.
eth_gasPrice(): A maximum of 3 samples should be taken per block (see
the sample const).
The getBlockPrices() function takes 3 samples on the first block and
only one on the others.
In the check `s.Len() >= limit`, `s.Len()` after the first block already
contains 3 samples, so the loop is executed only once for each
subsequent block, NOT three times.
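A minimal sketch of the corrected sampling, with hypothetical names based on the description:
```
package gasprice

import "sort"

const sampleNumber = 3 // max samples taken per block

// sampleBlock appends up to sampleNumber of the cheapest tips of one
// block to the accumulated slice s. The per-block sample count must be
// tracked on its own, not inferred from len(s): checking
// `s.Len() >= limit` breaks out after one tx on every block past the
// first (the described bug).
func sampleBlock(s []uint64, blockTips []uint64) []uint64 {
	sort.Slice(blockTips, func(i, j int) bool { return blockTips[i] < blockTips[j] })
	taken := 0 // per-block counter, NOT len(s)
	for _, tip := range blockTips {
		s = append(s, tip)
		taken++
		if taken >= sampleNumber {
			break
		}
	}
	return s
}
```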
In the context of https://github.com/ledgerwatch/erigon/issues/5694, this PR
adds some fixes and improvements to the mining flow. Also, a relevant
change in the txpool (present in erigon-lib) is made here:
https://github.com/ledgerwatch/erigon-lib/pull/737
#### Changes in triggering mining in `startMining()`
The mining module didn't honour the block time: a simple 3-second
timer and a notifier from the txpool were used to trigger mining. This
would cause inconsistencies, at least with the bor consensus. Hence, a
geth-like approach is used instead for simplicity. A new-head channel
subscription is added in the `startMining()` loop, which notifies of the
addition of a new block. This makes sure that the block time is being
honoured. Moreover, the fixed 3-second timer is replaced by the
`miner.recommit` value set using flags.
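A minimal sketch of the resulting trigger loop, with hypothetical identifiers:
```
package mining

import "time"

// miningLoop mines on every new head and falls back to a recommit
// ticker; newHeads, recommit, mine, and quit are stand-ins, not
// Erigon's actual identifiers.
func miningLoop(newHeads <-chan struct{}, recommit time.Duration, mine func(), quit <-chan struct{}) {
	ticker := time.NewTicker(recommit)
	defer ticker.Stop()
	for {
		select {
		case <-newHeads: // a new block was added: mine on top of it
			mine()
			ticker.Reset(recommit) // start the recommit window afresh
		case <-ticker.C: // no new head within miner.recommit: retry
			mine()
		case <-quit:
			return
		}
	}
}
```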
#### Changes in the arrangement of calls made post mining
When all the mining stages are completed, erigon writes all the data into
a cache. It then processes the block through all the stages as it would
process a block received from P2P. In this case, some of the stages
aren't really required. For example, the block header and body download
stages are not required as the block was mined locally. Even the execution
stage is not required as the block already went through it in the mining
stages.
Now, we encountered an issue where the chain halted and kept mining
the same block again and again (a liveness issue). The root cause is
an error in a stage of its parent block. This stage turns
out to be the 4th stage, the "Block body download" stage. This stage
tries to download the block body from peers using the headers. As we
mined this block locally, we don't really need to download anything (or
process anything again). Hence, it reaches out to the cache in which we
store the block body.
Interestingly, that cache turned out to be empty for some blocks. This
was because post mining, before adding the block header and body to the
cache, we call the broadcast method, which starts the staged sync. So,
technically it's a bit uncertain at any stage whether the block header and
body have been written or not (see
[this](https://github.com/ledgerwatch/erigon/blob/devel/eth/backend.go#L553-L572)).
To achieve complete certainty, we rearranged the calls, with the write to
the cache being called first and the broadcast next. This pretty much
solves the issue, as now we're sure that we'd always have a block body in
the cache when we reach the body download stage.
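A minimal sketch of the reordering (stand-in names, not the actual code in eth/backend.go):
```
package backend

// Hypothetical stand-ins for Erigon's actual types and helpers.
type Block struct{}

func writeHeaderAndBodyToCache(b *Block) {} // persist the mined block for later stages
func broadcastNewBlock(b *Block)         {} // announces the block and kicks off staged sync

// onBlockMined orders the calls so the body-download stage always finds
// the locally mined block's body in the cache.
func onBlockMined(b *Block) {
	writeHeaderAndBodyToCache(b) // 1. write first
	broadcastNewBlock(b)         // 2. broadcast only after the write
}
```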
#### Misc changes
This PR also adds some logs in the bor consensus.
The code news up an oracle for every call, so existing cache checks always
came back as 0. Moved the cache up a layer and passed it in via the new
`gasprice.Cache` interface. I looked at putting the oracle instance onto
the ethApi itself to re-use it that way, but the backend transaction
made it a little hard as we can't re-use that. This seemed cleaner,
but happy to take feedback.
Locally takes me from ±2.5k rps to ±43k rps so quite a difference there.
(k6 with 1000 virtual users)
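A minimal sketch of what the cache layer could look like; the method names are assumptions, not the actual `gasprice.Cache` interface:
```
package gasprice

import "math/big"

type Hash [32]byte // stand-in for Erigon's common.Hash

// Cache lives one layer above the oracle, so a fresh oracle per call can
// still hit previously computed results.
type Cache interface {
	GetLatest() (Hash, *big.Int)         // last seen head + cached suggestion
	SetLatest(head Hash, price *big.Int) // update after recomputing
}
```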
While the main thread is idle (apply is on RoTx now):
- periodically flush history/index ETL collectors and load them into the DB
- prune old history/index in the DB
Also:
- fixed the RAM estimation calculation of State22
BaseFee is required in AuRa headers when
[EIP-1559](https://eips.ethereum.org/EIPS/eip-1559) is activated.
Also:
- Basic AuRa header verification
- Extract some common RLP methods
- Tiny log clean-up
Revert to the older batch size logic to keep memory usage down during the
execution phase and closer to the --batchSize flag size.
Spotted a "leak" in key generation as well. The unsafe pointer keeps the
byte slice around for as long as the batch is not committed, which
surprisingly takes up a fair chunk of memory, so I removed the unsafe
pointer usage, giving the GC a chance to clean up along the way.
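An illustrative sketch of the leak pattern and the fix (hypothetical helper names):
```
package exec

import "unsafe"

// The zero-copy conversion pins buf's backing array for as long as the
// key is retained (e.g. the lifetime of an uncommitted batch), while the
// plain conversion copies, letting the GC reclaim buf independently.
func zeroCopyKey(buf []byte) string {
	return *(*string)(unsafe.Pointer(&buf)) // pins buf until the key is dropped
}

func copiedKey(buf []byte) string {
	return string(buf) // copies; buf can be collected right away
}
```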
Moved the batch rollback into a deferred func call rather than allowing
the defers to stack up in the for loop. If this isn't going to work, just
let me know and I can change it back.
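A minimal sketch of the closure approach (stand-in types, not the actual batch API):
```
package exec

// Hypothetical stand-ins for the real batch type and loop condition.
type batch struct{}

func newBatch() *batch     { return &batch{} }
func (b *batch) Rollback() {}
func finished() bool       { return true }

// Defers inside a plain for loop only run when the enclosing function
// returns, so rollbacks stack up; wrapping each iteration in a closure
// releases every batch as soon as its iteration ends.
func run() {
	for {
		done := func() bool {
			b := newBatch()
			defer b.Rollback() // fires at the end of this iteration
			// ... execute with b ...
			return finished()
		}()
		if done {
			break
		}
	}
}
```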