When running from forks, results would need to be uploaded as artifacts
and parsed separately. This is no longer true when running as a
scheduled job, so we can simplify the workflow and get a speedup by
reading the results straight from the workspace.
- Added a new method and type for contract transactions.
- Added functions to emit fallback events from contract transactions.
- Added GetLogs request generator
- Added tests for GetLogs request generator
- Remove hive run from CI yaml
- Create nightly hive yaml which runs hive at 1am each day on a hosted
runner (can move to self-hosted if need be in future)
https://github.com/ledgerwatch/erigon/issues/5965. This PR starts the
implementation of syncing from a checkpoint to head.
Co-authored-by: Andrew Ashikhmin <34320705+yperbasis@users.noreply.github.com>
Co-authored-by: Alex Sharov <AskAlexSharov@gmail.com>
Co-authored-by: Giulio rebuffo <giulio.rebuffo@gmail.com>
Co-authored-by: Nebojsa Urosevic <nebojsa94@users.noreply.github.com>
Co-authored-by: Mark Shields <4237425+beejiujitsu@users.noreply.github.com>
Co-authored-by: Manav Darji <manavdarji.india@gmail.com>
Co-authored-by: Enrique Jose Avila Asapche <eavilaasapche@gmail.com>
Co-authored-by: Nathan (Blaise) Bruer <thegreatall@gmail.com>
Co-authored-by: lupin012 <58134934+lupin012@users.noreply.github.com>
Co-authored-by: Philippe Schommers <philippe@schommers.be>
impacted API:
eth_getTransactionByHash()
eth_getTransactionByNumber()
eth_getTransactionByBlockHashAndIndex()
eth_getTransactionByBlockNumberAndIndex()
eth_getBlockByHash()
eth_getBlockByNumber()
1) In the case of legacy transactions, the chainId field should be inserted
only if V is not 27/28 (i.e. the transaction is EIP-155 replay-protected).
This also seems to be the Geth/v1.10.23 behaviour via Infura.
2) In the case of dynamicFee/AccessList transactions, the access_list should
be inserted in the response even if empty.
This is done correctly in cmd/rpcdaemon/commands/eth_api.go and NOT in
internal/ethapi/api.go.
This also seems to be the Geth/v1.10.23 behaviour via Infura.
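To make the two rules concrete, here is a minimal, self-contained Go sketch; `RPCTransaction`, `AccessTuple` and the two fill helpers are simplified stand-ins for illustration, not the actual types or functions in eth_api.go:

```go
package main

import (
	"encoding/json"
	"fmt"
	"math/big"
)

// Simplified stand-ins for the real response types; names are illustrative only.
type AccessTuple struct {
	Address     string   `json:"address"`
	StorageKeys []string `json:"storageKeys"`
}

type RPCTransaction struct {
	ChainID    *big.Int       `json:"chainId,omitempty"`    // omitted when nil
	AccessList *[]AccessTuple `json:"accessList,omitempty"` // pointer so an empty list still serialises as []
	V          *big.Int       `json:"v"`
}

// Rule (1): for legacy transactions, set chainId only when V is not 27/28,
// i.e. the transaction is EIP-155 replay-protected.
func fillLegacy(v, chainID *big.Int) RPCTransaction {
	out := RPCTransaction{V: v}
	if v.Cmp(big.NewInt(27)) != 0 && v.Cmp(big.NewInt(28)) != 0 {
		out.ChainID = chainID
	}
	return out
}

// Rule (2): for dynamicFee/AccessList transactions, always emit accessList,
// even when it is empty.
func fillDynamicFee(v, chainID *big.Int, al []AccessTuple) RPCTransaction {
	if al == nil {
		al = []AccessTuple{} // serialise as [] rather than dropping the key
	}
	return RPCTransaction{V: v, ChainID: chainID, AccessList: &al}
}

func main() {
	pre155, _ := json.Marshal(fillLegacy(big.NewInt(28), big.NewInt(1)))
	fmt.Println(string(pre155)) // {"v":28}: no chainId for an unprotected legacy tx
	dyn, _ := json.Marshal(fillDynamicFee(big.NewInt(1), big.NewInt(1), nil))
	fmt.Println(string(dyn)) // {"chainId":1,"accessList":[],"v":1}
}
```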
eth_gasPrice(): a maximum of 3 samples should be taken per block (see the
sample const).
The getBlockPrices() function takes 3 samples from the first block but only
one from each of the following blocks: in the check s.Len() >= limit, s
already holds 3 samples after the first block, so the inner loop runs only
once per subsequent block instead of three times.
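A minimal Go sketch of the corrected per-block counting (`collectSamples`, `sampleLimit` and the plain uint64 tips are illustrative stand-ins, not the oracle's actual identifiers):

```go
package gasprice // illustrative package name

const sampleLimit = 3 // per-block sample limit (the "sample" const)

// The buggy break condition compared the size of the accumulated sample set
// against the per-block limit, so only the first block contributed 3 samples.
func collectSamples(blocks [][]uint64) []uint64 {
	var s []uint64 // samples accumulated across all blocks
	for _, txTips := range blocks {
		count := 0 // count samples taken from THIS block, not len(s)
		for _, tip := range txTips {
			s = append(s, tip)
			count++
			if count >= sampleLimit { // buggy version: if len(s) >= sampleLimit
				break
			}
		}
	}
	return s
}
```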
Fixes an issue where, if the peer count didn't change much, we'd send the
message to the same peers nearly every time.
We now pick new random peers on every call to
SendMessageToRandomPeers().
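A rough sketch of the new behaviour, with `PeerID` and the `send` callback as hypothetical stand-ins for the real sentry types:

```go
package sentry // illustrative package name

import "math/rand"

type PeerID string

// Re-sample the peer set on every call instead of reusing a previously
// chosen subset; this is a sketch of the idea, not the actual implementation.
func sendToRandomPeers(peers []PeerID, n int, send func(PeerID)) {
	shuffled := append([]PeerID(nil), peers...) // copy so the caller's slice is untouched
	rand.Shuffle(len(shuffled), func(i, j int) {
		shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
	})
	if n > len(shuffled) {
		n = len(shuffled)
	}
	for _, p := range shuffled[:n] { // a fresh random subset on every call
		send(p)
	}
}
```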
In the context of https://github.com/ledgerwatch/erigon/issues/5694, this PR
adds some fixes and improvements to the mining flow. A related change in the
txpool (in erigon-lib) is made here:
https://github.com/ledgerwatch/erigon-lib/pull/737
#### Changes in triggering mining in `startMining()`
The mining module didn't honour the block time: a simple 3-second timer and a
notifier from the txpool were used to trigger mining. This caused
inconsistencies, at least with the bor consensus. Hence, a geth-like approach
is used instead for simplicity. A new-head channel subscription is added in
the `startMining()` loop, which notifies when a new block is added to the
chain and thus ensures that the block time is honoured. Moreover, the fixed
3-second timer is replaced by the `miner.recommit` value set via flags.
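A simplified sketch of the resulting trigger loop; the channel and function names are illustrative, not the actual `startMining()` code:

```go
package mine // illustrative package name

import "time"

func miningLoop(newHeadCh <-chan struct{}, recommit time.Duration, trigger func(), quit <-chan struct{}) {
	ticker := time.NewTicker(recommit) // replaces the fixed 3-second timer; driven by the miner.recommit flag
	defer ticker.Stop()
	for {
		select {
		case <-newHeadCh: // a new block was added to the chain: honour the block time, start on the next one
			ticker.Reset(recommit)
			trigger()
		case <-ticker.C: // no new head within the recommit interval: re-trigger mining anyway
			trigger()
		case <-quit:
			return
		}
	}
}
```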
#### Changes in the arrangement of calls made post mining
When all the mining stages are completed, erigon writes all the data into a
cache. It then processes the block through all the stages as it would process
a block received via P2P. In this case, some of the stages aren't really
required: the block header and body download stages are not needed because the
block was mined locally, and even the execution stage is not needed because
the block already went through it in the mining stages.
We then encountered an issue where the chain halted and kept mining the same
block again and again (a liveness issue). The root cause was an error in a
stage while processing its parent block. This turned out to be the 4th stage,
the "Block body download" stage, which tries to download block bodies from
peers using the headers. As we mined this block locally, we don't need to
download anything (or process anything again), so the stage reads the block
body from the cache we keep for it.
Interestingly, that cache turned out to be empty for some blocks. This was
because, post mining, before adding the block header and body to the cache,
we called the broadcast method which starts the staged sync. So, technically,
it was uncertain at any stage whether the block header and body had been
written yet (see
[this](https://github.com/ledgerwatch/erigon/blob/devel/eth/backend.go#L553-L572)).
To achieve certainty, we rearranged the calls so that the write to the cache
happens first and the broadcast happens next. This solves the issue, as we are
now sure that the block body is always in the cache when we reach the body
download stage.
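A minimal sketch of the reordering, with hypothetical names (`MinedBlock`, `BlockCache` and `broadcast` are stand-ins, not the code in eth/backend.go):

```go
package stagedsync // illustrative package name

// The point is purely the ordering: write the freshly mined header and body
// to the cache first, then broadcast / kick off the staged sync.
type MinedBlock struct{ Header, Body []byte }

type BlockCache interface {
	Put(header, body []byte)
}

func handleMinedBlock(b *MinedBlock, cache BlockCache, broadcast func(*MinedBlock)) {
	cache.Put(b.Header, b.Body) // 1) the body-download stage can now always find the body
	broadcast(b)                // 2) only then announce the block / start the staged sync
}
```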
#### Misc changes
This PR also adds some logs in bor consensus.