If HeaderDownload.VerifyHeader always returns false, memory usage grows
rapidly because Link objects (containing headers) are not deallocated even
after the link queue is pruned.
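A minimal sketch of the kind of fix this implies, with illustrative (not Erigon's actual) type names: pruning a link must also drop its header reference so the garbage collector can reclaim it.

```go
package headerdownload

// Sketch only: names here are illustrative, not Erigon's actual types. The point
// is that pruning a link from the queue must also release its header reference,
// otherwise headers stay reachable and memory keeps growing when VerifyHeader
// keeps rejecting them.

type Header struct {
	Number uint64
}

type Link struct {
	header *Header
	next   *Link
}

type HeaderDownload struct {
	linkQueue []*Link
}

// pruneLink removes a link from the queue and clears its references so the
// header can be garbage collected even if something else still points at the Link.
func (hd *HeaderDownload) pruneLink(i int) {
	link := hd.linkQueue[i]
	link.header = nil
	link.next = nil
	hd.linkQueue = append(hd.linkQueue[:i], hd.linkQueue[i+1:]...)
}
```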
Add a check against the inbound headers before publishing a newly mined
block after the wait delay.
If the node received a block while it was processing transactions, or
waiting for its publish slot, do a final check that another node hasn't
already published a block.
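A minimal sketch of the final check, using hypothetical helper types rather than the actual mining code:

```go
package mining

// Minimal sketch with hypothetical types: before broadcasting the block we just
// mined, do one final check for an inbound block at the same height that may have
// arrived while we were processing transactions or waiting for our publish slot.

type Block struct{ Number uint64 }

type chainReader interface {
	HasBlockAtHeight(number uint64) bool
}

func publishMinedBlock(mined *Block, chain chainReader, broadcast func(*Block)) {
	if chain.HasBlockAtHeight(mined.Number) {
		// Another node already published a block for this slot; drop ours.
		return
	}
	broadcast(mined)
}
```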
This introduces _experimental_ block execution run by the embedded Silkworm
API library:
- new command-line option `silkworm.path` to enable the feature by
specifying the path to the Silkworm library
- the Silkworm API shared library is dynamically loaded on demand
- currently requires building the Silkworm library on the target machine
- available only on Linux at the moment: macOS has an issue with [stack
size](https://github.com/golang/go/issues/28024) and Windows would
require [TDM-GCC-64](https://jmeubank.github.io/tdm-gcc/); both need a
dedicated assessment effort
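A minimal illustration of how the flag could gate the feature (the wiring below is illustrative; only the `silkworm.path` option name comes from this change):

```go
package main

import (
	"flag"
	"fmt"
	"log"
	"runtime"
)

// Minimal sketch, not Erigon's actual wiring: experimental Silkworm execution is
// only enabled when a path to the shared library is given, and only on Linux.
func main() {
	silkwormPath := flag.String("silkworm.path", "",
		"path to the Silkworm API shared library (enables experimental block execution)")
	flag.Parse()

	if *silkwormPath == "" {
		return // feature disabled: fall back to the built-in execution
	}
	if runtime.GOOS != "linux" {
		log.Fatal("embedded Silkworm execution is currently supported only on Linux")
	}
	fmt.Println("would load the Silkworm shared library at", *silkwormPath)
	// loading and calling into the library would happen here (via cgo/dlopen)
}
```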
When a new Polygon feature (such as the upcoming `Aalborg` hard fork) hasn't
kicked in yet in heimdall, heimdall's endpoints will now return a 503
(Service Unavailable) status code. This PR makes sure that erigon handles
that code separately and doesn't keep retrying to fetch info. It also acts
as a notifier of the hard fork in erigon.
Similar reference PR in bor:
https://github.com/maticnetwork/bor/pull/1023
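A minimal sketch of the handling, assuming a plain net/http client rather than the actual heimdall client code:

```go
package heimdall

import (
	"errors"
	"io"
	"net/http"
)

// ErrServiceUnavailable signals that heimdall answered 503, i.e. the queried
// feature has not been enabled yet; callers should back off instead of retrying
// in a tight loop.
var ErrServiceUnavailable = errors.New("heimdall: service unavailable (feature not enabled yet)")

// fetch is a sketch, not the actual client: it handles 503 separately from
// other errors so the caller can log the pending hard fork and stop hammering
// the endpoint.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusServiceUnavailable {
		return nil, ErrServiceUnavailable
	}
	return io.ReadAll(resp.Body)
}
```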
The whitelisting calculation of the roothash should not depend on the bor API
running, which will not always be the case, for example when erigon is
configured with a separate RPC daemon. To fix this, the calculation has been
moved to Bor.
Additionally, the redundant Bor API code has been removed, as it is not
called by any code and its functionality appears to have migrated to the
turbo/jsonrpc package.
Fixes for label discrepancies in the collector for summaries etc. which have
a template that includes a quantile.
Initial native Prometheus client implementation of metrics, which is
currently turned off except for local testing and interface exports.
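For illustration, a native Prometheus summary with quantile objectives looks like this (metric names are made up, not the actual Erigon ones); each objective becomes a `quantile` label on export, which is where the label discrepancies came from:

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

// Illustrative metric only: a summary whose observations are exported with a
// "quantile" label for each objective, so the collector's label set must match
// the summary template.
var execDuration = prometheus.NewSummary(prometheus.SummaryOpts{
	Name:       "exec_stage_duration_seconds",
	Help:       "Time spent in the execution stage",
	Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
})

func init() {
	prometheus.MustRegister(execDuration)
}
```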
This builds upon #3453 and #3524, which previously implemented gas
optimizations for the access lists generated by Erigon's implementation
of the `eth_createAccessList` RPC call.
Erigon currently optimizes inclusion of the recipient address based on
how many storage keys are accessed, but does not perform the same
optimization for the sender address and precompiled contract addresses.
These changes make the same optimization available in all of these cases.
Additionally, this handles the cases of the block producer address and
created smart contract addresses. If these cases were omitted on purpose
because they heavily rely on state, it may still make sense to offer them
to users but disable them by default.
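A hedged sketch of the underlying arithmetic (gas constants from EIP-2929/2930; the exact threshold logic in Erigon may differ): addresses that are warm by default only pay off in an access list when their storage keys save more than the per-address cost.

```go
package accesslists

// Gas constants from EIP-2929 (warm/cold access) and EIP-2930 (access lists).
const (
	accessListAddressGas = 2400 // charged per address listed in an access list
	accessListStorageGas = 1900 // charged per storage key listed
	coldSloadGas         = 2100 // first SLOAD of a key not in the access list
	warmAccessGas        = 100  // subsequent (warm) accesses
)

// keepWarmByDefaultAddress reports whether an address that is already warm
// (sender, recipient, precompile, block producer) is worth keeping in the
// generated access list. Listing the address itself is pure cost, so it only
// pays off if the per-key saving outweighs it. The exact cutoff used by Erigon
// may differ; this just shows the trade-off.
func keepWarmByDefaultAddress(storageKeys int) bool {
	savingPerKey := coldSloadGas - (accessListStorageGas + warmAccessGas) // 100 gas
	return storageKeys*savingPerKey > accessListAddressGas
}
```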
Closes #8078
This change is primarily intended to support go 1.21, but as a
side-effect requires updating libp2p, which in turn triggers an update
of golang.org/x/exp which creates quite a bit of (simple) churn in the
slice sorting.
This change introduces a new `cmp.Compare` function which can be used to
return an integer satisfying the compare interface for slice sorting.
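For illustration, here is the pattern using the Go 1.21 standard-library `cmp` and `slices` packages (Erigon's own helpers may differ in detail):

```go
package main

import (
	"cmp"
	"fmt"
	"slices"
)

type header struct {
	number uint64
}

func main() {
	headers := []header{{number: 3}, {number: 1}, {number: 2}}
	// SortFunc expects a comparison returning a negative, zero or positive int,
	// which is exactly what cmp.Compare provides.
	slices.SortFunc(headers, func(a, b header) int {
		return cmp.Compare(a.number, b.number)
	})
	fmt.Println(headers) // [{1} {2} {3}]
}
```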
In order to continue to support mplex for libp2p, the change references
github.com/libp2p/go-libp2p-mplex instead. Please see the PR at
https://github.com/libp2p/go-libp2p/pull/2498 for the official upstream
comment that indicates official support for mplex being moved to this
location.
Co-authored-by: Jason Yellick <jason@enya.ai>
Apparently `log.New()` doesn't discard messages. Before this change I
saw the following in the console log:
```
[INFO] [09-27|15:25:36.225] [NewPayload] Handling new payload height=18227379 hash=0xa82630b9acf814930f18b45cad533a635076eaaf8e0ae596914d906db02ea3aa
[INFO] [09-27|15:25:36.624] [5/7 Execution] Completed on block=18227379
[INFO] [09-27|15:25:37.245] [updateForkchoice] Fork choice update: flushing in-memory state (built by previous newPayload)
[INFO] [09-27|15:25:37.854] RPC Daemon notified of new headers from=18227378 to=18227379 hash=0xa82630b9acf814930f18b45cad533a635076eaaf8e0ae596914d906db02ea3aa header sending=9.615µs log sending=336ns
[INFO] [09-27|15:25:37.854] head updated hash=0xa82630b9acf814930f18b45cad533a635076eaaf8e0ae596914d906db02ea3aa number=18227379
```
And after this change:
```
[INFO] [09-27|20:31:36.929] [NewPayload] Handling new payload height=18228897 hash=0x276fbffe9dc95fa613815238c870de89de340784b4d3d7a7bd5bf1cea6e54a97
[INFO] [09-27|20:31:37.713] [updateForkchoice] Fork choice update: flushing in-memory state (built by previous newPayload)
[INFO] [09-27|20:31:38.172] RPC Daemon notified of new headers from=18228896 to=18228897 hash=0x276fbffe9dc95fa613815238c870de89de340784b4d3d7a7bd5bf1cea6e54a97 header sending=17.073µs log sending=272ns
[INFO] [09-27|20:31:38.172] head updated hash=0x276fbffe9dc95fa613815238c870de89de340784b4d3d7a7bd5bf1cea6e54a97 number=18228897
```
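For reference, a minimal sketch of silencing such a logger, assuming a log15-style API like `github.com/ledgerwatch/log/v3` (the exact import path and call sites in Erigon may differ):

```go
package main

import "github.com/ledgerwatch/log/v3" // assumed log15-style package; import may differ

// A logger created with log.New() inherits the root handler and therefore still
// prints. To actually silence it, a discard handler has to be set explicitly.
func main() {
	logger := log.New()
	logger.SetHandler(log.DiscardHandler())
	logger.Info("this message is dropped instead of reaching the console")
}
```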
This is a follow-up to PR #7464.
Add code to the headers stage to break processing if a bor milestone
rewind is detected.
The rewind processing happens in the bor/heimdall stage; this change
just avoids unnecessary header loading
if a milestone fork is likely to be detected.
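An illustrative sketch of the early break (names are hypothetical, not the actual stage code):

```go
package stages

// Sketch only: header processing bails out early once a bor milestone rewind is
// detected, leaving the rewind itself to the bor/heimdall stage.

type Header struct{ Number uint64 }

func processHeaders(
	headers []*Header,
	milestoneRewindNeeded func(*Header) bool,
	process func(*Header) error,
) error {
	for _, h := range headers {
		if milestoneRewindNeeded(h) {
			// A milestone fork is likely; skip the remaining header loading and
			// let the bor/heimdall stage perform the rewind.
			return nil
		}
		if err := process(h); err != nil {
			return err
		}
	}
	return nil
}
```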
---------
Co-authored-by: Anshal Shukla <shukla.anshal85@gmail.com>
Some peer-review changes from the last related PR.
Addition of a flag for BlobSlots - the maximum number of blobs allowed per
account in the txpool.
Use the BlobFee from the block to validate txs in the pool.
See also https://github.com/ledgerwatch/erigon-lib/pull/1125
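A hedged sketch of the two checks; field and flag names are illustrative, not the txpool's exact identifiers:

```go
package txpool

import "errors"

var (
	errTooManyBlobs  = errors.New("txpool: account exceeds allowed blob slots")
	errBlobFeeTooLow = errors.New("txpool: max blob fee below current block blob fee")
)

type blobTx struct {
	blobCount        int
	maxFeePerBlobGas uint64
}

// validateBlobTx enforces the BlobSlots limit per account and checks the tx's
// blob fee cap against the blob base fee taken from the current block.
func validateBlobTx(tx blobTx, accountBlobCount, blobSlots int, blockBlobFee uint64) error {
	if accountBlobCount+tx.blobCount > blobSlots {
		return errTooManyBlobs
	}
	if tx.maxFeePerBlobGas < blockBlobFee {
		return errBlobFeeTooLow
	}
	return nil
}
```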
This is the second part of the bor milestone release. It contains the
following changes:
* Initialize services
  * This is a change from the initial pull request: I have moved all of the
initialization to the bor engine. To facilitate this I have just passed
in the heimdall client interface rather than the whole engine.
* Stage processing
  * This is also a change from the original PR: the code is contained in
the bor heimdall stage rather than in headers. The effect should be the
same, but this needs testing.
---------
Co-authored-by: Mark Holt <mark@disributed.vision>
Co-authored-by: Anshal Shukla <shukla.anshal85@gmail.com>
First posted [here](https://github.com/ledgerwatch/erigon/pull/8195)
with bugs, which are fixed in this PR.
When we call the debug_traceXXX API provided by Erigon, the withLog
option in tracerConfig is very helpful.
However, currently, the tracer cannot guarantee that the order of the logs
it outputs is consistent with the event logs returned in the transaction
receipt. This may make it difficult to use the output of the APIs. If
you need to access event logs in order, it is recommended to use the
event logs returned in the transaction receipt.
Here is an example to illustrate why the order of callTracer
logs and receipt event logs can be inconsistent:
in a call trace tree, if a call has multiple logs and this call has
multiple child calls, and logs are also output in the child calls, the
logs of the child calls may appear between the logs of the parent
call in the receipt.
I added an index field to each log to identify its log index in the tracer
result, and it helps a lot.
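A hedged sketch of the shape of the change (field names are illustrative, not necessarily the exact JSON keys used by the tracer):

```go
package tracers

// Sketch only: each log emitted by the call tracer carries the index it has in
// the transaction receipt, so consumers can restore receipt ordering even when
// child-call logs are interleaved with parent-call logs.
type callLog struct {
	Address string   `json:"address"`
	Topics  []string `json:"topics"`
	Data    string   `json:"data"`
	Index   uint64   `json:"index"` // position of this log in the transaction receipt
}
```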
For example, erigon on devnet8 marked a block as bad due to
"mdbx_cursor_open: cannot allocate memory":
```
[INFO] [09-12|04:57:36.041] [NewPayload] Handling new payload height=171035 hash=0x321dea00c4853ee354bebaf8aef3e63fbe06c4508271c0db4c92b0f087aedc3b
171034
[WARN] [09-12|04:57:36.069] Could not validate block err="[3/7 BlockHashes] table: Header, mdbx_cursor_open: cannot allocate memory, stack: [kv_mdbx.go:1057 kv_mdbx.go:1069 kv_mdbx.go:1077 memory_mutation.go:473 memory_mutation.go:502 etl.go:123 etl.go:96 block_writer.go:40 stage_blockhashes.go:49 default_stages.go:457 sync.go:425 sync.go:258 stageloop.go:414 backend.go:476 fork_validator.go:250 fork_validator.go:156 ethereum_execution.go:151 execution_client.go:51 chain_reader.go:252 engine_server.go:741 engine_server.go:235 engine_server.go:600 value.go:586 value.go:370 service.go:224 handler.go:494 handler.go:444 handler.go:392 handler.go:223 handler.go:316 asm_amd64.s:1598]"
[WARN] [09-12|04:57:36.069] ethereumExecutionModule.ValidateChain: chain is invalid hash=0x321dea00c4853ee354bebaf8aef3e63fbe06c4508271c0db4c92b0f087aedc3b
```
With this PR, blocks are marked as bad only on genuine protocol errors.
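An illustrative sketch of the distinction (not the actual fork validator code): only consensus-level failures should mark the block hash as bad, while internal resource errors are merely propagated.

```go
package engine

import "errors"

// ErrInvalidBlock stands in for a genuine consensus/protocol violation.
var ErrInvalidBlock = errors.New("invalid block")

// handleValidationError only remembers the block as bad for protocol violations;
// internal failures such as "mdbx_cursor_open: cannot allocate memory" are
// returned without poisoning the block hash.
func handleValidationError(err error, markBad func()) error {
	if errors.Is(err, ErrInvalidBlock) {
		markBad()
	}
	return err
}
```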