The whitelisting calculation of the root hash should not depend on the
Bor API running, which will not always be the case, for example when
Erigon is configured with a separate RPC daemon.
To fix this the calculation has been moved to Bor.
Additionally, the redundant Bor API code has been removed, as it is not
called anywhere and its functionality appears to have migrated to the
turbo/jsonrpc package.
Fixes for label discrepancies in the collector for summaries and similar
metrics whose template includes a quantile.
Initial native Prometheus client implementation of metrics, which is
currently turned off except for local testing and interface exports.
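A minimal sketch of why the quantile label matters, using the official Go client; the metric name and objectives here are illustrative, not taken from this change:
```go
package main

import "github.com/prometheus/client_golang/prometheus"

func main() {
	// A summary exports one series per requested quantile, distinguished by
	// a quantile label (e.g. exec_stage_seconds{quantile="0.9"}), which is
	// why a collector template for summaries must carry that label through.
	execTime := prometheus.NewSummary(prometheus.SummaryOpts{
		Name:       "exec_stage_seconds",
		Help:       "Execution stage duration in seconds.",
		Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
	})
	prometheus.MustRegister(execTime)
	execTime.Observe(1.27)
}
```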
This builds upon #3453 and #3524, which previously implemented gas
optimizations for the access lists generated by Erigon's implementation
of the `eth_createAccessList` RPC call.
Erigon currently optimizes inclusion of the recipient address based on
how many storage keys are accessed, but does not perform the same
optimization for sender address and precompiled contract addresses.
These changes make the same optimization available for all of these
cases.
Additionally, this handles the cases of block producer address and
created smart contract addresses. If these cases were omitted on purpose
since they heavily rely on state, it may still make sense to offer them
to users but disable them by default.
Closes #8078
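A minimal sketch of the underlying trade-off, using the EIP-2929/EIP-2930 gas constants; the helper and its threshold logic are illustrative rather than Erigon's actual implementation:
```go
package main

import "fmt"

// Gas constants from EIP-2929/EIP-2930.
const (
	accessListAddressGas = 2400 // cost of one address entry in the list
	accessListStorageGas = 1900 // cost of one storage key entry in the list
	coldSloadGas         = 2100 // cold storage read without the list
	warmReadGas          = 100  // warm storage read with the list
)

// includeWarmAddress reports whether listing an address that is warm anyway
// (sender, recipient, precompile, block producer, created contract) still
// saves gas: each listed key turns one cold SLOAD into a warm read.
func includeWarmAddress(storageKeys int) bool {
	perKeySaving := coldSloadGas - warmReadGas - accessListStorageGas // 100 gas
	return storageKeys*perKeySaving > accessListAddressGas
}

func main() {
	fmt.Println(includeWarmAddress(10)) // false: 1000 gas saved < 2400 spent
	fmt.Println(includeWarmAddress(30)) // true: 3000 gas saved > 2400 spent
}
```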
This change is primarily intended to support go 1.21, but as a
side-effect requires updating libp2p, which in turn triggers an update
of golang.org/x/exp which creates quite a bit of (simple) churn in the
slice sorting.
This change introduces a new `cmp.Compare` function which can be used to
return an integer satisfying the compare interface for slice sorting.
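A minimal sketch of the pattern, using the Go 1.21 standard-library `cmp` and `slices` packages; the `peer` type is illustrative:
```go
package main

import (
	"cmp"
	"fmt"
	"slices"
)

type peer struct {
	id    string
	score int
}

func main() {
	peers := []peer{{"b", 2}, {"a", 7}, {"c", 1}}

	// slices.SortFunc expects a comparator returning a negative, zero, or
	// positive int; cmp.Compare returns exactly that for any ordered type.
	slices.SortFunc(peers, func(x, y peer) int {
		return cmp.Compare(x.score, y.score)
	})

	fmt.Println(peers) // [{c 1} {b 2} {a 7}]
}
```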
In order to continue to support mplex for libp2p, the change references
github.com/libp2p/go-libp2p-mplex instead. Please see the PR at
https://github.com/libp2p/go-libp2p/pull/2498 for the official upstream
comment indicating that support for mplex has moved to this location.
Co-authored-by: Jason Yellick <jason@enya.ai>
This is to fix an issue with resource usage if db.size.limit is
increased from its default setting of 2TB. The limit is applied to the
chain DB, but should not be used on the consensus DB, which has smaller
data requirements; expanding both DBs results in excessive RAM being
reserved by the underlying OS.
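A minimal sketch of the intent, with a hypothetical openDB helper standing in for the real MDBX-opening code; none of these names are Erigon's actual API:
```go
package main

// Only the chain DB receives the raised map-size limit; the consensus DB
// keeps the smaller default so the OS does not reserve RAM for address
// space it will never need.
const (
	terabyte             = int64(1) << 40
	chainDBSizeLimit     = 8 * terabyte // e.g. db.size.limit raised from 2TB
	consensusDBSizeLimit = 2 * terabyte // consensus DB keeps the default
)

type db struct {
	path    string
	mapSize int64
}

// openDB is illustrative: a real implementation would set the MDBX geometry
// to mapSize when opening the environment at path.
func openDB(path string, mapSize int64) *db { return &db{path, mapSize} }

func main() {
	_ = openDB("chaindata", chainDBSizeLimit)
	_ = openDB("consensus", consensusDBSizeLimit)
}
```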
Apparently `log.New()` doesn't discard messages. Before this change I
saw the following in the console log:
```
[INFO] [09-27|15:25:36.225] [NewPayload] Handling new payload height=18227379 hash=0xa82630b9acf814930f18b45cad533a635076eaaf8e0ae596914d906db02ea3aa
[INFO] [09-27|15:25:36.624] [5/7 Execution] Completed on block=18227379
[INFO] [09-27|15:25:37.245] [updateForkchoice] Fork choice update: flushing in-memory state (built by previous newPayload)
[INFO] [09-27|15:25:37.854] RPC Daemon notified of new headers from=18227378 to=18227379 hash=0xa82630b9acf814930f18b45cad533a635076eaaf8e0ae596914d906db02ea3aa header sending=9.615µs log sending=336ns
[INFO] [09-27|15:25:37.854] head updated hash=0xa82630b9acf814930f18b45cad533a635076eaaf8e0ae596914d906db02ea3aa number=18227379
```
And after this change:
```
[INFO] [09-27|20:31:36.929] [NewPayload] Handling new payload height=18228897 hash=0x276fbffe9dc95fa613815238c870de89de340784b4d3d7a7bd5bf1cea6e54a97
[INFO] [09-27|20:31:37.713] [updateForkchoice] Fork choice update: flushing in-memory state (built by previous newPayload)
[INFO] [09-27|20:31:38.172] RPC Daemon notified of new headers from=18228896 to=18228897 hash=0x276fbffe9dc95fa613815238c870de89de340784b4d3d7a7bd5bf1cea6e54a97 header sending=17.073µs log sending=272ns
[INFO] [09-27|20:31:38.172] head updated hash=0x276fbffe9dc95fa613815238c870de89de340784b4d3d7a7bd5bf1cea6e54a97 number=18228897
```
This is a follow-up to PR #7464.
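A minimal sketch of silencing such a logger, assuming the log15-style SetHandler and DiscardHandler API that Erigon's logging package follows:
```go
package main

import "github.com/ledgerwatch/log/v3"

func main() {
	// log.New() derives a child of the root logger, so it inherits the root
	// handler and keeps printing to the console. To actually drop messages,
	// install a discard handler (DiscardHandler here is an assumption).
	quiet := log.New()
	quiet.SetHandler(log.DiscardHandler())
	quiet.Info("this message is dropped") // no console output
}
```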
This is a fix for at least one cause of this issue:
https://github.com/ledgerwatch/erigon/issues/8212.
It happens after the end of the snapshot load, because the snapshot
processing introduced a couple of months ago does not deal with
validation of the headers at the start of the chain.
I have also added a fix to the persistence so that the last snapshot is
recorded and subsequent runs are not forced to process the whole
snapshot from the start (a sketch of the idea follows below).
The relationship between this and memory usage is that unprocessed
headers lead to a queue of pending headers around 5GB in size. I have
not changed any header parameters, so a prolonged stall in header
processing will likely still result in a similar level of memory
growth.
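A minimal sketch of the persistence idea, with an in-memory map standing in for the real database bucket; the key name and encoding are illustrative:
```go
package main

import (
	"encoding/binary"
	"fmt"
)

// progress stands in for a real database bucket.
var progress = map[string][]byte{}

// saveSnapshotProgress records the last block covered by processed snapshots
// so the next run can resume past it instead of replaying everything.
func saveSnapshotProgress(lastBlock uint64) {
	v := make([]byte, 8)
	binary.BigEndian.PutUint64(v, lastBlock)
	progress["snapshot_progress"] = v
}

// loadSnapshotProgress returns 0 when nothing has been recorded yet, meaning
// the snapshot must be processed from the start.
func loadSnapshotProgress() uint64 {
	v, ok := progress["snapshot_progress"]
	if !ok {
		return 0
	}
	return binary.BigEndian.Uint64(v)
}

func main() {
	saveSnapshotProgress(18_000_000)
	fmt.Println(loadSnapshotProgress()) // 18000000
}
```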
Add code to the headers stage to break processing if a bor milestone
rewind is detected. The rewind itself happens in the bor/heimdall
stage; this change just avoids unnecessary header loading if a
milestone fork is likely to be detected, as sketched below.
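A minimal sketch of the early exit; the function and flag names are illustrative, not the actual stage code:
```go
package main

type header struct{ number uint64 }

// apply stands in for the real header insertion logic.
func apply(batch []header) {}

// processHeaders checks a flag maintained by the bor/heimdall stage and
// stops loading headers once a milestone rewind is pending; the rewind
// itself still happens in bor/heimdall.
func processHeaders(batches [][]header, milestoneRewindPending func() bool) {
	for _, batch := range batches {
		if milestoneRewindPending() {
			return // avoid queueing headers the rewind would discard anyway
		}
		apply(batch)
	}
}

func main() {
	processHeaders(nil, func() bool { return false })
}
```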
---------
Co-authored-by: Anshal Shukla <shukla.anshal85@gmail.com>
Code to support a React-based UI for diagnostics:
* pprof, prometheus and diagnostics rationalized to use a single router
(i.e. they can all run on the same port), as sketched below
* support_cmd updated to support node routing (previously only the
first node was routed)
* Multi-content support in the router tunnel (application/octet-stream &
application/json)
* Routing requests changed from using HTTP forms to REST + query params
* REST query requests can now be made against the Erigon base port and
diagnostics with the same URL format/params
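A minimal sketch of the single-router idea using the standard library and the Prometheus handler; the paths and the diagnostics payload are placeholders, not Erigon's actual wiring:
```go
package main

import (
	"net/http"
	"net/http/pprof"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	mux := http.NewServeMux()

	// pprof on the shared router instead of http.DefaultServeMux.
	mux.HandleFunc("/debug/pprof/", pprof.Index)
	mux.HandleFunc("/debug/pprof/profile", pprof.Profile)

	// Prometheus metrics on the same port.
	mux.Handle("/debug/metrics/prometheus", promhttp.Handler())

	// Diagnostics answering REST-style requests with query params.
	mux.HandleFunc("/debug/diag/", func(w http.ResponseWriter, r *http.Request) {
		node := r.URL.Query().Get("node") // query params instead of HTTP forms
		_ = node
		w.Header().Set("Content-Type", "application/json")
		w.Write([]byte(`{}`))
	})

	if err := http.ListenAndServe("localhost:6060", mux); err != nil {
		panic(err)
	}
}
```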
---------
Co-authored-by: dvovk <vovk.dimon@gmail.com>
Co-authored-by: Mark Holt <mark@disributed.vision>