Bumps [golang.org/x/net](https://github.com/golang/net) from 0.15.0 to
0.17.0.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="b225e7ca6d"><code>b225e7c</code></a>
http2: limit maximum handler goroutines to MaxConcurrentStreams</li>
<li><a
href="88194ad8ab"><code>88194ad</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="2b60a61f1e"><code>2b60a61</code></a>
quic: fix several bugs in flow control accounting</li>
<li><a
href="73d82efb96"><code>73d82ef</code></a>
quic: handle DATA_BLOCKED frames</li>
<li><a
href="5d5a036a50"><code>5d5a036</code></a>
quic: handle streams moving from the data queue to the meta queue</li>
<li><a
href="350aad2603"><code>350aad2</code></a>
quic: correctly extend peer's flow control window after MAX_DATA</li>
<li><a
href="21814e71db"><code>21814e7</code></a>
quic: validate connection id transport parameters</li>
<li><a
href="a600b3518e"><code>a600b35</code></a>
quic: avoid redundant MAX_DATA updates</li>
<li><a
href="ea633599b5"><code>ea63359</code></a>
http2: check stream body is present on read timeout</li>
<li><a
href="ddd8598e56"><code>ddd8598</code></a>
quic: version negotiation</li>
<li>Additional commits viewable in <a
href="https://github.com/golang/net/compare/v0.15.0...v0.17.0">compare
view</a></li>
</ul>
</details>
<br />
[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=golang.org/x/net&package-manager=go_modules&previous-version=0.15.0&new-version=0.17.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/ledgerwatch/erigon/network/alerts).
</details>
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: yperbasis <andrey.ashikhmin@gmail.com>
This fixes two related issues:
* Now that the Bor consensus engine is required for queries, it can't be
created based on the mere presence of a db directory; it must be created
based on the chain config read from the db. Relying on DB presence causes
Bor to get instantiated for non-Bor chains, which breaks them.
* At the moment, eth_calls on a remote daemon don't check Bor headers
prior to calling the EVM code, as they were just using a fake Ethash
instance, which performs ETH header validation only.
The current version is mostly working but needs adapting to perform lazy
initialization of the engine.
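As a rough illustration of the selection rule (hypothetical types and names, not Erigon's actual API): the engine is chosen from the chain config stored in the DB, so a non-Bor chain never gets a Bor engine by accident.

```go
package main

import "fmt"

// BorConfig / ChainConfig mirror the idea of a chain config read from the DB;
// Bor is non-nil only for Polygon/Bor chains. Names are illustrative.
type BorConfig struct{}

type ChainConfig struct {
	Bor *BorConfig
}

// Engine is a stand-in for the consensus engine interface.
type Engine interface {
	Name() string
}

type ethashEngine struct{}

func (ethashEngine) Name() string { return "ethash" } // ETH header rules only

type borEngine struct{}

func (borEngine) Name() string { return "bor" } // Bor header rules

// newEngine picks the engine from the chain config rather than from the
// presence of a "bor" directory on disk.
func newEngine(cfg *ChainConfig) Engine {
	if cfg.Bor != nil {
		return borEngine{}
	}
	return ethashEngine{}
}

func main() {
	fmt.Println(newEngine(&ChainConfig{}).Name())                  // ethash
	fmt.Println(newEngine(&ChainConfig{Bor: &BorConfig{}}).Name()) // bor
}
```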
System contract upgrades for Polygon are already handled by the
`BlockAlloc` logic, so there is no need to duplicate them with the
`CalcuttaBlock` logic (there is no Calcutta in
https://github.com/maticnetwork/bor).
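For context, a minimal sketch of the `BlockAlloc` idea (hypothetical types, not bor's or Erigon's actual structures): the chain config maps a block number to account overrides that are applied once when execution reaches that block, which is how Polygon system-contract upgrades are already covered.

```go
package main

import "fmt"

// AccountOverride is the new state installed for a system contract at an
// upgrade block. Hypothetical type for illustration only.
type AccountOverride struct {
	Code []byte
}

// BlockAlloc maps block number -> contract address -> override.
type BlockAlloc map[uint64]map[string]AccountOverride

// applyUpgrades installs any overrides scheduled for blockNum.
func applyUpgrades(alloc BlockAlloc, blockNum uint64, setCode func(addr string, code []byte)) {
	for addr, acc := range alloc[blockNum] {
		setCode(addr, acc.Code) // upgrade the system contract in place
	}
}

func main() {
	// Example: replace the code of a system contract at block 100.
	alloc := BlockAlloc{100: {"0x0000000000000000000000000000000000001001": {Code: []byte{0x60, 0x00}}}}
	applyUpgrades(alloc, 100, func(addr string, code []byte) {
		fmt.Printf("upgraded %s to %d bytes of code\n", addr, len(code))
	})
}
```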
As per [EIP-1559](https://eips.ethereum.org/EIPS/eip-1559), there is
only an initial base fee, after which it adjusts automatically. There are
no current references to a minimum protocol base fee on Ethereum, Gnosis,
or Polygon.
The existing code put a 7 wei minimum limit on transactions for inclusion
in the pool. This PR removes that check within txPool. Any transaction
can now be added to the `queued` sub-pool, after which it can be promoted
to the `pending` or `baseFee` sub-pools.
The `pricelimit` flag still exists and acts on non-local transactions as
a minimum feeCap for inclusion.
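A minimal sketch of the resulting admission rule (illustrative names, not the actual txpool types): no minimum base fee is enforced, but non-local transactions still have to meet the `pricelimit` fee cap.

```go
package main

import "fmt"

// txSlot carries just the fields needed for the admission decision.
type txSlot struct {
	feeCap uint64 // max fee per gas the sender will pay
	local  bool   // submitted through this node's own RPC
}

// admit reports whether a transaction may enter the queued sub-pool.
// There is no 7-wei (or any other) minimum base-fee check any more;
// only the configured pricelimit applies, and only to remote transactions.
func admit(tx txSlot, priceLimit uint64) bool {
	if !tx.local && tx.feeCap < priceLimit {
		return false
	}
	return true
}

func main() {
	fmt.Println(admit(txSlot{feeCap: 1, local: true}, 1000))  // true: local, no minimum
	fmt.Println(admit(txSlot{feeCap: 1, local: false}, 1000)) // false: below pricelimit
}
```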
This introduces _experimental_ block execution run by the embedded
Silkworm API library:
- new command-line option `silkworm.path` to enable the feature by
specifying the path to the Silkworm library
- the Silkworm API shared library is dynamically loaded on demand (a
hedged sketch follows below)
- currently requires building the Silkworm library on the target machine
- available only on Linux at the moment: macOS has an issue with [stack
size](https://github.com/golang/go/issues/28024) and Windows would
require [TDM-GCC-64](https://jmeubank.github.io/tdm-gcc/); both need
dedicated effort for an assessment
Closes #8078
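As a rough sketch of the on-demand loading via cgo `dlopen`/`dlsym` (the wrapper names below are illustrative assumptions, not the actual Silkworm Go bindings):

```go
// Package silkwormload is a hedged sketch of loading a shared library on
// demand; it is not the real Silkworm API.
package silkwormload

/*
#cgo LDFLAGS: -ldl
#include <dlfcn.h>
#include <stdlib.h>
*/
import "C"

import (
	"fmt"
	"unsafe"
)

// Handle wraps a dynamically loaded library.
type Handle struct {
	lib unsafe.Pointer
}

// Load opens the shared library at libPath (e.g. the value given to the
// silkworm.path option) when the feature is first used.
func Load(libPath string) (*Handle, error) {
	cPath := C.CString(libPath)
	defer C.free(unsafe.Pointer(cPath))

	lib := C.dlopen(cPath, C.RTLD_NOW)
	if lib == nil {
		return nil, fmt.Errorf("silkworm: cannot load %s: %s", libPath, C.GoString(C.dlerror()))
	}
	return &Handle{lib: lib}, nil
}

// Lookup resolves a symbol by name; the caller casts it to the expected
// C function pointer type.
func (h *Handle) Lookup(name string) (unsafe.Pointer, error) {
	cName := C.CString(name)
	defer C.free(unsafe.Pointer(cName))

	sym := C.dlsym(h.lib, cName)
	if sym == nil {
		return nil, fmt.Errorf("silkworm: missing symbol %s: %s", name, C.GoString(C.dlerror()))
	}
	return sym, nil
}
```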
This change is primarily intended to support Go 1.21, but as a
side-effect it requires updating libp2p, which in turn triggers an update
of golang.org/x/exp, which creates quite a bit of (simple) churn in the
slice sorting.
This change introduces a new `cmp.Compare` function which can be used to
return an integer satisfying the comparator signature expected for slice
sorting.
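For example (a minimal sketch, assuming Go 1.21's standard `cmp` and `slices` packages; the int-returning comparator is what the updated sort API expects):

```go
package main

import (
	"cmp"
	"fmt"
	"slices"
)

func main() {
	nonces := []uint64{3, 1, 2}
	// SortFunc takes a comparator returning a negative, zero, or positive
	// int; cmp.Compare produces exactly that for any ordered type.
	slices.SortFunc(nonces, func(a, b uint64) int {
		return cmp.Compare(a, b)
	})
	fmt.Println(nonces) // [1 2 3]
}
```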
In order to continue to support mplex for libp2p, the change references
github.com/libp2p/go-libp2p-mplex instead. Please see the PR at
https://github.com/libp2p/go-libp2p/pull/2498 for the upstream comment
indicating that official support for mplex has moved to this location.
Co-authored-by: Jason Yellick <jason@enya.ai>
This fixes a resource-usage issue when `db.size.limit` is increased from
its default setting of 2 TB. The limit is applied to the chain DB, but it
should not be applied to the consensus DB, which has smaller data
requirements. Expanding both DBs results in excessive RAM being reserved
by the underlying OS.
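A minimal sketch of the intent (hypothetical names, not Erigon's actual options): the user-supplied limit sizes only the chain DB, while the consensus DB keeps a small fixed cap so the OS does not reserve RAM for both.

```go
package main

import "fmt"

const (
	defaultChainDBLimit = uint64(2) << 40  // 2 TB default for the chain DB
	consensusDBLimit    = uint64(64) << 30 // small fixed cap for the consensus DB (illustrative value)
)

// dbLimits returns the map-size limit per database label given the
// user-requested db.size.limit value (0 means "use the default").
func dbLimits(requested uint64) map[string]uint64 {
	chainLimit := requested
	if chainLimit == 0 {
		chainLimit = defaultChainDBLimit
	}
	return map[string]uint64{
		"chaindata": chainLimit,       // grows with db.size.limit
		"consensus": consensusDBLimit, // unaffected by the flag
	}
}

func main() {
	fmt.Println(dbLimits(8 << 40)) // chain DB expanded to 8 TB, consensus stays capped
}
```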