In the previous code, the WaitGroup calls db.wg.Add() and Wait() and the
db.closed flag were not kept in sync. In particular, it was theoretically
possible to first check closed, then (in another goroutine) set closed and
call Wait, and then call wg.Add() while Wait was in progress, leading to a
WaitGroup panic.
In theory it was also possible for db.env.BeginTxn() to be called on a
closed or nil db.env, because db.wg.Add() was called only after BeginTxn
(so db.wg.Wait() could already have returned).
The WaitGroup is replaced with a Cond variable.
Now it is not possible to increase the active transactions count on a
closed database. It is also not possible to call BeginTxn on a closed
database.
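A minimal sketch of the Cond-based guard, using hypothetical field and method names (active, beginTx, endTx) rather than the actual erigon code:

```go
package dbwrap

import "sync"

type db struct {
	mu     sync.Mutex
	cond   *sync.Cond
	closed bool
	active int // in-flight method calls / transactions
}

func newDB() *db {
	d := &db{}
	d.cond = sync.NewCond(&d.mu)
	return d
}

// beginTx registers a new transaction. The closed check and the counter
// increment happen under the same lock, so the count can never grow after
// Close has observed closed == true, and BeginTxn is never reached on a
// closed database.
func (d *db) beginTx() bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.closed {
		return false
	}
	d.active++
	return true
}

// endTx unregisters a transaction and wakes Close if it is waiting.
func (d *db) endTx() {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.active--
	if d.active == 0 {
		d.cond.Broadcast()
	}
}

// Close marks the database closed and waits for in-flight transactions
// before the underlying env may be closed.
func (d *db) Close() {
	d.mu.Lock()
	d.closed = true
	for d.active > 0 {
		d.cond.Wait()
	}
	d.mu.Unlock()
}
```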
If any DB method is called while Close() is waiting for db.kv.Close()
(it waits for ongoing method calls/transactions to finish), a panic
"WaitGroup is reused before previous Wait has returned" might happen.
Use context cancellation to ensure that new method calls immediately
return during db.kv.Close().
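A minimal sketch of the context-cancellation idea, again with illustrative names; the assumption is that each public method checks the context before doing any work:

```go
package dbwrap

import (
	"context"
	"errors"
)

var errClosed = errors.New("database is closed")

type DB struct {
	ctx    context.Context
	cancel context.CancelFunc
	// ... underlying kv store, active-call tracking, etc.
}

func Open() *DB {
	ctx, cancel := context.WithCancel(context.Background())
	return &DB{ctx: ctx, cancel: cancel}
}

// View stands in for any public DB method: once Close has started, new
// calls return immediately instead of touching the closing kv store.
func (d *DB) View(fn func() error) error {
	select {
	case <-d.ctx.Done():
		return errClosed
	default:
	}
	return fn()
}

func (d *DB) Close() {
	d.cancel() // new method calls now fail fast
	// ... wait for ongoing calls, then close the underlying kv store
}
```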
It is possible that a span update happens during a milestone.
A headers slice might cross to the new span.
Also, if 2 forks evolve simultaneously, a shorter fork can still be in the
previous span.
In these cases we need access to the previous span to calculate
difficulty and validate header times.
SpansCache will keep recent spans.
The cache will be updated on new span events from Heimdall.
The cache is pruned on new milestone events and in practice no more than
2 spans are kept.
The header difficulty calculation and time validation depend on having
a span for that header in the cache.
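A rough sketch of such a cache, assuming an illustrative Span shape (ID, StartBlock, EndBlock) rather than the real Heimdall types:

```go
package spancache

import "sync"

type Span struct {
	ID         uint64
	StartBlock uint64
	EndBlock   uint64
}

type SpansCache struct {
	mu    sync.RWMutex
	spans map[uint64]*Span
}

func NewSpansCache() *SpansCache {
	return &SpansCache{spans: make(map[uint64]*Span)}
}

// Add stores a span when a new span event arrives from Heimdall.
func (c *SpansCache) Add(s *Span) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.spans[s.ID] = s
}

// SpanAt returns the span covering the given block number, so difficulty
// calculation and header time validation can look up the previous span too.
func (c *SpansCache) SpanAt(blockNum uint64) (*Span, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	for _, s := range c.spans {
		if blockNum >= s.StartBlock && blockNum <= s.EndBlock {
			return s, true
		}
	}
	return nil, false
}

// Prune drops spans that ended before the given block (e.g. on a new
// milestone event), so in practice no more than a couple of spans remain.
func (c *SpansCache) Prune(keepFromBlock uint64) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for id, s := range c.spans {
		if s.EndBlock < keepFromBlock {
			delete(c.spans, id)
		}
	}
}
```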
When the sync loop first runs it suppresses block sync events, both in
the initial loop and when more than 1000 blocks are being processed.
This fix removes the first check, because otherwise the first block
received by the process never gets sent to the tx pool, which means it
won't produce new blocks for Polygon.
As well as this fix, I have also moved the gas initialization to the
txpool start method rather than prompting it with a 'synthetic block
event'.
As the txpool start method has access to the core and tx DBs, it can find
the current block and chain config internally, so it doesn't need to be
activated externally and can just do this itself on start-up. This has
the advantage of making the txpool more self-contained.
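A hedged sketch of that start-up flow; the CoreDB interface and its accessors here are hypothetical placeholders for whatever the pool actually reads from the core and tx DBs:

```go
package txpool

import "context"

type Header struct {
	BaseFee  uint64
	GasLimit uint64
}

type ChainConfig struct{}

// CoreDB is a hypothetical stand-in for the DB access the pool has on start.
type CoreDB interface {
	ReadCurrentHeader(ctx context.Context) (*Header, error)
	ReadChainConfig(ctx context.Context) (*ChainConfig, error)
}

type Pool struct {
	baseFee       uint64
	blockGasLimit uint64
	started       bool
}

// Start initializes gas/fee state from the DB itself instead of waiting
// for a synthetic block event from outside.
func (p *Pool) Start(ctx context.Context, db CoreDB) error {
	if p.started {
		return nil
	}
	header, err := db.ReadCurrentHeader(ctx) // hypothetical accessor
	if err != nil {
		return err
	}
	cfg, err := db.ReadChainConfig(ctx) // hypothetical accessor
	if err != nil {
		return err
	}
	_ = cfg // the chain config would drive fork-dependent pool rules
	p.baseFee = header.BaseFee
	p.blockGasLimit = header.GasLimit
	p.started = true
	return nil
}
```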
PR is ready to review and merge.
This PR implements and integrates the Ethereum Deneb hard fork in the
Caplin Ethereum client.
Changes:
- Full compatibility with the Deneb Ethereum hard fork
- Added the new EIPs introduced in Deneb (`EIP-4788`, `EIP-4844`,
`EIP-7044`, `EIP-7045`, `EIP-7514`)
- Tests integration
---------
Co-authored-by: Giulio <giulio.rebuffo@gmail.com>
This fixes a couple of regressions for running the uploader for Mumbai:
* Now that flags have moved to a higher context, they need to be set in
the context, not in the flag values
* Span 0 of Mumbai has a header/span mismatch for span zero, sprint 0, so
the check here needs to be suppressed
When discarding spamming transactions we need to remove the tx from the
subpools as well as from the sender's tx list. Otherwise the tx is ignored
by other operations and left in the subpool.
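A simplified sketch of that discard path, with illustrative types only; the point is that the tx is removed from every subpool and from the sender's tx list:

```go
package txpool

type metaTx struct {
	senderID uint64
	hash     [32]byte
}

type subPool struct{ txs map[[32]byte]*metaTx }

func (s *subPool) remove(tx *metaTx) { delete(s.txs, tx.hash) }

type senderTxs struct{ txs map[[32]byte]*metaTx }

func (s *senderTxs) remove(tx *metaTx) { delete(s.txs, tx.hash) }

type pool struct {
	pending, baseFee, queued *subPool
	bySender                 map[uint64]*senderTxs
}

// discardSpammer drops the tx from the sender list and from all subpools,
// so no later pool operation keeps seeing a tx that was already discarded.
func (p *pool) discardSpammer(tx *metaTx) {
	if s, ok := p.bySender[tx.senderID]; ok {
		s.remove(tx)
	}
	p.pending.remove(tx)
	p.baseFee.remove(tx)
	p.queued.remove(tx)
}
```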
As well as the fix here, this PR also contains several changes to TX
TRACING and other logging to make it easier to see what is going on with
pool processing.
This PR
- increases the test time
- uses the updated Python scripts that report a long exit time as an issue
- changes the test name to state that it is run in the snapshot
downloading phase
(a later PR will add a similar test for the block downloading phase)
Integration tests CI is failing due to a port clash in devnet tests. I
believe this is because there are 2 packages of devnet integration tests
and Go can run tests from separate packages in parallel (by default it
does package-level parallelism). A simple fix would be to just have all
devnet integration tests in 1 package and run all these tests
sequentially within the package (i.e. not use t.Parallel). This PR moves
all devnet integration tests into 1 package.
`"ContextStart devnet start failed: private api: could not create
listener: listen top 127.0.0.1:10090: bind: address already in use,
addr=localhost:10090"`
![Screenshot 2024-01-10 at 13 38 37](https://github.com/ledgerwatch/erigon/assets/94537774/06bda987-45e5-46ef-9e0b-3876b3f85c01)
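As a minimal illustration of the package-level parallelism point (the port number is borrowed from the error above; the test names are invented):

```go
package devnet_test

import (
	"net"
	"testing"
)

func listenAndClose(t *testing.T) {
	l, err := net.Listen("tcp", "127.0.0.1:10090")
	if err != nil {
		// with two packages running in parallel this could fail with
		// "address already in use"
		t.Fatal(err)
	}
	l.Close()
}

// Neither test calls t.Parallel, so within a single package they run
// sequentially and never hold the port at the same time.
func TestDevnetScenarioA(t *testing.T) { listenAndClose(t) }
func TestDevnetScenarioB(t *testing.T) { listenAndClose(t) }
```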
This PR contains 3 fixes for the interaction between the Bor mining loop
and the TX pool which were causing the regular creation of blocks with
zero transactions.
* Mining/Tx pool block synchronization
The synchronization of the tx pool between the sync loop and the mining
loop has been changed so that both are triggered by the same event and
synchronized via a sync.Cond rather than a polling loop with a hard
coded loop limit. This means that mining now waits for the pool to be
updated from the previous block before it starts the mining process.
* Txpool Startup consolidated into its MainLoop
Previously the tx pool start process was dynamically triggered at
various points in the code. This has all now been moved to the start of
the main loop. This is necessary to avoid a timing hole which could leave
the mining loop hanging, waiting for a previous block broadcast which it
missed due to its delayed start.
* Mining listens for block broadcast to avoid duplicate mining
operations
The mining loop for Bor has a recommit timer in case blocks are not
produced on time. However, in the case of sprint transitions where the
seal publication is delayed, this can lead to duplicate block production.
This is suppressed by introducing a `waiting` state which is exited upon
the block being broadcast from the sealing operation.
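A rough sketch of that `waiting` state; the channels and names are assumptions, not the actual Bor mining loop:

```go
package mining

import "time"

type miner struct {
	waiting   bool
	recommit  *time.Ticker
	broadcast chan uint64 // block numbers observed as broadcast
	sealed    chan uint64 // block numbers we just sealed
}

func (m *miner) loop(stop chan struct{}) {
	for {
		select {
		case <-m.sealed:
			m.waiting = true // suppress duplicate mining until the seal is broadcast
		case <-m.broadcast:
			m.waiting = false // broadcast seen, mining may resume
		case <-m.recommit.C:
			if m.waiting {
				continue // skip the recommit while the sealed block is still in flight
			}
			m.mine()
		case <-stop:
			return
		}
	}
}

func (m *miner) mine() {
	// build and seal the next block (omitted)
}
```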
Logic: 1/42 of RAM, but not more than 2GB for chaindata and not more than
256MB for other databases.
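A small sketch of that sizing rule (RAM detection and the actual DB names are omitted; the helper name is illustrative):

```go
package main

import "fmt"

const (
	gb = 1024 * 1024 * 1024
	mb = 1024 * 1024
)

// dbSizeLimit applies the rule above: 1/42 of RAM, capped at 2GB for
// chaindata and 256MB for any other database.
func dbSizeLimit(totalRAMBytes uint64, isChaindata bool) uint64 {
	size := totalRAMBytes / 42
	if isChaindata {
		if size > 2*gb {
			size = 2 * gb
		}
	} else if size > 256*mb {
		size = 256 * mb
	}
	return size
}

func main() {
	ram := uint64(16 * gb)
	fmt.Println(dbSizeLimit(ram, true), dbSizeLimit(ram, false))
}
```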
---------
Co-authored-by: battlmonstr <battlmonstr@users.noreply.github.com>