The UDP test transport must be closed only after serveTestnet exits.
If it is closed earlier, serveTestnet encounters an error,
because it tries to emulate receiving a packet on an already-closed transport.
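A minimal sketch of the required ordering; udpTest and serveTestnet here are stand-ins for the real test helpers:

```go
package discover

// udpTest and serveTestnet stand in for the real test helpers.
type udpTest struct{ /* fake transport, elided */ }

func (test *udpTest) close()     { /* closes the fake transport */ }
func serveTestnet(test *udpTest) { /* emulates packet reception until the lookup is done */ }

func runTestnet(test *udpTest) {
	served := make(chan struct{})
	go func() {
		defer close(served)
		serveTestnet(test)
	}()

	// ... run the lookup under test ...

	<-served     // wait until serveTestnet has exited,
	test.close() // and only then close the transport
}
```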
FindNode triggers a Ping in ensureBond,
which causes an extra sleep waiting for the "ping back".
Don't wait for this in tests.
Close v5 tests.
The requests may also time out if many of them queue up in udpTest.pipe
and serveTestnet is slow to process them.
Increase replyTimeout a bit to prevent that.
* exchange RLPx Hello even when maxpeers limit is reached
* bump MaxPendingPeers to increase the default handshake queue
(and the likelihood of a Hello exchange)
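A hedged sketch of the relevant knobs in go-ethereum's p2p.Config; the numbers are illustrative, not the actual new defaults:

```go
package main

import "github.com/ethereum/go-ethereum/p2p"

// Illustrative values only; the real defaults live in the config.
func exampleConfig() p2p.Config {
	return p2p.Config{
		MaxPeers: 50, // hard cap on established peers
		// Handshake queue: a bigger queue means more dialing peers
		// complete the Hello exchange before being rejected.
		MaxPendingPeers: 64,
	}
}
```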
The test was flaky because of "endpoint prediction".
The test starts 5 nodes one by one.
Node 0 is used as a bootstrap node for nodes 1-4.
When it is about to add, say, node 3, nodes 0 and 1 might already have had a chance to communicate,
and updateEndpoints() deletes the node 0 UDP port, because no fallback UDP port was configured.
In this case node 3 would get bootstrap node 0 without a port, leading to an error:
v5_udp_test.go:110: bad bootstrap node "enr:...": missing UDP port
The problem was reproducible by this command:
go test ./p2p/discover -run TestUDPv5_lookupE2E -count 500
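A hedged sketch of the fix using the enode.LocalNode fallback API; the port value is illustrative:

```go
package main

import (
	"crypto/ecdsa"
	"net"

	"github.com/ethereum/go-ethereum/p2p/enode"
)

// With a fallback UDP port set, updateEndpoints() can no longer strip
// the port from the bootstrap node's record.
func newTestLocalNode(db *enode.DB, key *ecdsa.PrivateKey) *enode.LocalNode {
	ln := enode.NewLocalNode(db, key)
	ln.SetFallbackIP(net.IP{127, 0, 0, 1})
	ln.SetFallbackUDP(30303) // illustrative port
	return ln
}
```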
* use semaphore instead of a chan struct{}
* move MaxPendingPeers default value to DefaultConfig.P2P
* log Error if Accept fails
* replace quit channel with context
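A minimal sketch of the first and last points, assuming golang.org/x/sync/semaphore; the limit and function names are illustrative:

```go
package main

import (
	"context"

	"golang.org/x/sync/semaphore"
)

// The weighted semaphore replaces a buffered chan struct{}; the context
// replaces the quit channel. The limit of 16 is illustrative.
var inboundSlots = semaphore.NewWeighted(16)

func handleInbound(ctx context.Context) error {
	// Acquire blocks until a slot frees up or ctx is canceled, folding
	// "wait for a slot" and "check for shutdown" into one call.
	if err := inboundSlots.Acquire(ctx, 1); err != nil {
		return err
	}
	defer inboundSlots.Release(1)
	// ... accept the connection and run the handshake ...
	return nil
}
```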
The test was slow because it was trying to find
predefined node IDs (lookupTestnet) by generating random keys
and looking up their neighbours
until it hit all nodes of the lookupTestnet.
In addition, the test waited 0.5 sec (respTimeout) for each FindNode response.
This could take up to 30 sec and fail the test suite.
A fake random key generator is now used during the test.
It issues the expected keys, so the lookup converges quickly.
The reply timeout is reduced for the test.
Now it normally takes less than 0.1 sec.
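A hedged sketch of the generator swap; keySource, randomKeys, and fakeKeys are hypothetical names for the idea:

```go
package discover

import (
	"crypto/ecdsa"

	"github.com/ethereum/go-ethereum/crypto"
)

// keySource is the injectable generator (hypothetical name).
type keySource func() *ecdsa.PrivateKey

// randomKeys is what production code uses.
func randomKeys() *ecdsa.PrivateKey {
	key, _ := crypto.GenerateKey()
	return key
}

// fakeKeys replays the keys the lookupTestnet expects, so the lookup
// converges in a handful of rounds instead of random-walking.
func fakeKeys(expected []*ecdsa.PrivateKey) keySource {
	i := 0
	return func() *ecdsa.PrivateKey {
		key := expected[i%len(expected)]
		i++
		return key
	}
}
```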
This changes the definitions of Ping and Pong, adding an optional field
for the sequence number. This field was previously encoded/decoded using
the "tail" struct tag, but using "optional" is much nicer.
see https://github.com/ethereum/go-ethereum/pull/22842
Co-authored-by: Felix Lange <fjl@twurst.com>
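Abridged sketch of the resulting v4wire definitions (see the PR for the exact structs):

```go
package v4wire

import "github.com/ethereum/go-ethereum/rlp"

// ENRSeq moves out of the "tail" catch-all into a typed field with the
// "optional" tag.
type Ping struct {
	Version    uint
	From, To   Endpoint
	Expiration uint64
	ENRSeq     uint64 `rlp:"optional"` // sequence number of the sender's record

	// Ignore additional fields (for forward compatibility).
	Rest []rlp.RawValue `rlp:"tail"`
}

type Pong struct {
	To         Endpoint
	ReplyTok   []byte
	Expiration uint64
	ENRSeq     uint64 `rlp:"optional"`

	Rest []rlp.RawValue `rlp:"tail"`
}
```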
* Switch peerId from 256 to 512 bit (as in stable)
* go mod tidy
* Fix some tests
* Fixed
* Fixes
* Fix tests
* Update to erigon-lib main
Co-authored-by: Alex Sharp <alexsharp@Alexs-MacBook-Pro.local>
Co-authored-by: Alexey Sharp <alexeysharp@Alexeys-iMac.local>
Most places that used this method cut off the 1st byte.
Refactor this pattern into a common place.
* better naming: MarshalPubkey matches existing UnmarshalPubkey
* "Std" suffix for the ANSI standard encoding without cut off
* docs
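A minimal sketch of how the pair can look, assuming the secp256k1 helpers from the crypto package:

```go
package crypto

import (
	"crypto/ecdsa"
	"crypto/elliptic"
)

// MarshalPubkeyStd returns the ANSI standard encoding: 65 bytes with a
// leading 0x04 for an uncompressed secp256k1 point.
func MarshalPubkeyStd(pub *ecdsa.PublicKey) []byte {
	return elliptic.Marshal(S256(), pub.X, pub.Y)
}

// MarshalPubkey cuts off the 1st byte, matching what most devp2p call
// sites did by hand, and mirroring the existing UnmarshalPubkey.
func MarshalPubkey(pub *ecdsa.PublicKey) []byte {
	return MarshalPubkeyStd(pub)[1:] // 64 bytes: X || Y
}
```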
Problem:
QuerySeeds pokes 150 random entries in the whole node DB and ignores any "field" entries it hits.
In a bootstrap scenario it might hit hundreds of :lastping and :lastpong entries,
and very few true "node record" entries.
After running for 15 minutes I've got totalEntryCount=1508 nodeRecordCount=114 entries.
That's roughly a 1-in-13 chance (114/1508) of hitting a "node record" entry,
which means 150 attempts find only about 10 of the 114 nodes on average.
Solution:
Split "node record" entries to a separate table such that QuerySeeds doesn't do idle cycle hits.
UpdateFindFails/UpdateLastPingReceived/UpdateLastPongReceived events
cause bursty DB commits (about 100 per minute).
This optimization throttles the disk writes to at most once every few seconds,
because this info doesn't need to be persisted immediately.
This helps on HDDs.
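A minimal sketch of the throttling, with hypothetical names (nodeStats, flushLocked) and an illustrative interval:

```go
package discover

import (
	"sync"
	"time"

	"github.com/ethereum/go-ethereum/p2p/enode"
)

const flushInterval = 5 * time.Second // illustrative value

// nodeStats is a hypothetical stand-in for the ping/pong/findfail info.
type nodeStats struct{ findFails int }

type throttledDB struct {
	mu        sync.Mutex
	dirty     map[enode.ID]nodeStats // buffered in memory
	lastFlush time.Time
}

func (db *throttledDB) update(id enode.ID, s nodeStats) {
	db.mu.Lock()
	defer db.mu.Unlock()
	db.dirty[id] = s
	// Persist at most once per flushInterval: one commit per batch
	// instead of one commit per ping/pong/findfail event.
	if time.Since(db.lastFlush) >= flushInterval {
		db.flushLocked()
		db.lastFlush = time.Now()
	}
}

func (db *throttledDB) flushLocked() { /* write db.dirty in one DB commit */ }
```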
Encoding and hashing logic was duplicated in multiple places.
* Move encoding to p2p/discover/v4wire
* Move hashing to p2p/enode/idscheme
* Change newRandomLookup to create a proper random key on a curve.
If the --nat extip:1.2.3.4 option is specified, the port-mapping logic
(AddMapping/DeleteMapping) does nothing.
This optimization avoids running a goroutine that does nothing.
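A hedged sketch of the check, based on p2p/nat (details abridged):

```go
package main

import "github.com/ethereum/go-ethereum/p2p/nat"

// For a static external IP, AddMapping/DeleteMapping are no-ops, so the
// mapping refresh goroutine can be skipped entirely. (nat.Map's exact
// signature is abridged here.)
func startPortMapping(natm nat.Interface, quit chan struct{}, port int) {
	if _, isExtIP := natm.(nat.ExtIP); isExtIP {
		return // nothing to refresh
	}
	go nat.Map(natm, quit, "tcp", port, port, "p2p listener")
}
```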
Extract private key setup from the global config setup to make it reusable.
Return errors instead of panicking.
* NodeKey() method is removed - it is unused.
* use "log" for struct fields
* use "logger" for function parameters and local vars
This is a compromise between:
1) using logger := log.New() to avoid aliasing (log := log.New()), and
2) keeping it short when logging, e.g. srv.log.Info(...)
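The convention in one small sketch:

```go
package p2p

import "github.com/ethereum/go-ethereum/log"

// Field named "log": call sites stay short.
type Server struct {
	log log.Logger
}

// Parameter named "logger": no aliasing of the log package inside the
// function body.
func (srv *Server) listenLoop(logger log.Logger) {
	logger.Trace("Accepting connections")
	srv.log.Info("Listener up")
}
```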
* migrated consensus and chain config files for bsc support
* migrated more files from bsc
* fixed consensus crashing
* updated erigon lib for parlia snapshot prefix
* added staticpeers for bsc
* [+] added system contracts
[*] fixed bug with loading snapshot
[+] enabled gas bailout
[+] added fix to prevent syncing more than 1000 headers (for testing only)
[*] fixed bug that sometimes crashed sender recovery
* migrated system contract calls
* [*] fixed bug with returning mutable balance object
[+] migrated lightclient contracts from bsc
[*] fixed parlia consensus config param
* [*] fixed tendermint deps
* [+] added some logs
* [+] enabled bsc forks
[*] fixed syscalls from coinbase
[*] more logging
* Fix call sys contract gas calculation
* [*] fixed executing system transactions
* [*] enabled receipt hash, gas and bloom filter checks
* [-] removed some logging scripts
[*] set header checkpoint to 10 million blocks (for testing forks)
* [*] fixed bug with committing dirty inter-block state after system transaction execution
[-] removed some extra logs and comments
* [+] added chapel and rialto testnet support
* [*] fixed chapel allocs
* [-] removed 6 mil block limit for headers sync
* Fix hardforks on chapel and other testnets
* [*] fixed header sync issue after merge
* [*] tiny code cleanup
* [-] removed some comments
* [*] increased mdbx map size to 4 TB
* [*] increased max chaindata size to 6 TB
* [*] bring more compatibility with origin erigon and some code cleanup
* [+] added support of validator mode for BSC chain
* [*] enable private key load for bsc, rialto and chapel chains
* [*] fixed running BSC validator node
* Fix the branch list
* [*] tiny fixes for linter
* [*] formatted imports for core and parlia packages
* [*] fixed import rules in other files
* Revert "[*] formatted imports for core and parlia packages"
This reverts commit c764b58b34fedc2b14d69458583ba0dad114f227.
* [*] changed import rules in more packages
* [*] fixed type mismatch in hack command
* [*] fixed crash on new epoch, enabled bootstrap flags
* [*] fixed linter errors
* [*] fixed missing err check for syscalls
* [*] the BSC implementation is now fully compatible with the original erigon sources
* Revert "Add chain config and CLI changes for Binance Smart Chain support (#3131)"
This reverts commit 3d048b7f1a.
* Revert "Add Parlia consensus engine for Binance Smart Chain support (#3086)"
This reverts commit ee99f17fbe.
* [*] fixed several issues after merge
* [*] fixed integration compilation
* Revert "Fix the branch list"
This reverts commit 8150ca57e5f2707a84a9f6a1c5b809b7cc84547b.
* [-] removed receipt repair migration
* [*] fixed parlia fork numbers output
* [*] bring more devel compatibility, fixed bsc address list for access list calculation
* [*] fixed bug with committing state transition for bad blocks in BSC
* [*] fixed bsc changes apply for integration command and updated config print for parlia
* [*] fixed bug with applying bsc forks for chapel and rialto testnet chains
[*] use Finalize and Assemble for mining, so the consensus engine knows which block it is finalizing
* Fix compilation errors in hack.go
* Fix lint
* reset changes in erigon-snapshots to devel
* Remove unrelated changes
* Fix embed
* Remove more unrelated changes
* Remove more unrelated changes
* Restore clique and aura miner config
* Refactor interfaces not to use slice pointers
* Refactor parlia functions to return tx and receipt instead of dealing with slices
* Fix for header panic
* Fix lint, restore system contract addresses
* Remove more unrelated changes, unify GatherForks
Co-authored-by: Dmitry Ivanov <convexman18@gmail.com>
Co-authored-by: j75689 <j75689@gmail.com>
Co-authored-by: Alexey Sharp <alexeysharp@Alexeys-iMac.local>
Co-authored-by: Alex Sharp <alexsharp@Alexs-MacBook-Pro.local>
* Prevent frequent commits to the node DB in sentries
* Commit when btree goes over limit
* iterator for SeedQuery
* Fixing test
* Fix tests
Co-authored-by: Alexey Sharp <alexeysharp@Alexeys-iMac.local>
* implemented crash reporting for all goroutine panics that aren't handled explicitly
* changed node defaults back to originals after testing
* implemented panic handling for all goroutines that don't explicitly handle them, outputting the stack trace to a file in crashreports
* handling panics on all goroutines gracefully
* updated missing call
* error assignment
* implemented suggestions
* path.Join added
* implemented Evgeny's suggestions
* changed path.Join to filepath.Join for cross-platform
* added err check
* updated RecoverStackTrace to LogPanic
* updated closures
* removed call of common.Go to some goroutines
* updated scope capture
* removed testing files
* reverted back to the original method, I feel like it's less intrusive
* update filename for clarity
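A hedged sketch of the wrapper; LogPanic, common.Go, and the crashreports directory are named above, but the body here is illustrative:

```go
package debug

import (
	"fmt"
	"os"
	"path/filepath"
	"runtime/debug"
	"time"
)

// LogPanic records an unhandled panic to a file under crashreports
// before re-raising it. The exact format here is illustrative.
func LogPanic() {
	if p := recover(); p != nil {
		report := fmt.Sprintf("panic: %v\n\n%s", p, debug.Stack())
		name := fmt.Sprintf("crash-%d.txt", time.Now().Unix())
		_ = os.WriteFile(filepath.Join("crashreports", name), []byte(report), 0o644)
		panic(p) // re-raise so the failure is still visible
	}
}

// Go wraps the standard `go` statement so every goroutine gets the
// panic handler attached.
func Go(fn func()) {
	go func() {
		defer LogPanic()
		fn()
	}()
}
```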
This PR implements the first of the "lespay" UDP queries, which
is already useful in itself: the capacity query. The server pool makes
use of this query, doing a cheap UDP query to determine whether it is
worth starting the more expensive TCP connection process.
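A hedged sketch of the pre-dial gate; serverNode and queryCapacityUDP are hypothetical stand-ins for the vflux client types:

```go
package main

import "context"

// serverNode stands in for the vflux server pool's per-server entry.
type serverNode struct{}

// queryCapacityUDP is a hypothetical helper: one cheap UDP round trip
// to the server's vflux endpoint, returning the capacity it would grant.
func (s *serverNode) queryCapacityUDP(ctx context.Context) (uint64, error) {
	return 0, nil
}

// shouldDial gates the expensive TCP/RLPx dial on the UDP answer.
func shouldDial(ctx context.Context, srv *serverNode) bool {
	capacity, err := srv.queryCapacityUDP(ctx)
	return err == nil && capacity > 0
}
```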
# Conflicts:
# les/client.go
# les/clientpool.go
# les/clientpool_test.go
# les/enr_entry.go
# les/server.go
# les/vflux/client/serverpool.go
# les/vflux/client/serverpool_test.go
# les/vflux/server/balance.go
# les/vflux/server/balance_test.go
# les/vflux/server/prioritypool.go
# les/vflux/server/prioritypool_test.go
# p2p/nodestate/nodestate.go
In the random sync algorithm used by the DNS node iterator, we first pick a random
tree and then perform one sync action on that tree. This happens in a loop until any
node is found. If no trees contain any nodes, the iterator will enter a hot loop spinning
at 100% CPU.
The fix is complicated. The iterator now checks if a meaningful sync action can
be performed on any tree. If there is nothing to do, it waits for the next root record
recheck time to arrive and then tries again.
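A hedged sketch of that wait, with hypothetical helpers on the tree type:

```go
package dnsdisc

import (
	"context"
	"time"
)

// Hypothetical helpers: canSyncRandom reports whether a sync action is
// possible; nextRootCheck is the scheduled root recheck time.
type clientTree interface {
	canSyncRandom() bool
	nextRootCheck() time.Time
}

// waitForRootUpdates returns true when a sync action may be possible,
// sleeping until the earliest root recheck instead of spinning.
func waitForRootUpdates(ctx context.Context, trees []clientTree) bool {
	var next time.Time
	for _, ct := range trees {
		if ct.canSyncRandom() {
			return true // something to do right now
		}
		if t := ct.nextRootCheck(); next.IsZero() || t.Before(next) {
			next = t
		}
	}
	select {
	case <-time.After(time.Until(next)):
		return true // a root recheck is due; try syncing again
	case <-ctx.Done():
		return false // iterator closed
	}
}
```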
Fixes #22306
Prevents a situation where a node not running snap connects to a peer running snap
and gets stalled waiting for snap registration to succeed (which will never happen),
causing a WaitGroup wait to halt shutdown.