* p2p: new dial scheduler
This change replaces the peer-to-peer dial scheduler with a new
implementation that improves on the old one in two key aspects:
- The time between discovering a node and dialing it is significantly
lower in the new version. The old dialState kept a buffer of nodes and
launched a task to refill it whenever the buffer became empty. This
worked well with the discovery interface we used to have, but doesn't
really work with the new iterator-based discovery API (see the sketch
after this list).
- Selection of static dial candidates (created by Server.AddPeer or
through static-nodes.json) performs much better with large numbers of
static peers. Connections to static nodes are now limited like dynamic
dials and can no longer exceed MaxPeers or the dial ratio.
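As a rough illustration of the iterator-driven approach, the sketch
below feeds nodes from an enode.Iterator straight into the scheduler's
main loop instead of refilling a buffer in batches. The struct fields
and channel names are assumptions for illustration, not the actual
dialScheduler internals:

```go
package p2p

import (
	"context"
	"sync"

	"github.com/ethereum/go-ethereum/p2p/enode"
)

// dialScheduler is a trimmed-down stand-in for the real scheduler; only
// the fields needed by readNodes are shown here.
type dialScheduler struct {
	ctx     context.Context
	wg      sync.WaitGroup
	nodesIn chan *enode.Node
}

// readNodes pushes discovered nodes to the scheduler's main loop as soon
// as the iterator produces them, so a candidate can be dialed right after
// it is discovered.
func (s *dialScheduler) readNodes(it enode.Iterator) {
	defer s.wg.Done()
	for it.Next() {
		select {
		case s.nodesIn <- it.Node():
		case <-s.ctx.Done():
			return
		}
	}
}
```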
* p2p/simulations/adapters: adapt to new NodeDialer interface
* p2p: re-add check for self in checkDial
* p2p: remove peersetCh
* p2p: allow static dials when discovery is disabled
* p2p: add test for dialScheduler.removeStatic
* p2p: remove blank line
* p2p: fix documentation of maxDialPeers
* p2p: change "ok" to "added" in static node log
* p2p: improve dialTask docs
Also increase log level for "Can't resolve node"
* p2p: ensure dial resolver is truly nil without discovery
* p2p: add "looking for peers" log message
* p2p: clean up Server.run comments
* p2p: fix maxDialedConns for maxpeers < dialRatio
Always allocate at least one dial slot unless dialing is disabled using
NoDial or MaxPeers == 0. Most importantly, this fixes MaxPeers == 1 to
dedicate the sole slot to dialing instead of listening.
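A minimal sketch of that slot-allocation rule, assuming a default dial
ratio of 3; the function name and defaults are illustrative rather than
the actual Server internals:

```go
package p2p

// maxDialedConns returns how many peer slots are reserved for dialing.
func maxDialedConns(maxPeers, dialRatio int, noDial bool) int {
	if noDial || maxPeers == 0 {
		return 0 // dialing disabled
	}
	if dialRatio == 0 {
		dialRatio = 3 // assumed default: a third of the slots go to dialing
	}
	slots := maxPeers / dialRatio
	if slots == 0 {
		slots = 1 // always keep at least one dial slot, e.g. MaxPeers == 1
	}
	return slots
}
```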
* p2p: fix RemovePeer to disconnect the peer again
Also make RemovePeer synchronous and add a test.
* p2p: remove "Connection set up" log message
* p2p: clean up connection logging
We previously logged outgoing connection failures up to three times.
- in SetupConn() as "Setting up connection failed addr=..."
- in setupConn() with an error-specific message and "id=... addr=..."
- in dial() as "Dial error task=..."
This commit ensures a single log message is emitted per failure and adds
"id=... addr=... conn=..." everywhere (id= omitted when the ID isn't
known yet).
Also avoid printing a log message when a static dial fails because the
node can't be resolved while discv4 is disabled. The light client hit
this case all the time, pushing the message count to four lines per
failed connection.
* p2p: document that RemovePeer blocks
This feature update allows the GraphQL API endpoint to retrieve a
transaction's signature parameters R, S and V.
Co-authored-by: amitshah <amitshah0t7@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
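For illustration, the snippet below asks a node's GraphQL endpoint for
those fields. The endpoint URL, the placeholder hash, and the assumption
that GraphQL is served at /graphql all depend on the local setup:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Query a transaction's signature values r, s and v.
	// Replace the hash and the URL with values for your node.
	query := `{"query": "{ transaction(hash: \"0x0000000000000000000000000000000000000000000000000000000000000000\") { r s v } }"}`
	resp, err := http.Post("http://localhost:8545/graphql", "application/json", bytes.NewBufferString(query))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```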
For longer records and subtree entries, the deployer created two
separate TXT records. This doesn't work as intended because the client
will receive the two records in arbitrary order. The fix is to encode
longer values as "string1""string2" instead of "string1", "string2".
This encoding creates a single record on AWS Route53.
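A sketch of the quoting scheme, which packs a long value into one record
made of consecutive 255-byte quoted strings; the helper name is an
assumption for illustration:

```go
package main

import (
	"fmt"
	"strconv"
)

// splitTXT encodes a long value as a single TXT record value made up of
// consecutive quoted strings, e.g. "string1""string2", instead of two
// separate records whose order the client cannot rely on.
func splitTXT(value string) string {
	var out string
	for len(value) > 0 {
		n := len(value)
		if n > 255 {
			n = 255 // DNS limits each character-string to 255 bytes
		}
		out += strconv.Quote(value[:n])
		value = value[n:]
	}
	return out
}

func main() {
	fmt.Println(splitTXT("example-long-subtree-entry")) // one record value
}
```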
Adds the 'geth dumpgenesis' command, which writes the configured genesis
in JSON format to stdout. This provides a way to export the genesis data
(structure and content) for later use with the 'geth init' command.
* core/vm/runtime: add test for blockhash
* core/evm: less iteration in blockhash
* core/vm/runtime: nitpickfix
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
This change makes the client attempt to reconnect when a write fails.
We already had reconnect support, but the reconnect would previously
happen on the next call after an error. Being more eager leads to a
smoother experience overall.
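Roughly, the write path now behaves like the sketch below: if the write
fails and a reconnect function is configured, the client re-dials
immediately and retries the write once. The type and field names are
stand-ins, not the real rpc.Client internals:

```go
package rpc

import "context"

// conn and reconnectFunc are stand-ins for the client's underlying
// connection and its re-dial logic.
type conn interface {
	writeJSON(ctx context.Context, msg interface{}) error
	close()
}

type reconnectFunc func(ctx context.Context) (conn, error)

type client struct {
	conn      conn
	reconnect reconnectFunc
}

// write sends msg; on failure it reconnects right away and retries once,
// instead of leaving the broken connection around for the next call.
func (c *client) write(ctx context.Context, msg interface{}) error {
	err := c.conn.writeJSON(ctx, msg)
	if err == nil || c.reconnect == nil {
		return err
	}
	newConn, rerr := c.reconnect(ctx)
	if rerr != nil {
		return err // reconnect failed; report the original write error
	}
	c.conn.close()
	c.conn = newConn
	return c.conn.writeJSON(ctx, msg)
}
```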
This replaces the JavaScript interpreter used by the console with goja,
which is actively maintained and a lot faster than otto. Clef still uses otto
and eth/tracers still uses duktape, so we are currently dependent on three
different JS interpreters. We're looking to replace the remaining uses of otto
soon though.
* log: delete RotatingFileHandler
We added this for the dashboard, which is gone now. The
handler never really worked well and had data race and file
handling issues.
* internal/debug: remove unused RotatingFileHandler setup code
* squash commits
* enable storage cache
* make linter happy
* fix subtree prefix len check
* save changes to test master
* remove restriction on prefix len
* fix comparison of last bits
* fix wrong alignment
* remove debug prints
* commit current state
* commit current state
* avoid changing state of resolver from multiwalk
* remove debug code
* remove debug code
* remove debug code
* remove unnecessary copy
* make code more readable
* reduce rebuildHashes initial resolution
* fix test after rebase to master
* make code more readable
* improve pruner
* pruner add IntermediateCache bucket
* fix panic in Walk on short keys
* reduce allocations for storage keys decompression by increasing default buffer size
* re-run CI
* fix iterator behaviour
* rename cache to hash for unification
* re-run ci
* avoid using underlying DB
* hash all subtree nodes before unload
* fix getNode method
* need to check node type, not parent type, before putting into hashBucket
* restore parent type check; doesn't work without it
* don't recalculate hash again
* move unloadFunc from trie to pruner
* rename bucket to shorter name
* rename bucket to shorter name
* clean
* rebase to master
* Enable testGetNodeData by default
* Crude TestHashMapLeak
* nodeFlag -> nodeRef
* linter
* accountNode shouldn't be in hashMap since only branch and short nodes are the standard ones
* Finalize hash eviction logic
* small changes to TestHashMapLeak
* Fix for the incorrect hash
Co-authored-by: ledgerwatch <akhounov@gmail.com>
* Use TrieDbState for tx pool
* Don't initialise tx pool until state is loaded
* Add preimage
* Fix account
* Print codehash
* Print correct code hash
* Print incarnation
* Print incarnation
* Use proper incarnation
* Print dbValue
* Actually fix
* Actually fix
* Fix verifySnapshot
* readAccount to get code hash
* Next incarnation
* Print addrHashes with 0 incarnations
* Print storage history
* Print storage history
* Print storage history
* Print storage history
* Print all storage history
* print change set keys
* print change set keys
* print change set keys
* print change set keys
* Don't print codebucket info
* Fixes
* Fix for incarnation
* Fix for storage history bucket
* Try to fix the leak
* Try to fix the leak
* Try to fix the leak
* Try to fix the leak
* Try to fix the leak
* Try to fix the leak
* Try to fix the leak
* Fix embedded nodes
* Hasher
* Fix
* Test fixes
* Add experimental debug flag
* Fix tx_pool_test
* Disable GetNodeData test unless in experiment
* Fix more tests
* Fix lint and revert some changes
* Fix lint
* Fix lint
* stateless: flush the partial witnesses DB for every saved witness.
This seems to fix the out-of-memory error I had on blocks over 2.5M.
* fix linters
* simplify
* add context
* extract chain events
* run commit in goroutines
* mine only on canonical
* typo
* linters
* fmt
* mark unused methods
* restore stress test
* test single miner
* remove unsafe Trie storage
* remove locks from miner
* restore interrupt
* remove result goroutine
* remove unconfirmedBlocks
* cherry-pick 04a1d475ff1a36ad8f92fec80385df18c52bdc1f
* extract uncles
* one miner succeeded
* restore context cancel
* cleanup
* skip an unstable test
* remove pending state
* use context instead of interrupt func
* calculate sealHash only once
* comment out unstable test
* after merge
* fix after merge
Co-authored-by: ledgerwatch <akhounov@gmail.com>