This change makes the client attempt to reconnect as soon as a write fails.
We already had reconnect support, but previously the reconnect would only
happen on the next call after an error. Reconnecting eagerly leads to a
smoother experience overall.
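A minimal sketch of the eager-reconnect idea, using hypothetical types and names (not the client's actual API): on a failed write, drop the connection and redial immediately instead of letting the next call discover the dead connection.

    package rpcclient

    import "context"

    // Conn is a stand-in for the client's connection type.
    type Conn interface {
        Write(ctx context.Context, msg []byte) error
        Close() error
    }

    // Client holds the current connection and a dial function.
    type Client struct {
        conn Conn
        dial func(ctx context.Context) (Conn, error)
    }

    func (c *Client) reconnect(ctx context.Context) error {
        conn, err := c.dial(ctx)
        if err != nil {
            return err
        }
        c.conn = conn
        return nil
    }

    // write reconnects eagerly: if the write fails, it redials and
    // retries once right away instead of waiting for the next call.
    func (c *Client) write(ctx context.Context, msg []byte) error {
        if c.conn == nil {
            if err := c.reconnect(ctx); err != nil {
                return err
            }
        }
        err := c.conn.Write(ctx, msg)
        if err == nil {
            return nil
        }
        c.conn.Close()
        c.conn = nil
        if err := c.reconnect(ctx); err != nil {
            return err
        }
        return c.conn.Write(ctx, msg)
    }
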
This replaces the JavaScript interpreter used by the console with goja,
which is actively maintained and a lot faster than otto. Clef still uses otto
and eth/tracers still uses duktape, so we are currently dependent on three
different JS interpreters. We're looking to replace the remaining uses of otto
soon, though.
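For illustration, here is a standalone example of goja's embedding API (goja itself, not the console's integration code):

    package main

    import (
        "fmt"

        "github.com/dop251/goja"
    )

    func main() {
        vm := goja.New()

        // Evaluate an expression and export the result to Go.
        v, err := vm.RunString(`(2 + 3) * 4`)
        if err != nil {
            panic(err)
        }
        fmt.Println(v.Export()) // 20

        // Expose a Go function to JS, as a console does for its
        // built-in commands.
        vm.Set("greet", func(name string) string { return "hello, " + name })
        v, err = vm.RunString(`greet("console")`)
        if err != nil {
            panic(err)
        }
        fmt.Println(v.String()) // hello, console
    }
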
* log: delete RotatingFileHandler
We added this for the dashboard, which is gone now. The
handler never really worked well and had data race and file
handling issues.
* internal/debug: remove unused RotatingFileHandler setup code
* squash commits
* enable storage cache
* make linter happy
* fix subtree prefix len check
* save changes to test master
* remove restriction on prefix len
* fix comparison of last bits
* fix wrong alignment
* remove debug prints
* commit current state
* commit current state
* avoid changing state of resolver from multiwalk
* remove debug code
* remove debug code
* remove debug code
* remove unnecessary copy
* make code more readable
* reduce rebuildHashes initial resolution
* fix test after rebase to master
* make code more readable
* improve pruner
* pruner: add IntermediateCache bucket
* fix panic in Walk on short keys
* reduce allocations for storage keys decompression by increasing default buffer size
* re-run CI
* fix iterator behaviour
* rename cache to hash for unification
* re-run CI
* avoid using underlying DB
* hash all subtree nodes before unload
* fix getNode method
* need to check node type, not parent, before putting to hashBucket
* bring back parent type check; doesn't work without it.
* don't recalculate hash again
* move unloadFunc from trie to pruner
* rename bucket to shorter name
* rename bucket to shorter name
* clean
* rebase to master
* Enable testGetNodeData by default
* Crude TestHashMapLeak
* nodeFlag -> nodeRef
* linter
* accountNode shouldn't be in hashMap since only branch and short nodes are the standard ones
* Finalize hash eviction logic
* small changes to TestHashMapLeak
* Fix for the incorrect hash
Co-authored-by: ledgerwatch <akhounov@gmail.com>
* Use TrieDbState for tx pool
* Don't initialise tx pool until state is loaded
* Add preimage
* Fix account
* Print codehash
* Print correct code hash
* Print incarnation
* Print incarnation
* Use proper incarnation
* Print dbValue
* Actually fix
* Actually fix
* Fix verifySnapshot
* readAccount to get code hash
* Next incarnation
* Print addrHashes with 0 incarnations
* Print storage history
* Print storage history
* Print storage history
* Print storage history
* Print all storage history
* print change set keys
* print change set keys
* print change set keys
* print change set keys
* Don't print codebucket info
* Fixes
* Fix for incarnation
* Fix for storage history bucket
* Try to fix the leak
* Try to fix the leak
* Try to fix the leak
* Try to fix the leak
* Try to fix the leak
* Try to fix the leak
* Try to fix the leak
* Fix embedded nodes
* Hasher
* Fix
* Test fixes
* Add experimental debug flag
* Fix tx_pool_test
* Disable GetNodeData test unless in experiment
* Fix more tests
* Fix lint and revert some changes
* Fix lint
* Fix lint
* stateless: flush the partial witnesses DB for every saved witness.
It seems to fix the out-of-memory error I had on blocks over 2.5M.
* fix linters
* simplify
* add context
* extract chain events
* run commit in goroutines
* mine only on canonical
* typo
* linters
* fmt
* mark unused methods
* restore stress test
* test single miner
* remove unsafe Trie storage
* remove locks from miner
* restore interrupt
* remove result goroutine
* remove unconfirmedBlocks
* cherry-pick 04a1d475ff1a36ad8f92fec80385df18c52bdc1f
* extract uncles
* one miner succeeded
* restore context cancel
* cleanup
* skip an unstable test
* remove pending state
* use context instead of interrupt func
* calculate sealHash only once
* comment out unstable test
* after merge
* fix after merge
Co-authored-by: ledgerwatch <akhounov@gmail.com>
* add env INTERMEDIATE_TRIE_CACHE
* try to use assert.New() pattern
* Fix "maligned" linter warnings to reduce space consumption of structs:
core/types/accounts/account.go:18:14: struct of size 136 bytes could be of size 128 bytes (maligned)
type Account struct {
--
trie/node.go:44:10: struct of size 80 bytes could be of size 72 bytes (maligned)
duoNode struct {
--
trie/resolve_set.go:28:17: struct of size 56 bytes could be of size 48 bytes (maligned)
type ResolveSet struct {
--
trie/resolver.go:34:15: struct of size 88 bytes could be of size 72 bytes (maligned)
type Resolver struct {
--
trie/visual.go:32:17: struct of size 104 bytes could be of size 96 bytes (maligned)
type VisualOpts struct {
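These fixes are pure field reordering: Go inserts padding so each field sits at its alignment boundary, so placing larger fields first eliminates holes. A toy example of the effect (not the actual Account layout):

    package main

    import (
        "fmt"
        "unsafe"
    )

    // Badly ordered: each bool is followed by 7 bytes of padding so the
    // next 8-byte field stays aligned (on 64-bit platforms).
    type loose struct {
        a bool   // 1 byte + 7 padding
        b uint64 // 8 bytes
        c bool   // 1 byte + 7 padding
        d uint64 // 8 bytes
    }

    // Same fields, largest first: the bools pack together at the end.
    type packed struct {
        b uint64
        d uint64
        a bool
        c bool // + 6 bytes trailing padding
    }

    func main() {
        fmt.Println(unsafe.Sizeof(loose{}), unsafe.Sizeof(packed{})) // 32 24
    }
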
This change works around the 32k RDATA character limit per change
request and fixes several issues in the deployer which prevented it from
working for our production trees.
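The workaround is to split record changes across multiple requests. A rough sketch of the batching logic, with hypothetical types (the deployer's real code differs, and the provider's exact size accounting has more rules):

    package dnsdeploy

    // maxRDATASize mirrors the 32k RDATA character limit per change request.
    const maxRDATASize = 32000

    type change struct {
        name  string
        rdata string
    }

    // splitChanges groups changes into batches whose combined RDATA size
    // stays under the per-request limit.
    func splitChanges(all []change) [][]change {
        var batches [][]change
        var batch []change
        size := 0
        for _, ch := range all {
            if len(batch) > 0 && size+len(ch.rdata) > maxRDATASize {
                batches = append(batches, batch)
                batch, size = nil, 0
            }
            batch = append(batch, ch)
            size += len(ch.rdata)
        }
        if len(batch) > 0 {
            batches = append(batches, batch)
        }
        return batches
    }
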
This is a temporary fix for a problem which started happening when the
dialer was changed to read nodes from an enode.Iterator. Before the
iterator change, discovery queries would always return within a couple of
seconds even if there was no Internet access. Since the iterator won't
return unless a node is actually found, discoverTask can take much
longer. This means that the 'emergency connect' logic might not execute
in time, leading to a stuck node.
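One generic way to bound such a blocking lookup (illustrative only, not the dialer's actual fix): run the iterator's next step in a goroutine and select against a timer, so timer-driven logic like the emergency connect can still run.

    package dialer

    import (
        "errors"
        "time"
    )

    var errLookupTimeout = errors.New("discovery lookup timed out")

    // nextWithTimeout wraps a blocking next() so the caller regains
    // control after d, even when discovery never finds a node.
    func nextWithTimeout(next func() (string, error), d time.Duration) (string, error) {
        type result struct {
            node string
            err  error
        }
        ch := make(chan result, 1)
        go func() {
            n, err := next()
            ch <- result{n, err}
        }()
        select {
        case r := <-ch:
            return r.node, r.err
        case <-time.After(d):
            return "", errLookupTimeout
        }
    }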