For longer records and subtree entries, the deployer created two
separate TXT records. This doesn't work as intended because the client
will receive the two records in arbitrary order. The fix is to encode
longer values as "string1""string2" instead of "string1", "string2".
This encoding creates a single record on AWS Route53.
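As a hedged illustration (the helper name and chunking here are hypothetical, not the deployer's actual code), packing a long value into one record can look like this: each chunk of at most 255 characters is quoted, and the quoted chunks are concatenated into a single TXT record value.

```go
package main

import (
	"fmt"
	"strings"
)

// splitTXT packs a long value into a single TXT record value of the form
// "chunk1""chunk2", using chunks of at most 255 characters (the limit of a
// single TXT character-string). Hypothetical helper, for illustration only.
func splitTXT(value string) string {
	var sb strings.Builder
	for len(value) > 0 {
		n := len(value)
		if n > 255 {
			n = 255
		}
		sb.WriteString(`"` + value[:n] + `"`)
		value = value[n:]
	}
	return sb.String()
}

func main() {
	// Prints three quoted chunks that together form one record value.
	fmt.Println(splitTXT(strings.Repeat("x", 600)))
}
```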
Adds the 'geth dumpgenesis' command, which writes the configured
genesis in JSON format to stdout. This provides a way to generate a
genesis file (structure and content) that can then be used with the
'geth init' command.
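For illustration, here is a minimal sketch of the underlying idea, assuming go-ethereum's built-in mainnet genesis spec; the actual command dumps whatever genesis the node is configured with.

```go
package main

import (
	"encoding/json"
	"os"

	"github.com/ethereum/go-ethereum/core"
)

func main() {
	// Write a genesis spec as JSON to stdout, roughly what 'geth dumpgenesis'
	// does. DefaultGenesisBlock is the mainnet spec; the real command uses
	// the node's configured genesis instead.
	genesis := core.DefaultGenesisBlock()
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(genesis); err != nil {
		panic(err)
	}
}
```

The resulting JSON could then be passed to 'geth init' on a fresh data directory.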
This replaces the JavaScript interpreter used by the console with goja,
which is actively maintained and a lot faster than otto. Clef still uses otto
and eth/tracers still uses duktape, so we are currently dependent on three
different JS interpreters. We're looking to replace the remaining uses of otto
soon though.
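As a standalone sketch of what goja provides (unrelated to the console wiring itself), it evaluates JavaScript in-process and maps results back to Go values:

```go
package main

import (
	"fmt"

	"github.com/dop251/goja"
)

func main() {
	vm := goja.New()
	// Evaluate a small script; Export converts the JS result to a Go value.
	v, err := vm.RunString(`[1, 2, 3].map(function (x) { return x * 2; })`)
	if err != nil {
		panic(err)
	}
	fmt.Println(v.Export()) // [2 4 6]
}
```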
* log: delete RotatingFileHandler
We added this for the dashboard, which is gone now. The
handler never really worked well and had data race and file
handling issues.
* internal/debug: remove unused RotatingFileHandler setup code
* squash commits
* enable storage cache
* make linter happy
* fix subtree prefix len check
* save changes to test master
* remove restriction on prefix len
* fix comparison of last bits
* fix wrong alignment
* remove debug prints
* commit current state
* commit current state
* avoid changing state of resolver from multiwalk
* remove debug code
* remove debug code
* remove debug code
* remove unnecessary copy
* make code more readable
* reduce rebuildHashes initial resolution
* fix test after rebase to master
* make code more readable
* improve pruner
* pruner add IntermediateCache bucket
* fix panic in Walk on short keys
* reduce allocations for storage keys decompression by increasing default buffer size
* re-run CI
* fix iterator behaviour
* rename cache to hash for unification
* re-run ci
* avoid using underlying DB
* hash all subtree nodes before unload
* fix getNode method
* need to check node type, not parent - before put to hashBucket
* bring back parent type check; doesn't work without it.
* don't recalculate hash again
* move unloadFunc from trie to pruner
* rename bucket to shorter name
* rename bucket to shorter name
* clean
* rebase to master
* Use TrieDbState for tx pool
* Don't initialise tx pool until state is loaded
* Add preimage
* Fix account
* Print codehash
* Print correct code hash
* Print incarnation
* Print incarnation
* Use proper incarnation
* Print dbValue
* Actually fix
* Actually fix
* Fix verifySnapshot
* readAccount to get code hash
* Next incarnation
* Print addrHashes with 0 incarnations
* Print storage history
* Print storage history
* Print storage history
* Print storage history
* Print all storage history
* print change set keys
* print change set keys
* print change set keys
* print change set keys
* Don't print codebucket info
* Fixes
* Fix for incarnation
* Fix for storage history bucket
* Try to fix the leak
* Try to fix the leak
* Try to fix the leak
* Try to fix the leak
* Try to fix the leak
* Try to fix the leak
* Try to fix the leak
* Fix embedded nodes
* Hasher
* Fix
* Test fixes
* Add experimental debug flag
* Fix tx_pool_test
* Disable GetNodeData test unless in experiment
* Fix more tests
* Fix lint and revert some changes
* Fix lint
* Fix lint
* stateless: flush the partial witnesses DB for every saved witness.
It seems to fix the out-of-memory error I had on blocks over 2.5M.
* fix linters
* simplify
* add context
* extract chain events
* run commit in goroutines
* mine only on canonical
* typo
* linters
* fmt
* mark unused methods
* restore stress test
* test single miner
* remove unsafe Trie storage
* remove locks from miner
* restore interrupt
* remove result goroutine
* remove unconfirmedBlocks
* cherry-pick 04a1d475ff1a36ad8f92fec80385df18c52bdc1f
* extract uncles
* one miner succeeded
* restore context cancel
* cleanup
* skip an unstable test
* remove pending state
* use context instead of interrupt func
* calculate sealHash only once
* comment out unstable test
* after merge
* fix after merge
Co-authored-by: ledgerwatch <akhounov@gmail.com>
This change works around the 32k RDATA character limit per change
request and fixes several issues in the deployer which prevented it from
working for our production trees.
* p2p/dnsdisc: add support for enode.Iterator
This changes the dnsdisc.Client API to support the enode.Iterator
interface (see the iterator sketch after this commit list).
* p2p/dnsdisc: rate-limit DNS requests
* p2p/dnsdisc: preserve linked trees across root updates
This improves the way links are handled when the link root changes.
Previously, sync would simply remove all links from the current tree and
garbage-collect all unreachable trees before syncing the new list of
links.
This behavior isn't great in certain cases: Consider a structure where
trees A, B, and C reference each other and D links to A. If D's link
root changed, the sync code would first remove trees A, B and C, only to
re-sync them later when the link to A was found again.
The fix for this problem is to track the current set of links in each
clientTree and remove old links only AFTER all links are synced (see the
link-tracking sketch after this commit list).
* p2p/dnsdisc: deflake iterator test
* cmd/devp2p: adapt dnsClient to new p2p/dnsdisc API
* p2p/dnsdisc: tiny comment fix
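Below is a minimal sketch of consuming the iterator-based dnsdisc API described above. It is an illustration, not code from the change: the tree URL is a placeholder, and exact constructor signatures may differ between versions.

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/p2p/dnsdisc"
)

func main() {
	// Create a client; a zero Config falls back to default settings.
	client := dnsdisc.NewClient(dnsdisc.Config{})

	// NewIterator walks the DNS tree lazily and yields nodes as they are
	// found. The enrtree URL below is a placeholder, not a real tree.
	it, err := client.NewIterator("enrtree://PUBLICKEY@nodes.example.org")
	if err != nil {
		panic(err)
	}
	defer it.Close()

	// Print the first few discovered node records.
	for i := 0; i < 5 && it.Next(); i++ {
		fmt.Println(it.Node().String())
	}
}
```

And here is a hedged illustration of the link-tracking fix, not the actual p2p/dnsdisc code: the new link set is collected first, and only links absent from that fully synced set are garbage-collected, so shared subtrees are never dropped and re-synced.

```go
package dnsdisc

// updateLinks is a hypothetical helper showing the ordering: build the new
// link set first, then remove only the links that are no longer referenced.
func updateLinks(current map[string]bool, newLinks []string, remove func(string)) map[string]bool {
	next := make(map[string]bool, len(newLinks))
	for _, l := range newLinks {
		next[l] = true
	}
	for l := range current {
		if !next[l] {
			remove(l) // garbage-collect only links missing from the synced set
		}
	}
	return next
}
```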
* added prefix tree to analyses to reduce memory usage
* make new partition every day
* merge concepts of reporter and snapshot
* tests for .FirstKey() and .NextKey()