* Bumping GOMAXPROCS for Badger
* fixes related to database size
* Schedule GC for Badger
* pacify linter
* Don't start GC for ephemeral Badger
* Don't log "Value log GC attempt didn't result in any cleanup"
* Start GC in background
* Bump GC period and IdealBatchSize for Badger
* BadgerDatabase RewindData
* Boolean badger flag -> string database flag
* cosmetic change
* Fetching results of eth_getProof
* Dump 5 levels of the trie in a file for repeated runs
* Drill down to 6 levels of the trie
* Fix lint
* Fix lint
* Fix lint
* verifySnapshot to check accounts with emptyRoot
* Descend into short nodes
* Latest tool fixes
* Fix lint
* Fix state to work properly
* all: freezer style syncing
core, eth, les, light: clean up freezer relative APIs
core, eth, les, trie, ethdb, light: clean a bit
core, eth, les, light: add unit tests
core, light: rewrite setHead function
core, eth: fix downloader unit tests
core: add receipt chain insertion test
core: use constant instead of hardcoding table name
core: fix rollback
core: fix setHead
core/rawdb: remove canonical block first and then iterate side chain
core/rawdb, ethdb: add hasAncient interface
eth/downloader: calculate ancient limit via cht first
core, eth, ethdb: lots of fixes
* eth/downloader: print ancient disable log only for fast sync
* core, eth, trie: bloom filter for trie node dedup during fast sync
* eth/downloader, trie: address review comments
* core, ethdb, trie: restart fast-sync bloom construction now and again
* eth/downloader: initialize fast sync bloom on startup
* eth: reenable eth/62 until we properly remove it
This PR is a more advanced form of the dirty-to-clean cacher (#18995),
where we reuse previous database write batches as datasets to uncache,
saving a dirty-trie-iteration and a dirty-trie-rlp-reencoding per block.
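
To illustrate the idea, here is a minimal sketch with invented names (`dirtyCache`, `writeBatch`, `uncacheFlushed`); the real trie.Database and ethdb batch APIs differ, so treat this as an illustration of reusing a flushed batch as the eviction set, not the actual implementation.

```go
// Sketch only: all types and functions below are hypothetical stand-ins.
package main

import "fmt"

// dirtyCache stands in for the in-memory cache of dirty trie nodes.
type dirtyCache struct {
	nodes map[string][]byte // node hash -> encoded node
}

func (c *dirtyCache) remove(key string) {
	delete(c.nodes, key)
}

// writeBatch stands in for a database write batch that remembers which
// keys it persisted, so it can later be replayed as an uncache set.
type writeBatch struct {
	keys []string
}

// uncacheFlushed evicts from the dirty cache exactly the entries the batch
// just wrote to disk, avoiding a second iteration over the dirty trie and
// a second RLP re-encoding of its nodes.
func uncacheFlushed(cache *dirtyCache, batch *writeBatch) {
	for _, key := range batch.keys {
		cache.remove(key)
	}
}

func main() {
	cache := &dirtyCache{nodes: map[string][]byte{"node-a": {0x01}, "node-b": {0x02}}}
	batch := &writeBatch{keys: []string{"node-a"}} // pretend node-a was flushed
	uncacheFlushed(cache, batch)
	fmt.Println(len(cache.nodes)) // 1: only node-b is still dirty
}
```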
The current trie memory database/cache that we do pruning on stores
trie nodes as binary RLP encoded blobs, and also stores the node
relationships/references for GC purposes. However, most trie nodes
(everything apart from a value node) are in essence just collections
of references.
This PR replaces the RLP encoded trie blobs with the
collapsed-but-not-serialized trie nodes. This permits most of the
references to be recovered from within the node data structure itself,
avoiding the need to track them a second time (which is expensive
memory-wise).
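
A compact sketch of the difference, using simplified stand-ins for the trie node shapes (full, short, hash and value nodes) and an invented `cachedNode`/`blobEntry` pair; the real trie package definitions carry more bookkeeping, so this only illustrates why references can be recovered from the collapsed node instead of being tracked separately.

```go
// Sketch only: simplified node shapes and hypothetical cache entries.
package sketch

type node interface{}

type (
	fullNode  struct{ Children [17]node } // 16 hex children + value slot
	shortNode struct {
		Key []byte
		Val node
	}
	hashNode  []byte // reference to a node stored elsewhere, by hash
	valueNode []byte // only node kind that carries actual leaf data
)

// Before: the cache stored an RLP blob plus an explicit child list,
// maintained separately just for GC (hypothetical layout).
type blobEntry struct {
	blob     []byte           // rlp(node)
	children map[string]int16 // child hash -> ref count, tracked by hand
}

// After: the cache stores the collapsed node itself; child references are
// recovered on demand by walking the structure, so they need no second copy.
type cachedNode struct {
	node    node  // collapsed, not-yet-serialized trie node
	parents int32 // live parents still referencing this node
}

// forChildren walks the collapsed node and reports every hash reference it
// contains, showing that the relationships live in the node structure itself.
func (c *cachedNode) forChildren(fn func(hashNode)) {
	var walk func(n node)
	walk = func(n node) {
		switch n := n.(type) {
		case *fullNode:
			for _, child := range n.Children {
				if child != nil {
					walk(child)
				}
			}
		case *shortNode:
			walk(n.Val)
		case hashNode:
			fn(n)
		case valueNode:
			// leaf data, no references
		}
	}
	walk(c.node)
}
```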