* handle cursor.Prefix on server
- move state reports to KV interface
* add CmdCursorSeekKey
* tests for abstract_kv
* avoid reading configs of databases
* make linter happy
* cleanup
* port badger features from original implementation
* try to fix test
* .Close() doesn't return an error anymore - defer friendly
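A minimal sketch of the defer pattern this enables, with a hypothetical DB type (not the real interface):

    package example

    // Hypothetical DB type, only to illustrate the defer-friendly signature.
    type DB struct{}

    func MustOpen(path string) *DB { return &DB{} }

    // Close returns nothing, so callers can defer it directly without an
    // ignored error or a wrapper closure, and errcheck stays quiet.
    func (db *DB) Close() {}

    func useDB(path string) {
        db := MustOpen(path)
        defer db.Close()
        // ... use db ...
    }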
* try to enable badger now
* badger can't run on CI yet
* re-run ci
* skip ctx cancelation for badger
* Introduce NoValuesCursor. The From() method is useless because it can be replaced by Seek().
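Rough sketch of the idea, with assumed method signatures (not the actual turbo-geth interface): Seek(prefix) already positions the cursor at the first key >= prefix, so a separate From() entry point adds nothing, and a NoValuesCursor can iterate keys and value sizes without fetching the values.

    package example

    // Assumed cursor shape, for illustration only.
    type NoValuesCursor interface {
        Seek(seek []byte) (key []byte, valueSize uint32, err error)
        Next() (key []byte, valueSize uint32, err error)
    }

    // walkKeys visits every key starting at prefix using Seek instead of a
    // dedicated From method.
    func walkKeys(c NoValuesCursor, prefix []byte, visit func(k []byte) error) error {
        k, _, err := c.Seek(prefix)
        for ; err == nil && k != nil; k, _, err = c.Next() {
            if err2 := visit(k); err2 != nil {
                return err2
            }
        }
        return err
    }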
* implement NoValueCursor interface
* use abstract db in restapi
* cleanup .md
* another way to check if account has storage
* cleanup
* v0 of walk by db version
* save progress to switch to another task. Putting tombstones is still not correct.
* place tombstone only if there is something to hide
* db-based implementation
* fix prop check
* improve prop check logic
* Need custom logic to skip subtrees for the account and storage buckets because the storage bucket has the incarnation in its key
* rebase to master
* remove all tombstones when account deleted
* added db integrity check
* don't rely on account.Root because it is valid only for the last incarnation
* remove all tombstones when account deleted
* deal with incarnation in MultiWalk2
* when fixedbytes=40, the resolver compared cacheKey with storageKey without removing the incarnation
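A sketch of the fix, assuming the 32-byte address hash + 8-byte incarnation + 32-byte storage hash key layout implied by fixedbytes=40 (the real resolver code differs):

    package example

    import "bytes"

    // Assumed key layout: addrHash (32) | incarnation (8) | storageHash (32).
    const (
        addrHashLen    = 32
        incarnationLen = 8
    )

    // stripIncarnation drops the 8-byte incarnation so the storage key can be
    // compared against cache keys that never contain one.
    func stripIncarnation(storageKey []byte) []byte {
        if len(storageKey) <= addrHashLen+incarnationLen {
            return storageKey
        }
        out := make([]byte, 0, len(storageKey)-incarnationLen)
        out = append(out, storageKey[:addrHashLen]...)
        out = append(out, storageKey[addrHashLen+incarnationLen:]...)
        return out
    }

    // matches is the comparison the commit fixes: it must be done on the
    // incarnation-free form of the storage key.
    func matches(cacheKey, storageKey []byte) bool {
        return bytes.HasPrefix(stripIncarnation(storageKey), cacheKey)
    }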
* rebase to master
Co-authored-by: alex.sharov <alex.sharov@lazada.com>
* implement NoValueCursor interface
* cleanup
* fix tests
* add more stats data to ui
* can't display error
* re-open DB low-level net interface when changing db
* fix problem with displaying errors
* run ci
* improve prop check logic
* storage tombstones integrity checks UI
* storage page
* make DB configurable
Co-authored-by: alex.sharov <alex.sharov@lazada.com>
* add env INTERMEDIATE_TRIE_CACHE
* try to use assert.New() pattern
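Example of the assert.New() pattern (testify): bind *testing.T once so each assertion drops the repeated t argument.

    package example

    import (
        "testing"

        "github.com/stretchr/testify/assert"
    )

    func TestExample(t *testing.T) {
        assert := assert.New(t) // bind t once

        got := 2 + 2
        assert.Equal(4, got)
        assert.NotNil(&got)
    }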
* Fix "maligned" linter warnings to reduce space consumption of structs:
core/types/accounts/account.go:18:14: struct of size 136 bytes could be of size 128 bytes (maligned)
type Account struct {
--
trie/node.go:44:10: struct of size 80 bytes could be of size 72 bytes (maligned)
duoNode struct {
--
trie/resolve_set.go:28:17: struct of size 56 bytes could be of size 48 bytes (maligned)
type ResolveSet struct {
--
trie/resolver.go:34:15: struct of size 88 bytes could be of size 72 bytes (maligned)
type Resolver struct {
--
trie/visual.go:32:17: struct of size 104 bytes could be of size 96 bytes (maligned)
type VisualOpts struct {
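The warnings above come from struct padding; a generic illustration (not the real Account/duoNode fields) of how reordering fields shrinks a struct on 64-bit platforms:

    package main

    import (
        "fmt"
        "unsafe"
    )

    // Fields are padded to their alignment, so interleaving small and large
    // fields wastes space.
    type padded struct { // 8 + 1 + 7 pad + 8 + 1 + 7 pad = 32 bytes
        a uint64
        b bool
        c uint64
        d bool
    }

    type packed struct { // 8 + 8 + 1 + 1 + 6 pad = 24 bytes
        a uint64
        c uint64
        b bool
        d bool
    }

    func main() {
        fmt.Println(unsafe.Sizeof(padded{}), unsafe.Sizeof(packed{})) // 32 24
    }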
* initial
* mining
* remove debug
* debug
* restore random seed in the mining tests
* green tests
* fix blockchain tests
* fix lint
* init miner only if asked
* linters
* do not store trie as singleton
* fmt
* new trieDbState constructor
* Bumping GOMAXPROCS for Badger
* fixes related to database size
* Schedule GC for Badger
* pacify linter
* Don't start GC for ephemeral Badger
* Don't log "Value log GC attempt didn't result in any cleanup"
* Start GC in background
* Bump GC period and IdealBatchSize for Badger
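A sketch of the background value-log GC loop these commits describe; the period and discard ratio are illustrative, and badger.ErrNoRewrite is the sentinel behind the "didn't result in any cleanup" message that is no longer logged:

    package example

    import (
        "time"

        "github.com/dgraph-io/badger"
    )

    // startBadgerGC runs value-log GC on a timer in the background until quit
    // is closed. Illustrative values only; the real period/ratio differ.
    func startBadgerGC(db *badger.DB, quit <-chan struct{}) {
        go func() {
            ticker := time.NewTicker(10 * time.Minute)
            defer ticker.Stop()
            for {
                select {
                case <-quit:
                    return
                case <-ticker.C:
                    // Collect until a pass reclaims nothing.
                    for {
                        err := db.RunValueLogGC(0.5)
                        if err == badger.ErrNoRewrite {
                            break // normal steady state, not worth logging
                        }
                        if err != nil {
                            break
                        }
                    }
                }
            }
        }()
    }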
* BadgerDatabase RewindData
* Boolean badger flag -> string database flag
* cosmetic change
* ethdb: add Putter interface and Has method
* ethdb: improve docs and add IdealBatchSize
* ethdb: remove memory batch lock
Batches are not safe for concurrent use.
* core: use ethdb.Putter for Write* functions
This covers the easy cases.
* core/state: simplify StateSync
* trie: optimize local node check
* ethdb: add ValueSize to Batch
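Taken together, the ethdb commits above imply roughly this interface shape; a sketch, not a verbatim copy of the upstream package:

    package example

    // IdealBatchSize is the write-size threshold at which callers are expected
    // to flush a batch (the exact upstream constant may differ).
    const IdealBatchSize = 100 * 1024

    // Putter wraps the write half of the database, so functions that only
    // write can accept either a live database or a batch.
    type Putter interface {
        Put(key []byte, value []byte) error
    }

    // Database is the minimal read/write interface assumed here.
    type Database interface {
        Putter
        Get(key []byte) ([]byte, error)
        Has(key []byte) (bool, error)
        Delete(key []byte) error
        Close()
        NewBatch() Batch
    }

    // Batch accumulates writes; ValueSize reports how much data is pending so
    // callers can flush once it exceeds IdealBatchSize.
    type Batch interface {
        Putter
        ValueSize() int
        Write() error
    }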
* core: optimize HasHeader check
This avoids one random database read to get the block number. For many uses
of HasHeader, the expectation is that it's actually there. Using Has
avoids a load + decode of the value.
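A sketch of the optimized check: the caller already supplies the number, and Has only tests key existence, so the header is never loaded or decoded (headerKey below is a simplified stand-in for the real key schema):

    package example

    import "encoding/binary"

    type Database interface {
        Has(key []byte) (bool, error)
    }

    // headerKey is a simplified number||hash key; the real prefix and layout
    // in the repository may differ.
    func headerKey(number uint64, hash [32]byte) []byte {
        var enc [8]byte
        binary.BigEndian.PutUint64(enc[:], number)
        key := append([]byte{'h'}, enc[:]...)
        return append(key, hash[:]...)
    }

    // HasHeader asks whether the key exists instead of reading and decoding
    // the header, which is the saving the commit message refers to.
    func HasHeader(db Database, hash [32]byte, number uint64) bool {
        ok, _ := db.Has(headerKey(number, hash))
        return ok
    }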
* core: write fast sync block data in batches
Collect writes into batches up to the ideal size instead of issuing many
small, concurrent writes.
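The batching pattern behind this and the downloader commit below, using the Batch/IdealBatchSize shape from the ethdb sketch above (assumed, simplified):

    package example

    const IdealBatchSize = 100 * 1024

    type Batch interface {
        Put(key, value []byte) error
        ValueSize() int
        Write() error
    }

    type kv struct{ key, value []byte }

    // writeInBatches accumulates writes and flushes whenever the pending size
    // crosses IdealBatchSize, instead of issuing many small writes.
    func writeInBatches(batch Batch, items []kv) error {
        for _, it := range items {
            if err := batch.Put(it.key, it.value); err != nil {
                return err
            }
            if batch.ValueSize() >= IdealBatchSize {
                if err := batch.Write(); err != nil {
                    return err
                }
                // Assumption: the batch can be reused after Write; upstream
                // code resets or re-creates it at this point.
            }
        }
        return batch.Write()
    }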
* eth/downloader: commit larger state batches
Collect nodes into a batch up to the ideal size instead of committing
whenever a node is received.
* core: optimize HasBlock check
This avoids a random database read to get the number.
* core: use numberCache in HasHeader
numberCache has higher capacity, increasing the odds of finding the
header without a database lookup.
* core: write imported block data using a batch
Restore batch writes of state and add blocks, tx entries, receipts to
the same batch. The change also simplifies the miner.
This commit also removes posting of logs when a forked block is imported.
* core: fix DB write error handling
* ethdb: use RLock for Has
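Sketch of the change: existence checks only need the read lock, so concurrent Has calls on the in-memory database no longer serialize (simplified struct):

    package example

    import "sync"

    // Simplified stand-in for the in-memory database.
    type MemDatabase struct {
        lock sync.RWMutex
        db   map[string][]byte
    }

    // Has takes only the read lock, so concurrent readers don't block each other.
    func (m *MemDatabase) Has(key []byte) (bool, error) {
        m.lock.RLock()
        defer m.lock.RUnlock()
        _, ok := m.db[string(key)]
        return ok, nil
    }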
* core: fix HasBlock comment