```
go run ./cmd/integration reset --chaindata=...
go run ./cmd/integration state_stages -h
go run ./cmd/integration state_stages --chaindata=... --verbosity=3 --block=2_000_000 --unwind=10 --unwind_every=1_000
```
It also inherits flags from geth:
```
--pprof.cpuprofile=./cpu.out // write CPU profile to the given file
--pprof --pprof.port=6060    // launch the pprof HTTP server
--metrics                    // export metrics for Prometheus
```
* fix case: if header stage is ahead of body stage
* allow stage3 to sync to given block
* TestWatchNoDir not to be parallel
* Print ks and inc
* Print addrHash
* Change the buffer
* Print loading
* Skip
* More logging
* Error out earlier
* Handle empty codes
* Remove logging
* Compare states
* Not do stage5
* compareBucket
* Preimage
* Clearer errors
* No need to clean up contract code
* Restore stage6
* Printing
* Skip the skipping
* Print all
* Change buffer type
* Add limit to stage5
* Always fail
* Remove excessive logging
* Restore buffer type
* Revert
* Print when exception
* Reenable skipping
* Skip storage items for deleted contracts
* not shortcut
* Remove removeAccount
* Re-enable state hashing
* Default to plain state
* Disable hashing state
* Reenable reset5
* Print unfurl list
* Enable removingAccount
* No printing
* Reenable stage5 commit
* Swap order of stages
* Prevent backwards promotion, reset tx lookup
* reset finish
* Introduce storage item replacement
* See if unwind works
* Restore removingAccount
* Don't do removeAccount for unwinding
* Possible fix
* Proper(er) fix
* Don't exclude unwinding
* Remove unwinding flag
* Fix formatting
* Fix lint
* Not to ignore blocks if they cause reorg
* Fix test, separate stages again
* Fix TestUnwind
* Fix stage
* Swap unwinding
* Revert to unwinding flag
* Print unfurl list
* Print
* Print inside receive
* Print after
* No printing
* Cleanup
* Not use blockCache when doing GetBlock
* Handle bucket error
* Replace with 0
* SetMaxDBs
* Set MaxDBs before opening
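A minimal sketch of the ordering these two commits enforce, assuming the bmatsuo/lmdb-go bindings (path and sizes are illustrative): MaxDBs and the map size can only be set on the environment before `env.Open` is called.
```
package main

import (
	"log"

	"github.com/bmatsuo/lmdb-go/lmdb"
)

func openEnv(path string) (*lmdb.Env, error) {
	env, err := lmdb.NewEnv()
	if err != nil {
		return nil, err
	}
	// Both settings are rejected after env.Open, hence the fix above.
	if err := env.SetMaxDBs(100); err != nil { // max number of named buckets
		return nil, err
	}
	if err := env.SetMapSize(1 << 40); err != nil { // 1 TiB mmap ceiling
		return nil, err
	}
	if err := env.Open(path, 0, 0644); err != nil {
		return nil, err
	}
	return env, nil
}

func main() {
	env, err := openEnv("./testdb") // directory must already exist
	if err != nil {
		log.Fatal(err)
	}
	defer env.Close()
}
```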
* Merge stage5 and stage6
* Fix lint
* Make downloader tests not parallel
* remove unused slice from MultiPut
* mutation: reuse tuples slice and preallocate bucketPuts
* use bucketPool in kv_lmdb
* remove duplicated check of context status
* more benchmarks
* remove reusage of puts
* remove ctx from MustOpen
* remove ctx from Open. Stop goroutines on Close.
* remove ctx from remote open (we have DialTimeout field to manage connection timeouts)
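Dropping the context from `Open` follows the usual Go lifecycle pattern: the object owns its background goroutines and stops them in `Close`. A hypothetical sketch (not the actual kv code):
```
package kv

import (
	"sync"
	"time"
)

// DB owns its background goroutines and stops them on Close, so
// callers no longer pass a context (and its cancellation) into Open.
type DB struct {
	quit chan struct{}
	wg   sync.WaitGroup
}

func Open() *DB {
	db := &DB{quit: make(chan struct{})}
	db.wg.Add(1)
	go func() {
		defer db.wg.Done()
		ticker := time.NewTicker(time.Minute)
		defer ticker.Stop()
		for {
			select {
			case <-db.quit:
				return // stopped by Close instead of ctx.Done()
			case <-ticker.C:
				// periodic background work, e.g. stats collection
			}
		}
	}()
	return db
}

func (db *DB) Close() {
	close(db.quit)
	db.wg.Wait() // don't return until all goroutines have exited
}
```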
* enable RawReads and add native implementation of Get/Has methods
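A sketch of what native Get/Has with RawReads enabled look like, again assuming the bmatsuo/lmdb-go bindings; the bucket handling is simplified:
```
package kv

import "github.com/bmatsuo/lmdb-go/lmdb"

// get returns a copy of the value. With RawRead, the slice returned by
// txn.Get points directly into the memory map and is only valid inside
// the transaction, so it must be copied out.
func get(env *lmdb.Env, bucket string, key []byte) ([]byte, error) {
	var val []byte
	err := env.View(func(txn *lmdb.Txn) error {
		txn.RawRead = true // zero-copy reads
		dbi, err := txn.OpenDBI(bucket, 0)
		if err != nil {
			return err
		}
		v, err := txn.Get(dbi, key)
		if lmdb.IsNotFound(err) {
			return nil // missing key: val stays nil
		}
		if err != nil {
			return err
		}
		val = append([]byte{}, v...) // copy out of the mmap
		return nil
	})
	return val, err
}

// has checks existence only, so nothing needs to be copied.
func has(env *lmdb.Env, bucket string, key []byte) (bool, error) {
	found := false
	err := env.View(func(txn *lmdb.Txn) error {
		txn.RawRead = true
		dbi, err := txn.OpenDBI(bucket, 0)
		if err != nil {
			return err
		}
		if _, err := txn.Get(dbi, key); err != nil {
			if lmdb.IsNotFound(err) {
				return nil
			}
			return err
		}
		found = true
		return nil
	})
	return found, err
}
```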
* save state
* txlookup full results
* save state
* save state
* remove experiments
* some fix&lint
* add end key to txLookup and index generation
* change log message
* change log
* fix lint
* lint
* fix test
* Euphemerally -> Ephemerally
* Move StorageMode to ethdb and pass it to PrepareStagedSync
* linter
* Remove StorageModeThinHistory and move SetStorageModeIfNotExist into storage_mode.go
* Optionally write receipts in the execute stage
* memory profiler
* linter
* proper linter fix
* linter
* typo
* Merge stateDb with changeDb so that all-or-nothing is committed in stage_execute
use `s.UpdateWithStageData(db, <block number>, <key>)` to store the key
use `s.StageData` with `etl.NextKey` to restart ETL from where it was interrupted.
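A sketch of that save/resume flow; `chunksFrom` and `process` are hypothetical stand-ins for the real ETL loop, and the `StageState` pieces are approximated from the two lines above:
```
package stagedsync

import "github.com/ledgerwatch/turbo-geth/ethdb"

// Approximation of the real StageState, reduced to the pieces named above.
type StageState struct {
	BlockNumber uint64
	StageData   []byte // key saved by a previous, interrupted run
}

func (s *StageState) UpdateWithStageData(db ethdb.Database, blockNum uint64, key []byte) error {
	return nil // the real method persists progress in the database
}

// Hypothetical stand-ins for the real ETL loop.
type chunk struct{ nextKey []byte }

func chunksFrom(startKey []byte) []chunk       { return nil }
func process(db ethdb.Database, c chunk) error { return nil }

func execStage(s *StageState, db ethdb.Database, blockNum uint64) error {
	startKey := s.StageData // empty on a fresh start
	for _, c := range chunksFrom(startKey) {
		if err := process(db, c); err != nil {
			return err
		}
		// Checkpoint: a restarted run resumes the ETL from c.nextKey.
		if err := s.UpdateWithStageData(db, blockNum, c.nextKey); err != nil {
			return err
		}
	}
	return nil
}
```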
* Start from 9m7
* Regenerate IH + receipts
* Only stats for iH bucket
* Persist receipts
* Go all in
* Start from block 10m
* Convert DbState to use plain state
* Fix findHistory
* Hard-code export
* More fixes
* Fix test
* Fix formatting
* Introduce PlainDbState
* Actually return PlainDbState
* Fix formatting
* Fix name style lint
* Fix linters
* Fix history_test
* Fix blockchain_test
* Fix compile error
* Bucket stats from all buckets
* Lmdb internal object reuse (a built-in feature of lmdb):
- lmdb read-transaction pool
- lmdb also supports cursor reuse, but it is not implemented in this PR
And reuse of the kv abstraction objects (see the sketch after this list):
- lmdbKV pool of all tx objects
- boltKV pool of all tx objects
- badgerKV pool of all tx objects
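The kv-level pools are the standard `sync.Pool` pattern; a hypothetical sketch (the real lmdbKV/boltKV/badgerKV wrappers differ):
```
package kv

import "sync"

// lmdbTx wraps a transaction handle; pooling the wrapper avoids a fresh
// allocation every time a read transaction is opened.
type lmdbTx struct {
	// ... handle, cursors, etc.
}

var txPool = sync.Pool{
	New: func() interface{} { return &lmdbTx{} },
}

func acquireTx() *lmdbTx { return txPool.Get().(*lmdbTx) }

func releaseTx(tx *lmdbTx) {
	*tx = lmdbTx{} // reset so no state leaks between users
	txPool.Put(tx)
}
```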
* switch makefile back to bolt
* lmdb test run
* switch makefile back to bolt
* cursors pool
* run lmdb tests
* make kv objects pool global
* switch makefile back to bolt
* remove badger's GOMAXPROCS setup, because our app is tuned for sequential reads/writes, not for random throughput
* simplify code
* Query progress
* Run stage4 offline
* More thorough resetState
* Correct BlockNumber
* Fix formatting
* State loop
* do every 200k blocks
* Shift to 6.6m
* Close dbs in tests
* Stage2 with option of no reset
* every 100k blocks
* Reset state before stage5
* Introduce another stage
* Check compile errors
* Fix linter
* Disable unreliable test
* Fix test
* Remove unreachable code
* resetIH from scratch if needed
* lmdb
* add AbstractKV to loader, added new Object accessor around AbstractKV
* add lmdb cli flag
* add requirement of k!=nil on error in docs
* add Size method for compatibility
* read after put tests
* fix multiput nils
* simplify loops
* increase mmap size
* better error messages
* fix tests
* better error messages
* cleanup
* avoid bolt usage in test
* move hardcoded bucket name to dbutils
* register more buckets
* fix test
* db based version of PrefixByCumulativeWitnessSize
* retain all in Trie by default
* fix WitnessLen logic in calcTrie roots
* Rename IntermediateTrieWitnessLenBucket to IntermediateWitnessLenBucket
* handle corner cases in WL
* Use correct incarnation for IH bucket
* use name WitnessSize
* save progress towards db-only witness estimation
* results from trie and from db are still different
* less recursion
* correct incarnation in CumulativeSearch
* reuse results from previous Tick, separate concepts of parent and startKey
* experiment: check whether excluding the trie structure from WitnessSize reduces the cumulative error
* tool to generate all IH and tool to assess the cumulative error
* tool to generate all IH
* Calculate totalWitnessSize based on DB data - then schedule will not overrun state during MGR cycle
* better stats
* Calculate totalWitnessSize based on DB data - then schedule will not overrun state during MGR cycle
* calculate ticks size distribution
* estimate cumulative error
* fix linter
* resetIH from scratch if needed
* cleanup
* fix test
* fix test
* Not hash, keep the files
* Calculate savings
* Fix
* RestAPI to support local boltdb
* Not error on read-only db
* Changes so far
* Continue
* More
* Roll back a bit
* Restore newline
* something compiles
* Fix restapi
* Fix block number
* Fix reads
* Use plain writer
* Maps for storage reads and writes
* Clean up coercions
* Fix accounts/abi/bind
* Fix tests
* More fixes
* Fixes
* Fixed core/state
* Fixed eth tests
* Move code, fix linter
* Fix test
* Fix linter
* Fix linter
* Fix linter, badger_db to support AbstractKV
* Increase IdealBatchSize for badger
* Fix linter
* Fix linter
* save state
* add current index feature
* fix test
* remove logs
* Only execute 1000 blocks
* Reset history index
* Correct action
* Increase batch size
* Increase chunk size, print memory stats
* Fix linter
* Remove unused from
* Split into 2 stages
* Use storage history gen
* remove log
* Not to run tx_cacher in staged mode
* Not to recover during stage 2
* Remove counter
Co-authored-by: b00ris <b00ris@mail.ru>