* Revert "remove tombstones (#426)"
This reverts commit aa6bab40e8.
* tombstones don't hide storage or account anymore
* auto-format code by prettier (similar to gofmt)
* wow, it works.....
* small simplification, but it needs to be made clearer
* rebase to master
* rebase to master
* rebase to master
* re-run ci
* clean test files
* Debugging the cached resolver
* make logic of comparison more explicit
* fix getting stuck on block 696817
* Remove tracing
* Fix CI
* Search for root hash
Co-authored-by: alex.sharov <AskAlexSharov@gmail.com>
* CheckChangeSets and thin history
* small code de-duplication
* small code clean-up
* Fix an error in mutation.getNoLock
* CheckChangeSets: truly make historyfile = chaindata by default
* use NoValues cursor where possible
* add ctx
* fix broken logs
* rebase master
* rebase master
* simplify generators
* hack to measure space distribution
* naive epoch and chunking implementation
* make stateless loop cancelable
* make stateless loop cancelable
* remove one rlp layer
* eth64 protocol support - add forkId to status message
* remove allocations related to "remove incarnation" actions
* invalidate startKeyNoInc when startKey changed
* dbutils.RemoveIncarnationFromKey - doesn't do allocation
* Introduce NoValuesCursor. The From() method is useless because it can be replaced by Seek().
* implement NoValueCursor interface
* use abstract db in restapi
* cleanup .md
* move .Put out of .View and remove deletion of all tombstones when delete acc
* move .Put out of .View and remove deletion of all tombstones when delete acc
* introduce code node
* replace codeMap with code touches
* fix a comment
* fixups to tests
* fix compile error
* fix getnodedata tests
* add tests and test stubs
* add more test stubs
* add test method bodies
* add and fix more tests on trie for new codenode
* add test change code between blocks
* fix crash in stateless
* remove unneeded files
* remove comment
* fix deleted account code
* fix resolve set builder for code nodes
* another way to check if account has storage
* cleanup
* v0 of walk by db version
* save progress to switch to another task; putting tombstones is still not correct
* place tombstone only if exists something to hide
* db-based implementation
* db-based implementation
* db-based implementation
* fix prop check
* improve prop check logic
* Need custom logic to skip subtrees for the account and storage buckets because the storage bucket has the incarnation in the key (see the sketch below)
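A minimal sketch of the key shapes involved (assuming the layout of the time: account keys are 32-byte address hashes, storage keys are addrHash(32) + incarnation(8) + storage-key hash(32); helper names are hypothetical):
package dbsketch

import "bytes"

const (
    hashLen        = 32
    incarnationLen = 8
)

// stripIncarnation removes the 8-byte incarnation from a storage key so that
// account and storage keys become comparable; account keys pass through as-is.
func stripIncarnation(key []byte) []byte {
    if len(key) <= hashLen+incarnationLen {
        return key // account key (or shorter), nothing to strip
    }
    out := make([]byte, 0, len(key)-incarnationLen)
    out = append(out, key[:hashLen]...)
    return append(out, key[hashLen+incarnationLen:]...)
}

// keysAgree compares an account-bucket key with a storage-bucket key after
// removing the incarnation, so a walk can decide whether to skip a subtree.
func keysAgree(accKey, storageKey []byte) bool {
    s := stripIncarnation(storageKey)
    n := len(accKey)
    if len(s) < n {
        n = len(s)
    }
    return bytes.Equal(accKey[:n], s[:n])
}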
* rebase to master
* remove all tombstones when account deleted
* remove all tombstones when account deleted
* added db integrity check
* don't rely on account.Root because it is only valid for the last incarnation
* remove all tombstones when account deleted
* deal with incarnation in MultiWalk2
* deal with incarnation in MultiWalk2
* when fixedbytes=40, the resolver compared cacheKey with storageKey without removing the incarnation
* rebase to master
* rebase to master
Co-authored-by: alex.sharov <alex.sharov@lazada.com>
* root was unused in BlockChain.StateAt
* TestDoubleAccountRemoval
* Preserve the original when a contract is self-destructed and then its address is touched in the same block (e.g. #156634)
* Add revive and phoenix
* store enode address to file, then read it from tester
* store enode address to file, then read it from tester
* rebase master
* fix mistype
* dbg p2p-sub-protocol, add self-destruct test case
* re-create blockFetcher
* exit syncer loop and start new one
* rebase to master
* use core.GenerateChain
* root mismatch
* introduce reduceComplexity flag
* fix transfer to 0 account
* cleanup
* test-case for intermediate cache
* clean
* clean
* clean
* fix handler panic
Co-authored-by: Alexey Akhunov <akhounov@gmail.com>
Co-authored-by: alex.sharov <alex.sharov@lazada.com>
* remove debug prints
* remove storage-mode="i"
* mainnet re-execute hack with checkpoints
* mainnet re-execute hack with checkpoints
* rollback to master setup
* mainnet re-exec hack
* rollback some changes
* v0 of "push down" functionality
* move all logic to own functions
* handle case when re-created account already has some storage
* clear path for storage
* try to rely on the tree structure (but we may need to rely on the DB because there can be intra-block re-creations of an account)
* fix some bugs with indexes, moving to tests
* tests added
* make linter happy
* make linter happy
* simplify logic
* adjust comparison of keys with and without incarnation
* test for keyIsBefore
* test for keyIsBefore
* better nibbles alignment
* better nibbles alignment
* cleanup
* continue work on tests
* simplify test
* check tombstone existence before pushing it down.
* put tombstone only when account deleted, not created
* put tombstone only when account has storage
* make linter happy
* test for storage resolver
* make fixedbytes work without incarnation
* fix panic on short keys
* use special comparison only when working with keys from cache
* add blockNr for better tracing
* fix: incorrect tombstone check
* fix: incorrect tombstone check
* trigger ci
* hack for problem block
* more test-cases
* add test case for too long keys
* speedup cached resolver by removing bucket creation transaction
* remove parent type check in pruning, remove unused copy from mutation.put
* dump resolving info on fail
* dump resolving info on fail
* set tombstone every time for now to check if it will help
* on unload: check parent type, not type of node
* fix wrong order of checking node type
* fix wrong order of checking node type
* rebase to new master
* make linter happy
* rebase to new master
* place tombstone only if acc has storage
* rebase master
* rebase master
* rebase master
* rebase master
Co-authored-by: alex.sharov <alex.sharov@lazada.com>
* Fix download only
* Fix lint
* Reset references
* Only reset on error
* Potential fixes
* no NPE
* no NPE
* Not use multi-put
* Reduce ideal batch size for download only
* Handle tds == nil
* remove nested mutation
* Return multiput
* Better reporting
* Reduce batch size for download only
* Avoid extra copying
* Avoid extra copying
* IdealBatchSize
* Not write tx lookup entries
* Larger batches
* Go back to normal batch size
* Fix lint
* Gen tx lookup
* print progress
* Add filling up the lookup array
* Show tx count
* Introduce second round
* Add generating tx lookup
* Fix lint
* properly stop at specified block
* measure the duration of the last phase
* not to fail if the bucket is not found
* Fix lint
* Alternative tx generation
* Fix out of memory
* Fix out of memory
* Split in parts to conserve memory
* Copy keys
* Fix lint
* Fix lint
* squash commits
* enable storage cache
* make linter happy
* fix subtree prefix len check
* save changes to test master
* remove restriction on prefix len
* fix comparison of last bits
* fix wrong alignment
* remove debug prints
* commit current state
* commit current state
* avoid changing state of resolver from multiwalk
* remove debug code
* remove debug code
* remove debug code
* remove unnecessary copy
* make code more readable
* reduce rebuildHashes initial resolution
* fix test after rebase to master
* make code more readable
* improve pruner
* pruner add IntermediateCache bucket
* fix panic in Walk on short keys
* reduce allocations for storage keys decompression by increasing default buffer size
* re-run CI
* fix iterator behaviour
* rename cache to hash for unification
* re-run ci
* avoid using underlying DB
* hash all subtree nodes before unload
* fix getNode method
* need to check node type, not parent - before put to hashBucket
* bring back the parent type check; it doesn't work without it
* don't recalculate hash again
* move unloadFunc from trie to pruner
* rename bucket to shorter name
* rename bucket to shorter name
* clean
* rebase to master
* Use TrieDbState for tx pool
* Not initialise tx pool until state is loaded
* Add preimage
* Fix account
* Print codehash
* Print correct code hash
* Print incarnation
* Print incarnation
* Use proper incarnation
* Print dbValue
* Actually fix
* Actually fix
* Fix verifySnapshot
* readAccount to get code hash
* Next incarnation
* Print addrHashes with 0 incarnations
* Print storage history
* Print storage history
* Print storage history
* Print storage history
* Print all storage history
* print change set keys
* print change set keys
* print change set keys
* print change set keys
* Not print codebucket info
* Fixes
* Fix for incarnation
* Fix for storage history bucket
* Try to fix the leak
* Try to fix the leak
* Try to fix the leak
* Try to fix the leak
* Try to fix the leak
* Try to fix the leak
* Try to fix the leak
* Fix embedded nodes
* Hasher
* Fix
* Test fixes
* Add experimental debug flag
* Fix tx_pool_test
* Disable GetNodeData test unless in experiment
* Fix more tests
* Fix lint and revert some changes
* Fix lint
* Fix lint
* add context
* extract chain events
* run commit in goroutines
* mine only on canonical
* typo
* linters
* fmt
* mark unused methods
* restore stress test
* test single miner
* remove unsafe Trie storage
* remove locks from miner
* restore interrupt
* remove result goroutine
* remove unconfirmedBlocks
* cherry-pick 04a1d475ff1a36ad8f92fec80385df18c52bdc1f
* extract uncles
* one miner succeeded
* restore context cancel
* cleanup
* skip an unstable test
* remove pending state
* use context instead of interrupt func
* calculate sealHash only once
* comment out unstable test
* after merge
* fix after merge
Co-authored-by: ledgerwatch <akhounov@gmail.com>
* add env INTERMEDIATE_TRIE_CACHE
* try to use assert.New() pattern
* Fix "maligned" linter warnings to reduce space consumption of structs:
core/types/accounts/account.go:18:14: struct of size 136 bytes could be of size 128 bytes (maligned)
type Account struct {
--
trie/node.go:44:10: struct of size 80 bytes could be of size 72 bytes (maligned)
duoNode struct {
--
trie/resolve_set.go:28:17: struct of size 56 bytes could be of size 48 bytes (maligned)
type ResolveSet struct {
--
trie/resolver.go:34:15: struct of size 88 bytes could be of size 72 bytes (maligned)
type Resolver struct {
--
trie/visual.go:32:17: struct of size 104 bytes could be of size 96 bytes (maligned)
type VisualOpts struct {
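The fix in each case is to reorder fields so they pack without padding; a minimal illustration with a hypothetical struct (not one of those listed above):
package main

import (
    "fmt"
    "unsafe"
)

// badLayout interleaves small and large fields, so the compiler inserts
// 7 bytes of padding after each bool to align the following uint64.
type badLayout struct {
    a bool
    b uint64
    c bool
    d uint64
}

// goodLayout groups the large fields first; the two bools share the last
// word, shrinking the struct.
type goodLayout struct {
    b uint64
    d uint64
    a bool
    c bool
}

func main() {
    fmt.Println(unsafe.Sizeof(badLayout{}))  // 32 on 64-bit platforms
    fmt.Println(unsafe.Sizeof(goodLayout{})) // 24
}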
* save state
* remove repair
* save state
* remove emptydb check
* save state
* add walkAsOf test
* add WalkAsOf and MultiWalkAsOf tests
* deployed contracts counter
* reference counter for contract code
* drop storage root&contract hash for changesets
* start incarnation is 1 (save state)
* fix ReorgOverSelfDestruct test
* hack fix TestReorgOverSelfDestruct
* test benchmark
* cleanup
* remove useless debug
* remove print trie
* return remove subtrie call to updateTrieRoot
* save state
* add mutation test
* remove useless test
* fix
* added mutation commit test
* rename experiment to thin history
* thin history mutation commit test
* fix ethdb tests
* getAsOf test
* add test&fix history index
* fix test
* make test for index search
* compute trie root incarnation fix
* tests fixes
* mark job as done in case of panic
* fix lint
* fix&test for bad incarnation
* fix initial incarnation for genesis
* fix lint
* fix changeset test
* fix storage ranges test
* fix lint
* move set incarnation to create contract
* add comment
Co-authored-by: ledgerwatch <akhounov@gmail.com>
Co-authored-by: Evgeny Danilenko <6655321@bk.ru>
* initial
* mining
* remove debug
* debug
* restore random seed in the mining tests
* green tests
* fix blockchain tests
* fix lint
* init miner only if asked
* linters
* do not store trie as a singleton
* fmt
* new trieDbState constructor
* build: use golangci-lint
This changes build/ci.go to download and run golangci-lint instead
of gometalinter.
* core/state: fix unnecessary conversion
* p2p/simulations: fix lock copying (found by go vet)
* signer/core: fix unnecessary conversions
* crypto/ecies: remove unused function cmpPublic
* core/rawdb: remove unused function print
* core/state: remove unused function xTestFuzzCutter
* core/vm: disable TestWriteExpectedValues in a different way
* core/forkid: remove unused function checksum
* les: remove unused type proofsData
* cmd/utils: remove unused functions prefixedNames, prefixFor
* crypto/bn256: run goimports
* p2p/nat: fix goimports lint issue
* cmd/clef: avoid using unkeyed struct fields
* les: cancel context in testRequest
* rlp: delete unreachable code
* core: gofmt
* internal/build: simplify DownloadFile for Go 1.11 compatibility
* build: remove go test --short flag
* .travis.yml: disable build cache
* whisper/whisperv6: fix ineffectual assignment in TestWhisperIdentityManagement
* .golangci.yml: enable goconst and ineffassign linters
* build: print message when there are no lint issues
* internal/build: refactor download a bit
* graphql, internal/ethapi: extend eth_call
This PR offers the third option parameter for the eth_call API.
The caller can specify a batch of contracts for overriding the
original account metadata (nonce, balance, code, state), as
sketched below. It has a few advantages:
* It's friendly for debugging
* It can make on-chain contracts lighter by getting rid of
state access functions
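A sketch of what the override parameter looks like on the wire (field names here are assumptions based on the description above, not the exact geth types):
package main

import (
    "encoding/json"
    "fmt"
)

// overrideAccount captures the metadata that can be substituted for the
// duration of the call: nonce, balance, code and storage state.
type overrideAccount struct {
    Nonce   *uint64           `json:"nonce,omitempty"`
    Balance *string           `json:"balance,omitempty"` // hex quantity
    Code    *string           `json:"code,omitempty"`    // hex bytecode
    State   map[string]string `json:"state,omitempty"`   // slot -> value
}

func main() {
    code := "0x60016000f3" // hypothetical bytecode to pretend is deployed
    overrides := map[string]overrideAccount{
        "0x1111111111111111111111111111111111111111": {Code: &code},
    }
    // The override map travels as the third positional parameter of eth_call,
    // after the call object and the block number.
    params, _ := json.Marshal([]interface{}{
        map[string]string{"to": "0x1111111111111111111111111111111111111111", "data": "0x"},
        "latest",
        overrides,
    })
    fmt.Println(string(params))
}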
* core, internal: address comments
* core/state, cmd/geth: streaming json output dump cmd + optional code+storage
* dump: add option to continue even if preimages are missing
* core, evm: lint nits
* cmd: use local flags for dump, omit empty code/storage
* core/state: fix state dump test
* all: freezer style syncing
core, eth, les, light: clean up freezer relative APIs
core, eth, les, trie, ethdb, light: clean a bit
core, eth, les, light: add unit tests
core, light: rewrite setHead function
core, eth: fix downloader unit tests
core: add receipt chain insertion test
core: use constant instead of hardcoding table name
core: fix rollback
core: fix setHead
core/rawdb: remove canonical block first and then iterate side chain
core/rawdb, ethdb: add hasAncient interface
eth/downloader: calculate ancient limit via cht first
core, eth, ethdb: lots of fixes
* eth/downloader: print ancient disable log only for fast sync
* core, eth, trie: bloom filter for trie node dedup during fast sync
* eth/downloader, trie: address review comments
* core, ethdb, trie: restart fast-sync bloom construction now and again
* eth/downloader: initialize fast sync bloom on startup
* eth: reenable eth/62 until we properly remove it
This PR is a more advanced form of the dirty-to-clean cacher (#18995),
where we reuse previous database write batches as datasets to uncache,
saving a dirty-trie-iteration and a dirty-trie-rlp-reencoding per block.
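A minimal sketch of that idea (all names hypothetical): remember which keys the flushed batch contained and drop exactly those entries from the dirty cache, instead of re-iterating and re-encoding the dirty trie:
package cachesketch

// flushedBatch records which node hashes a database write batch persisted.
type flushedBatch struct {
    keys [][]byte
}

// dirtyCache is a stand-in for the dirty trie-node cache.
type dirtyCache struct {
    nodes map[string][]byte
}

// uncache removes from the dirty cache everything the batch already wrote to
// disk, saving a dirty-trie iteration and an RLP re-encoding per block.
func (c *dirtyCache) uncache(b *flushedBatch) {
    for _, k := range b.keys {
        delete(c.nodes, string(k))
    }
}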
* first impl of eth_getProof
* fixed docu
* added comments and refactored based on comments from holiman
* created structs
* handle errors correctly
* change Value to *hexutil.Big in order to have the same output as parity
* use ProofList as return type
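A sketch of the response shape (field names assumed from the usual eth_getProof output, not copied from this implementation):
package proofsketch

// ProofList is a list of hex-encoded trie nodes forming a Merkle proof.
type ProofList []string

// StorageProof proves one storage slot under the account's storage root.
type StorageProof struct {
    Key   string    `json:"key"`
    Value string    `json:"value"` // *hexutil.Big in the real code, per the note above
    Proof ProofList `json:"proof"`
}

// AccountResult is the top-level result: the account fields plus the proofs
// tying them to the state root.
type AccountResult struct {
    Address      string         `json:"address"`
    AccountProof ProofList      `json:"accountProof"`
    Balance      string         `json:"balance"`
    CodeHash     string         `json:"codeHash"`
    Nonce        string         `json:"nonce"`
    StorageHash  string         `json:"storageHash"`
    StorageProof []StorageProof `json:"storageProof"`
}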
The current trie memory database/cache that we do pruning on stores
trie nodes as binary rlp encoded blobs, and also stores the node
relationships/references for GC purposes. However, most of the trie
nodes (everything apart from a value node) are in essence just
collections of references.
This PR switches out the RLP encoded trie blobs with the
collapsed-but-not-serialized trie nodes. This permits most of the
references to be recovered from within the node data structure,
avoiding the need to track them a second time (expensive memory wise).
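A minimal conceptual sketch (types hypothetical) of the switch: instead of caching an RLP blob plus a separately maintained list of child references, cache the collapsed node itself and derive the references from its structure:
package triedbsketch

type node interface{}

type hashNode string // reference to another cached node by its hash

type valueNode []byte // the only node kind that is not just references

type fullNode struct {
    Children [17]node // collapsed children: hashNode, valueNode or nil
}

// cachedNode holds a collapsed-but-not-serialized node; its outgoing
// references no longer need to be tracked in a second data structure.
type cachedNode struct {
    node    node
    parents int // reference counter for GC
}

// childHashes recovers the outgoing references from the node itself.
func (c *cachedNode) childHashes() []string {
    var refs []string
    if fn, ok := c.node.(*fullNode); ok {
        for _, child := range fn.Children {
            if h, ok := child.(hashNode); ok {
                refs = append(refs, string(h))
            }
        }
    }
    return refs
}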
This removes a golint warning: type name will be used as trie.TrieSync by
other packages, and that stutters; consider calling this Sync.
In hexToKeybytes len(hex) is even and (even+1)/2 == even/2, remove the +1.
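A quick check of that arithmetic (the input of hexToKeybytes has its terminator stripped, so its length is even):
package main

import "fmt"

func main() {
    for n := 0; n <= 8; n += 2 {
        // for even n, (n+1)/2 and n/2 are the same integer, so the +1 is redundant
        fmt.Println(n, (n+1)/2, n/2)
    }
}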
* ethdb: add Putter interface and Has method
* ethdb: improve docs and add IdealBatchSize
* ethdb: remove memory batch lock
Batches are not safe for concurrent use.
* core: use ethdb.Putter for Write* functions
This covers the easy cases.
* core/state: simplify StateSync
* trie: optimize local node check
* ethdb: add ValueSize to Batch
* core: optimize HasHeader check
This avoids one random database read to get the block number. For many uses
of HasHeader, the expectation is that it's actually there. Using Has
avoids a load + decode of the value.
* core: write fast sync block data in batches
Collect writes into batches up to the ideal size instead of issuing many
small, concurrent writes.
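A minimal sketch of the batching pattern used throughout these commits (the interface mirrors an ethdb-style batch; the ideal-size constant here is an assumption):
package batchsketch

// Batch mirrors an ethdb-style write batch: queue writes, report their size,
// flush them in one go.
type Batch interface {
    Put(key, value []byte) error
    ValueSize() int // amount of data queued for writing
    Write() error   // flush queued writes to the database
    Reset()
}

const idealBatchSize = 100 * 1024 // assumed value; the real constant lives in ethdb

// writeAll collects writes into the batch and flushes whenever it grows past
// the ideal size, instead of issuing many small, concurrent writes.
func writeAll(batch Batch, items map[string][]byte) error {
    for k, v := range items {
        if err := batch.Put([]byte(k), v); err != nil {
            return err
        }
        if batch.ValueSize() >= idealBatchSize {
            if err := batch.Write(); err != nil {
                return err
            }
            batch.Reset()
        }
    }
    return batch.Write() // flush the remainder
}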
* eth/downloader: commit larger state batches
Collect nodes into a batch up to the ideal size instead of committing
whenever a node is received.
* core: optimize HasBlock check
This avoids a random database read to get the number.
* core: use numberCache in HasHeader
numberCache has higher capacity, increasing the odds of finding the
header without a database lookup.
* core: write imported block data using a batch
Restore batch writes of state and add blocks, tx entries, receipts to
the same batch. The change also simplifies the miner.
This commit also removes posting of logs when a forked block is imported.
* core: fix DB write error handling
* ethdb: use RLock for Has
* core: fix HasBlock comment
With this commit, core/state's access to the underlying key/value database is
mediated through an interface. Database errors are tracked in StateDB and
returned by CommitTo or the new Error method.
Motivation for this change: We can remove the light client's duplicated copy of
core/state. The light client now supports node iteration, so tracing and storage
enumeration can work with the light client (not implemented in this commit).
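A minimal sketch of the error-tracking side of this (Error is named above; the other identifiers are assumptions):
package statesketch

// StateDB remembers the first database error it encounters instead of
// aborting every read path.
type StateDB struct {
    dbErr error
}

// setError records the first database error that occurred.
func (s *StateDB) setError(err error) {
    if s.dbErr == nil {
        s.dbErr = err
    }
}

// Error returns the first database error seen, if any; commit paths check it.
func (s *StateDB) Error() error { return s.dbErr }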
* eth/downloader: separate state sync from queue
Scheduling of state node downloads hogged the downloader queue lock when
new requests were scheduled. This caused timeouts for other requests.
With this change, state sync is fully independent of all other downloads
and doesn't involve the queue at all.
State sync is started and checked on in processContent. This is slightly
awkward because processContent doesn't have a select loop. Instead, the
queue is closed by an auxiliary goroutine when state sync fails. We
tried several alternatives to this but settled on the current approach
because it's the least amount of change overall.
Handling of the pivot block has changed slightly: the queue previously
prevented import of pivot block receipts before the state of the pivot
block was available. In this commit, the receipt will be imported before
the state. This causes an annoyance where the pivot block is committed
as fast block head even when state downloads fail. Stay tuned for more
updates in this area ;)
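A minimal sketch of that shape (all names hypothetical): state sync runs independently, and an auxiliary goroutine closes the queue when it fails, unblocking the content processor even though it has no select loop:
package syncsketch

// stateSync stands in for the independent state download.
type stateSync struct {
    done chan struct{}
    err  error
}

// Wait blocks until the state sync finishes and returns its error.
func (s *stateSync) Wait() error {
    <-s.done
    return s.err
}

// queue stands in for the download queue that the content processor blocks on.
type queue struct{ closed chan struct{} }

func (q *queue) Close() { close(q.closed) }

// watchStateSync closes the queue if state sync fails, so the content
// processor gets woken up without needing a select loop of its own.
func watchStateSync(s *stateSync, q *queue) {
    go func() {
        if err := s.Wait(); err != nil {
            q.Close()
        }
    }()
}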
* eth/downloader: remove cancelTimeout channel
* eth/downloader: retry state requests on timeout
* eth/downloader: improve comment
* eth/downloader: mark peers idle when state sync is done
* eth/downloader: move pivot block splitting to processContent
This change also ensures that pivot block receipts aren't imported
before the pivot block itself.
* eth/downloader: limit state node retries
* eth/downloader: improve state node error handling and retry check
* eth/downloader: remove maxStateNodeRetries
It fails the sync too much.
* eth/downloader: remove last use of cancelCh in statesync.go
Fixes TestDeliverHeadersHang*Fast and (hopefully)
the weird cancellation behaviour at the end of fast sync.
* eth/downloader: fix leak in runStateSync
* eth/downloader: don't run processFullSyncContent in LightSync mode
* eth/downloader: improve comments
* eth/downloader: fix vet, megacheck
* eth/downloader: remove unrequested tasks anyway
* eth/downloader, trie: various polishes around duplicate items
This commit explicitly tracks duplicate and unexpected state
deliveries done against a trie Sync structure, also adding
them to the import info logs.
The commit moves the db batch used to commit trie changes one
level deeper so its flushed after every node insertion. This
is needed to avoid a lot of duplicate retrievals caused by
inconsistencies between Sync internals and database. A better
approach is to track not-yet-written states in trie.Sync and
flush on commit, but I'm focusing on correctness first for now.
The commit fixes a regression around pivot block fail count.
The counter previously was reset to 1 if and only if a sync
cycle progressed (inserted at least 1 entry to the database).
The current code reset it already if a node was delivered,
which is not strong enough, because unless it ends up written
to disk, an attacker can just loop and attack ad infinitum.
The commit also fixes a regression around state deliveries
and timeouts. The old downloader tracked if a delivery is
stale (none of the deliveries were requested), in which
case it didn't mark the node idle and did not send further
requests, since it signals a past timeout. The current code
did mark it idle even on stale deliveries, which eventually
caused two requests to be in flight at the same time, making
the deliveries always stale and mass duplicating retrievals
between multiple peers.
* eth/downloader: fix state request leak
This commit fixes the hang seen sometimes while doing the state
sync. The cause of the hang was a rare combination of events:
request state data from peer, peer drops and reconnects almost
immediately. This caused a new download task to be assigned to
the peer, overwriting the old one still waiting for a timeout,
which in turned leaked the requests out, never to be retried.
The fix is to ensure that a task assignment moves any pending
one back into the retry queue.
The commit also fixes a regression with peer dropping due to
stalls. The current code considered a peer stalling if they
timed out delivering 1 item. However, the downloader never
requests only one, the minimum is 2 (attempt to fine tune
estimated latency/bandwidth). The fix is simply to drop if
a timeout is detected at 2 items.
Apart from the above bugfixes, the commit contains some code
polishes I made while debugging the hang.
* core, eth, trie: support batched trie sync db writes
* trie: rename SyncMemCache to syncMemBatch
This commit is a preparation for the upcoming metropolis hardfork. It
prepares the state, core and vm packages such that integration with
metropolis becomes less of a hassle.
* Difficulty calculation requires header instead of individual
parameters
* statedb.StartRecord renamed to statedb.Prepare and added Finalise
method required by metropolis, which removes unwanted accounts from
the state (i.e. selfdestruct)
* State keeps record of destructed objects (in addition to dirty
objects)
* core/vm pre-compiles may now return errors
* core/vm pre-compiles gas check now takes the full byte slice as argument
instead of just the size
* core/vm now keeps several hard-fork instruction tables instead of a
single instruction table and removes the need for hard-fork checks in
the instructions (see the sketch below)
* core/vm contains an empty restriction function which is added in
preparation for metropolis write-only mode operations
* Adds the bn256 curve
* Adds and sets the metropolis chain config block parameters (2^64-1)
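A minimal sketch of the per-fork instruction tables mentioned above (names and fork rules hypothetical):
package vmsketch

import "math/big"

type operation struct {
    execute func() error
    gas     uint64
}

type instructionTable [256]*operation

func newBaseTable() instructionTable      { return instructionTable{} }
func newHomesteadTable() instructionTable { return newBaseTable() }
func newMetropolisTable() instructionTable {
    t := newHomesteadTable()
    // metropolis-only opcodes would be registered in t here
    return t
}

var (
    frontierTable   = newBaseTable()
    homesteadTable  = newHomesteadTable()
    metropolisTable = newMetropolisTable()
)

// tableForBlock picks the instruction set once per interpreter instance,
// removing the need for hard-fork checks inside every instruction.
func tableForBlock(num, homesteadBlock, metropolisBlock *big.Int) instructionTable {
    switch {
    case metropolisBlock != nil && num.Cmp(metropolisBlock) >= 0:
        return metropolisTable
    case homesteadBlock != nil && num.Cmp(homesteadBlock) >= 0:
        return homesteadTable
    default:
        return frontierTable
    }
}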
The 'step' method is split into two parts, 'peek' and 'push'. peek
returns the next state but doesn't make it current.
The end of iteration was previously tracked by setting 'trie' to nil.
End of iteration is now tracked using the 'iteratorEnd' error, which is
slightly cleaner and requires less code.
Make it so each iterator has exactly one public constructor:
- NodeIterators can be created through a method.
- Iterators can be created through NewIterator on any NodeIterator.
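A minimal sketch of the peek/push split and the sentinel-error termination (details assumed):
package itersketch

import "errors"

// errIteratorEnd marks the end of iteration instead of a nil trie.
var errIteratorEnd = errors.New("end of iteration")

type state struct{ node int }

type nodeIterator struct {
    stack []*state
    err   error
}

// peek computes the next state but does not make it current.
func (it *nodeIterator) peek() (*state, error) {
    if len(it.stack) >= 3 { // hypothetical termination condition
        return nil, errIteratorEnd
    }
    return &state{node: len(it.stack)}, nil
}

// push makes a peeked state the current one.
func (it *nodeIterator) push(s *state) { it.stack = append(it.stack, s) }

// Next advances the iterator: peek first, push only if there is more to visit.
func (it *nodeIterator) Next() bool {
    s, err := it.peek()
    if err != nil {
        it.err = err // errIteratorEnd here means normal completion
        return false
    }
    it.push(s)
    return true
}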
In the `touch` operation, only the `touched` field is changed. Therefore,
in the related undo function, only the `touched` field should be reverted.
In addition, whether to remove this object from the dirty map should depend
on the prevDirty flag, as sketched below.
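A minimal sketch of the corrected undo (assuming a go-ethereum-style journal entry):
package journalsketch

type stateObject struct{ touched bool }

type stateDB struct {
    objects map[string]*stateObject
    dirty   map[string]struct{}
}

// touchChange is the journal entry recorded by a touch.
type touchChange struct {
    account   string
    prev      bool // previous value of the touched flag
    prevDirty bool // whether the object was already in the dirty set
}

// undo reverts only the touched flag, and drops the object from the dirty set
// only if it was not dirty before the touch.
func (ch touchChange) undo(s *stateDB) {
    if obj := s.objects[ch.account]; obj != nil {
        obj.touched = ch.prev
    }
    if !ch.prevDirty {
        delete(s.dirty, ch.account)
    }
}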
* common/math: optimize PaddedBigBytes, use it more
name old time/op new time/op delta
PaddedBigBytes-8 71.1ns ± 5% 46.1ns ± 1% -35.15% (p=0.000 n=20+19)
name old alloc/op new alloc/op delta
PaddedBigBytes-8 48.0B ± 0% 32.0B ± 0% -33.33% (p=0.000 n=20+20)
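A sketch of the optimized shape (close to, but not guaranteed identical to, the final common/math code): write the big.Int words straight into a preallocated buffer instead of going through Bytes() and re-padding:
package mathsketch

import "math/big"

const wordBytes = 8 // bytes per big.Word on 64-bit platforms (assumption)

// paddedBigBytes encodes bigint as a big-endian byte slice of length n.
func paddedBigBytes(bigint *big.Int, n int) []byte {
    if bigint.BitLen()/8 >= n {
        return bigint.Bytes()
    }
    ret := make([]byte, n)
    readBits(bigint, ret)
    return ret
}

// readBits copies the absolute value of bigint into buf, big-endian, without
// allocating an intermediate byte slice.
func readBits(bigint *big.Int, buf []byte) {
    i := len(buf)
    for _, d := range bigint.Bits() {
        for j := 0; j < wordBytes && i > 0; j++ {
            i--
            buf[i] = byte(d)
            d >>= 8
        }
    }
}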
* all: unify big.Int zero checks
Various checks were in use. This commit replaces them all with Int.Sign,
which is cheaper and less code.
eg templates:
func before(x *big.Int) bool { return x.BitLen() == 0 }
func after(x *big.Int) bool { return x.Sign() == 0 }
func before(x *big.Int) bool { return x.BitLen() > 0 }
func after(x *big.Int) bool { return x.Sign() != 0 }
func before(x *big.Int) int { return x.Cmp(common.Big0) }
func after(x *big.Int) int { return x.Sign() }
* common/math, crypto/secp256k1: make ReadBits public in package math
This PR implements a differenceIterator, which allows iterating over trie nodes
that exist in one trie but not in another. This is a prerequisite for most GC
strategies, in order to find obsolete nodes.
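A conceptual sketch of the difference walk (simplified interfaces, not the exact trie API): descend trie b and skip any subtree whose root hash also exists in trie a, since equal hashes mean identical subtrees:
package triesketch

// nodeIterator is a simplified stand-in for the trie NodeIterator interface.
type nodeIterator interface {
    Next(descend bool) bool // advance; descend=false skips the current subtree
    Hash() string           // hash of the current node ("" for embedded nodes)
}

// hasNode reports whether trie a contains a node with the given hash.
type hasNode func(hash string) bool

// differenceNodes walks b and collects hashes of nodes absent from a, pruning
// whole subtrees as soon as a shared node is found.
func differenceNodes(b nodeIterator, inA hasNode) []string {
    var onlyInB []string
    descend := true
    for b.Next(descend) {
        descend = true
        h := b.Hash()
        if h != "" && inA(h) {
            descend = false // identical subtree exists in a; nothing below differs
            continue
        }
        if h != "" {
            onlyInB = append(onlyInB, h)
        }
    }
    return onlyInB
}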