This PR reimplements the light client server pool. It is also a first step
toward moving certain logic into a new lespay package. This package will contain
the implementation of the lespay token sale functions, the token buying and
selling logic, and other components related to peer selection/prioritization
and service quality evaluation. Over the long term this package will be
reusable for incentivizing future protocols.
Since the LES peer logic is now based on enode.Iterator, it can use
DNS-based fallback discovery to find servers.
This document describes the function of the new components:
https://gist.github.com/zsfelfoldi/3c7ace895234b7b345ab4f71dab102d4
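For a rough idea of how an enode.Iterator-based client can fall back to DNS discovery, here is a minimal sketch. It is not the PR's actual server pool code: only the p2p/enode and p2p/dnsdisc types are go-ethereum's own, and the tree URL is a placeholder.

```go
package main

import (
	"fmt"
	"time"

	"github.com/ethereum/go-ethereum/p2p/dnsdisc"
	"github.com/ethereum/go-ethereum/p2p/enode"
)

// discoverServers mixes a DNS discovery iterator with any other enode.Iterator
// (for example, nodes loaded from the database) and drains candidates from the mix.
func discoverServers(other enode.Iterator) error {
	client := dnsdisc.NewClient(dnsdisc.Config{})
	// Placeholder tree URL, not a real LES server list.
	dnsIter, err := client.NewIterator("enrtree://EXAMPLE@nodes.example.org")
	if err != nil {
		return err
	}
	mix := enode.NewFairMix(time.Second) // alternate fairly between sources
	mix.AddSource(dnsIter)
	mix.AddSource(other)
	defer mix.Close()

	for i := 0; i < 10 && mix.Next(); i++ {
		fmt.Println("candidate server:", mix.Node().ID())
	}
	return nil
}
```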
# Conflicts:
# cmd/utils/flags.go
# core/forkid/forkid.go
# les/client.go
# les/client_handler.go
# les/commons.go
# les/distributor.go
# les/enr_entry.go
# les/fetcher.go
# les/lespay/client/valuetracker.go
# les/metrics.go
# les/peer.go
# les/protocol.go
# les/retrieve.go
# les/server.go
# les/serverpool.go
# les/test_helper.go
# les/utils/expiredvalue.go
# les/utils/weighted_select.go
# les/utils/weighted_select_test.go
# params/bootnodes.go
* abi/bind/backends: testcase for double-lock
* accounts: add blockByNumberNoLock to avoid double-lock
* backend/simulated: use stateroot, not blockhash, for retrieving state
Co-authored-by: Martin Holst Swende <martin@swende.se>
# Conflicts:
# accounts/abi/bind/backends/simulated.go
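The double-lock fix above follows a common Go pattern: the exported, locking method delegates to an unexported variant that assumes the lock is already held. A minimal sketch of that pattern; apart from blockByNumberNoLock, which is taken from the commit title, the names and types below are illustrative, not the simulated backend's real ones.

```go
package example

import "sync"

// Block is a stand-in for the real block type; illustrative only.
type Block struct{ Number uint64 }

// backend sketches the lock/no-lock split used to avoid recursive locking.
type backend struct {
	mu     sync.Mutex
	blocks map[uint64]*Block
}

// BlockByNumber takes the lock and delegates to the no-lock variant.
func (b *backend) BlockByNumber(n uint64) *Block {
	b.mu.Lock()
	defer b.mu.Unlock()
	return b.blockByNumberNoLock(n)
}

// blockByNumberNoLock assumes b.mu is already held; callers that already
// hold the lock call this directly instead of BlockByNumber, avoiding a
// second Lock() on the same mutex.
func (b *backend) blockByNumberNoLock(n uint64) *Block {
	return b.blocks[n]
}
```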
* accounts/abi: simplified reflection logic
* accounts/abi: removed unpack
* accounts/abi: removed comments
* accounts/abi: removed unnecessary complications
* accounts/abi: minor changes in error messages
* accounts/abi: removed unused code
* accounts/abi: fixed indexed argument unpacking
* accounts/abi: removed superfluous test cases
This commit removes two test cases. The first one is trivially invalid: the same
case already passes in packing_test.go (L375). The second one passes now
because mapArgNamesToStructFields is no longer needed in unpack_atomic.
Checking for purely underscored argument names is generally not something we should do,
since the abi/contract is usually outside the user's control.
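For context, this is the kind of call the atomic-unpack path serves; a minimal sketch, assuming the Unpack(v, name, data) form used by the geth releases of this era (later releases expose the same behaviour as UnpackIntoInterface). The ABI definition and return data below are made up for illustration.

```go
package main

import (
	"fmt"
	"math/big"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

// A single uint256 output: decoding it goes through the atomic unpack path.
const def = `[{"name":"balance","type":"function","outputs":[{"name":"","type":"uint256"}]}]`

func main() {
	parsed, err := abi.JSON(strings.NewReader(def))
	if err != nil {
		panic(err)
	}
	// 32-byte big-endian encoding of 1, as a contract call would return it.
	ret := make([]byte, 32)
	ret[31] = 1

	var balance *big.Int
	if err := parsed.Unpack(&balance, "balance", ret); err != nil {
		panic(err)
	}
	fmt.Println(balance) // 1
}
```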
* accounts/abi: removed comments, debug println
* accounts/abi: added commented out code
* accounts/abi: addressed comments
* accounts/abi: remove unnecessary dst.CanSet check
* accounts/abi: added dst.CanSet checks
# Conflicts:
# accounts/abi/reflect.go
* don't call initCursor on happy path
* don't run stale reads goroutine for inMem mode
* don't call initCursor on happy path
* remove buffers from cursor object - they are useful only in Badger implementation
* commit kv benchmark
* remove buffers from cursor object - they are useful only in Badger implementation
* cancel server before returning pipe to pool
* try to fix test
* set field db in managed tx
* Fix body fetch
* Reduce spurious reorgs
* Exit the sync cycle after unwinds
* Fix out of range
* No stalling check for staged sync
* Disable failing tests
* Remove duplicate log message
* Fix UnwindTest and add assertions
* Fix formatting
* Cleanup
* Fix off by one error with bodies
* Remove rollback
* remove unused slice from MultiPut
* mutation: reuse tuples slice and preallocate bucketPuts
* use bucketPool in kv_lmdb
* remove duplicated check of context status
* more benchmarks
* remove reusage of puts
* Profile all stages
* Try to recover senders with 8 goroutines
* fix CPU profiling for stage_bodies
* fix index out of range
* Try full DAG for verification of header seals
* Try to unroll fnvHash for performance
* SSE2 assembly for fnvHash16
* fnvHash16AVX2
* Revert changes to state.go
* check we're on 64-bit in useAVX2
* Shave a move off fnvHash16AVX2
* asmdecl doesn't know about VMOVD
* disable linter in the right place
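The fnvHash commits above target the inner mixing loop of ethash. For reference, a plain-Go version of that mix; the unrolled 16-word variant is only an illustration of what the SSE2/AVX2 routines compute, and 0x01000193 is the standard 32-bit FNV prime.

```go
package ethash // illustrative placement; the real code lives in the ethash package

// fnvHash applies the FNV-style mix used by ethash over a whole slice:
// mix[i] = mix[i]*0x01000193 ^ data[i].
func fnvHash(mix, data []uint32) {
	for i := 0; i < len(mix); i++ {
		mix[i] = mix[i]*0x01000193 ^ data[i]
	}
}

// fnvHash16 is a manually unrolled 16-word variant of the same mix, the kind
// of loop the assembly versions replace; shown purely as an illustration.
func fnvHash16(mix, data *[16]uint32) {
	mix[0] = mix[0]*0x01000193 ^ data[0]
	mix[1] = mix[1]*0x01000193 ^ data[1]
	mix[2] = mix[2]*0x01000193 ^ data[2]
	mix[3] = mix[3]*0x01000193 ^ data[3]
	mix[4] = mix[4]*0x01000193 ^ data[4]
	mix[5] = mix[5]*0x01000193 ^ data[5]
	mix[6] = mix[6]*0x01000193 ^ data[6]
	mix[7] = mix[7]*0x01000193 ^ data[7]
	mix[8] = mix[8]*0x01000193 ^ data[8]
	mix[9] = mix[9]*0x01000193 ^ data[9]
	mix[10] = mix[10]*0x01000193 ^ data[10]
	mix[11] = mix[11]*0x01000193 ^ data[11]
	mix[12] = mix[12]*0x01000193 ^ data[12]
	mix[13] = mix[13]*0x01000193 ^ data[13]
	mix[14] = mix[14]*0x01000193 ^ data[14]
	mix[15] = mix[15]*0x01000193 ^ data[15]
}
```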
* Upgrade lmdb to the 2019 version, which is compatible with https://github.com/jnwatson/py-lmdb (the 2020 lmdb version upgraded the lock file version from 1 to 2, so the Python binding cannot open the database while a lock file exists, i.e. while the main app is running).
* remove ctx from MustOpen
* remove ctx from Open. Stop goroutines on Close.
* remove ctx from remote open (we have a DialTimeout field to manage connection timeouts)
* enable RawReads and add native implementation of Get/Has methods
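A sketch of what native Get/Has on top of lmdb-go looks like with RawRead enabled; the kv wrapper type is illustrative, only the lmdb-go calls are the library's own, and the import path is the upstream bindings (the repo may vendor its own fork).

```go
package kv

import "github.com/bmatsuo/lmdb-go/lmdb"

// lmdbKV is an illustrative wrapper, not the real turbo-geth type.
type lmdbKV struct {
	env *lmdb.Env
	dbi lmdb.DBI
}

// Get copies the value out of the read transaction. With RawRead enabled,
// txn.Get returns memory-mapped slices, so the copy below is what keeps the
// result valid after the transaction ends.
func (kv *lmdbKV) Get(key []byte) (val []byte, err error) {
	err = kv.env.View(func(txn *lmdb.Txn) error {
		txn.RawRead = true
		v, err := txn.Get(kv.dbi, key)
		if lmdb.IsNotFound(err) {
			return nil // missing key: nil value, nil error
		}
		if err != nil {
			return err
		}
		val = append([]byte{}, v...)
		return nil
	})
	return val, err
}

// Has skips the copy entirely: it only checks for the key's presence.
func (kv *lmdbKV) Has(key []byte) (bool, error) {
	var has bool
	err := kv.env.View(func(txn *lmdb.Txn) error {
		txn.RawRead = true
		if _, err := txn.Get(kv.dbi, key); err != nil {
			if lmdb.IsNotFound(err) {
				return nil
			}
			return err
		}
		has = true
		return nil
	})
	return has, err
}
```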
* save state
* txlookup full results
* save state
* remove experiments
* some fix&lint
* add end key to txLookup and index generation
* change log message
* change log
* fix lint
* lint
* fix test
* Euphemerally -> Ephemerally
* Move StorageMode to ethdb and pass it to PrepareStagedSync
* linter
* Remove StorageModeThinHistory and move SetStorageModeIfNotExist into storage_mode.go
* Optionally write receipts in the execute stage
* memory profiler
* linter
* proper linter fix
* linter
* typo
* Merge stateDb with changeDb so that all-or-nothing is committed in stage_execute
Use `s.UpdateWithStageData(db, <block number>, <key>)` to store the key;
use `s.StageData` with `etl.NextKey` to restart ETL from where it was interrupted.
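A rough sketch of the restart flow described above. Only the names quoted in the commit message (StageData, UpdateWithStageData, etl.NextKey) come from it; every type, signature, and helper below is a hypothetical stand-in, not turbo-geth's actual API.

```go
package sketch

// DB is a placeholder for the database handle passed into the stage.
type DB interface{}

// StageState sketches the per-stage state object.
type StageState struct {
	StageData []byte // last key fully processed before an interruption
}

// UpdateWithStageData stands in for s.UpdateWithStageData(db, blockNum, key):
// it persists the stage progress together with the intermediate key.
func (s *StageState) UpdateWithStageData(db DB, blockNum uint64, key []byte) error {
	s.StageData = append([]byte{}, key...) // the real method writes this to the db
	return nil
}

// nextKey returns a key that sorts strictly after k, so a resumed scan starts
// past the checkpoint; etl.NextKey serves this purpose in the real code.
func nextKey(k []byte) []byte {
	return append(append([]byte{}, k...), 0)
}

// resumeStage shows the restart flow: read the checkpoint, continue the scan
// just after it, and keep checkpointing as the run progresses.
func resumeStage(s *StageState, db DB,
	scan func(startKey []byte, checkpoint func(blockNum uint64, key []byte) error) error) error {
	var startKey []byte
	if len(s.StageData) > 0 {
		startKey = nextKey(s.StageData)
	}
	return scan(startKey, func(blockNum uint64, key []byte) error {
		return s.UpdateWithStageData(db, blockNum, key)
	})
}
```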
* Start from 9m7
* Regenerate IH + receipts
* Only stats for iH bucket
* Persist receipts
* Go all in
* Start from block 10m
* Convert DbState to use plain state
* Fix findHistory
* Hard-code export
* More fixes
* Fix test
* Fix formatting
* Introduce PlainDbState
* Actually return PlainDbState
* Fix formatting
* Fix name style lint
* Fix linters
* Fix history_test
* Fix blockchain_test
* Fix compile error
* Bucket stats from all buckets
* Lmdb internal object reuse (a built-in feature of lmdb):
- lmdb read-transaction pool
- lmdb also supports cursor reuse, but that is not implemented in this PR
And kv abstraction object reuse:
- lmdbKV pool of all tx objects
- boltKV pool of all tx objects
- badgerKV pool of all tx objects
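The kv-level object reuse above is the standard sync.Pool pattern; a minimal sketch, with an illustrative tx wrapper type rather than the real lmdbKV/boltKV/badgerKV ones.

```go
package kv

import "sync"

// lmdbTx is an illustrative transaction wrapper; the objects pooled in this
// PR are the kv-abstraction tx (and cursor) wrappers.
type lmdbTx struct {
	// ... handles, buffers, cursors ...
}

// txPool hands out wrapper objects so each begin/commit cycle does not
// allocate a fresh one.
var txPool = sync.Pool{
	New: func() interface{} { return new(lmdbTx) },
}

func acquireTx() *lmdbTx { return txPool.Get().(*lmdbTx) }

func releaseTx(tx *lmdbTx) {
	*tx = lmdbTx{} // reset before reuse so no state leaks between transactions
	txPool.Put(tx)
}
```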
* switch makefile back to bolt
* lmdb test run
* switch makefile back to bolt
* cursors pool
* run lmdb tests
* make kv objects pool global
* switch makefile back to bolt
* remove badger's GOMAXPROCS setup, because our app is tuned for sequential reads/writes, not for random throughput
* simplify code
* Query progress
* Run stage4 offline
* More thorough resetState
* Correct BlockNumber
* Fix formatting
* State loop
* do every 200k blocks
* Shift to 6.6m
* Close dbs in tests
* Stage2 with option of no reset
* every 100k blocks
* Reset state before stage5
* Introduce another stage
* Check compile errors
* Fix linter
* Fix linter
* Disable unreliable test
* Fix test
* Remove unreachable code