Here is my case:
The writeIndex function accounts for about 39% of the samples in the pprof flame graph. I dug into it
and found that because we use PoA consensus, the mining addresses accumulate
very large bitmaps.
```
func writeIndex(blocknum uint64, changes *historyv2.ChangeSet, bucket string, changeDb kv.RwTx) error {
	buf := bytes.NewBuffer(nil)
	for _, change := range changes.Changes {
		k := dbutils.CompositeKeyWithoutIncarnation(change.Key)
		// Load the existing index for this key, unioning its stored chunks into one bitmap.
		index, err := bitmapdb.Get64(changeDb, bucket, k, math.MaxUint32, math.MaxUint32)
		if err != nil {
			return fmt.Errorf("find chunk failed: %w", err)
		}
		index.Add(blocknum)
		// Re-split the bitmap into chunks no larger than ChunkLimit and write each one back.
		if err = bitmapdb.WalkChunkWithKeys64(k, index, bitmapdb.ChunkLimit, func(chunkKey []byte, chunk *roaring64.Bitmap) error {
			buf.Reset()
			if _, err = chunk.WriteTo(buf); err != nil {
				return err
			}
			return changeDb.Put(bucket, chunkKey, common.CopyBytes(buf.Bytes()))
		}); err != nil {
			return err
		}
	}
	return nil
}
```
Assume a mining address has a big bitmap (cardinality ~400,000). writeIndex
splits it into ~300 small bitmaps according to ChunkLimit; the next time it
runs, it unions all the small bitmaps back into one big bitmap, then uses
binary search to split it into ~300 small bitmaps again, over and over.
Since blocks are processed in increasing order, blocknum > index.Maximum()
always holds, so we only need to fetch the last of the ~300 small bitmaps.
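Below is a minimal sketch of that shortcut, not the final implementation. It assumes the final chunk is stored under the all-0xff key suffix (the convention WalkChunkWithKeys64 uses for its last chunk), that block numbers only grow, and the same imports as the excerpt above plus encoding/binary; writeIndexAppendLast is a hypothetical name:
```
func writeIndexAppendLast(blocknum uint64, changes *historyv2.ChangeSet, bucket string, changeDb kv.RwTx) error {
	buf := bytes.NewBuffer(nil)
	for _, change := range changes.Changes {
		k := dbutils.CompositeKeyWithoutIncarnation(change.Key)
		// Read only the final chunk, assumed to live under the 0xff..ff suffix.
		seek := make([]byte, len(k)+8)
		copy(seek, k)
		binary.BigEndian.PutUint64(seek[len(k):], ^uint64(0))
		last := roaring64.New()
		if v, err := changeDb.GetOne(bucket, seek); err != nil {
			return fmt.Errorf("read last chunk failed: %w", err)
		} else if v != nil { // nil means no index exists yet for this key
			if _, err := last.ReadFrom(bytes.NewReader(v)); err != nil {
				return fmt.Errorf("parse last chunk failed: %w", err)
			}
		}
		last.Add(blocknum)
		// Re-chunk only the last bitmap: at most one overflow chunk is split off,
		// and the other ~299 stored chunks are never read or rewritten.
		if err := bitmapdb.WalkChunkWithKeys64(k, last, bitmapdb.ChunkLimit, func(chunkKey []byte, chunk *roaring64.Bitmap) error {
			buf.Reset()
			if _, err := chunk.WriteTo(buf); err != nil {
				return err
			}
			return changeDb.Put(bucket, chunkKey, common.CopyBytes(buf.Bytes()))
		}); err != nil {
			return err
		}
	}
	return nil
}
```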
Co-authored-by: mengweifu <75886642@qq.com>
* squash
* SE
* Bad unwind of recreation of CREATE2 contract - unit test and potential fix
* Fix unit test
* Fix lint
* Fixup hack
Co-authored-by: Alexey Sharp <alexeysharp@Alexeys-iMac.local>
* State cache init
* More code
* Fix lint
* More tests
* Fix test
* Transformations
* remove writeQueue, before fixing the tests
* Fix tests
* Add more tests, incarnation to the code items
* Fix lint
* Remove shards prototype, add incarnation to the state reader code
* Clean up and replace cache in call_traces stage
* fix flaky test
* Save changes
* Readers to use addrHash, writes - addresses
* Fix lint
* More accurate tracking of size
* Optimise for smaller write batches
* Attempt to integrate state cache into Execution stage
* cacheSize to default flags
* Print correct cache sizes and batch sizes
* cacheSize in the integration
* Fix tests
* Fix lint
* Remove print
* Fix exec stage
* Fix test
* Refresh sequence on write
* No double increment
* heap.Remove
* Try to fix alignment
* Refactoring, adding hashItems
* More changes
* Fix compile errors
* Fix lint
* Wrapping cached reader
* Wrap writer into cached writer
* Turn state cache off by default
* Fix plain state writer
* Fix for code/storage mixup
* Fix tests
* Fix clique test
* Better fix for the tests
* Add test and fix some more
* Fix compile error
* More functions
* Fixes
* Fix for the tests
* separate DeletedFlag and AbsentFlag
* Minor fixes
* Test refactoring
* More changes
* Fix some tests
* More test fixes
* Fix lint
* Move blockchain_test to be able to use stagedsync
* More fixes
* Fixes and cleanup
* Fix tests in turbo/stages
* Fix lint
* Intermediate
* Fix tests
* Intermediate
* More fixes
* Compilation fixes
* More fixes
* Fix compile errors
* More test fixes
* More fixes
* More test fixes
* Fix compile error
* Fixes
* Fix
* More fixes
* Fixes
* More fixes and cleanup
* Further fix
* Check gas used and bloom with header
Co-authored-by: Alexey Sharp <alexeysharp@Alexeys-iMac.local>
* more compact append implementation
* do append for dupsort buckets
* fix tests
* change_set_dup
* working version
* squash
* fix
* history_early_stop
* vmConfig with ReadOnly false
* auto_increment
* rebase master
Co-authored-by: Alexey Akhunov <akhounov@gmail.com>
* add logging to loader
* use pure tx in etl loading, logs in mutation commit
* clean
* better logging and more cleanup
* increase batch size to 500M
* better batch commit logging
* async fsync
* sync fsync
* unify logging
* fix corner-case when etl can use empty bucket name
* fix tests
* better logging
* rebase master
* remove lmdb.NoMetaSync flag for now
* consistent walk and multi-walk
* clean
* sub tx
* add consistent multi-put
* implement dupsort support in one new cursor method
* clear
* Euphemerally -> Ephemerally
* Move StorageMode to ethdb and pass it to PrepareStagedSync
* linter
* Remove StorageModeThinHistory and move SetStorageModeIfNotExist into storage_mode.go
* Optionally write receipts in the execute stage
* memory profiler
* linter
* proper linter fix
* linter
* typo
* Merge stateDb with changeDb so that all-or-nothing is committed in stage_execute
* Produce less garbage in GetState
* Still playing with mem allocation in GetCommittedState
* Pass key by pointer in GetState as well
* linter
* Avoid a memory allocation in opSload
* Use uint256.Int rather than common.Hash for storage values to reduce memory allocation in opSload & opSstore
* linter
* linters
* small clean up
* introduce PlainStateReader with fallbacks
* no 10.000 changes in tests
* even less iterations
* remove even more iterations
* add `go run ./cmd/geth --syncmode staged --plainstate` flag
* fix serialization calls
* make a more sensible file default
doesn’t affect anything, because this flag is always overridden when parsing CLI. but still.
* GetCurrentAccountIncarnation
* Incarnation should be read by StateReader, not StateWriter
* Use GetHistoricalAccountIncarnation in DbState
* RemoteReader ReadAccountIncarnation
* Handle the case where a contract has self-destructed, then Eth was sent to it, then it was recreated again
* extract functions to different files
* post-rebase fixups
* stage v0
* add a log
* better log entries
* fixes in log messages
* fix some stuff
* review fixes
* fix linters
* pw as a variable
* fix a test
* add Byzantium check
* batch save progress too