Here is my case:
The writeIndex function accounts for about 39% in the pprof flame graph. I dove into it and found that, because we use PoA consensus, mining addresses have big bitmaps.
```go
func writeIndex(blocknum uint64, changes *historyv2.ChangeSet, bucket string, changeDb kv.RwTx) error {
	buf := bytes.NewBuffer(nil)
	for _, change := range changes.Changes {
		k := dbutils.CompositeKeyWithoutIncarnation(change.Key)
		index, err := bitmapdb.Get64(changeDb, bucket, k, math.MaxUint32, math.MaxUint32)
		if err != nil {
			return fmt.Errorf("find chunk failed: %w", err)
		}
		index.Add(blocknum)
		if err = bitmapdb.WalkChunkWithKeys64(k, index, bitmapdb.ChunkLimit, func(chunkKey []byte, chunk *roaring64.Bitmap) error {
			buf.Reset()
			if _, err = chunk.WriteTo(buf); err != nil {
				return err
			}
			return changeDb.Put(bucket, chunkKey, common.CopyBytes(buf.Bytes()))
		}); err != nil {
			return err
		}
	}
	return nil
}
```
Assume a mining address has a big bitmap (cardinality 400000). `writeIndex` will split it into ~300 small bitmaps according to `ChunkLimit`, and the next time it runs it will union all the small bitmaps back into one big bitmap, then use binary search to split it into 300 small bitmaps again, over and over.
Since blocknum > index.Maximum(), I think we only need to get the last of the 300 small bitmaps.
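As a sketch of the idea, the fast path could look like the following, where `getLastChunk64` is a hypothetical helper (bitmapdb has no such function) standing in for "read only the chunk with the highest key suffix":
```go
// Sketch only: getLastChunk64 is hypothetical and stands in for reading just
// the newest chunk stored under k, instead of unioning all ~300 of them.
func writeIndexFast(blocknum uint64, k []byte, bucket string, changeDb kv.RwTx) error {
	last, err := getLastChunk64(changeDb, bucket, k) // hypothetical helper
	if err != nil {
		return fmt.Errorf("find last chunk failed: %w", err)
	}
	last.Add(blocknum) // blocknum > last.Maximum(), so only this chunk grows

	buf := bytes.NewBuffer(nil)
	// Re-chunk only the last bitmap; the earlier chunks on disk stay untouched.
	return bitmapdb.WalkChunkWithKeys64(k, last, bitmapdb.ChunkLimit, func(chunkKey []byte, chunk *roaring64.Bitmap) error {
		buf.Reset()
		if _, err := chunk.WriteTo(buf); err != nil {
			return err
		}
		return changeDb.Put(bucket, chunkKey, common.CopyBytes(buf.Bytes()))
	})
}
```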
Co-authored-by: mengweifu <75886642@qq.com>
An update to the devnet to introduce a local heimdall to facilitate
multiple validators without the need for an external process, and hence
without the need for validator registration/staking etc.
In this initial release only span generation is supported.
It has the following changes:
* Introduction of a local grpc heimdall interface
* Allocation of accounts via a devnet account generator
* Introduction of 'Services' for the network config
"--chain bor-devnet --bor.localheimdall" will run a 2 validator network
with a local service
"--chain bor-devnet --bor.withoutheimdall" will sun a single validator
with no heimdall service as before
---------
Co-authored-by: Alex Sharp <alexsharp@Alexs-MacBook-Pro-2.local>
This request implements the insertion of Bor ephemeral transactions into
snapshot indexes.
It does this by taking the block hash from the header index and passing
it to the transaction indexer, which adds an additional index entry per
block into the transaction hash -> block index.
The passed entries are currently contained in an in-memory array of
(32 * number of blocks / sprint size) bytes.
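For context, a hedged sketch of where those 32-byte entries come from: the ephemeral (state-sync) transaction has no wire encoding, so its hash is derived from the block number and block hash, following bor's "matic-bor-receipt-" keying (shown here for illustration; the indexer adds one such hash -> block entry per sprint block):
```go
package main

import (
	"encoding/binary"
	"fmt"

	"golang.org/x/crypto/sha3"
)

// deriveBorTxHash mirrors bor's derived-hash keying:
// keccak256("matic-bor-receipt-" ++ blockNum (8 bytes BE) ++ blockHash).
func deriveBorTxHash(blockNum uint64, blockHash [32]byte) [32]byte {
	key := make([]byte, 0, len("matic-bor-receipt-")+8+32)
	key = append(key, "matic-bor-receipt-"...)
	var num [8]byte
	binary.BigEndian.PutUint64(num[:], blockNum)
	key = append(key, num[:]...)
	key = append(key, blockHash[:]...)

	h := sha3.NewLegacyKeccak256()
	h.Write(key)
	var out [32]byte
	h.Sum(out[:0])
	return out
}

func main() {
	var blockHash [32]byte // in the PR, this comes from the header index
	fmt.Printf("%x\n", deriveBorTxHash(64, blockHash))
}
```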
In addition to the functional code there is also an update to
`dump_test.go` so that it runs `DumpBlocks` to exercise the indexing
code. To facilitate this, the `InsertChain` method in `mock_sentry` has
been modified so that it can process more than 128 blocks.
The code in this request also includes additional bor/consensus code
with the following functions:
- `CalculateSprint`
- `CalculateSprintCount`
The first function is a modification of the code in erigon-lib so that
the sprints are ordered numerically rather than lexically. This code
should be migrated to erigon-lib, and its sprint set should be
calculated once from the underlying map rather than being recomputed on
every call.
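To illustrate the ordering problem, here is a self-contained sketch with stand-in names (erigon's real code lives on the bor config type): sprint sizes come from a map keyed by block-number strings, and lexical ordering of those keys breaks as soon as the numbers have different digit counts, e.g. "100" sorts before "8".
```go
package main

import (
	"fmt"
	"sort"
	"strconv"
)

// sprintBoundaries returns the map keys as numerically sorted block numbers.
func sprintBoundaries(sprints map[string]uint64) []uint64 {
	keys := make([]uint64, 0, len(sprints))
	for k := range sprints {
		n, _ := strconv.ParseUint(k, 10, 64) // sketch: malformed keys ignored
		keys = append(keys, n)
	}
	sort.Slice(keys, func(i, j int) bool { return keys[i] < keys[j] })
	return keys
}

// calculateSprint returns the sprint size in force at blockNum, in the
// spirit of CalculateSprint. As noted above, the boundaries should really
// be computed once from the map, not on every call.
func calculateSprint(sprints map[string]uint64, blockNum uint64) uint64 {
	boundaries := sprintBoundaries(sprints)
	if len(boundaries) == 0 {
		return 0
	}
	size := sprints[strconv.FormatUint(boundaries[0], 10)]
	for _, b := range boundaries {
		if blockNum >= b {
			size = sprints[strconv.FormatUint(b, 10)]
		}
	}
	return size
}

func main() {
	sprints := map[string]uint64{"0": 64, "8": 32, "100": 16}
	// Numerically ordered boundaries give 32 here; lexical ordering
	// ("0" < "100" < "8") would mis-handle the "8" boundary.
	fmt.Println(calculateSprint(sprints, 90))
}
```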
---------
Co-authored-by: Alex Sharp <alexsharp@Alexs-MacBook-Pro-2.local>
Co-authored-by: ledgerwatch <akhounov@gmail.com>
Co-authored-by: Enrique Jose Avila Asapche <eavilaasapche@gmail.com>
Co-authored-by: Giulio <giulio.rebuffo@gmail.com>
Changes summary:
- Continue skipping the gasLimit check in ``verifyHeader`` of
``merge.go`` unless the block is pre-merge and a blockGasLimitContract
is present
- Refactor ``aura.go`` a bit
- Have the ``sysCall`` method customized to be able to call state (a
contract) at a parent (or any other) header's state (see the sketch
after this list)
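A minimal, self-contained sketch of the ``sysCall`` idea (all names here are illustrative stand-ins, not erigon's real signatures): the caller chooses which header's state the system call executes against, so ``verifyHeader`` can query the gas limit contract at the parent block.
```go
package main

import "fmt"

// Header is a stand-in for a block header carrying a state root.
type Header struct {
	Number    uint64
	StateRoot string
}

// SysCallFn performs a read-only contract call against some bound state.
type SysCallFn func(contract string, data []byte) ([]byte, error)

// sysCallAt binds a SysCallFn to the state of the given header; execAtRoot
// stands in for "execute a call on the state identified by this root".
func sysCallAt(h *Header, execAtRoot func(root, contract string, data []byte) ([]byte, error)) SysCallFn {
	return func(contract string, data []byte) ([]byte, error) {
		return execAtRoot(h.StateRoot, contract, data)
	}
}

func main() {
	exec := func(root, contract string, data []byte) ([]byte, error) {
		return []byte(fmt.Sprintf("call %s at %s", contract, root)), nil
	}
	parent := &Header{Number: 41, StateRoot: "0xparent"}
	syscall := sysCallAt(parent, exec) // bound to the parent header's state
	out, _ := syscall("blockGasLimitContract", nil)
	fmt.Println(string(out))
}
```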
- breaks the dependency from staged_sync to the package with the
block_reader implementation
- breaks the dependency from snap_sync to the package with the
block_reader implementation
- breaks the dependency from mining to the txpool implementation
I've added a non-root logger to bor.ValidatorSet. This creates a
signature change in a number of calling functions to propagate the
logger. This is mostly constrained to the bor package, but it impacts a
number of tests and utilities which call the validator set.
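A sketch of the shape of that change (stand-in types, not the exact erigon diff): the validator set stores a logger handed in by the caller, so each constructor and caller grows a logger parameter instead of the type reaching for the root logger.
```go
package main

import "github.com/ledgerwatch/log/v3"

type Validator struct {
	Address     [20]byte
	VotingPower int64
}

type ValidatorSet struct {
	Validators []*Validator
	logger     log.Logger // non-root logger, propagated from the caller
}

// NewValidatorSet now takes the logger explicitly; every calling function's
// signature changes the same way so the logger can be threaded from the top.
func NewValidatorSet(valz []*Validator, logger log.Logger) *ValidatorSet {
	vs := &ValidatorSet{Validators: valz, logger: logger}
	vs.logger.Debug("validator set created", "size", len(valz))
	return vs
}

func main() {
	logger := log.New("pkg", "bor") // a child logger, not log.Root()
	_ = NewValidatorSet(nil, logger)
}
```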
- allow storing non-canonical blocks/senders
- optimize re-orgs: don't update/delete most of the data
- allow marking a chain as `Bad` - it will not be visible via
eth_getBlockByHash, but can still be read given hash+num
- stage_senders: don't re-calculate existing senders
- stage_tx_lookup: prune fewer blocks per iteration, because
random deletes are expensive; pruning must not slow down sync
- prune data even if --snap.stop is set
- "prune as much as possible at startup" is not a very good idea: at
initialCycle the machine can be cold and pruning will cause big downtime,
and there is no reason to produce a large freelist in one tx. People may
also restart erigon because of some bug, and that would cause unexpected
downtime (Erigon startup is usually very fast). So I just removed all
`initialSync`-related logic in pruning.
- fix lost metrics about disk write bytes/sec
Blob transactions are SSZ encoded, so SSZ support had to be added to
decoding. There are two encoding forms: `network` and `minimal` (the
usual one). Network-encoded blob transactions include "wrapper data"
(`kzgs`, `blobs` and `proofs`) and are decoded by
`DecodeWrappedTransaction`; for previous transaction types the network
encoding is no different. Execution payloads / blocks use the minimal
encoding of transactions, while the transaction pool and the local
transaction journal use the network encoding.
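A hedged sketch of the resulting dispatch (stub types; only `DecodeTransaction` and `DecodeWrappedTransaction` are names from the text, and the real functions do the actual SSZ work):
```go
package main

import (
	"errors"
	"fmt"
)

const BlobTxType = 0x03 // EIP-4844 blob transaction type byte

type Transaction struct {
	Type byte
	// present in the network ("wrapped") form only:
	Blobs, KZGs, Proofs [][]byte
}

// DecodeTransaction handles the minimal encoding used in blocks and
// execution payloads (stub: the real code SSZ-decodes blob txs here).
func DecodeTransaction(data []byte) (*Transaction, error) {
	if len(data) == 0 {
		return nil, errors.New("empty input")
	}
	return &Transaction{Type: data[0]}, nil
}

// DecodeWrappedTransaction handles the network encoding, which additionally
// carries the blobs/kzgs/proofs wrapper data (stub for the same reason).
func DecodeWrappedTransaction(data []byte) (*Transaction, error) {
	tx, err := DecodeTransaction(data)
	if err != nil {
		return nil, err
	}
	// the real code would SSZ-decode the wrapper fields here
	return tx, nil
}

// decodeFromWire shows the split: only blob transactions arriving via the
// txpool / p2p layer use the wrapped form; every other type is identical
// in both encodings.
func decodeFromWire(data []byte, fromNetwork bool) (*Transaction, error) {
	if fromNetwork && len(data) > 0 && data[0] == BlobTxType {
		return DecodeWrappedTransaction(data)
	}
	return DecodeTransaction(data)
}

func main() {
	tx, _ := decodeFromWire([]byte{BlobTxType, 0x01}, true)
	fmt.Printf("type=%#x\n", tx.Type)
}
```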
Concerns:
1. Possible performance reduction caused by these changes; I'm not sure
if streams are better than slices. The Go slices in these modifications
are read-only, so they should refer to the same underlying array and be
passed by reference.
2. Whether `DecodeWrappedTransaction` and `DecodeTransaction` will
create confusion and should be merged into one function.
Logic to compute fees for data blobs, as well as an additional check
that verifies the user was willing to pay the current `data_gas` price.
Updated the `FakeExponential` function to work with uint256.
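For reference, a sketch of the EIP-4844 `fake_exponential` on uint256 values (the spec's integer approximation of factor * e^(numerator/denominator) via a Taylor series; erigon's actual implementation may differ in details):
```go
package main

import (
	"fmt"

	"github.com/holiman/uint256"
)

// fakeExponential follows the EIP-4844 pseudocode:
//   output += accum; accum = accum * numerator / (denominator * i)
// starting from accum = factor * denominator, returning output / denominator.
func fakeExponential(factor, numerator, denominator *uint256.Int) *uint256.Int {
	output := uint256.NewInt(0)
	accum := new(uint256.Int).Mul(factor, denominator)
	for i := uint64(1); !accum.IsZero(); i++ {
		output.Add(output, accum)
		accum.Mul(accum, numerator)
		accum.Div(accum, new(uint256.Int).Mul(denominator, uint256.NewInt(i)))
	}
	return output.Div(output, denominator)
}

func main() {
	// e.g. data_gas price = fake_exponential(MIN_DATA_GASPRICE,
	// excess_data_gas, DATA_GASPRICE_UPDATE_FRACTION)
	fmt.Println(fakeExponential(uint256.NewInt(1), uint256.NewInt(3), uint256.NewInt(2)))
}
```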
This is the beginning of a series of changes to make it possible to
run multiple instances of erigon inside a single process (as the devnet
tool does), with the logging from these instances going to their
respective log files correctly.
This is the first part, where the initial infrastructure is
established.
---------
Co-authored-by: Alex Sharp <alexsharp@Alexs-MacBook-Pro-2.local>