# Background
Erigon currently uses a combination of the VictoriaMetrics and Prometheus
client libraries to provide metrics.
We want to rationalize this and use only the Prometheus client library,
while keeping the simpler VictoriaMetrics-style methods for
constructing metrics.
This task is partly complete; it needs to be finished to the point
where we can remove the VictoriaMetrics module from the Erigon
code base.
# Summary of changes
- Adds the missing `NewCounter`, `NewSummary`, `NewHistogram`, and
`GetOrCreateHistogram` functions to `erigon-lib/metrics`, mirroring the
interface the VictoriaMetrics library provides (a sketch follows this list)
- Tidies up `erigon-lib/metrics/set.go` for consistency around return
types (panic vs. err consistency for the functions in the file), error
messages, and comments
- Replaces all remaining usages of `github.com/VictoriaMetrics/metrics`
with `github.com/ledgerwatch/erigon-lib/metrics`; this is seamless (import
changes only), since the interfaces match
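As a rough illustration, here is a minimal sketch of what one of these constructors could look like when backed by the Prometheus client. It is simplified: the real `erigon-lib/metrics` code also has to handle VictoriaMetrics-style names with embedded labels, which `prometheus.CounterOpts.Name` does not accept directly.

```
package metrics

import "github.com/prometheus/client_golang/prometheus"

// NewCounter creates and registers a counter under the given name,
// mirroring the VictoriaMetrics-style constructor. Simplified sketch:
// it does not parse labels embedded in the name, and it panics on
// duplicate registration, as prometheus.MustRegister does.
func NewCounter(name string) prometheus.Counter {
	c := prometheus.NewCounter(prometheus.CounterOpts{Name: name})
	prometheus.MustRegister(c)
	return c
}
```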
Here is my case:
The `writeIndex` function accounts for about 39% of the pprof flame
graph. Digging into it, I found that because we use PoA consensus, the
mining addresses have very large bitmaps.
```
func writeIndex(blocknum uint64, changes *historyv2.ChangeSet, bucket string, changeDb kv.RwTx) error {
	buf := bytes.NewBuffer(nil)
	for _, change := range changes.Changes {
		k := dbutils.CompositeKeyWithoutIncarnation(change.Key)
		// Load the whole existing index: this unions every stored chunk
		// back into one big bitmap.
		index, err := bitmapdb.Get64(changeDb, bucket, k, math.MaxUint32, math.MaxUint32)
		if err != nil {
			return fmt.Errorf("find chunk failed: %w", err)
		}
		index.Add(blocknum)
		// Re-shard the big bitmap into ChunkLimit-sized chunks and write
		// every chunk back, even the unchanged ones.
		if err = bitmapdb.WalkChunkWithKeys64(k, index, bitmapdb.ChunkLimit, func(chunkKey []byte, chunk *roaring64.Bitmap) error {
			buf.Reset()
			if _, err = chunk.WriteTo(buf); err != nil {
				return err
			}
			return changeDb.Put(bucket, chunkKey, common.CopyBytes(buf.Bytes()))
		}); err != nil {
			return err
		}
	}
	return nil
}
```
Assume a mining address has a big bitmap (cardinality 400,000).
`writeIndex` will split it into roughly 300 small bitmaps according to
`ChunkLimit`; the next time it runs, it will union all of the small
bitmaps back into one big bitmap, then use binary search to split it
into 300 small bitmaps again, over and over.
Since blocknum > index.Maximum() always holds (block numbers only
increase), we only need to fetch the last of the ~300 small bitmaps, as
sketched below.
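A rough sketch of that fast path, reusing the identifiers and imports from the snippet above. `GetLastChunk64` is a hypothetical helper, not part of `bitmapdb` today; it is assumed to return the key and bitmap of the final stored chunk only.

```
// appendToLastChunk sketches the proposed optimization: because block
// numbers are monotonically increasing, only the final chunk of the
// index needs to be loaded, updated, and rewritten.
func appendToLastChunk(tx kv.RwTx, bucket string, k []byte, blocknum uint64) error {
	// GetLastChunk64 is an assumed helper returning just the last chunk,
	// instead of unioning all ~300 chunks as bitmapdb.Get64 does.
	lastKey, lastChunk, err := GetLastChunk64(tx, bucket, k)
	if err != nil {
		return err
	}
	lastChunk.Add(blocknum)
	// A full implementation would re-shard here if the chunk now
	// exceeds bitmapdb.ChunkLimit.
	buf := bytes.NewBuffer(nil)
	if _, err = lastChunk.WriteTo(buf); err != nil {
		return err
	}
	return tx.Put(bucket, lastKey, common.CopyBytes(buf.Bytes()))
}
```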
Co-authored-by: mengweifu <75886642@qq.com>
This request implements the insertion of Bor ephemeral transactions into
the snapshot indexes.
It does this by taking the block hash from the header index and passing
it to the transaction indexer, which adds an additional index entry per
block to the transaction hash -> block index.
The passed entries are currently held in an in-memory array of
(32 * number of blocks / sprint size) bytes; at a sprint size of 64, for
example, that is about 0.5 MB per million blocks.
In addition to the functional code, there is also an update to
`dump_test.go` so that it runs `DumpBlocks` to exercise the indexing
code. To facilitate this, the `InsertChain` method in `mock_sentry` has
been modified so that it can process more than 128 blocks.
The code in this request also includes additional bor/consensus code
with the following functions:
- `CalculateSprint`
- `CalculateSprintCount`
The first function is a modification of the code in erigon-lib so that
sprints are ordered numerically rather than lexically (see the sketch
below). This code should be migrated to erigon-lib, and its sprint set
should be calculated once from the underlying map rather than being
recomputed on every calculation.
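A minimal sketch of the numerically ordered lookup, assuming the sprint set has already been parsed into a `map[uint64]uint64` of activation block -> sprint size. The names and types are illustrative, not the PR's exact ones.

```
package bor

import "sort"

// CalculateSprint returns the sprint size active at blockNum by picking
// the entry with the largest activation block <= blockNum. Keys are
// sorted numerically; lexical ordering would mis-sort e.g. "10" before "9".
// Per the note above, the sorted key set should really be computed once.
func CalculateSprint(sprints map[uint64]uint64, blockNum uint64) uint64 {
	keys := make([]uint64, 0, len(sprints))
	for k := range sprints {
		keys = append(keys, k)
	}
	if len(keys) == 0 {
		return 0 // no sprint configured; illustrative default
	}
	sort.Slice(keys, func(i, j int) bool { return keys[i] < keys[j] })
	size := sprints[keys[0]]
	for _, k := range keys {
		if k > blockNum {
			break
		}
		size = sprints[k]
	}
	return size
}
```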
---------
Co-authored-by: Alex Sharp <alexsharp@Alexs-MacBook-Pro-2.local>
Co-authored-by: ledgerwatch <akhounov@gmail.com>
Co-authored-by: Enrique Jose Avila Asapche <eavilaasapche@gmail.com>
Co-authored-by: Giulio <giulio.rebuffo@gmail.com>
Changes summary:
- Continue skipping the gasLimit check in `verifyHeader` of
`merge.go`, unless the block is pre-merge and a blockGasLimitContract is
present
- Refactor `aura.go` a bit
- Customize the `sysCall` method so it can call state (a contract) at a
parent (or any other) header's state; an illustrative shape follows this list
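Purely to illustrate the shape of that customization (the signature below is hypothetical, not the PR's exact one), the idea is that the caller chooses which header's state the contract call executes against:

```
package aura

import (
	libcommon "github.com/ledgerwatch/erigon-lib/common"
	"github.com/ledgerwatch/erigon/core/types"
)

// SysCallAt is an assumed shape for the customized sysCall: the header
// argument selects the state (e.g. the parent's) to execute against.
type SysCallAt func(contract libcommon.Address, data []byte, header *types.Header) ([]byte, error)

// gasLimitFromContract shows how verifyHeader could query the
// blockGasLimitContract at the parent header's state. The contract
// address and calldata are placeholders.
func gasLimitFromContract(sysCall SysCallAt, contract libcommon.Address, parent *types.Header, calldata []byte) ([]byte, error) {
	return sysCall(contract, calldata, parent)
}
```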
This is the beginning of a series of changes to make it possible to
run multiple instances of Erigon inside a single process (as the devnet
tool does), with the logging from these instances going to their
respective log files correctly.
This is the first part, in which the initial infrastructure is
established.
---------
Co-authored-by: Alex Sharp <alexsharp@Alexs-MacBook-Pro-2.local>