Commit Graph

7 Commits

Author SHA1 Message Date
Alex Sharov
329d18ef6f
snapshots: reduce merge limit of blocks to 100K (#8614)
Reason: 
- produce and seed snapshots closer to the chain tip, reducing the
dependency on "good peers with history" in the p2p network.
Some networks have few archive peers, and Consensus Layer clients are
not good (not incentivised) at serving history.
- avoid having too many files:
more files (shards) means more metadata, more lookups for non-indexed
queries, more dictionaries, more bittorrent connections, ...
fewer files means small files are removed after merge (leaving no peers
for those files).


ToDo (tiered merge policy sketched below):
[x] Recent 500K blocks - merge up to 100K
[x] Older than 500K blocks - merge up to 500K
[x] Start seeding 100K files
[x] Stop seeding 100K files after merge (right before delete)
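
A minimal Go sketch of the tiered merge policy in the checklist above; the package, constant and function names are illustrative, not Erigon's actual snapshot code:

```go
package snapshots_sketch

// Illustrative thresholds mirroring the checklist: segments within the most
// recent 500K blocks merge into small 100K files (so they can be produced
// and seeded early), while older segments merge into 500K files to keep the
// total file count low.
const (
	recentWindow   = uint64(500_000)
	recentMergeMax = uint64(100_000)
	oldMergeMax    = uint64(500_000)
)

// mergeLimit returns the maximum merged-segment size for a segment that
// starts at segmentFrom, given the current chain tip.
func mergeLimit(segmentFrom, chainTip uint64) uint64 {
	if segmentFrom+recentWindow >= chainTip {
		return recentMergeMax // near the tip: keep files small
	}
	return oldMergeMax // deep history: merge into bigger files
}
```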

In next PR:
[ ] Old versions of Erigon must be able to download recent hashes. To
achieve this, on first start Erigon will download the preverified hashes
.toml from s3; if it's newer than what we have built-in, use it (a
hypothetical fetch flow is sketched below).
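
A hypothetical sketch of that planned flow; the URL, package and function names are assumptions, not the shipped implementation:

```go
package preverified_sketch

import (
	"fmt"
	"io"
	"net/http"
)

// remoteURL is a placeholder; the note above does not name the real s3 object.
const remoteURL = "https://example.invalid/erigon-preverified-hashes.toml"

// fetchRemotePreverified downloads the remote preverified-hashes .toml.
// The caller would compare its version against the built-in copy and keep
// whichever is newer, as described above.
func fetchRemotePreverified() ([]byte, error) {
	resp, err := http.Get(remoteURL)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return io.ReadAll(resp.Body)
}
```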
2023-11-01 23:22:35 +07:00
Alex Sharov
c23e5a1abf
downloader: preparations for reducing blocks merge limit (#8612) 2023-10-30 13:46:35 +07:00
Alex Sharov
3ac9f493b6
move chainname and snapcfg packages to erigon-lib (#8508) 2023-10-18 13:37:39 +07:00
Alex Sharov
2318138c6c
Downloader: don't fail when see unusual file, skip it (backward/forward compatibility) (#8170) 2023-09-10 15:46:30 +07:00
Alex Sharov
d2bde8c096
Recsplit: cancelable build (#7994) 2023-08-11 11:54:59 +06:00
Mark Holt
bd9896bf4b
added bor tx indexing with tests (#7826)
This request implements the insertion of Bor ephemeral transactions into
snapshot indexes.

It does this by taking the block hash from the header index and passing
it to the transaction indexer, which adds an additional index entry per
block into the transaction hash -> block index.

The passed entries are currently held in an in-memory array of
(32 * number of blocks / sprint size) bytes.
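
A rough Go sketch of that buffering, under the assumption that one 32-byte hash is kept per sprint block; the type and function names are illustrative, not the PR's identifiers:

```go
package borindex_sketch

// blockHash mirrors the 32-byte hashes taken from the header index.
type blockHash [32]byte

// collectSprintHashes keeps the hash of each block that starts a sprint,
// which is where Bor's ephemeral (state-sync) transactions are indexed.
// The buffer is roughly 32 * (blockCount / sprintSize) bytes, e.g. about
// 244 KiB for a 500K-block segment with sprint size 64.
func collectSprintHashes(hashes []blockHash, firstBlock, sprintSize uint64) []blockHash {
	if sprintSize == 0 {
		return nil
	}
	out := make([]blockHash, 0, uint64(len(hashes))/sprintSize+1)
	for i, h := range hashes {
		if (firstBlock+uint64(i))%sprintSize == 0 {
			out = append(out, h) // one extra tx-hash -> block entry for this block
		}
	}
	return out
}
```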

In addition to the functional code there is also an update to
`dump_test.go` so that it runs `DumpBlocks` to exercise the indexing
code. To facilitate this, the `InsertChain` method in `mock_sentry` has
been modified so that it can process more than 128 blocks.

The code in this request also includes additional bor/consensus code
with the following functions:

`CalculateSprint`
`CalculateSprintCount`

The first function is a modification of the code in erigon-lib so that
the sprints are ordered numerically rather than lexically. This code
should be migrated to erigon-lib and should have its sprint set
calculated once from its underlying map rather than repeating that work
on every calculation.
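
A minimal sketch of the numeric-vs-lexical point: the sprint set is keyed by activation block number, and iterating string keys lexically would put "1000" before "200". The names below are illustrative, not the erigon-lib API:

```go
package borsprint_sketch

import "sort"

// sprintSizeAt returns the sprint size in effect at blockNum, given a map
// of activation block -> sprint size.
func sprintSizeAt(sprints map[uint64]uint64, blockNum uint64) uint64 {
	keys := make([]uint64, 0, len(sprints))
	for k := range sprints {
		keys = append(keys, k)
	}
	// Numeric ordering: as strings, "1000" would sort before "200".
	sort.Slice(keys, func(i, j int) bool { return keys[i] < keys[j] })

	var size uint64
	for _, k := range keys {
		if k <= blockNum {
			size = sprints[k] // last activation at or below blockNum wins
		}
	}
	return size
}
```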

---------

Co-authored-by: Alex Sharp <alexsharp@Alexs-MacBook-Pro-2.local>
Co-authored-by: ledgerwatch <akhounov@gmail.com>
Co-authored-by: Enrique Jose  Avila Asapche <eavilaasapche@gmail.com>
Co-authored-by: Giulio <giulio.rebuffo@gmail.com>
2023-07-12 23:31:38 +01:00
Alex Sharov
e5023775aa
Enforce blockReader interface (#7737)
- breaks the dependency of staged_sync on the block_reader
implementation package (the interface pattern is sketched below)
- breaks the dependency of snap_sync on the block_reader implementation
package
- breaks the dependency of mining on the txpool implementation
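
A minimal sketch of the dependency-breaking pattern: the consumer package declares the narrow interface it needs and the concrete reader satisfies it implicitly, so the consumer never imports the implementation package. The method set shown is illustrative, not Erigon's actual blockReader interface:

```go
package stagedsync_sketch

import "context"

// BlockReader is the narrow interface staged sync needs; any snapshot- or
// DB-backed reader can satisfy it without this package importing it.
type BlockReader interface {
	HeaderByNumber(ctx context.Context, blockNum uint64) ([]byte, error)
	BodyRlp(ctx context.Context, blockNum uint64) ([]byte, error)
}

// readHeader depends only on the interface, not on a concrete reader type.
func readHeader(ctx context.Context, r BlockReader, blockNum uint64) ([]byte, error) {
	return r.HeaderByNumber(ctx, blockNum)
}
```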
2023-06-15 13:11:51 +07:00