# erigon-pulse/cmd/integration

E2 snapshot uploading (#9056), Mark Holt, commit 79ed8cad35
This change introduces additional processes to manage snapshot uploading
for E2 snapshots:

## erigon snapshots upload

The `snapshots uploader` command starts a version of erigon customized
for uploading snapshot files to a remote location.

It breaks the stage execution process after the senders stage and then
uses the snapshot stage to write headers, bodies and (in the case of
polygon) bor spans and events to snapshot files. Because this process
avoids execution, it runs significantly faster than a standard erigon
configuration.

The uploader uses rclone to send seedable snapshot files (100K or 500K
blocks) to a remote storage location specified in the rclone config file.
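
For example, a remote for an S3-compatible store could be defined in the rclone config file roughly as below. This is a sketch only: the remote name `snapshots-remote`, the provider, and the credentials are placeholders, not values prescribed by erigon.

```ini
# ~/.config/rclone/rclone.conf
# Hypothetical remote definition; adjust the type, provider and
# credentials to match your actual storage backend.
[snapshots-remote]
type = s3
provider = AWS
access_key_id = <your-access-key>
secret_access_key = <your-secret-key>
region = us-east-1
```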

The **uploader** is configured to minimize disk usage by doing the
following:

* It removes snapshots once they have been uploaded
* It aggressively prunes the database once entities are transferred to
snapshots

In addition, it has the following performance-related features:

* It maximizes the workers allocated to snapshot processing to improve
throughput
* It can be started from scratch, downloading the latest snapshots from
the remote location to seed processing

## snapshots command

This is a standalone command for managing remote snapshots. It has the
following subcommands:

* **cmp** - compare snapshots
* **copy** - copy snapshots
* **verify** - verify snapshots
* **manifest** - manage the manifest file in the root of remote snapshot
locations
* **torrent** - manage snapshot torrent files
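
The invocation shapes below are illustrative only; the subcommand names come from the list above, but the argument forms are assumptions and should be checked with `erigon snapshots <subcommand> --help`:

```sh
# Hypothetical argument shapes; check --help for the real flags
erigon snapshots cmp <remote-loc-1> <remote-loc-2>   # compare two snapshot sets
erigon snapshots copy <src-loc> <dst-loc>            # copy snapshots between locations
erigon snapshots verify <remote-loc>                 # verify snapshots
erigon snapshots manifest <remote-loc>               # manage the root manifest file
erigon snapshots torrent <remote-loc>                # manage snapshot torrent files
```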
2023-12-27 22:05:09 +00:00

Integration is a tool for running Erigon stages in a custom way: run/reset a single stage, run all stages but reorg every X blocks, etc.

## Examples

All commands require the parameter --datadir=<datadir>; it is omitted below for readability.

```sh
integration --help
integration print_stages

# Run a single stage
integration stage_senders
integration stage_exec
integration stage_exec --block=1_000_000 # stop at block 1M
integration stage_hash_state
integration stage_trie
integration stage_history
integration stage_tx_lookup

# Unwind a single stage 10 blocks backward
integration stage_exec --unwind=10

# Drop the data of a single stage
integration stage_exec --reset
integration stage_history --reset

# Unwind a single stage N blocks backward
integration stage_exec --unwind=N
integration stage_history --unwind=N

# Run stage prune to block N
integration stage_exec --prune.to=N
integration stage_history --prune.to=N

# Exec blocks, but don't commit changes (lose them)
integration stage_exec --no-commit
...

# Run tx replay with domains [requires the 6th stage to be done before running]
integration state_domains --chain goerli --last-step=4 # stop replay when the 4th step is merged
integration read_domains --chain goerli account <addr> <addr> ... # read values for the given accounts

# Hack which allows force-clearing the unwind stack of all stages
integration clear_unwind_stack
```

For testing, run all stages in an "N blocks forward, M blocks re-org" loop.

Pre-requirements of the state_stages command:

* Headers/Bodies must be downloaded
* The TxSenders stage must be executed

```sh
make all
./build/bin/integration state_stages --datadir=<datadir> --unwind=10 --unwind.every=20 --pprof
integration reset_state # drops all stages after the Senders stage (including their DB tables)
```

For example:

```sh
--unwind=1 --unwind.every=10  # 10 blocks forward, 1 block back, 10 blocks forward, ...
--unwind=10 --unwind.every=1  # 1 block forward, 10 blocks back, 1 block forward, ...
--unwind=10  # 10 blocks back, then stop
--integrity.fast=false --integrity.slow=false # DB integrity checks run each step; these flags disable the fast or slow checks
--block # stop at an exact block
--chaindata.reference # when all cycles finish, compare against this DB file
```
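
For instance, the flags above can be combined into a single run (all flags shown appear earlier in this document; the datadir path is a placeholder):

```sh
# 10 blocks forward, 1 block back, repeatedly, with the slow
# integrity checks disabled to speed up each cycle
./build/bin/integration state_stages --datadir=<datadir> \
  --unwind=1 --unwind.every=10 --integrity.slow=false
```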

## "Wrong trie root" problem - temporary solution

```sh
make all
./build/bin/integration stage_hash_state --datadir=<datadir> --reset
./build/bin/integration stage_trie --datadir=<datadir> --reset
# Then run Erigon as usual. It will take 2-3 hours to re-calculate the dropped DB tables
```

## Copy data to another db

1. Stop Erigon
2. Create a new db by starting erigon in a new directory with the option --datadir /path/to/copy-to/
(set a new --db.pagesize option if needed)
3. Stop Erigon again after about 1 minute (steps 2 and 3 create a new empty db in /path/to/copy-to/chaindata)
4. Build integration: cd erigon; make integration
5. Run: ./build/bin/integration mdbx_to_mdbx --chaindata /existing/erigon/path/chaindata/ --chaindata.to /path/to/copy-to/chaindata/
6. cp -R /existing/erigon/path/snapshots /path/to/copy-to/snapshots
7. Start erigon in the new datadir as usual
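
The steps above can be sketched as a script. Paths are placeholders and the 1-minute wait is the rough figure from step 3; only commands already listed above are used:

```sh
#!/bin/sh
# Assumes Erigon is already stopped (step 1) and integration is built (step 4).
SRC=/existing/erigon/path
DST=/path/to/copy-to

# Steps 2-3: briefly start erigon to create an empty db in $DST/chaindata
./build/bin/erigon --datadir "$DST" &
ERIGON_PID=$!
sleep 60
kill "$ERIGON_PID"

# Step 5: copy the chaindata between MDBX databases
./build/bin/integration mdbx_to_mdbx --chaindata "$SRC/chaindata/" --chaindata.to "$DST/chaindata/"

# Step 6: copy the snapshots
cp -R "$SRC/snapshots" "$DST/snapshots"
```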

## Clear bad blocks markers table

Use this in case a block was marked as invalid after some error; it allows those blocks to be processed again.

```sh
./build/bin/integration clear_bad_blocks --datadir=<datadir>
```