Testing of new releases of Turbo-Geth should ideally include these checks.
## Incremental Sync
This check requires having the turbo-geth database synced previously. Let's assume (for the command line examples) that it is in the directory `~/mainnet/tg/chaindata`.
Using `git pull` or `git checkout`, update the code to the version that is to be released (or very close to it). Then, build the turbo-geth executable:
```
make tg
```
Now run turbo-geth as usual, to try to catch up with the mainnet:
```
./build/bin/tg --datadir ~/mainnet
```
It is useful to pipe the output into a text file for later inspection, for example:
```
./build/bin/tg --datadir ~/mainnet 2>&1 | tee tg.log
```
Wait until turbo-geth catches up with the network. This can be determined by looking at the output and seeing that sync cycles usually go through just a single block,
and that this block is recent (which can be verified on etherscan.io, for example). The output may look like this:
```
INFO [03-24|13:41:14.101] [1/14 Headers] Imported new block headers count=1 elapsed=1.621ms number=12101885 hash="8fe088…b877ee"
INFO [03-24|13:41:14.213] [3/14 Bodies] Downloading block bodies count=1
INFO [03-24|13:41:17.789] [3/14 Bodies] Imported new chain segment blocks=1 txs=175 elapsed=1.045ms number=12101885 hash="8fe088…b877ee"
INFO [03-24|13:41:17.793] [4/14 Senders] Started from=12101884 to=12101885
INFO [03-24|13:41:17.793] [4/14 Senders] Read canonical hashes amount=1
INFO [03-24|13:41:17.829] [5/14 Execution] Blocks execution from=12101884 to=12101885
INFO [03-24|13:41:17.905] [5/14 Execution] Completed on block=12101885
INFO [03-24|13:41:17.905] [6/14 HashState] Promoting plain state from=12101884 to=12101885
INFO [03-24|13:41:17.905] [6/14 HashState] Incremental promotion started from=12101884 to=12101885 codes=true csbucket=PLAIN-ACS
INFO [03-24|13:41:17.912] [6/14 HashState] Incremental promotion started from=12101884 to=12101885 codes=false csbucket=PLAIN-ACS
INFO [03-24|13:41:17.935] [6/14 HashState] Incremental promotion started from=12101884 to=12101885 codes=false csbucket=PLAIN-SCS
INFO [03-24|13:41:17.961] [7/14 IntermediateHashes] Generating intermediate hashes from=12101884 to=12101885
INFO [03-24|13:41:18.064] [7/14 IntermediateHashes] Trie root hash=0x7937f8c3881e27e1696a36b0a5f84e83792cff7966e97b68059442dadd404368 in=99.621957ms
INFO [03-24|13:41:18.162] [11/14 CallTraces] disabled. Work In Progress
INFO [03-24|13:41:18.225] [13/14 TxPool] Read canonical hashes hashes=1
INFO [03-24|13:41:18.227] [13/14 TxPool] Transaction stats pending=3909 queued=1036
INFO [03-24|13:41:18.227] [14/14 Finish] Update current block for the RPC API to=12101885
INFO [03-24|13:41:18.227] Memory alloc=572.23MiB sys=757.02MiB
INFO [03-24|13:41:20.391] Commit cycle in=2.163782297s
```
Here we see that the sync cycle went through all the stages for a single block `12101885`.
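Assuming the log was captured to `tg.log` as above, a simple filter can help confirm that recent cycles import one block at a time. This is just a convenience sketch; the exact log wording may differ between versions:

```shell
# Print the last few header-import lines from a turbo-geth log file.
# Once the node has caught up, each cycle should show count=1 and a
# recent block number.
# Usage: last_header_imports tg.log 5
last_header_imports() {
  grep 'Imported new block headers' "$1" | tail -n "${2:-5}"
}
```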
After that, it is useful to wait longer, until an Unwind is encountered, and to check that turbo-geth handled it without errors. Usually, errors occur at the stage
`[7/14 IntermediateHashes]` and manifest as a wrong trie root. Here is an example of processing an unwind without errors (look for the word "Unwind" in the log):
```
INFO [03-24|13:41:34.318] [1/14 Headers] Imported new block headers count=2 elapsed=1.855ms number=12101886 hash="20d688…fdaaa6" reorg=true forkBlockNumber=12101884
INFO [03-24|13:41:34.318] UnwindTo block=12101884
INFO [03-24|13:41:34.332] Unwinding... stage=TxLookup
INFO [03-24|13:41:34.341] Unwinding... stage=LogIndex
INFO [03-24|13:41:34.371] Unwinding... stage=StorageHistoryIndex
INFO [03-24|13:41:34.428] Unwinding... stage=AccountHistoryIndex
INFO [03-24|13:41:34.462] Unwinding... stage=HashState
INFO [03-24|13:41:34.462] [6/14 HashState] Unwinding started from=12101885 to=12101884 storage=false codes=true
INFO [03-24|13:41:34.464] [6/14 HashState] Unwinding started from=12101885 to=12101884 storage=false codes=false
INFO [03-24|13:41:34.467] [6/14 HashState] Unwinding started from=12101885 to=12101884 storage=true codes=false
INFO [03-24|13:41:34.472] Unwinding... stage=IntermediateHashes
INFO [03-24|13:41:34.472] [7/14 IntermediateHashes] Unwinding of trie hashes from=12101885 to=12101884 csbucket=PLAIN-ACS
INFO [03-24|13:41:34.474] [7/14 IntermediateHashes] Unwinding of trie hashes from=12101885 to=12101884 csbucket=PLAIN-SCS
INFO [03-24|13:41:34.511] [7/14 IntermediateHashes] Trie root hash=0xa06f932426150e3a8c78607fc873bb15259c57fc2a1b8136ed4065073b9ee6b6 in=33.835208ms
INFO [03-24|13:41:34.518] Unwinding... stage=Execution
INFO [03-24|13:41:34.518] [5/14 Execution] Unwind Execution from=12101885 to=12101884
INFO [03-24|13:41:34.522] Unwinding... stage=Senders
INFO [03-24|13:41:34.522] Unwinding... stage=TxPool
INFO [03-24|13:41:34.537] [13/14 TxPool] Read canonical hashes hashes=1
INFO [03-24|13:41:34.537] [13/14 TxPool] Injecting txs into the pool number=0
INFO [03-24|13:41:34.537] [13/14 TxPool] Injection complete
INFO [03-24|13:41:34.538] [13/14 TxPool] Transaction stats pending=4090 queued=1023
INFO [03-24|13:41:34.538] Commit cycle
INFO [03-24|13:41:36.643] Unwinding... stage=Bodies
INFO [03-24|13:41:36.645] Unwinding... stage=BlockHashes
INFO [03-24|13:41:36.647] Unwinding... stage=Headers
INFO [03-24|13:41:36.727] [3/14 Bodies] Downloading block bodies count=2
INFO [03-24|13:41:39.443] Commit cycle in=2.090509095s
```
In this example, the Unwind starts with the stage `[1/14 Headers]` reporting the reorg: `reorg=true`,
and also showing how far back we need to rewind to perform the reorg: `forkBlockNumber=12101884`. After that, all the stages are unwound in the "Unwinding order",
which is almost the reverse order of stages, but with some exceptions (like the `TxPool` stage). After unwinding all the stages to the `forkBlockNumber`, turbo-geth
applies the new chain branch; in the example above, it is two new blocks. In the `Timings` log output, one can see the timings of the unwinding stages as well as the
timings of normal stage operation when the chain branch is applied.
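To locate unwinds in a long log, the relevant lines can be filtered out. A minimal sketch (the exact log wording may vary between versions):

```shell
# Summarise unwind activity in a turbo-geth log: how many reorgs were
# handled, and which stages were unwound (with counts).
# Usage: unwind_summary tg.log
unwind_summary() {
  echo "reorgs handled: $(grep -c 'UnwindTo' "$1")"
  grep 'Unwinding\.\.\.' "$1" | awk '{print $NF}' | sort | uniq -c
}
```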
### Errors to ignore for now
There are a couple of types of errors encountered during this check that need to be ignored for now, until we handle them better or fix the underlying issues.
The first error is probably the result of a data race in the code. Since this code will be replaced by the new sentry/downloader design, we are not keen
on investigating/fixing it.
```
ERROR[03-24|13:49:53.343] Ethereum peer removal failed peer=bfa4a38e err="peer not registered"
```
The second error happens during the unwinding of the `TxPool` stage. It has been reported in this issue: https://github.com/ledgerwatch/turbo-geth/issues/848
The `Timings` line in the log output shows how much time each stage of processing took.
```
INFO [03-24|13:41:20.391] Commit cycle in=2.163782297s
```
The "Commit cycle" line above shows how long the commit to the database took. We saw in the past that after some changes the commit time dramatically increased, and such
regressions need to be investigated. We expect the "commit cycle" on Linux with an NVMe drive to usually take less than a second. For other operating
systems and devices the typical time may vary, but it should not significantly increase from one release to another.
Perhaps we need to add some extra information to the log to make it easier for the tester to filter out the relevant log statements after the sync has "stabilised".
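For example, the commit-cycle durations can be extracted from the captured log like this (a sketch; the log wording may change between releases):

```shell
# Extract "Commit cycle" durations from a turbo-geth log, to make
# commit-time regressions between releases easy to spot.
# Usage: commit_cycle_times tg.log
commit_cycle_times() {
  grep 'Commit cycle in=' "$1" | sed 's/.*in=//'
}
```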
## Fresh sync (only when there are serious changes in the data model)
The way to perform this check is almost the same as for the Incremental Sync, but starting with an empty directory for the database. This process takes much longer
(it can take 2-3 days on good hardware), which is why it should perhaps be done weekly.
## Executing historical transactions
This check requires an up-to-date database and a shut-down turbo-geth node (the check will work without shutting the node down, but that will lead to bloating of the database file). It is performed with the `checkChangeSets` utility.
Please note the difference in notation when referring to the database. The turbo-geth command uses `--datadir`, which points to `~/mainnet`, and it looks for the
actual database directory under `tg/chaindata`, but `checkChangeSets` needs to be given a slightly different path, pointing directly to the database directory.
Parameter `--block` is used to specify from which historical block the execution needs to start.
Normally, this command reports progress every 1000 blocks, and if there are no errors after a few thousand blocks, this check can be regarded as complete.
We might add another option to this command to specify at which block number to stop, so that this check can be automated.
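An invocation might look like the following. This is a sketch only: the exact binary name and build target for `checkChangeSets` may differ between releases, so consult the Makefile targets and the utility's `--help` output; the `--chaindata` and `--block` parameters follow the description above, and the starting block number here is an arbitrary example.

```shell
# Hypothetical invocation: binary name/location may differ per release.
./build/bin/checkChangeSets --chaindata ~/mainnet/tg/chaindata --block 11000000
```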
## RPC daemon correctness check
Launch turbo-geth with the `--private.api.addr` option set, and then, in another terminal window (or in any other way of launching a separate process), launch the RPC daemon connected to it (the daemon can also be launched on a different computer).
Note that if turbo-geth and RPC daemon are running on the same computer, they can also communicate via loopback (`localhost`, `127.0.0.1`) interface. To
make this happen, pass `--private.api.addr localhost:9090` or `--private.api.addr 127.0.0.1:9090` to both turbo-geth and RPC daemon. Also note that
the choice of the port `9090` is arbitrary; it can be any free port number, as long as the value matches in turbo-geth and the RPC daemon.
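With the loopback setup described above, the pair of processes might be launched like this (a sketch; the `rpcdaemon` binary name and flags such as `--http.api` are assumptions that should be checked against the release's `--help` output):

```shell
# In one terminal: turbo-geth exposing its private API on loopback.
./build/bin/tg --datadir ~/mainnet --private.api.addr localhost:9090

# In another terminal: the RPC daemon connected to the same address/port.
./build/bin/rpcdaemon --private.api.addr localhost:9090 --http.api eth,debug,trace
```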
On the second computer (or on the same computer, but using different directories and port numbers), the same combination of processes is launched,
but from the `master` branch or the tag that is being tested.
In the example above, the `bench8` command is used. The RPC test utility has a few such "benches". These benches automatically generate JSON RPC
requests for certain RPC methods, using hints provided by the options `--blockFrom` and `--blockTo`. Currently the most useful ones are:
1. `bench8` tests the `eth_getLogs` RPC method (compatibility with go-ethereum and OpenEthereum)
2. `bench11` tests the `trace_call` RPC method (compatibility with OpenEthereum tracing)
3. `bench12` tests the `debug_traceCall` RPC method (compatibility with go-ethereum tracing)
4. `bench13` tests the `trace_callMany` RPC method (compatibility with OpenEthereum tracing)
The options `--tgUrl` and `--gethUrl` specify the HTTP endpoints that need to be tested against each other. Despite its name, the `--gethUrl` option does not have to
point to a go-ethereum node; it can point to anything that is supposed to be "correct" for the purpose of the test (a go-ethereum node, an OpenEthereum node,
or a turbo-geth RPC daemon & turbo-geth node built from the previous release code).
The option `--needCompare` triggers the comparison of the JSON RPC responses. If it is omitted, requests to `--gethUrl` are not made. When the comparison is turned on,
the utility stops at the first occurrence of a mismatch.
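Putting the options together, a `bench8` run comparing a freshly built RPC daemon against a "correct" endpoint might look like this (a sketch; the `rpctest` binary name, the port numbers and the block range are assumptions, while the flags are described above):

```shell
# Hypothetical comparison run: tgUrl is the endpoint under test,
# gethUrl is the reference endpoint; stops at the first mismatch.
./build/bin/rpctest bench8 \
  --tgUrl http://localhost:8545 \
  --gethUrl http://localhost:8546 \
  --needCompare \
  --blockFrom 9000000 --blockTo 9000100
```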
## RPC test recording and replay
In order to facilitate the automation of testing, we are adding the ability to record the JSON RPC requests generated by the RPC test utility, together with the responses
received from turbo-geth. Once these are recorded in a file, they can later be replayed without the need to have the second RPC daemon present.
To turn on recording, the option `--recordFile <filename>` needs to be added. Currently, only `bench8`, `bench11` and `bench13` support recording.
Only the queries and responses for which the comparison produced a match are recorded. If `--needCompare` is not specified but `--recordFile` is,
then all generated queries and responses are recorded. This can be used to separate the testing into two parts in time.
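A recording run without comparison might look like this (a sketch; the `rpctest` binary name, the port number and the block range are assumptions, while the flags are described above):

```shell
# Hypothetical recording run: with --recordFile but without
# --needCompare, all generated queries and responses are recorded
# for later replay.
./build/bin/rpctest bench8 \
  --tgUrl http://localhost:8545 \
  --blockFrom 9000000 --blockTo 9000100 \
  --recordFile bench8.txt
```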