Fix: typos (#7710)

omahs 2023-06-12 09:39:52 +02:00 committed by GitHub
parent 63006611ec
commit c21d77aa20
9 changed files with 36 additions and 36 deletions

View File

@@ -11,7 +11,7 @@ make erigon
```
-## 2. Build RPC deamon
+## 2. Build RPC daemon
On the same terminal folder you can build the RPC daemon.
```bash
@@ -43,7 +43,7 @@ Now save the enode information generated in the logs, we will use this in a minu
enode://d30d079163d7b69fcb261c0538c0c3faba4fb4429652970e60fa25deb02a789b4811e98b468726ba0be63b9dc925a019f433177eb6b45c23bb78892f786d8f7a@127.0.0.1:53171
```
-## 4. Start RPC deamon
+## 4. Start RPC daemon
Open terminal 2 and navigate to erigon/build/bin folder. Here type the following command
@@ -68,7 +68,7 @@ Open terminal 3 and navigate to erigon/build/bin folder. Paste in the following
--nodiscover
```
-To check if the nodes are connected, you can go to the log of both the nodes and look for the line
+To check if the nodes are connected, you can go to the log of both nodes and look for the line
``` [p2p] GoodPeers eth66=1 ```
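For illustration, here is a hypothetical sketch of what the command in terminal 3 can boil down to, reusing the enode saved in step 3. The flags `--datadir` and `--staticpeers` are assumptions here, not copied from this tutorial; adapt the paths, ports, and enode to your own logs:

```bash
# hypothetical sketch - the enode value must come from your own node's logs
./erigon --datadir=dev2 \
  --staticpeers="enode://d30d079163d7b69fcb261c0538c0c3faba4fb4429652970e60fa25deb02a789b4811e98b468726ba0be63b9dc925a019f433177eb6b45c23bb78892f786d8f7a@127.0.0.1:53171" \
  --nodiscover
```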

View File

@@ -15,7 +15,7 @@ If there are any transactions where code bitmap was useful, warning messages lik
````
WARN [08-01|14:54:27.778] Code Bitmap used for detecting invalid jump tx=0x86e55d1818b5355424975de9633a57c40789ca08552297b726333a9433949c92 block number=6426298
````
-In such cases (unless there are too many instances), all block numbers need to be excluded in the `SkipAnalysis` function, and comment to it. The constant `MainnetNotCheckedFrom` needs to be update to the first block number we have not checked. The value can be taken from the output of the `checkChangeSets`
+In such cases (unless there are too many instances), all block numbers need to be excluded in the `SkipAnalysis` function, with a comment added to it. The constant `MainnetNotCheckedFrom` needs to be updated to the first block number we have not checked. The value can be taken from the output of the `checkChangeSets`
utility before it exits, like this:
````
INFO [08-01|15:36:04.282] Checked blocks=10573804 next time specify --block=10573804 duration=36m54.789025062s
````
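To make the procedure concrete, here is a hypothetical Go sketch of the exclusion pattern described above. The real `SkipAnalysis` function has a different signature and location; the numbers below are simply taken from the example log lines on this page:

```go
// MainnetNotCheckedFrom is the first block number we have not checked,
// taken from the checkChangeSets output above.
const MainnetNotCheckedFrom uint64 = 10_573_804

// Blocks where the code bitmap was actually needed, each with a comment
// (block 6426298 comes from the example WARN line above).
var analysisRequired = map[uint64]struct{}{
	6426298: {}, // tx 0x86e5...9c92 is detected as invalid only with the code bitmap
}

// SkipAnalysis reports whether jump-destination analysis can be skipped
// for the given block.
func SkipAnalysis(blockNumber uint64) bool {
	if _, ok := analysisRequired[blockNumber]; ok {
		return false // the code bitmap was useful here, so do not skip
	}
	return blockNumber < MainnetNotCheckedFrom // only skip verified blocks
}
```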
@@ -23,11 +23,11 @@ INFO [08-01|15:36:04.282] Checked blocks=105738
## Update DB Schema version if required
-In the file `common/dbutils/bucket.go` there is variable `DBSchemaVersion` that needs to be update if there are any changes in the database schema, leading to data migrations.
+In the file `common/dbutils/bucket.go` there is a variable `DBSchemaVersion` that needs to be updated if there are any changes in the database schema, leading to data migrations.
In most cases, it is enough to bump the minor version.
## Update remote KV version if required
-In the file `ethdb/remote/remotedbserver/server.go` there is variable `KvServiceAPIVersion` that needs to be update if there are any changes in the remote KV interface, or
+In the file `ethdb/remote/remotedbserver/server.go` there is a variable `KvServiceAPIVersion` that needs to be updated if there are any changes in the remote KV interface, or
database schema, leading to data migrations.
In most cases, it is enough to bump the minor version. It is best to change both the DB schema version and the remote KV version together.

View File

@@ -53,7 +53,7 @@ INFO [03-24|13:41:20.391] Commit cycle in=2.16378229
Here we see that the sync cycle went through all the stages for a single block `12101885`.
-After that, it is useful to wait more until an Unwind is encoutered and check that Erigon handled it without errors.
+After that, it is useful to wait more until an Unwind is encountered and check that Erigon handled it without errors.
Usually, errors occur at the stage
`[7/14 IntermediateHashes]` and manifest in the wrong trie root. Here is an example of processing an unwind without
errors (look for the word "Unwind" in the log):
@@ -148,7 +148,7 @@ ERROR[08-01|14:30:38.299] Demoting invalidated transaction hash="859191
ERROR[08-01|14:30:38.299] Demoting invalidated transaction hash="25ee67…e73153"
```
-this is also likely to disappered after the introduction of new downloader/sentry design
+This is also likely to disappear after the introduction of the new downloader/sentry design.
### Assessing relative performance of sync
@@ -172,7 +172,7 @@ INFO [03-24|13:41:20.391] Commit cycle in=2.16378229
The line above shows how long the commit cycle took. We saw in the past that after some changes the commit time dramatically
increases, and these
-regressions need to be investigaged. We expect "commit cycle" on Linux with NVMe drive to usually take less than a
+regressions need to be investigated. We expect "commit cycle" on Linux with NVMe drive to usually take less than a
second. For other operating
systems and devices the typical time may vary, but it should not significantly increase from one release to another.
Perhaps we need to log some extra information in the log to make it easier for the tester to filter out the log
@@ -274,11 +274,11 @@ requests for certain RPC methods, using hints provided by options `--blockFrom` a
useful ones are:
1. `bench8` tests `eth_getLogs` RPC method (compatibility with go-ethereum and OpenEthereum)
-2. `bench11` tests `trace_call` RPC method (compatiblity with OpenEthereum tracing)
+2. `bench11` tests `trace_call` RPC method (compatibility with OpenEthereum tracing)
3. `bench12` tests `debug_traceCall` RPC method (compatibility with go-ethereum tracing)
-4. `bench13` tests `trace_callMany` RPC method (compability with OpenEthereum tracing)
+4. `bench13` tests `trace_callMany` RPC method (compatibility with OpenEthereum tracing)
-Options `--erigonUrl` and `--gethUrl` specify HTTP endpoints that needs to be tested against each other. Despite its
+Options `--erigonUrl` and `--gethUrl` specify HTTP endpoints that need to be tested against each other. Despite its
name, the `--gethUrl` option does not have to
point to a go-ethereum node; it can point to anything that is supposed to be "correct" for the purpose of the test (
go-ethereum node, OpenEthereum node,
@@ -286,7 +286,7 @@ or Erigon RPC daemon & Erigon node built from the previous release code).
Option `--needCompare` triggers the comparison of JSON RPC responses. If omitted, requests to `--gethUrl` are not done.
When comparison is turned on,
-the utility stops at the first occurrence of mistmatch.
+the utility stops at the first occurrence of a mismatch.
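As a usage illustration, a comparison run can look like the sketch below. The binary path and the block range are made-up examples, and `--blockTo` is an assumption for the option truncated in the hunk header above:

```bash
# hypothetical example - compare trace_call responses between two endpoints
./build/bin/rpctest bench11 \
  --erigonUrl http://localhost:8545 \
  --gethUrl http://localhost:8546 \
  --needCompare \
  --blockFrom 9000000 --blockTo 9000100
```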
## RPC test recording and replay

View File

@@ -81,7 +81,7 @@ Erigon does:
- wait for download of all snapshots
- when .seg available - automatically create .idx files - secondary indices, for example to find block by hash
- then switch to normal staged sync (which doesn't require connection to Downloader)
-- ensure that snapshot dwnloading happening only once: even if new Erigon version does include new pre-verified snapshot
+- ensure that snapshot downloading happens only once: even if a new Erigon version does include new pre-verified snapshot
hashes, Erigon will not download them (to avoid unpredictable downtime) - but Erigon may produce them by itself.
Downloader does:
@@ -95,7 +95,7 @@ Technical details:
- To prevent attacks, .idx creation uses a random seed - all nodes will have different .idx files (and the same .seg files)
- If you add/remove any .seg file manually, you also need to remove the `<your_datadir>/snapshots/db` folder
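For example (with the same placeholder path):

```bash
rm -rf <your_datadir>/snapshots/db
```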
-## How to verify that .seg files have same checksum withch current .torrent files
+## How to verify that .seg files have the same checksum as current .torrent files
```
# Use it if you see weird behavior, bugs, bans, hardware issues, etc...

View File

@@ -19,7 +19,7 @@
## Introduction
-Erigon's `rpcdaemon` runs in its own seperate process.
+Erigon's `rpcdaemon` runs in its own separate process.
This brings many benefits including easier development, the ability to run multiple daemons at once, and the ability to
run the daemon remotely. It is possible to run the daemon locally as well (read-only) if both processes have access to
@@ -92,7 +92,7 @@ Configuration of the health check is sent as POST body of the method.
Not adding a check disables that.
-**`min_peer_count`** -- checks for mimimum of healthy node peers. Requires
+**`min_peer_count`** -- checks for a minimum number of healthy node peers. Requires
`net` namespace to be listed in `http.api`.
**`known_block`** -- sets up the block that node has to know about. Requires `eth` namespace to be listed in `http.api`.
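For example, a request might look like the sketch below; the `/health` endpoint path, port, and values are assumptions based on a default local setup:

```bash
# hypothetical example - omit a key to disable the corresponding check
curl -X POST -H "Content-Type: application/json" \
  --data '{"min_peer_count": 3, "known_block": "0x1F"}' \
  http://localhost:8545/health
```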
@@ -184,7 +184,7 @@ Next options available (by `--prune` flag):
By default, data is pruned after 90K blocks; you can change this with flags like `--prune.history.after=100_000`
-Some methods, if not found historical data in DB, can fallback to old blocks re-execution - but it require `h`.
+Some methods, if historical data is not found in the DB, can fall back to re-executing old blocks - but this requires `h`.
### RPC Implementation Status
@@ -354,7 +354,7 @@ Erigon and RPC daemon nodes that are supposed to work together):
counterparts.
3. For each Erigon instance and each RPC daemon instance, generate a key pair. If you are lazy, you can generate one
pair for all Erigon nodes, and one pair for all RPC daemons, and copy these keys around.
-4. Using the CA private key, create cerificate file for each public key generated on the previous step. This
+4. Using the CA private key, create a certificate file for each public key generated in the previous step. This
effectively "inducts" these keys into the "cluster of trust".
5. On each instance, deploy 3 files - CA certificate, instance key, and certificate signed by CA for this instance key.
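One possible way to carry out steps 3 and 4 is with `openssl`; this is only a sketch under the assumption that RSA keys and X.509 certificates are acceptable, and all file names are examples:

```bash
# step 3 (sketch): generate an instance key and a signing request for it
openssl genrsa -out instance.key 2048
openssl req -new -key instance.key -subj "/CN=instance1" -out instance.csr

# step 4 (sketch): sign it with the CA key to produce the instance certificate
openssl x509 -req -in instance.csr -CA CA.crt -CAkey CA.key \
  -CAcreateserial -out instance.crt -days 365
```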
@@ -428,8 +428,8 @@ daemon needs to be started with these extra options:
```
**WARNING** Normally, the "client side" (which in our case is RPC daemon), verifies that the host name of the server
matches the "Common Name" attribute of the "server" cerificate. At this stage, this verification is turned off, and it
will be turned on again once we have updated the instruction above on how to properly generate cerificates with "Common
matches the "Common Name" attribute of the "server" certificate. At this stage, this verification is turned off, and it
will be turned on again once we have updated the instruction above on how to properly generate certificates with "Common
Name".
When running an Erigon instance in Google Cloud, for example, you need to specify the **Internal IP** in

View File

@@ -1,10 +1,10 @@
Corresponding code is in the folder `semantics`.
## EVM without opcodes (Ether transfers only)
-We start with looking at a very restricted version of EVM, having no opcodes. That means only Ether transfers are possible.
+We start by looking at a very restricted version of EVM, having no opcodes. That means only Ether transfers are possible.
Even that seemingly simple case already has a relatively complex semantics.
-First, we would like to define what kind semantics we are looking for. Ethereum is a state machine, which has a global state, and
+First, we would like to define what kind of semantics we are looking for. Ethereum is a state machine, which has a global state, and
transactions that trigger some state transition. There are also some state transitions that happen at the end of each block. These are
related to miner and ommer rewards. The transition from one state to another is deterministic. Given the initial state, some extra
environment (recent header hashes, current block number, timestamp of the current block), and the transaction object, there is only
@@ -22,7 +22,7 @@ the initial state, environment, and the transaction object. So they look more li
`EXPRESSION(STATE_init, env, tx, STATE_end) == true?`
-Moreover, this representation allows for some non determinsm, which means that there could be some extra "oracle" input that helps the
+Moreover, this representation allows for some non-determinism, which means that there could be some extra "oracle" input that helps the
evaluation:
`EXPRESSION(STATE_init, env, tx, STATE_end, ORACLE_input) == true?`

View File

@@ -20,7 +20,7 @@ always, as shown on the picture below.
![cycles-and-ticks](mgr-sync-1.png)
If chain reorgs occur, and the timings of recent Ethereum blocks change as a result, we can accept these rules to prevent
-the reorgs to be used to disrupt the sync. Imaginge the tick started at block A (height H), and then due to reorg, block A
+the reorgs from being used to disrupt the sync. Imagine the tick started at block A (height H), and then due to a reorg, block A
was replaced by block B (also height H).
* If timestamp(B) < timestamp(A), the tick does not shorten, but proceeds until timestamp(A) + tick_duration.
@@ -29,7 +29,7 @@ was replaced by block B (also height H).
As one would guess, we try to distribute the entire Ethereum state into as many pieces as there are ticks in one cycle.
Each piece would be exchanged over the duration of one tick. Obviously, we would like to make the distribution as even as possible.
-Therefore, there is still a concern about situations when the blocks are coming in quick sucession, and the ticks corresponding
+Therefore, there is still a concern about situations when the blocks are coming in quick succession, and the ticks corresponding
to those blocks would largely overlap.
## Sync schedule
@@ -37,7 +37,7 @@ was replaced by block B (also height H).
When we split the entire Ethereum state into pieces and plan to exchange each piece during one tick, we are creating a sync
schedule. Sync schedule is a mapping from the tick number (which can be derived from the block number) to the piece of state.
These pieces need to be efficient to extract from the State Database (for a seeder), and add to the State Database (for a leecher).
-Probably the most convinient way of specifying such a piece of state is a pair of bounds - lower bound and upper bound.
+Probably the most convenient way of specifying such a piece of state is a pair of bounds - lower bound and upper bound.
Each of the bounds would correspond to either Keccak256 hash of an address, or to a combination of Keccak256 hash of an address,
and Keccak256 hash of a storage location in some contract. In other words, there could be four types of specification for a piece
of state:
@@ -51,5 +51,5 @@ In the last type, addresses `address1` and `address2` may mean the same address.
## How will seeders produce sync schedule
-Seeders should have the ability to generate the sync schedule by the virtue of having the entire Ethreum state available. No
+Seeders should have the ability to generate the sync schedule by virtue of having the entire Ethereum state available. No
extra coordination should be necessary.

View File

@@ -2,7 +2,7 @@
Staged Sync is a version of [Go-Ethereum](https://github.com/ethereum/go-ethereum)'s Full Sync that was rearchitected for better performance.
-It is I/O intensive and even though we have a goal on being able to sync the node on an HDD, we still recommend using fast SSDs.
+It is I/O intensive and even though we have a goal of being able to sync the node on an HDD, we still recommend using fast SSDs.
Staged Sync, as its name suggests, consists of 10 stages that are executed in order, one after another.
@@ -14,7 +14,7 @@ The first stage (downloading headers) sets the local HEAD block.
Each stage is executed in order and a stage N does not stop until the local head is reached for it.
-That mean, that in the ideal scenario (no network interruptions, the app isn't restarted, etc), for the full initial sync, each stage will be executed exactly once.
+That means that, in the ideal scenario (no network interruptions, the app isn't restarted, etc.), for the full initial sync, each stage will be executed exactly once.
After the last stage is finished, the process starts from the beginning, by looking for the new headers to download.
@@ -65,7 +65,7 @@ In the Proof-of-Stake world staged sync becomes somewhat more complicated, as th
## Stages (for the up to date list see [`stages.go`](/eth/stagedsync/stages/stages.go) and [`stagebuilder.go`](/eth/stagedsync/stagebuilder.go)):
-Each stage consists of 2 functions `ExecFunc` that progesses the stage forward and `UnwindFunc` that unwinds the stage backwards.
+Each stage consists of 2 functions: `ExecFunc`, which progresses the stage forward, and `UnwindFunc`, which unwinds the stage backwards.
Most of the stages can work offline, though it isn't implemented in the current version.
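A hypothetical Go sketch of that shape (the real definitions in `eth/stagedsync` use different names and signatures; this only illustrates the forward/unwind pairing):

```go
// Illustrative only - not the actual Erigon types.
type StageState struct{ BlockNumber uint64 }  // progress reached so far
type UnwindState struct{ UnwindPoint uint64 } // block to unwind back to

type Stage struct {
	ID         string
	ExecFunc   func(s *StageState) error  // progresses the stage forward
	UnwindFunc func(u *UnwindState) error // unwinds the stage backwards
}
```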
@@ -136,7 +136,7 @@ This stage builds the Merkle trie and checks the root hash for the current state.
It also builds Intermediate Hashes along the way and stores them into the database.
-If there were no intermediate hashes stored before (that could happend during the first initial sync), it builds the full Merkle Trie and its root hash.
+If there were no intermediate hashes stored before (that could happen during the first initial sync), it builds the full Merkle Trie and its root hash.
If there are intermediate hashes in the database, it uses the block history to figure out which ones are outdated and which ones are still up to date. Then it builds a partial Merkle trie using the up-to-date hashes and only rebuilding the outdated ones.

View File

@@ -4,7 +4,7 @@ Words "KV" and "DB" have special meaning here:
- KV - key-value-style API to access data: lets the developer manage transactions and stateful cursors.
- DB - object-oriented-style API to access data: Get/Put/Delete/WalkOverTable/MultiPut, managing transactions internally.
-So, DB abstraction fits 95% times and leads to more maintainable code - because it's looks stateless.
+So, the DB abstraction fits 95% of use cases and leads to more maintainable code - because it looks stateless.
About "key-value-style": Modern key-value databases don't provide Get/Put/Delete methods,
because it's very hard-drive-unfriendly - it pushes developers to do random disk access, which is [an order of magnitude slower than sequential read](https://www.seagate.com/sg/en/tech-insights/lies-damn-lies-and-ssd-benchmark-master-ti/).
@@ -72,14 +72,14 @@ About "key-value-style": Modern key-value databases don't provide Get/Put/Delete
- MultipleDatabases, Customization: `NewMDBX().Path(path).WithBucketsConfig(config).Open()`
-- 1 Transaction object can be used only withing 1 goroutine.
+- 1 Transaction object can be used only within 1 goroutine.
- Only 1 write transaction can be active at a time (others will wait).
- Unlimited read transactions can be active concurrently (not blocked by write transaction).
- Methods db.Update, db.View - can be used to open and close short transactions.
- Methods Begin/Commit/Rollback - for long transactions.
-- it's safe to call .Rollback() after .Commit(), multiple rollbacks are also safe. Common transaction patter:
+- it's safe to call .Rollback() after .Commit(), multiple rollbacks are also safe. Common transaction pattern:
```
tx, err := db.Begin(true, ethdb.RW)
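// The rest of this pattern is cut off by the diff hunk above; a minimal
// sketch of how it typically continues (error handling assumed):
if err != nil {
	return err
}
defer tx.Rollback() // safe even after Commit, as noted above
// ... reads and writes via tx ...
if err = tx.Commit(); err != nil {
	return err
}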
@@ -127,7 +127,7 @@ for k, v, err := c.First(); k != nil; k, v, err = c.Next() {
- method Begin DOESN'T create a new TxDb object, which means this object can be passed into other objects by pointer,
and high-level app code can start/commit transactions when it needs to, without re-creating all objects which hold the
TxDb pointer.
-- This is reason why txDb.CommitAndBegin() method works: inside it creating new transaction object, pinter to TxDb stays valid.
+- This is the reason why the txDb.CommitAndBegin() method works: inside, it creates a new transaction object, and the pointer to TxDb stays valid.
## How to dump/load table