erigon-pulse/node/node_test.go
Igor Mandrigin 8c3d19fd4c
geth 1.9.13 (#469)
* core: initial version of state snapshots

* core/state: lazy sorting, snapshot invalidation

* core/state/snapshot: extract and split cap method, cover corners

* snapshot: iteration and buffering optimizations

* core/state/snapshot: unlink snapshots from blocks, quad->linear cleanup

* 123

* core/rawdb, core/state/snapshot: runtime snapshot generation

* core/state/snapshot: fix difflayer origin-initialization after flatten

* add "to merge"

* core/state/snapshot: implement snapshot layer iteration

* core/state/snapshot: node behavioural difference on bloom content

* core: journal the snapshot inside leveldb, not a flat file

* core/state/snapshot: bloom, metrics and prefetcher fixes

* core/state/snapshot: move iterator out into its own files

* core/state/snapshot: implement iterator priority for fast direct data lookup

* core/state/snapshot: full featured account iteration

* core/state/snapshot: faster account iteration, CLI integration

* core: fix broken tests due to API changes + linter

* core/state: fix an account resurrection issue

* core/tests: test for destroy+recreate contract with storage

* squashme

* core/state/snapshot, tests: sync snap gen + snaps in consensus tests

* core/state: extend snapshotter to handle account resurrections

* core/state: fix account root hash update point

* core/state: fix resurrection state clearing and access

* core/state/snapshot: handle deleted accounts in fast iterator

* core: more blockchain tests

* core/state/snapshot: fix various iteration issues due to destruct set

* core: fix two snapshot iterator flaws, decollide snap storage prefix

* core/state/snapshot/iterator: fix two disk iterator flaws

* core/rawdb: change SnapshotStoragePrefix to avoid prefix collision with preimagePrefix

* params: begin v1.9.13 release cycle

* cmd/checkpoint-admin: add some documentation (#20697)

* go.mod: update duktape to fix sprintf warnings (#20777)

This revision of go-duktape fixes the following warning:

```
duk_logging.c: In function ‘duk__logger_prototype_log_shared’:
duk_logging.c:184:64: warning: ‘Z’ directive writing 1 byte into a region of size between 0 and 9 [-Wformat-overflow=]
  184 |  sprintf((char *) date_buf, "%04d-%02d-%02dT%02d:%02d:%02d.%03dZ",
      |                                                                ^
In file included from /usr/include/stdio.h:867,
                 from duk_logging.c:5:
/usr/include/x86_64-linux-gnu/bits/stdio2.h:36:10: note: ‘__builtin___sprintf_chk’ output between 25 and 85 bytes into a destination of size 32
   36 |   return __builtin___sprintf_chk (__s, __USE_FORTIFY_LEVEL - 1,
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   37 |       __bos (__s), __fmt, __va_arg_pack ());
      |       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```

* core/rawdb: fix freezer table test error check

Fixes: Condition is always 'false' because 'err' is always 'nil'

* core/rawdb: improve table database (#20703)

This PR fixes issues in TableDatabase.

TableDatabase is a wrapper around an underlying ethdb.Database with an additional prefix.
The prefix is applied to all entries it maintains. However, when we retrieve entries
from it, the key is not always handled properly: in theory the prefix should be
truncated so that only the user key is returned, but some cases, e.g. the iterator and
batch replayer created from it, skip this step. This PR fixes those cases.
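
A minimal sketch of the truncation rule (hypothetical stand-in types, not the real ethdb API):

```go
package main

import (
	"bytes"
	"fmt"
)

type kv struct{ key, value []byte }

// table is a toy stand-in for TableDatabase: a prefix plus a flat key/value store.
type table struct {
	prefix []byte
	data   []kv
}

// iterate yields only entries under the table's prefix, and truncates the
// prefix before handing keys back, so callers see pure user keys.
func (t *table) iterate(fn func(key, value []byte)) {
	for _, e := range t.data {
		if bytes.HasPrefix(e.key, t.prefix) {
			fn(e.key[len(t.prefix):], e.value)
		}
	}
}

func main() {
	tbl := &table{prefix: []byte("hdr-"), data: []kv{
		{[]byte("hdr-1"), []byte("a")},
		{[]byte("body-1"), []byte("b")}, // belongs to another table, skipped
	}}
	tbl.iterate(func(k, v []byte) { fmt.Printf("%s => %s\n", k, v) }) // prints: 1 => a
}
```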

* eth: when triggering a sync, check the head header TD, not block

* internal/web3ext: fix clique console apis to work on missing arguments

* rpc: don't log an error if user configures --rpcapi=rpc... (#20776)

This just prevents a spurious ERROR log when, for some reason, a user
explicitly enables the rpc module even though it's already going to be on.

* node, cmd/clef: report actual port used for http rpc (#20789)

* internal/ethapi: don't set sender-balance to maxuint, fixes #16999 (#20783)

Prior to this change, eth_call changed the balance of the sender account in the
EVM environment to 2^256 wei to cover the gas cost of the call execution.
We've had this behavior for a long time even though it's super confusing.

This commit sets the default call gasprice to zero instead of updating the balance,
which is better because it makes eth_call semantics less surprising. Removing
the built-in balance assignment also makes balance overrides work as expected.
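
A hedged sketch of the new default (illustrative names, not the actual internal/ethapi code):

```go
package main

import (
	"fmt"
	"math/big"
)

// callArgs is a stand-in for the RPC call arguments; nil means "not specified".
type callArgs struct {
	GasPrice *big.Int
}

// effectiveGasPrice defaults an unspecified gas price to zero, so a plain
// eth_call costs nothing and the sender balance is left untouched, while an
// explicit gas price still goes through the normal balance check.
func effectiveGasPrice(args callArgs) *big.Int {
	if args.GasPrice == nil {
		return new(big.Int)
	}
	return args.GasPrice
}

func main() {
	fmt.Println(effectiveGasPrice(callArgs{}))                        // 0
	fmt.Println(effectiveGasPrice(callArgs{GasPrice: big.NewInt(1)})) // 1
}
```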

* metrics: disable CPU stats (gosigar) on iOS

* cmd/devp2p: tweak DNS TTLs (#20801)

* cmd/devp2p: tweak DNS TTLs

* cmd/devp2p: bump treeNodeTTL to four weeks

* cmd/devp2p: lower route53 change limit again (#20819)

* cmd/devp2p: be very correct about route53 change splitting (#20820)

Turns out the way RDATA limits work is documented after all,
I just didn't search right. The trick to make it work is to
count UPSERTs twice.

This also adds an additional check to ensure TTL changes are
applied on existing records.
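
An illustrative sketch of the accounting rule (not the actual cmd/devp2p code):

```go
package main

import "fmt"

// change is a toy model of a route53 record change.
type change struct {
	action string // "CREATE", "DELETE" or "UPSERT"
	rdata  int    // size of the record data in bytes
}

// cost sizes a change against the RDATA limit; an UPSERT replaces an existing
// record, so it has to be counted twice (once as a delete, once as a create).
func cost(c change) int {
	if c.action == "UPSERT" {
		return 2 * c.rdata
	}
	return c.rdata
}

func main() {
	batch := []change{{"CREATE", 100}, {"UPSERT", 200}}
	total := 0
	for _, c := range batch {
		total += cost(c)
	}
	fmt.Println(total) // 500: the UPSERT contributes 400, not 200
}
```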

* graphql, node, rpc: fix typos in comments (#20824)

* eth: improve shutdown synchronization (#20695)

* eth: improve shutdown synchronization

Most goroutines started by eth.Ethereum didn't have any shutdown sync at
all, which lead to weird error messages when quitting the client.

This change improves the clean shutdown path by stopping all internal
components in dependency order and waiting for them to actually be
stopped before shutdown is considered done. In particular, we now stop
everything related to peers before stopping 'resident' parts such as
core.BlockChain (a sketch of the pattern follows this change list).

* eth: rewrite sync controller

* eth: remove sync start debug message

* eth: notify chainSyncer about new peers after handshake

* eth: move downloader.Cancel call into chainSyncer

* eth: make post-sync block broadcast synchronous

* eth: add comments

* core: change blockchain stop message

* eth: change closeBloomHandler channel type
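
As referenced above, a minimal stop-and-wait sketch of the shutdown ordering (illustrative types, not the actual eth internals):

```go
package main

import (
	"fmt"
	"sync"
)

// component is a toy stand-in for a subsystem with background goroutines.
type component struct {
	name string
	wg   sync.WaitGroup
	quit chan struct{}
}

func newComponent(name string) *component {
	c := &component{name: name, quit: make(chan struct{})}
	c.wg.Add(1)
	go func() {
		defer c.wg.Done()
		<-c.quit // stand-in for the component's work loop
	}()
	return c
}

// stop signals the component and blocks until its goroutines have exited,
// so shutdown is only considered done when everything has actually stopped.
func (c *component) stop() {
	close(c.quit)
	c.wg.Wait()
	fmt.Println("stopped:", c.name)
}

func main() {
	peers, chain := newComponent("peer handler"), newComponent("blockchain")
	// Dependency order: peer-facing parts go down before 'resident' parts.
	peers.stop()
	chain.stop()
}
```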

* eth/filters: fix typo on unindexedLogs function's comment (#20827)

* core: bump txpool tx max size to 128KB

* snapshotter/tests: verify snapdb post-state against trie (#20812)

* core/state/snapshot: basic trie-to-hash implementation

* tests: validate snapshot after test

* core/state/snapshot: fix review concerns

* cmd, consensus: add option to disable mmap for DAG caches/datasets (#20484)

* cmd, consensus: add option to disable mmap for DAG caches/datasets

* consensus: add benchmarks for mmap with/without lock

* cmd/clef: add newaccount command (#20782)

* cmd/clef: add newaccount command

* cmd/clef: document clef_New, update API versioning

* Update cmd/clef/intapi_changelog.md

Co-Authored-By: ligi <ligi@ligi.de>

* Update signer/core/uiapi.go

Co-Authored-By: ligi <ligi@ligi.de>

Co-authored-by: ligi <ligi@ligi.de>

* eth: add debug_accountRange API (#19645)

This new API allows reading accounts and their content by address range.

Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Felix Lange <fjl@twurst.com>

* travis: allow cocoapods deploy to fail (#20833)

* metrics: improve TestTimerFunc (#20818)

The test failed due to what appears to be fluctuations in time.Sleep, which is
not the actual method under test. This change modifies the test so we compare the
metered Max to the actual elapsed time instead of the desired sleep time.

* README: update private network genesis spec with istanbul (#20841)

* add istanbul and muirGlacier to genesis states in README

* remove muirGlacier, relocate istanbul

* cmd/evm: rework execution stats (#20792)

- Dump stats also for the --bench flag.
- From the memory stats, only show the number and size of allocations; this is what `go test -bench` shows. I doubt others like the number of GC runs are useful, but they can be added if requested.
- The memory stats now cover a single execution in the case of --bench.

* cmd/devp2p, cmd/wnode, whisper: add missing calls to Timer.Stop (#20843)

* p2p/server: add UDP port mapping goroutine to wait group (#20846)

* accounts/abi faster unpacking of int256 (#20850)

* p2p/discv5: add missing Timer.Stop calls (#20853)

* miner/worker: add missing timer.Stop call (#20857)

* cmd/geth: fix bad genesis test (#20860)

* eth/filters: add missing Ticker.Stop call (#20862)

* eth/fetcher: add missing timer.Stop calls (#20861)

* event: add missing timer.Stop call in TestFeed (#20868)

* metrics: add missing calls to Ticker.Stop in tests (#20866)

* ethstats: add missing Ticker.Stop call (#20867)

* p2p/discv5, p2p/testing: add missing Timer.Stop calls in tests (#20869)

* core: add missing Timer.Stop call in TestLogReorgs (#20870)

* rpc: add missing timer.Stop calls in websocket tests (#20863)

* crypto/ecies: improve concatKDF (#20836)

This removes a bunch of weird code around the counter overflow check in
concatKDF and makes it actually work for different hash output sizes.

The overflow check worked as follows: concatKDF applies the hash function N
times, where N is roundup(kdLen, hashsize) / hashsize. N should not
overflow 32 bits because that would lead to a repetition in the KDF output.

A couple issues with the overflow check:

- It used hash.BlockSize, which is wrong because the block size refers to the
  input of the hash function, not its output. Luckily, all standard hash
  functions have a block size that's greater than the output size, so
  concatKDF didn't crash, it just generated too much key material.
- The check used big.Int to compare against 2^32-1.
- The calculation could still overflow before reaching the check.

The new code in concatKDF doesn't check for overflow. Instead, there is a
new check on ECIESParams which ensures that params.KeyLen is < 512. This
removes any possibility of overflow.
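
A compact sketch of the KDF shape described above (illustrative, not the exact crypto/ecies code): hash a 32-bit big-endian counter alongside the shared secret until enough key material is produced, with the key buffer pre-allocated up front.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// concatKDF derives kdLen bytes from the shared secret z. With kdLen capped
// well below 512 bytes, the 32-bit counter can never get close to overflowing.
func concatKDF(z []byte, kdLen int) []byte {
	key := make([]byte, 0, kdLen+sha256.Size) // pre-allocated output buffer
	var counter [4]byte
	for i := uint32(1); len(key) < kdLen; i++ {
		binary.BigEndian.PutUint32(counter[:], i)
		h := sha256.New()
		h.Write(counter[:])
		h.Write(z)
		key = h.Sum(key) // append this round's digest
	}
	return key[:kdLen]
}

func main() {
	fmt.Printf("%x\n", concatKDF([]byte("shared secret"), 32))
}
```

Note that the round count depends on the hash's output size (sha256.Size here), not its block size, which is exactly the distinction the original check got wrong.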

There are a couple of miscellaneous improvements bundled in with this
change:

- The key buffer is pre-allocated instead of appending the hash output
  to an initially empty slice.
- The code that uses concatKDF to derive keys is now shared between Encrypt
  and Decrypt.
- There was a redundant invocation of IsOnCurve in Decrypt. This is now removed
  because elliptic.Unmarshal already checks whether the input is a valid curve
  point since Go 1.5.

Co-authored-by: Felix Lange <fjl@twurst.com>

* rpc: metrics for JSON-RPC method calls (#20847)

This adds a couple of metrics for tracking the timing
and frequency of method calls:

- rpc/requests gauge counts all requests
- rpc/success gauge counts requests which return err == nil
- rpc/failure gauge counts requests which return err != nil
- rpc/duration/all timer tracks timing of all requests
- rpc/duration/<method>/<success/failure> tracks per-method timing
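
A toy sketch of the per-method timing scheme (a stand-in map instead of the real metrics registry):

```go
package main

import (
	"fmt"
	"time"
)

// timers stands in for a metrics registry keyed by metric name.
var timers = map[string][]time.Duration{}

// record logs the duration of one call under both the aggregate metric and
// the per-method success/failure metric.
func record(method string, start time.Time, err error) {
	outcome := "success"
	if err != nil {
		outcome = "failure"
	}
	elapsed := time.Since(start)
	timers["rpc/duration/all"] = append(timers["rpc/duration/all"], elapsed)
	name := fmt.Sprintf("rpc/duration/%s/%s", method, outcome)
	timers[name] = append(timers[name], elapsed)
}

func main() {
	start := time.Now()
	// ... handle the JSON-RPC call here ...
	record("eth_call", start, nil)
	fmt.Println(len(timers["rpc/duration/eth_call/success"])) // 1
}
```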

* mobile: use bind.NewKeyedTransactor instead of duplicating (#20888)

It's better to reuse the existing code to create a keyed transactor
than to rewrite the logic again.

* internal/ethapi: add CallArgs.ToMessage method (#20854)

ToMessage is used to convert between ethapi.CallArgs and types.Message.
It reduces the length of the DoCall method by about half by abstracting out
the conversion between the CallArgs and the Message. This should improve the
code's maintainability and reusability.
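
A hedged sketch of the helper's shape (local stand-in types, not the real ethapi or core/types definitions): defaults are applied in one place so the caller only deals with a ready-made message.

```go
package main

import (
	"fmt"
	"math/big"
)

// callArgs mimics the optional RPC call fields; nil means "not specified".
type callArgs struct {
	Gas      *uint64
	GasPrice *big.Int
	Value    *big.Int
	Data     []byte
}

// message is the fully-defaulted form consumed by the EVM call path.
type message struct {
	gas      uint64
	gasPrice *big.Int
	value    *big.Int
	data     []byte
}

// toMessage fills in eth_call defaults: the global gas cap, zero gas price
// and zero value.
func (args callArgs) toMessage(globalGasCap uint64) message {
	gas := globalGasCap
	if args.Gas != nil {
		gas = *args.Gas
	}
	gasPrice := new(big.Int)
	if args.GasPrice != nil {
		gasPrice = args.GasPrice
	}
	value := new(big.Int)
	if args.Value != nil {
		value = args.Value
	}
	return message{gas, gasPrice, value, args.Data}
}

func main() {
	msg := callArgs{}.toMessage(25000000)
	fmt.Println(msg.gas, msg.gasPrice, msg.value) // 25000000 0 0
}
```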

* eth, les: fix flaky tests (#20897)

* les: fix flaky test

* eth: fix flaky test

* cmd/geth: enable metrics for geth import command (#20738)

* cmd/geth: enable metrics for geth import command

* cmd/geth: enable metrics-flags for import command

* core/vm: use a callcontext struct (#20761)

* core/vm: use a callcontext struct

* core/vm: fix tests

* core/vm/runtime: benchmark

* core/vm: make intpool push inlineable, unexpose callcontext

* docs/audits: add discv5 protocol audits from LA and C53 (#20898)

* .github: change gitter reference to discord link in issue template (#20896)

* couple of fixes to docs in clef (#20900)

* p2p/discover: add initial discovery v5 implementation (#20750)

This adds an implementation of the current discovery v5 spec.
There is full integration with cmd/devp2p and enode.Iterator in this
version. In theory we could enable the new protocol as a replacement of
discovery v4 at any time. In practice, there will likely be a few more
changes to the spec and implementation before this can happen.

* build: upgrade to golangci-lint 1.24.0 (#20901)

* accounts/scwallet: remove unnecessary uses of fmt.Sprintf

* cmd/puppeth: remove unnecessary uses of fmt.Sprintf

* p2p/discv5: remove unnecessary use of fmt.Sprintf

* whisper/mailserver: remove unnecessary uses of fmt.Sprintf

* core: goimports -w tx_pool_test.go

* eth/downloader: goimports -w downloader_test.go

* build: upgrade to golangci-lint 1.24.0

* accounts/abi/bind: refactored topics (#20851)

* accounts/abi/bind: refactored topics

* accounts/abi/bind: use store function to remove code duplication

* accounts/abi/bind: removed unused type defs

* accounts/abi/bind: error on tuples in topics

* Cosmetic changes to restart travis build

Co-authored-by: Guillaume Ballet <gballet@gmail.com>

* node: allow websocket and HTTP on the same port (#20810)

This change makes it possible to run geth with JSON-RPC over HTTP and
WebSocket on the same TCP port. The default port for WebSocket
is still 8546.

    geth --rpc --rpcport 8545 --ws --wsport 8545

This also removes a lot of deprecated API surface from package rpc.
The rpc package is now purely about serving JSON-RPC and no longer
provides a way to start an HTTP server.
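
A minimal sketch of serving both protocols on one listener with the standard library only (the real node package wires this into its rpc handlers): dispatch on the Upgrade header.

```go
package main

import (
	"net/http"
	"strings"
)

// isWebsocket reports whether the request asks for a websocket upgrade.
func isWebsocket(r *http.Request) bool {
	return strings.EqualFold(r.Header.Get("Upgrade"), "websocket")
}

// newDualHandler routes websocket upgrades to ws and everything else to httpH,
// letting JSON-RPC over HTTP and WebSocket share a single TCP port.
func newDualHandler(httpH, ws http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if isWebsocket(r) {
			ws.ServeHTTP(w, r)
			return
		}
		httpH.ServeHTTP(w, r)
	})
}

func main() {
	httpH := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { w.Write([]byte("http\n")) })
	ws := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { w.Write([]byte("ws\n")) })
	_ = http.ListenAndServe("127.0.0.1:8545", newDualHandler(httpH, ws))
}
```

The tests at the bottom of this file (TestWebsocketHTTPOnSamePort_*) exercise exactly this dispatch.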

* crypto: improve error messages in LoadECDSA (#20718)

This improves error messages when the file is too short or too long.
Also rewrite the test for SaveECDSA because LoadECDSA has its own
test now.

Co-authored-by: Felix Lange <fjl@twurst.com>

* changed date of rpcstack.go since it is a new file (#20904)

* accounts/abi/bind: fixed erroneous filtering of negative ints (#20865)

* accounts/abi/bind: fixed erroneous packing of negative ints

* accounts/abi/bind: added test cases for negative ints in topics

* accounts/abi/bind: fixed genIntType for go 1.12

* accounts/abi: minor nitpick

* cmd: deprecate --testnet, use named networks instead (#20852)

* cmd/utils: make goerli the default testnet

* cmd/geth: explicitly rename testnet to ropsten

* core: explicitly rename testnet to ropsten

* params: explicitly rename testnet to ropsten

* cmd: explicitly rename testnet to ropsten

* miner: explicitly rename testnet to ropsten

* mobile: allow for returning the goerli spec

* tests: explicitly rename testnet to ropsten

* docs: update readme to reflect changes to the default testnet

* mobile: allow for configuring goerli and rinkeby nodes

* cmd/geth: revert --testnet back to ropsten and mark as legacy

* cmd/util: mark --testnet flag as deprecated

* docs: update readme to properly reflect the 3 testnets

* cmd/utils: add an explicit deprecation warning on startup

* cmd/utils: swap goerli and ropsten in usage

* cmd/geth: swap goerli and ropsten in usage

* cmd/geth: if running a known preset, log it for convenience

* docs: improve readme on usage of ropsten's testnet datadir

* cmd/utils: check if legacy `testnet` datadir exists for ropsten

* cmd/geth: check for legacy testnet path in console command

* cmd/geth: use switch statement for complex conditions in main

* cmd/geth: move known preset log statement to the very top

* cmd/utils: create new ropsten configurations in the ropsten datadir

* cmd/utils: makedatadir should check for existing testnet dir

* cmd/geth: add legacy testnet flag to the copy db command

* cmd/geth: add legacy testnet flag to the inspect command

* les, les/lespay/client: add service value statistics and API (#20837)

This PR adds service value measurement statistics to the light client. It
also adds a private API that makes these statistics accessible. A follow-up
PR will add the new server pool which uses these statistics to select
servers with good performance.

This document describes the function of the new components:
https://gist.github.com/zsfelfoldi/3c7ace895234b7b345ab4f71dab102d4

Co-authored-by: rjl493456442 <garyrong0905@gmail.com>

* README: update min go version to 1.13 (#20911)

* travis, appveyor, build, Dockerfile: bump Go to 1.14.2 (#20913)

* travis, appveyor, build, Dockerfile: bump Go to 1.14.2

* travis, appveyor: force GO111MODULE=on for every build

* core/rawdb: fix data race between Retrieve and Close (#20919)

* core/rawdb: fixed data race between retrieve and close

closes https://github.com/ethereum/go-ethereum/issues/20420

* core/rawdb: use non-atomic load while holding mutex

* all: simplify and fix database iteration with prefix/start (#20808)

* core/state/snapshot: start fixing disk iterator seek

* ethdb, rawdb, leveldb, memorydb: implement iterators with prefix and start

* les, core/state/snapshot: iterator fixes

* all: remove two iterator methods

* all: rename Iteratee.NewIteratorWith -> NewIterator

* ethdb: fix review concerns

* params: update CHTs for the 1.9.13 release

* params: release Geth v1.9.13

* added some missing files

* post-rebase fixups

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: gary rong <garyrong0905@gmail.com>
Co-authored-by: Alex Willmer <alex@moreati.org.uk>
Co-authored-by: meowsbits <45600330+meowsbits@users.noreply.github.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: rene <41963722+renaynay@users.noreply.github.com>
Co-authored-by: Ha ĐANG <dvietha@gmail.com>
Co-authored-by: Hanjiang Yu <42531996+de1acr0ix@users.noreply.github.com>
Co-authored-by: ligi <ligi@ligi.de>
Co-authored-by: Wenbiao Zheng <delweng@gmail.com>
Co-authored-by: Adam Schmideg <adamschmideg@users.noreply.github.com>
Co-authored-by: Jeff Wentworth <jeff@curvegrid.com>
Co-authored-by: Paweł Bylica <chfast@gmail.com>
Co-authored-by: ucwong <ucwong@126.com>
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Luke Champine <luke.champine@gmail.com>
Co-authored-by: Boqin Qin <Bobbqqin@gmail.com>
Co-authored-by: William Morriss <wjmelements@gmail.com>
Co-authored-by: Guillaume Ballet <gballet@gmail.com>
Co-authored-by: Raw Pong Ghmoa <58883403+q9f@users.noreply.github.com>
Co-authored-by: Felföldi Zsolt <zsfelfoldi@gmail.com>
2020-04-19 18:31:47 +01:00

// Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package node

import (
	"errors"
	"io/ioutil"
	"net/http"
	"os"
	"reflect"
	"testing"
	"time"

	"github.com/ledgerwatch/turbo-geth/crypto"
	"github.com/ledgerwatch/turbo-geth/p2p"
	"github.com/ledgerwatch/turbo-geth/rpc"
	"github.com/stretchr/testify/assert"
)

var (
	testNodeKey, _ = crypto.GenerateKey()
)

func testNodeConfig() *Config {
	return &Config{
		Name: "test node",
		P2P:  p2p.Config{PrivateKey: testNodeKey},
	}
}

// Tests that an empty protocol stack can be started, restarted and stopped.
func TestNodeLifeCycle(t *testing.T) {
	stack, err := New(testNodeConfig())
	if err != nil {
		t.Fatalf("failed to create protocol stack: %v", err)
	}
	defer stack.Close()

	// Ensure that a stopped node can be stopped again
	for i := 0; i < 3; i++ {
		if err := stack.Stop(); err != ErrNodeStopped {
			t.Fatalf("iter %d: stop failure mismatch: have %v, want %v", i, err, ErrNodeStopped)
		}
	}
	// Ensure that a node can be successfully started, but only once
	if err := stack.Start(); err != nil {
		t.Fatalf("failed to start node: %v", err)
	}
	if err := stack.Start(); err != ErrNodeRunning {
		t.Fatalf("start failure mismatch: have %v, want %v", err, ErrNodeRunning)
	}
	// Ensure that a node can be restarted arbitrarily many times
	for i := 0; i < 3; i++ {
		if err := stack.Restart(); err != nil {
			t.Fatalf("iter %d: failed to restart node: %v", i, err)
		}
	}
	// Ensure that a node can be stopped, but only once
	if err := stack.Stop(); err != nil {
		t.Fatalf("failed to stop node: %v", err)
	}
	if err := stack.Stop(); err != ErrNodeStopped {
		t.Fatalf("stop failure mismatch: have %v, want %v", err, ErrNodeStopped)
	}
}

// Tests that if the data dir is already in use, an appropriate error is returned.
func TestNodeUsedDataDir(t *testing.T) {
	// Create a temporary folder to use as the data directory
	dir, err := ioutil.TempDir("", "")
	if err != nil {
		t.Fatalf("failed to create temporary data directory: %v", err)
	}
	defer os.RemoveAll(dir)

	// Create a new node based on the data directory
	original, err := New(&Config{DataDir: dir})
	if err != nil {
		t.Fatalf("failed to create original protocol stack: %v", err)
	}
	defer original.Close()
	if err := original.Start(); err != nil {
		t.Fatalf("failed to start original protocol stack: %v", err)
	}
	defer original.Stop()

	// Create a second node based on the same data directory and ensure failure
	duplicate, err := New(&Config{DataDir: dir})
	if err != nil {
		t.Fatalf("failed to create duplicate protocol stack: %v", err)
	}
	defer duplicate.Close()
	if err := duplicate.Start(); err != ErrDatadirUsed {
		t.Fatalf("duplicate datadir failure mismatch: have %v, want %v", err, ErrDatadirUsed)
	}
}

// Tests whether services can be registered and duplicates caught.
func TestServiceRegistry(t *testing.T) {
	stack, err := New(testNodeConfig())
	if err != nil {
		t.Fatalf("failed to create protocol stack: %v", err)
	}
	defer stack.Close()

	// Register a batch of unique services and ensure they start successfully
	services := []ServiceConstructor{NewNoopServiceA, NewNoopServiceB, NewNoopServiceC}
	for i, constructor := range services {
		if err := stack.Register(constructor); err != nil {
			t.Fatalf("service #%d: registration failed: %v", i, err)
		}
	}
	if err := stack.Start(); err != nil {
		t.Fatalf("failed to start original service stack: %v", err)
	}
	if err := stack.Stop(); err != nil {
		t.Fatalf("failed to stop original service stack: %v", err)
	}
	// Duplicate one of the services and retry starting the node
	if err := stack.Register(NewNoopServiceB); err != nil {
		t.Fatalf("duplicate registration failed: %v", err)
	}
	if err := stack.Start(); err == nil {
		t.Fatalf("duplicate service started")
	} else {
		if _, ok := err.(*DuplicateServiceError); !ok {
			t.Fatalf("duplicate error mismatch: have %v, want %v", err, DuplicateServiceError{})
		}
	}
}

// Tests that registered services get started and stopped correctly.
func TestServiceLifeCycle(t *testing.T) {
	stack, err := New(testNodeConfig())
	if err != nil {
		t.Fatalf("failed to create protocol stack: %v", err)
	}
	defer stack.Close()

	// Register a batch of life-cycle instrumented services
	services := map[string]InstrumentingWrapper{
		"A": InstrumentedServiceMakerA,
		"B": InstrumentedServiceMakerB,
		"C": InstrumentedServiceMakerC,
	}
	started := make(map[string]bool)
	stopped := make(map[string]bool)

	for id, maker := range services {
		id := id // Closure for the constructor
		constructor := func(*ServiceContext) (Service, error) {
			return &InstrumentedService{
				startHook: func(*p2p.Server) { started[id] = true },
				stopHook:  func() { stopped[id] = true },
			}, nil
		}
		if err := stack.Register(maker(constructor)); err != nil {
			t.Fatalf("service %s: registration failed: %v", id, err)
		}
	}
	// Start the node and check that all services are running
	if err := stack.Start(); err != nil {
		t.Fatalf("failed to start protocol stack: %v", err)
	}
	for id := range services {
		if !started[id] {
			t.Fatalf("service %s: freshly started service not running", id)
		}
		if stopped[id] {
			t.Fatalf("service %s: freshly started service already stopped", id)
		}
	}
	// Stop the node and check that all services have been stopped
	if err := stack.Stop(); err != nil {
		t.Fatalf("failed to stop protocol stack: %v", err)
	}
	for id := range services {
		if !stopped[id] {
			t.Fatalf("service %s: freshly terminated service still running", id)
		}
	}
}

// Tests that services are restarted cleanly as new instances.
func TestServiceRestarts(t *testing.T) {
	stack, err := New(testNodeConfig())
	if err != nil {
		t.Fatalf("failed to create protocol stack: %v", err)
	}
	defer stack.Close()

	// Define a service that does not support restarts
	var (
		running bool
		started int
	)
	constructor := func(*ServiceContext) (Service, error) {
		running = false
		return &InstrumentedService{
			startHook: func(*p2p.Server) {
				if running {
					panic("already running")
				}
				running = true
				started++
			},
		}, nil
	}
	// Register the service and start the protocol stack
	if err := stack.Register(constructor); err != nil {
		t.Fatalf("failed to register the service: %v", err)
	}
	if err := stack.Start(); err != nil {
		t.Fatalf("failed to start protocol stack: %v", err)
	}
	defer stack.Stop()

	if !running || started != 1 {
		t.Fatalf("running/started mismatch: have %v/%d, want true/1", running, started)
	}
	// Restart the stack a few times and check successful service restarts
	for i := 0; i < 3; i++ {
		if err := stack.Restart(); err != nil {
			t.Fatalf("iter %d: failed to restart stack: %v", i, err)
		}
	}
	if !running || started != 4 {
		t.Fatalf("running/started mismatch: have %v/%d, want true/4", running, started)
	}
}

// Tests that if a service fails to initialize itself, none of the other services
// will be allowed to even start.
func TestServiceConstructionAbortion(t *testing.T) {
	stack, err := New(testNodeConfig())
	if err != nil {
		t.Fatalf("failed to create protocol stack: %v", err)
	}
	defer stack.Close()

	// Define a batch of good services
	services := map[string]InstrumentingWrapper{
		"A": InstrumentedServiceMakerA,
		"B": InstrumentedServiceMakerB,
		"C": InstrumentedServiceMakerC,
	}
	started := make(map[string]bool)
	for id, maker := range services {
		id := id // Closure for the constructor
		constructor := func(*ServiceContext) (Service, error) {
			return &InstrumentedService{
				startHook: func(*p2p.Server) { started[id] = true },
			}, nil
		}
		if err := stack.Register(maker(constructor)); err != nil {
			t.Fatalf("service %s: registration failed: %v", id, err)
		}
	}
	// Register a service that fails to construct itself
	failure := errors.New("fail")
	failer := func(*ServiceContext) (Service, error) {
		return nil, failure
	}
	if err := stack.Register(failer); err != nil {
		t.Fatalf("failer registration failed: %v", err)
	}
	// Start the protocol stack and ensure none of the services get started
	for i := 0; i < 100; i++ {
		if err := stack.Start(); err != failure {
			t.Fatalf("iter %d: stack startup failure mismatch: have %v, want %v", i, err, failure)
		}
		for id := range services {
			if started[id] {
				t.Fatalf("service %s: should not have been started", id)
			}
			delete(started, id)
		}
	}
}

// Tests that if a service fails to start, all others started before it will be
// shut down.
func TestServiceStartupAbortion(t *testing.T) {
	stack, err := New(testNodeConfig())
	if err != nil {
		t.Fatalf("failed to create protocol stack: %v", err)
	}
	defer stack.Close()

	// Register a batch of good services
	services := map[string]InstrumentingWrapper{
		"A": InstrumentedServiceMakerA,
		"B": InstrumentedServiceMakerB,
		"C": InstrumentedServiceMakerC,
	}
	started := make(map[string]bool)
	stopped := make(map[string]bool)

	for id, maker := range services {
		id := id // Closure for the constructor
		constructor := func(*ServiceContext) (Service, error) {
			return &InstrumentedService{
				startHook: func(*p2p.Server) { started[id] = true },
				stopHook:  func() { stopped[id] = true },
			}, nil
		}
		if err := stack.Register(maker(constructor)); err != nil {
			t.Fatalf("service %s: registration failed: %v", id, err)
		}
	}
	// Register a service that fails to start
	failure := errors.New("fail")
	failer := func(*ServiceContext) (Service, error) {
		return &InstrumentedService{
			start: failure,
		}, nil
	}
	if err := stack.Register(failer); err != nil {
		t.Fatalf("failer registration failed: %v", err)
	}
	// Start the protocol stack and ensure all started services stop
	for i := 0; i < 100; i++ {
		if err := stack.Start(); err != failure {
			t.Fatalf("iter %d: stack startup failure mismatch: have %v, want %v", i, err, failure)
		}
		for id := range services {
			if started[id] && !stopped[id] {
				t.Fatalf("service %s: started but not stopped", id)
			}
			delete(started, id)
			delete(stopped, id)
		}
	}
}

// Tests that even if a registered service fails to shut down cleanly, it does
// not influence the rest of the shutdown invocations.
func TestServiceTerminationGuarantee(t *testing.T) {
	stack, err := New(testNodeConfig())
	if err != nil {
		t.Fatalf("failed to create protocol stack: %v", err)
	}
	defer stack.Close()

	// Register a batch of good services
	services := map[string]InstrumentingWrapper{
		"A": InstrumentedServiceMakerA,
		"B": InstrumentedServiceMakerB,
		"C": InstrumentedServiceMakerC,
	}
	started := make(map[string]bool)
	stopped := make(map[string]bool)

	for id, maker := range services {
		id := id // Closure for the constructor
		constructor := func(*ServiceContext) (Service, error) {
			return &InstrumentedService{
				startHook: func(*p2p.Server) { started[id] = true },
				stopHook:  func() { stopped[id] = true },
			}, nil
		}
		if err := stack.Register(maker(constructor)); err != nil {
			t.Fatalf("service %s: registration failed: %v", id, err)
		}
	}
	// Register a service that fails to shut down cleanly
	failure := errors.New("fail")
	failer := func(*ServiceContext) (Service, error) {
		return &InstrumentedService{
			stop: failure,
		}, nil
	}
	if err := stack.Register(failer); err != nil {
		t.Fatalf("failer registration failed: %v", err)
	}
	// Start the protocol stack, and ensure that a failing shut down terminates all
	for i := 0; i < 100; i++ {
		// Start the stack and make sure all is online
		if err := stack.Start(); err != nil {
			t.Fatalf("iter %d: failed to start protocol stack: %v", i, err)
		}
		for id := range services {
			if !started[id] {
				t.Fatalf("iter %d, service %s: service not running", i, id)
			}
			if stopped[id] {
				t.Fatalf("iter %d, service %s: service already stopped", i, id)
			}
		}
		// Stop the stack, verify failure and check all terminations
		err := stack.Stop()
		if err, ok := err.(*StopError); !ok {
			t.Fatalf("iter %d: termination failure mismatch: have %v, want StopError", i, err)
		} else {
			failer := reflect.TypeOf(&InstrumentedService{})
			if err.Services[failer] != failure {
				t.Fatalf("iter %d: failer termination failure mismatch: have %v, want %v", i, err.Services[failer], failure)
			}
			if len(err.Services) != 1 {
				t.Fatalf("iter %d: failure count mismatch: have %d, want %d", i, len(err.Services), 1)
			}
		}
		for id := range services {
			if !stopped[id] {
				t.Fatalf("iter %d, service %s: service not terminated", i, id)
			}
			delete(started, id)
			delete(stopped, id)
		}
	}
}

// TestServiceRetrieval tests that individual services can be retrieved.
func TestServiceRetrieval(t *testing.T) {
	// Create a simple stack and register two service types
	stack, err := New(testNodeConfig())
	if err != nil {
		t.Fatalf("failed to create protocol stack: %v", err)
	}
	defer stack.Close()

	if err := stack.Register(NewNoopService); err != nil {
		t.Fatalf("noop service registration failed: %v", err)
	}
	if err := stack.Register(NewInstrumentedService); err != nil {
		t.Fatalf("instrumented service registration failed: %v", err)
	}
	// Make sure none of the services can be retrieved until started
	var noopServ *NoopService
	if err := stack.Service(&noopServ); err != ErrNodeStopped {
		t.Fatalf("noop service retrieval mismatch: have %v, want %v", err, ErrNodeStopped)
	}
	var instServ *InstrumentedService
	if err := stack.Service(&instServ); err != ErrNodeStopped {
		t.Fatalf("instrumented service retrieval mismatch: have %v, want %v", err, ErrNodeStopped)
	}
	// Start the stack and ensure everything is retrievable now
	if err := stack.Start(); err != nil {
		t.Fatalf("failed to start stack: %v", err)
	}
	defer stack.Stop()

	if err := stack.Service(&noopServ); err != nil {
		t.Fatalf("noop service retrieval mismatch: have %v, want %v", err, nil)
	}
	if err := stack.Service(&instServ); err != nil {
		t.Fatalf("instrumented service retrieval mismatch: have %v, want %v", err, nil)
	}
}

// Tests that all protocols defined by individual services get launched.
func TestProtocolGather(t *testing.T) {
	stack, err := New(testNodeConfig())
	if err != nil {
		t.Fatalf("failed to create protocol stack: %v", err)
	}
	defer stack.Close()

	// Register a batch of services with some configured number of protocols
	services := map[string]struct {
		Count int
		Maker InstrumentingWrapper
	}{
		"zero": {0, InstrumentedServiceMakerA},
		"one":  {1, InstrumentedServiceMakerB},
		"many": {10, InstrumentedServiceMakerC},
	}
	for id, config := range services {
		protocols := make([]p2p.Protocol, config.Count)
		for i := 0; i < len(protocols); i++ {
			protocols[i].Name = id
			protocols[i].Version = uint(i)
		}
		constructor := func(*ServiceContext) (Service, error) {
			return &InstrumentedService{
				protocols: protocols,
			}, nil
		}
		if err := stack.Register(config.Maker(constructor)); err != nil {
			t.Fatalf("service %s: registration failed: %v", id, err)
		}
	}
	// Start the services and ensure all protocols start successfully
	if err := stack.Start(); err != nil {
		t.Fatalf("failed to start protocol stack: %v", err)
	}
	defer stack.Stop()

	protocols := stack.Server().Protocols
	if len(protocols) != 11 {
		t.Fatalf("mismatching number of protocols launched: have %d, want %d", len(protocols), 11)
	}
	for id, config := range services {
		for ver := 0; ver < config.Count; ver++ {
			launched := false
			for i := 0; i < len(protocols); i++ {
				if protocols[i].Name == id && protocols[i].Version == uint(ver) {
					launched = true
					break
				}
			}
			if !launched {
				t.Errorf("configured protocol not launched: %s v%d", id, ver)
			}
		}
	}
}

// Tests that all APIs defined by individual services get exposed.
func TestAPIGather(t *testing.T) {
	stack, err := New(testNodeConfig())
	if err != nil {
		t.Fatalf("failed to create protocol stack: %v", err)
	}
	defer stack.Close()

	// Register a batch of services with some configured APIs
	calls := make(chan string, 1)
	makeAPI := func(result string) *OneMethodAPI {
		return &OneMethodAPI{fun: func() { calls <- result }}
	}
	services := map[string]struct {
		APIs  []rpc.API
		Maker InstrumentingWrapper
	}{
		"Zero APIs": {
			[]rpc.API{}, InstrumentedServiceMakerA},
		"Single API": {
			[]rpc.API{
				{Namespace: "single", Version: "1", Service: makeAPI("single.v1"), Public: true},
			}, InstrumentedServiceMakerB},
		"Many APIs": {
			[]rpc.API{
				{Namespace: "multi", Version: "1", Service: makeAPI("multi.v1"), Public: true},
				{Namespace: "multi.v2", Version: "2", Service: makeAPI("multi.v2"), Public: true},
				{Namespace: "multi.v2.nested", Version: "2", Service: makeAPI("multi.v2.nested"), Public: true},
			}, InstrumentedServiceMakerC},
	}
	for id, config := range services {
		config := config
		constructor := func(*ServiceContext) (Service, error) {
			return &InstrumentedService{apis: config.APIs}, nil
		}
		if err := stack.Register(config.Maker(constructor)); err != nil {
			t.Fatalf("service %s: registration failed: %v", id, err)
		}
	}
	// Start the services and ensure all APIs start successfully
	if err := stack.Start(); err != nil {
		t.Fatalf("failed to start protocol stack: %v", err)
	}
	defer stack.Stop()

	// Connect to the RPC server and verify the various registered endpoints
	client, err := stack.Attach()
	if err != nil {
		t.Fatalf("failed to connect to the inproc API server: %v", err)
	}
	defer client.Close()

	tests := []struct {
		Method string
		Result string
	}{
		{"single_theOneMethod", "single.v1"},
		{"multi_theOneMethod", "multi.v1"},
		{"multi.v2_theOneMethod", "multi.v2"},
		{"multi.v2.nested_theOneMethod", "multi.v2.nested"},
	}
	for i, test := range tests {
		if err := client.Call(nil, test.Method); err != nil {
			t.Errorf("test %d: API request failed: %v", i, err)
		}
		select {
		case result := <-calls:
			if result != test.Result {
				t.Errorf("test %d: result mismatch: have %s, want %s", i, result, test.Result)
			}
		case <-time.After(time.Second):
			t.Fatalf("test %d: rpc execution timeout", i)
		}
	}
}
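
// Tests that a request carrying websocket upgrade headers is routed to the
// websocket handler when HTTP and websocket share the same port.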
func TestWebsocketHTTPOnSamePort_WebsocketRequest(t *testing.T) {
	node := startHTTP(t)
	defer node.stopHTTP()

	wsReq, err := http.NewRequest(http.MethodGet, "http://127.0.0.1:7453", nil)
	if err != nil {
		t.Error("could not issue new http request ", err)
	}
	wsReq.Header.Set("Connection", "upgrade")
	wsReq.Header.Set("Upgrade", "websocket")
	wsReq.Header.Set("Sec-WebSocket-Version", "13")
	wsReq.Header.Set("Sec-Websocket-Key", "SGVsbG8sIHdvcmxkIQ==")

	resp := doHTTPRequest(t, wsReq)
	assert.Equal(t, "websocket", resp.Header.Get("Upgrade"))
}
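
// Tests that a plain HTTP request on the shared port is served by the regular
// HTTP handler.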
func TestWebsocketHTTPOnSamePort_HTTPRequest(t *testing.T) {
	node := startHTTP(t)
	defer node.stopHTTP()

	httpReq, err := http.NewRequest(http.MethodGet, "http://127.0.0.1:7453", nil)
	if err != nil {
		t.Error("could not issue new http request ", err)
	}
	httpReq.Header.Set("Accept-Encoding", "gzip")

	resp := doHTTPRequest(t, httpReq)
	assert.Equal(t, "gzip", resp.Header.Get("Content-Encoding"))
}
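
// startHTTP creates and starts a node that serves both HTTP and websocket
// JSON-RPC on port 7453, failing the test if the endpoint cannot be started.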
func startHTTP(t *testing.T) *Node {
	conf := &Config{HTTPPort: 7453, WSPort: 7453}
	node, err := New(conf)
	if err != nil {
		t.Error("could not create a new node ", err)
	}
	err = node.startHTTP("127.0.0.1:7453", []rpc.API{}, []string{}, []string{}, []string{}, rpc.HTTPTimeouts{}, []string{})
	if err != nil {
		t.Error("could not start http service on node ", err)
	}
	return node
}
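
// doHTTPRequest executes the given request and fails the test if the round
// trip returns an error.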
func doHTTPRequest(t *testing.T, req *http.Request) *http.Response {
	client := &http.Client{}
	resp, err := client.Do(req)
	if err != nil {
		t.Error("could not issue a GET request to the given endpoint", err)
	}
	return resp
}