prysm-pulse/beacon-chain/db/kv/blocks.go
Last commit: terence tsao · Update run time to v0.9.3 (#4154) · 2020-01-07

package kv

import (
	"bytes"
	"context"
	"fmt"

	"github.com/boltdb/bolt"
	"github.com/pkg/errors"
	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
	"github.com/prysmaticlabs/go-ssz"
	"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
	"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
	"github.com/prysmaticlabs/prysm/shared/bytesutil"
	"github.com/prysmaticlabs/prysm/shared/params"
	"github.com/prysmaticlabs/prysm/shared/sliceutil"
	"go.opencensus.io/trace"
)

// Block retrieval by root.
func (k *Store) Block(ctx context.Context, blockRoot [32]byte) (*ethpb.SignedBeaconBlock, error) {
	ctx, span := trace.StartSpan(ctx, "BeaconDB.Block")
	defer span.End()
	// Return block from cache if it exists.
	if v, ok := k.blockCache.Get(string(blockRoot[:])); v != nil && ok {
		return v.(*ethpb.SignedBeaconBlock), nil
	}
	var block *ethpb.SignedBeaconBlock
	err := k.db.View(func(tx *bolt.Tx) error {
		bkt := tx.Bucket(blocksBucket)
		enc := bkt.Get(blockRoot[:])
		if enc == nil {
			return nil
		}
		block = &ethpb.SignedBeaconBlock{}
		return decode(enc, block)
	})
	return block, err
}

// HeadBlock returns the latest canonical block in eth2.
func (k *Store) HeadBlock(ctx context.Context) (*ethpb.SignedBeaconBlock, error) {
	ctx, span := trace.StartSpan(ctx, "BeaconDB.HeadBlock")
	defer span.End()
	var headBlock *ethpb.SignedBeaconBlock
	err := k.db.View(func(tx *bolt.Tx) error {
		bkt := tx.Bucket(blocksBucket)
		headRoot := bkt.Get(headBlockRootKey)
		if headRoot == nil {
			return nil
		}
		enc := bkt.Get(headRoot)
		if enc == nil {
			return nil
		}
		headBlock = &ethpb.SignedBeaconBlock{}
		return decode(enc, headBlock)
	})
	return headBlock, err
}

// Blocks retrieves a list of beacon blocks by filter criteria.
func (k *Store) Blocks(ctx context.Context, f *filters.QueryFilter) ([]*ethpb.SignedBeaconBlock, error) {
	ctx, span := trace.StartSpan(ctx, "BeaconDB.Blocks")
	defer span.End()
	blocks := make([]*ethpb.SignedBeaconBlock, 0)
	err := k.db.View(func(tx *bolt.Tx) error {
		bkt := tx.Bucket(blocksBucket)
		// If no filter criteria are specified, return an error.
		if f == nil {
			return errors.New("must specify a filter criteria for retrieving blocks")
		}
		// Creates a list of indices from the passed in filter values, such as:
		// []byte("0x2093923") in the parent root indices bucket to be used for looking up
		// block roots that were stored under each of those indices for O(1) lookup.
		indicesByBucket, err := createBlockIndicesFromFilters(f)
		if err != nil {
			return errors.Wrap(err, "could not determine lookup indices")
		}
		// We retrieve block roots that match a filter criteria of slot ranges, if specified.
		filtersMap := f.Filters()
		rootsBySlotRange := fetchBlockRootsBySlotRange(
			tx.Bucket(blockSlotIndicesBucket),
			filtersMap[filters.StartSlot],
			filtersMap[filters.EndSlot],
			filtersMap[filters.StartEpoch],
			filtersMap[filters.EndEpoch],
		)
		// Once we have a list of block roots that correspond to each
		// lookup index, we find the intersection across all of them and use
		// that list of roots to look up the blocks. These blocks will
		// meet the filter criteria.
		indices := lookupValuesForIndices(indicesByBucket, tx)
		keys := rootsBySlotRange
		if len(indices) > 0 {
			// If we have found indices that meet the filter criteria, and there are also
			// block roots that meet the slot range filter criteria, we find the intersection
			// between these two sets of roots.
			if len(rootsBySlotRange) > 0 {
				joined := append([][][]byte{keys}, indices...)
				keys = sliceutil.IntersectionByteSlices(joined...)
			} else {
				// If we have found indices that meet the filter criteria, but there are no block roots
				// that meet the slot range filter criteria, we find the intersection
				// of the regular filter indices.
				keys = sliceutil.IntersectionByteSlices(indices...)
			}
		}
		for i := 0; i < len(keys); i++ {
			encoded := bkt.Get(keys[i])
			block := &ethpb.SignedBeaconBlock{}
			if err := decode(encoded, block); err != nil {
				return err
			}
			blocks = append(blocks, block)
		}
		return nil
	})
	return blocks, err
}

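// The sketch below is a hedged usage example and is not part of the original
// file. It shows how the index-intersection lookup above might be driven by a
// caller, assuming the filters package exposes the chainable
// NewFilter/SetStartSlot/SetEndSlot builders used elsewhere in this repository.
func exampleBlocksBySlotRange(ctx context.Context, s *Store) ([]*ethpb.SignedBeaconBlock, error) {
	// Fetch every block whose slot falls in [100, 200]. The store converts the
	// slot range into left-padded index keys, gathers the matching roots, and
	// intersects them with any other index filters before decoding the blocks.
	f := filters.NewFilter().SetStartSlot(100).SetEndSlot(200)
	return s.Blocks(ctx, f)
}
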
// BlockRoots retrieves a list of beacon block roots by filter criteria.
func (k *Store) BlockRoots(ctx context.Context, f *filters.QueryFilter) ([][32]byte, error) {
	ctx, span := trace.StartSpan(ctx, "BeaconDB.BlockRoots")
	defer span.End()
	blockRoots := make([][32]byte, 0)
	err := k.db.View(func(tx *bolt.Tx) error {
		// If no filter criteria are specified, return an error.
		if f == nil {
			return errors.New("must specify a filter criteria for retrieving block roots")
		}
		// Creates a list of indices from the passed in filter values, such as:
		// []byte("0x2093923") in the parent root indices bucket to be used for looking up
		// block roots that were stored under each of those indices for O(1) lookup.
		indicesByBucket, err := createBlockIndicesFromFilters(f)
		if err != nil {
			return errors.Wrap(err, "could not determine lookup indices")
		}
		// We retrieve block roots that match a filter criteria of slot ranges, if specified.
		filtersMap := f.Filters()
		rootsBySlotRange := fetchBlockRootsBySlotRange(
			tx.Bucket(blockSlotIndicesBucket),
			filtersMap[filters.StartSlot],
			filtersMap[filters.EndSlot],
			filtersMap[filters.StartEpoch],
			filtersMap[filters.EndEpoch],
		)
		// Once we have a list of block roots that correspond to each
		// lookup index, we find the intersection across all of them.
		indices := lookupValuesForIndices(indicesByBucket, tx)
		keys := rootsBySlotRange
		if len(indices) > 0 {
			// If we have found indices that meet the filter criteria, and there are also
			// block roots that meet the slot range filter criteria, we find the intersection
			// between these two sets of roots.
			if len(rootsBySlotRange) > 0 {
				joined := append([][][]byte{keys}, indices...)
				keys = sliceutil.IntersectionByteSlices(joined...)
			} else {
				// If we have found indices that meet the filter criteria, but there are no block roots
				// that meet the slot range filter criteria, we find the intersection
				// of the regular filter indices.
				keys = sliceutil.IntersectionByteSlices(indices...)
			}
		}
		for i := 0; i < len(keys); i++ {
			blockRoots = append(blockRoots, bytesutil.ToBytes32(keys[i]))
		}
		return nil
	})
	if err != nil {
		return nil, errors.Wrap(err, "could not retrieve block roots")
	}
	return blockRoots, nil
}

// HasBlock checks if a block by root exists in the db.
func (k *Store) HasBlock(ctx context.Context, blockRoot [32]byte) bool {
	ctx, span := trace.StartSpan(ctx, "BeaconDB.HasBlock")
	defer span.End()
	if v, ok := k.blockCache.Get(string(blockRoot[:])); v != nil && ok {
		return true
	}
	exists := false
	// #nosec G104. Always returns nil.
	k.db.View(func(tx *bolt.Tx) error {
		bkt := tx.Bucket(blocksBucket)
		exists = bkt.Get(blockRoot[:]) != nil
		return nil
	})
	return exists
}

// DeleteBlock by block root.
func (k *Store) DeleteBlock(ctx context.Context, blockRoot [32]byte) error {
	ctx, span := trace.StartSpan(ctx, "BeaconDB.DeleteBlock")
	defer span.End()
	return k.db.Batch(func(tx *bolt.Tx) error {
		bkt := tx.Bucket(blocksBucket)
		enc := bkt.Get(blockRoot[:])
		if enc == nil {
			return nil
		}
		block := &ethpb.SignedBeaconBlock{}
		if err := decode(enc, block); err != nil {
			return err
		}
		indicesByBucket := createBlockIndicesFromBlock(block.Block)
		if err := deleteValueForIndices(indicesByBucket, blockRoot[:], tx); err != nil {
			return errors.Wrap(err, "could not delete root for DB indices")
		}
		k.blockCache.Del(string(blockRoot[:]))
		return bkt.Delete(blockRoot[:])
	})
}

// DeleteBlocks by block roots.
func (k *Store) DeleteBlocks(ctx context.Context, blockRoots [][32]byte) error {
	ctx, span := trace.StartSpan(ctx, "BeaconDB.DeleteBlocks")
	defer span.End()
	return k.db.Update(func(tx *bolt.Tx) error {
		bkt := tx.Bucket(blocksBucket)
		for _, blockRoot := range blockRoots {
			enc := bkt.Get(blockRoot[:])
			if enc == nil {
				return nil
			}
			block := &ethpb.SignedBeaconBlock{}
			if err := decode(enc, block); err != nil {
				return err
			}
			indicesByBucket := createBlockIndicesFromBlock(block.Block)
			if err := deleteValueForIndices(indicesByBucket, blockRoot[:], tx); err != nil {
				return errors.Wrap(err, "could not delete root for DB indices")
			}
			k.blockCache.Del(string(blockRoot[:]))
			if err := bkt.Delete(blockRoot[:]); err != nil {
				return err
			}
		}
		return nil
	})
}

// SaveBlock to the db.
func (k *Store) SaveBlock(ctx context.Context, signed *ethpb.SignedBeaconBlock) error {
	ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveBlock")
	defer span.End()
	blockRoot, err := ssz.HashTreeRoot(signed.Block)
	if err != nil {
		return err
	}
	if v, ok := k.blockCache.Get(string(blockRoot[:])); v != nil && ok {
		return nil
	}
	return k.db.Update(func(tx *bolt.Tx) error {
		bkt := tx.Bucket(blocksBucket)
		if existingBlock := bkt.Get(blockRoot[:]); existingBlock != nil {
			return nil
		}
		enc, err := encode(signed)
		if err != nil {
			return err
		}
		indicesByBucket := createBlockIndicesFromBlock(signed.Block)
		if err := updateValueForIndices(indicesByBucket, blockRoot[:], tx); err != nil {
			return errors.Wrap(err, "could not update DB indices")
		}
		k.blockCache.Set(string(blockRoot[:]), signed, int64(len(enc)))
		return bkt.Put(blockRoot[:], enc)
	})
}

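// The sketch below is a hedged usage example and is not part of the original
// file. It shows the round trip implied by SaveBlock and Block: saving a
// signed block computes its hash tree root, updates the slot/parent-root
// index buckets, primes the block cache, and makes the block retrievable by
// that root.
func exampleSaveAndFetch(ctx context.Context, s *Store, signed *ethpb.SignedBeaconBlock) (*ethpb.SignedBeaconBlock, error) {
	if err := s.SaveBlock(ctx, signed); err != nil {
		return nil, err
	}
	// The same root SaveBlock derived is used as the lookup key.
	root, err := ssz.HashTreeRoot(signed.Block)
	if err != nil {
		return nil, err
	}
	return s.Block(ctx, root)
}
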
// SaveBlocks via bulk updates to the db.
func (k *Store) SaveBlocks(ctx context.Context, blocks []*ethpb.SignedBeaconBlock) error {
	ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveBlocks")
	defer span.End()
	return k.db.Update(func(tx *bolt.Tx) error {
		for _, block := range blocks {
			blockRoot, err := ssz.HashTreeRoot(block.Block)
			if err != nil {
				return err
			}
			bkt := tx.Bucket(blocksBucket)
			if existingBlock := bkt.Get(blockRoot[:]); existingBlock != nil {
				return nil
			}
			enc, err := encode(block)
			if err != nil {
				return err
			}
			indicesByBucket := createBlockIndicesFromBlock(block.Block)
			if err := updateValueForIndices(indicesByBucket, blockRoot[:], tx); err != nil {
				return errors.Wrap(err, "could not update DB indices")
			}
			k.blockCache.Set(string(blockRoot[:]), block, int64(len(enc)))
			if err := bkt.Put(blockRoot[:], enc); err != nil {
				return err
			}
		}
		return nil
	})
}

// SaveHeadBlockRoot to the db.
func (k *Store) SaveHeadBlockRoot(ctx context.Context, blockRoot [32]byte) error {
	ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveHeadBlockRoot")
	defer span.End()
	return k.db.Update(func(tx *bolt.Tx) error {
		if tx.Bucket(stateBucket).Get(blockRoot[:]) == nil {
			return errors.New("no state found with head block root")
		}
		bucket := tx.Bucket(blocksBucket)
		return bucket.Put(headBlockRootKey, blockRoot[:])
	})
}

// GenesisBlock retrieves the genesis block of the beacon chain.
func (k *Store) GenesisBlock(ctx context.Context) (*ethpb.SignedBeaconBlock, error) {
	ctx, span := trace.StartSpan(ctx, "BeaconDB.GenesisBlock")
	defer span.End()
	var block *ethpb.SignedBeaconBlock
	err := k.db.View(func(tx *bolt.Tx) error {
		bkt := tx.Bucket(blocksBucket)
		root := bkt.Get(genesisBlockRootKey)
		enc := bkt.Get(root)
		if enc == nil {
			return nil
		}
		block = &ethpb.SignedBeaconBlock{}
		return decode(enc, block)
	})
	return block, err
}

// SaveGenesisBlockRoot to the db.
func (k *Store) SaveGenesisBlockRoot(ctx context.Context, blockRoot [32]byte) error {
	ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveGenesisBlockRoot")
	defer span.End()
	return k.db.Update(func(tx *bolt.Tx) error {
		bucket := tx.Bucket(blocksBucket)
		return bucket.Put(genesisBlockRootKey, blockRoot[:])
	})
}

// fetchBlockRootsBySlotRange looks into a boltDB bucket and performs a binary search
// range scan over sorted, left-padded byte keys between a start slot and an end slot.
// If both the start and end slot are the same, and are 0, the function returns nil.
func fetchBlockRootsBySlotRange(
	bkt *bolt.Bucket,
	startSlotEncoded interface{},
	endSlotEncoded interface{},
	startEpochEncoded interface{},
	endEpochEncoded interface{},
) [][]byte {
	var startSlot, endSlot uint64
	var ok bool
	if startSlot, ok = startSlotEncoded.(uint64); !ok {
		startSlot = 0
	}
	if endSlot, ok = endSlotEncoded.(uint64); !ok {
		endSlot = 0
	}
	startEpoch, startEpochOk := startEpochEncoded.(uint64)
	endEpoch, endEpochOk := endEpochEncoded.(uint64)
	if startEpochOk && endEpochOk {
		startSlot = helpers.StartSlot(startEpoch)
		endSlot = helpers.StartSlot(endEpoch) + params.BeaconConfig().SlotsPerEpoch - 1
	}
	min := []byte(fmt.Sprintf("%07d", startSlot))
	max := []byte(fmt.Sprintf("%07d", endSlot))
	var conditional func(key, max []byte) bool
	if endSlot == 0 {
		conditional = func(k, max []byte) bool {
			return k != nil
		}
	} else {
		conditional = func(k, max []byte) bool {
			return k != nil && bytes.Compare(k, max) <= 0
		}
	}
	roots := make([][]byte, 0)
	c := bkt.Cursor()
	for k, v := c.Seek(min); conditional(k, max); k, v = c.Next() {
		splitRoots := make([][]byte, 0)
		for i := 0; i < len(v); i += 32 {
			splitRoots = append(splitRoots, v[i:i+32])
		}
		roots = append(roots, splitRoots...)
	}
	return roots
}

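// The helper below is a hedged illustration and is not part of the original
// file. It shows the left-padded key encoding the range scan above relies on:
// slots are formatted as fixed-width decimal strings (slot 42 -> "0000042"),
// so bolt's lexicographic cursor order matches numeric slot order for slots
// that fit in seven digits, and Seek(min) / Compare(max) bound the scan.
func exampleSlotIndexKey(slot uint64) []byte {
	// Same encoding used by createBlockIndicesFromBlock and the min/max bounds above.
	return []byte(fmt.Sprintf("%07d", slot))
}
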
// createBlockIndicesFromBlock takes in a beacon block and returns
// a map of bolt DB index buckets to the corresponding index key for that
// block, such as (block slot indices bucket -> left-padded slot).
func createBlockIndicesFromBlock(block *ethpb.BeaconBlock) map[string][]byte {
	indicesByBucket := make(map[string][]byte)
	// Every index has a unique bucket for fast, binary-search
	// range scans for filtering across keys.
	buckets := [][]byte{
		blockSlotIndicesBucket,
	}
	indices := [][]byte{
		[]byte(fmt.Sprintf("%07d", block.Slot)),
	}
	if block.ParentRoot != nil && len(block.ParentRoot) > 0 {
		buckets = append(buckets, blockParentRootIndicesBucket)
		indices = append(indices, block.ParentRoot)
	}
	for i := 0; i < len(buckets); i++ {
		indicesByBucket[string(buckets[i])] = indices[i]
	}
	return indicesByBucket
}

// createBlockIndicesFromFilters takes in filter criteria and returns
// a list of byte keys used to retrieve the values stored
// for the indices from the DB.
//
// For blocks, these are lists of signing roots of block
// objects. If a certain filter criterion does not apply to
// blocks, an appropriate error is returned.
func createBlockIndicesFromFilters(f *filters.QueryFilter) (map[string][]byte, error) {
	indicesByBucket := make(map[string][]byte)
	for k, v := range f.Filters() {
		switch k {
		case filters.ParentRoot:
			parentRoot := v.([]byte)
			indicesByBucket[string(blockParentRootIndicesBucket)] = parentRoot
		case filters.StartSlot:
		case filters.EndSlot:
		case filters.StartEpoch:
		case filters.EndEpoch:
		default:
			return nil, fmt.Errorf("filter criterion %v not supported for blocks", k)
		}
	}
	return indicesByBucket, nil
}
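
// The sketch below is a hedged usage example and is not part of the original
// file. It exercises the ParentRoot index handled above to fetch the children
// of a given block, assuming the filters package exposes a chainable
// NewFilter/SetParentRoot builder as used by this package's tests.
func exampleChildBlocks(ctx context.Context, s *Store, parentRoot [32]byte) ([]*ethpb.SignedBeaconBlock, error) {
	// Blocks whose parent root matches are looked up via the
	// blockParentRootIndicesBucket rather than by scanning every block.
	return s.Blocks(ctx, filters.NewFilter().SetParentRoot(parentRoot[:]))
}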