prysm-pulse/validator/db/kv/migration_optimal_attester_protection.go
Raul Jordan 92932ae58e
[Feature] - Slashing Interchange Support (#8024)
* Change LowestSignedProposal to Also Return a Boolean for Slashing Protection (#8020)

* amend to use bools

* ineff assign

* comment

* Update `LowestSignedTargetEpoch` to include exists (#8004)

* Replace highest with lowerest

* Update validator/db/kv/attestation_history_v2.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* Update validator/db/kv/attestation_history_v2.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* Invert equality for saveLowestSourceTargetToDB

* Add eip checks to ensure epochs cant be lower than db ones

* Should be less than equal to

* Check if epoch exists in DB getters

* Revert run time checks

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>

* Export Attesting History for Slashing Interchange Standard (#8027)

* added in att history checks

* logic for export

* export return nil

* test for export atts

* round trip passes first try!

* rem println

* fix up tests

* pass test

* Validate Proposers Are Not Slashable With Regard to Data Within Slasher Interchange JSON (#8031)

* filter slashable blocks and atts in same json stub

* add filter blocks func

* add test for filtering out the bad public keys

* Export Slashing Protection History Via CLI (#8040)

* include cli entrypoint for history exports

* builds properly

* test to confirm we export the data as expected

* abstract helpers properly

* full test suite

* gaz

* better errors

* marshal ident

* Add the additional eip-3076 attestation checks (#7966)

* Replace highest with lowerest

* Update validator/db/kv/attestation_history_v2.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* Update validator/db/kv/attestation_history_v2.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* Invert equality for saveLowestSourceTargetToDB

* Add eip checks to ensure epochs cant be lower than db ones

* Should be less than equal to

* Check if epoch exists in DB getters

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* Add EIP-3076 Invariants for Proposer Slashing Protection (#8067)

* add invariant for proposer protection

* write different test cases

* pass tests

* Add EIP-3076 Interchange JSON CLI command to validator (#7880)

* Import JSON CLI

* CLI import

* f

* Begin adding new commands in slashing protection

* Move testing helpers to separate package

* Add command for importing slashing protection JSONs

* fix import cycle

* fix test

* Undo cleaning changes

* Improvements

* Add better prompts

* Fix prompt

* Fix

* Fix

* Fix

* Fix conflict

* Fix

* Fixes

* Fixes

* Fix exported func

* test func

* Fixes

* fix test

* simplify import and standardize with export

* add round trip test

* true integration test works

* fix up comments

* logrus

* better error

* fix build

* build fix

* Update validator/slashing-protection/cli_export.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Update validator/slashing-protection/cli_import.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* fmt

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Filter Slashable Attester Public Keys in Slashing Interchange Import (#8051)

* filter slashable attesters from the same JSON

* builds

* fix up initially broken test

* circular dep

* import fix

* giz

* added in attesting history package

* add test for filter slashable attester keys

* pass tests

* Save Slashable Keys to Disk in the Validator Client (#8082)

* begin db funcs

* add in test and bucket

* gaz

* rem changes to import

* ineff assign

* add godoc

* Properly Handle Duplicate Public Key Entries in Slashing Interchange Imports (#8089)

* Prevent Blacklisted Public Keys from Slashing Protection Imports from Having Duties at Runtime (#8084)

* tests on update duties

* ensure the slashable public keys are filtered out from update duties via test

* begin test

* attempt test

* rename for better context

* pass tests

* deep source

* ensure tests pass

* Check for Signing Root Mismatch When Submitting Proposals and Importing Proposals in Slashing Interchange (#8085)

* flexible signing root

* add test

* add tests

* fix test

* Preston's comments

* res tests

* ensure we consider the case for minimum proposals

* pass test

* tests passing

* rem unused code

* Set Empty Epochs in Between Attestations as FAR_FUTURE_EPOCH in Attesting History (#8113)

* set target data

* all tests passing

* ineff assign

* signing root

* Add Slashing Interchange, EIP-3076, Spec Tests to Prysm (#7858)

* Add interchange test framework

* add checks for attestations

* Import genesis root if necessary

* flexible signing root

* add test

* Sync

* fix up test build

* only 3 failing tests now

* two failing

* attempting to debug problems in conformity tests

* include latest changes

* protect test in validator/client passing

* pass tests

* imports

* spec tests passing with bazel

* gh archive link to spectests using tar.gz suffix

* rev

* rev more comment changes

* fix sha

* godoc

* add back save

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* Implement Migration for Unattested Epochs in Attesting History Database (#8121)

* migrate attesting history backbone done

* begin migration logic

* implement migration logic

* migration test

* add test

* migration logic

* bazel

* migration to its own file

* Handle empty blocks and attestations in interchange json and sort interchange json by public key (#8132)

* Handle empty blocks and attestations in interchange json

* add test

* sort json

* easier empty arrays

* pass test

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* builds

* more tests finally build

* Align Slashing Interchange With Optimized Slashing Protection (#8268)

* attestation history should account for multiple targets per source

* attempt at some fixes

* attempt some test fixes

* experimenting with sorting

* only one more failing test

* tests now pass

* slash protect tests passing

* only few tests now failing

* only spec tests failing now

* spec tests passing

* all tests passing

* helper function for verifying double votes

* use helper

* gaz

* deep source

* tests fixed

* expect specific number of times for domain data calls

* final comments

* Batch Save Imported EIP-3076 Attestations (#8304)

* optimize save

* test added

* add test for sad path

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>

* revert bad find replace

* add comment to db func

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Ivan Martinez <ivanthegreatdev@gmail.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: Shay Zluf <thezluf@gmail.com>
2021-01-22 17:12:22 -06:00
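For reference, the EIP-3076 slashing interchange JSON that these import/export commands handle has the following shape (interchange format version 5; the pubkey, signing roots, and genesis validators root below are elided placeholders, and the slot/epoch values are purely illustrative):

```json
{
  "metadata": {
    "interchange_format_version": "5",
    "genesis_validators_root": "0x…"
  },
  "data": [
    {
      "pubkey": "0x…",
      "signed_blocks": [
        { "slot": "81952", "signing_root": "0x…" }
      ],
      "signed_attestations": [
        { "source_epoch": "2290", "target_epoch": "3007", "signing_root": "0x…" }
      ]
    }
  ]
}
```

Note that numeric fields are encoded as decimal strings per the specification, and `signing_root` entries are optional.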


package kv

import (
	"bytes"
	"context"

	"github.com/prysmaticlabs/prysm/shared/bytesutil"
	"github.com/prysmaticlabs/prysm/shared/params"
	"github.com/prysmaticlabs/prysm/shared/progressutil"
	bolt "go.etcd.io/bbolt"
)

var migrationOptimalAttesterProtectionKey = []byte("optimal_attester_protection_0")
// Migrate attester protection to a more optimal format in the DB. Given we
// stored attesting history as large, roughly 2MB arrays per validator, we need
// to perform this migration differently than the rest, running each expensive
// bolt update in its own transaction to avoid keeping everything on the heap.
func (s *Store) migrateOptimalAttesterProtectionUp(ctx context.Context) error {
	publicKeyBytes := make([][]byte, 0)
	attestingHistoryBytes := make([][]byte, 0)
	numKeys := 0
	err := s.db.Update(func(tx *bolt.Tx) error {
		mb := tx.Bucket(migrationsBucket)
		if b := mb.Get(migrationOptimalAttesterProtectionKey); bytes.Equal(b, migrationCompleted) {
			return nil // Migration already completed.
		}
		bkt := tx.Bucket(deprecatedAttestationHistoryBucket)
		numKeys = bkt.Stats().KeyN
		return bkt.ForEach(func(k, v []byte) error {
			if v == nil {
				return nil
			}
			bucket := tx.Bucket(pubKeysBucket)
			pkBucket, err := bucket.CreateBucketIfNotExists(k)
			if err != nil {
				return err
			}
			_, err = pkBucket.CreateBucketIfNotExists(attestationSourceEpochsBucket)
			if err != nil {
				return err
			}
			_, err = pkBucket.CreateBucketIfNotExists(attestationSigningRootsBucket)
			if err != nil {
				return err
			}
			nk := make([]byte, len(k))
			copy(nk, k)
			nv := make([]byte, len(v))
			copy(nv, v)
			publicKeyBytes = append(publicKeyBytes, nk)
			attestingHistoryBytes = append(attestingHistoryBytes, nv)
			return nil
		})
	})
	if err != nil {
		return err
	}
	bar := progressutil.InitializeProgressBar(numKeys, "Migrating attesting history to more efficient format")
	for i, publicKey := range publicKeyBytes {
		attestingHistory := deprecatedEncodedAttestingHistory(attestingHistoryBytes[i])
		err = s.db.Update(func(tx *bolt.Tx) error {
			if attestingHistory == nil {
				return nil
			}
			bucket := tx.Bucket(pubKeysBucket)
			pkBucket := bucket.Bucket(publicKey)
			sourceEpochsBucket := pkBucket.Bucket(attestationSourceEpochsBucket)
			signingRootsBucket := pkBucket.Bucket(attestationSigningRootsBucket)
			// Extract every source, target, and signing root from the
			// attesting history, then insert them into the respective
			// buckets under the new db schema.
			latestEpochWritten, err := attestingHistory.getLatestEpochWritten(ctx)
			if err != nil {
				return err
			}
			// For every epoch since genesis up to the highest epoch written,
			// extract historical data and insert it into the new schema.
			for targetEpoch := uint64(0); targetEpoch <= latestEpochWritten; targetEpoch++ {
				historicalAtt, err := attestingHistory.getTargetData(ctx, targetEpoch)
				if err != nil {
					return err
				}
				if historicalAtt.isEmpty() {
					continue
				}
				targetEpochBytes := bytesutil.Uint64ToBytesBigEndian(targetEpoch)
				sourceEpochBytes := bytesutil.Uint64ToBytesBigEndian(historicalAtt.Source)
				if err := sourceEpochsBucket.Put(sourceEpochBytes, targetEpochBytes); err != nil {
					return err
				}
				if err := signingRootsBucket.Put(targetEpochBytes, historicalAtt.SigningRoot); err != nil {
					return err
				}
			}
			return bar.Add(1)
		})
		if err != nil {
			return err
		}
	}
	return s.db.Update(func(tx *bolt.Tx) error {
		mb := tx.Bucket(migrationsBucket)
		return mb.Put(migrationOptimalAttesterProtectionKey, migrationCompleted)
	})
}
// Migrate attester protection from the more optimal format back to the old
// format in the DB.
func (s *Store) migrateOptimalAttesterProtectionDown(ctx context.Context) error {
	// First, we extract the public keys we are migrating down for.
	pubKeys := make([][48]byte, 0)
	err := s.view(func(tx *bolt.Tx) error {
		mb := tx.Bucket(migrationsBucket)
		if b := mb.Get(migrationOptimalAttesterProtectionKey); b == nil {
			// Migration has not occurred, meaning data is already in the old
			// format, so there is no need to perform a down migration.
			return nil
		}
		bkt := tx.Bucket(pubKeysBucket)
		if bkt == nil {
			return nil
		}
		return bkt.ForEach(func(pubKey, v []byte) error {
			if pubKey == nil {
				return nil
			}
			pkBucket := bkt.Bucket(pubKey)
			if pkBucket == nil {
				return nil
			}
			pubKeys = append(pubKeys, bytesutil.ToBytes48(pubKey))
			return nil
		})
	})
	if err != nil {
		return err
	}
	// Next, we extract the attested epochs and signing roots from the
	// optimized db schema into maps we can use later.
	signingRootsByTarget := make(map[uint64][]byte)
	targetEpochsBySource := make(map[uint64][]uint64)
	err = s.view(func(tx *bolt.Tx) error {
		bkt := tx.Bucket(pubKeysBucket)
		if bkt == nil {
			return nil
		}
		for _, pubKey := range pubKeys {
			pubKeyBkt := bkt.Bucket(pubKey[:])
			if pubKeyBkt == nil {
				continue
			}
			sourceEpochsBucket := pubKeyBkt.Bucket(attestationSourceEpochsBucket)
			signingRootsBucket := pubKeyBkt.Bucket(attestationSigningRootsBucket)
			// Extract signing roots.
			if err := signingRootsBucket.ForEach(func(targetBytes, signingRoot []byte) error {
				var sr [32]byte
				copy(sr[:], signingRoot)
				signingRootsByTarget[bytesutil.BytesToUint64BigEndian(targetBytes)] = sr[:]
				return nil
			}); err != nil {
				return err
			}
			// Next, extract the target epochs by source.
			if err := sourceEpochsBucket.ForEach(func(sourceBytes, targetEpochsBytes []byte) error {
				targetEpochs := make([]uint64, 0)
				for i := 0; i < len(targetEpochsBytes); i += 8 {
					targetEpochs = append(targetEpochs, bytesutil.BytesToUint64BigEndian(targetEpochsBytes[i:i+8]))
				}
				targetEpochsBySource[bytesutil.BytesToUint64BigEndian(sourceBytes)] = targetEpochs
				return nil
			}); err != nil {
				return err
			}
		}
		return nil
	})
	if err != nil {
		return err
	}
	// Then, we use the data we extracted to recreate the old attesting history
	// format and, for each public key, save it to the appropriate bucket.
	err = s.update(func(tx *bolt.Tx) error {
		bkt := tx.Bucket(pubKeysBucket)
		if bkt == nil {
			return nil
		}
		bar := progressutil.InitializeProgressBar(len(pubKeys), "Migrating attesting history to old format")
		for _, pubKey := range pubKeys {
			// Write the attesting history using the data we extracted
			// from the buckets above.
			history := newDeprecatedAttestingHistory(0)
			var maxTargetWritten uint64
			for source, targetEpochs := range targetEpochsBySource {
				for _, target := range targetEpochs {
					signingRoot := params.BeaconConfig().ZeroHash[:]
					if sr, ok := signingRootsByTarget[target]; ok {
						signingRoot = sr
					}
					newHist, err := history.setTargetData(ctx, target, &deprecatedHistoryData{
						Source:      source,
						SigningRoot: signingRoot,
					})
					if err != nil {
						return err
					}
					history = newHist
					if target > maxTargetWritten {
						maxTargetWritten = target
					}
				}
			}
			newHist, err := history.setLatestEpochWritten(ctx, maxTargetWritten)
			if err != nil {
				return err
			}
			history = newHist
			deprecatedBkt, err := tx.CreateBucketIfNotExists(deprecatedAttestationHistoryBucket)
			if err != nil {
				return err
			}
			if err := deprecatedBkt.Put(pubKey[:], history); err != nil {
				return err
			}
			if err := bar.Add(1); err != nil {
				return err
			}
		}
		return nil
	})
	if err != nil {
		return err
	}
	// Finally, we clear the migration key.
	return s.update(func(tx *bolt.Tx) error {
		migrationsBkt := tx.Bucket(migrationsBucket)
		return migrationsBkt.Delete(migrationOptimalAttesterProtectionKey)
	})
}