feat:
1. `erigon_getLatestLogs` doesn't have to match the exact position of
the topics. It matches logs that contain the topics regardless of their
position, using the original bloom filter. It also accepts `blockCount`
& `crit.ToBlock` params for better pagination.
feat: add `erigon_getLatestLogs` as a new feature API.
1. `erigon_getLatestLogs` returns the latest logs that match the filter,
up to `logCount` entries. The implementation is similar to `erigon_getLogs`,
but it uses `ReverseIterator`, which makes fetching the latest logs more
efficient.
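For illustration, a minimal sketch of calling it from Go with go-ethereum's
`rpc` client; the endpoint URL and the exact JSON shape of the filter and
options arguments (`toBlock`, `logCount`, `blockCount`) are assumptions here,
not taken from this PR:
```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Assumed local rpcdaemon endpoint.
	client, err := rpc.Dial("http://localhost:8545")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Usual filter criteria; the topic no longer has to sit at an exact position.
	crit := map[string]interface{}{
		"toBlock": "latest",
		"topics":  []string{"0xc42079f94a6350d7e6235f29174924f928cc2ac818eb64fed8004e115fbcca67"},
	}
	// logCount caps how many logs come back, blockCount caps how many blocks
	// are scanned backwards from toBlock; both field names are assumptions.
	opts := map[string]interface{}{"logCount": 100, "blockCount": 1000}

	var logs []types.Log
	if err := client.CallContext(context.Background(), &logs, "erigon_getLatestLogs", crit, opts); err != nil {
		log.Fatal(err)
	}
	fmt.Println("fetched", len(logs), "logs")
}
```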
This PR has two things in it:
1. Changed filterLogs to use a map for the topics. This will speed up
queries with many topics in them. I still don't have a use case for this,
though. I put it as a method of Logs, since that made sense to me, but I'm
happy to move it back out. There is a rough sketch of the idea right after
this list.
2. Allows JSON-RPC over HTTP GET requests. Since Firefox is a great JSON
viewer (it can search through and collapse large results), I often use it
to debug. It is also useful for sharing data with people who are less
familiar with the command line / programming.
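A rough sketch of what point 1 means (illustrative names, not the actual
Erigon code): precompute one set per topic position, then check each log's
topics with map lookups instead of nested slice scans.
```
package logfilter

import "github.com/ethereum/go-ethereum/common"

// topicSets holds one set of allowed hashes per topic position;
// an empty set at a position means "match anything".
type topicSets []map[common.Hash]struct{}

func newTopicSets(topics [][]common.Hash) topicSets {
	sets := make(topicSets, len(topics))
	for i, alternatives := range topics {
		sets[i] = make(map[common.Hash]struct{}, len(alternatives))
		for _, t := range alternatives {
			sets[i][t] = struct{}{}
		}
	}
	return sets
}

// matches reports whether a log's topics satisfy the filter.
func (s topicSets) matches(logTopics []common.Hash) bool {
	if len(logTopics) < len(s) {
		return false
	}
	for i, set := range s {
		if len(set) == 0 {
			continue
		}
		if _, ok := set[logTopics[i]]; !ok {
			return false
		}
	}
	return true
}
```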
Example GET request for point 2:
http://rpcdaemon/?method=eth_getLogs&params=[{"fromBlock":"0xf2316b","toBlock":"0xf2316b"}]
It is based on the old JSON-RPC-over-HTTP specification
https://www.jsonrpc.org/historical/json-rpc-over-http.html#encoded-parameters
except that we also accept params that are not base64 encoded. Since every
eth RPC request has a `[]` in it, it will immediately fail base64 validation
and the parameters are used as they are; otherwise the payload is parsed as
base64 and that is used instead. A sketch of this fallback follows.
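Roughly, that fallback could look like this (a sketch, not the handler code
verbatim); plain eth params always fail the base64 decode because `[` and `]`
are not in the base64 alphabet, so the raw value is used directly:
```
package getparams

import (
	"encoding/base64"
	"encoding/json"
)

// decodeGetParams turns the "params" query value into raw JSON.
// Base64-encoded payloads (per the old JSON-RPC-over-HTTP spec) are decoded;
// anything that fails base64 validation is treated as plain JSON.
func decodeGetParams(raw string) (json.RawMessage, error) {
	if decoded, err := base64.StdEncoding.DecodeString(raw); err == nil {
		raw = string(decoded)
	}
	var params json.RawMessage
	if err := json.Unmarshal([]byte(raw), &params); err != nil {
		return nil, err
	}
	return params, nil
}
```
Either way, the example URL above works with a plain or a base64-encoded
`params` value.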
Co-authored-by: a <a@a.a>
Co-authored-by: gfx <86091021+gfxlabs@users.noreply.github.com>
This PR changes filterLogs to use a precomputed hash set of addresses,
instead of iterating across the list of addresses once per log.
This greatly increases the speed of filter queries that use many
addresses and also return a large number of logs. In our case, we are
performing a query for all the trades performed in a Uniswap v3 pool over
a 250 block range. A minimal sketch of the idea is shown below.
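A minimal sketch, assuming the usual go-ethereum types (illustrative, not the
exact code in this PR): build the set once per query, then test each log with
a constant-time lookup.
```
package logfilter

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// filterByAddress keeps only logs emitted by one of the given addresses.
// With thousands of addresses, a map lookup per log replaces a linear scan
// of the whole address slice per log.
func filterByAddress(logs []*types.Log, addrs []common.Address) []*types.Log {
	if len(addrs) == 0 {
		return logs
	}
	set := make(map[common.Address]struct{}, len(addrs))
	for _, a := range addrs {
		set[a] = struct{}{}
	}
	out := make([]*types.Log, 0, len(logs))
	for _, l := range logs {
		if _, ok := set[l.Address]; ok {
			out = append(out, l)
		}
	}
	return out
}
```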
My benchmarks were performed with the data & code below.
The address list gist is here:
[addrs](2c30b0df43/gistfile1.txt)
```
c := NewRpcClient() // helper from our benchmark setup
addrs := []common.Address{AddressListGist} // placeholder for the 8442 addresses in the gist above
logs, err := c.FilterLogs(context.TODO(), ethereum.FilterQuery{
	FromBlock: big.NewInt(15640000),
	ToBlock:   big.NewInt(15640250),
	Addresses: addrs,
	Topics: [][]common.Hash{
		{
			common.HexToHash("c42079f94a6350d7e6235f29174924f928cc2ac818eb64fed8004e115fbcca67"),
		},
	},
})
if err != nil {
	panic(err)
}
_ = logs
```
The query contains 8442 addresses, while the response contains 1277 logs.
Averaged over 10 runs on my machine, current devel responds in 15.57 seconds, while the new filterLogs responds in 1.05 seconds.
For current devel, the profile is here: https://pprof.aaaaa.news/cd8dkv0tidul37sctmi0/flamegraph
For the filterLogs branch, the profile is here: https://pprof.aaaaa.news/cd8dlmgtidul37sctmig/flamegraph
While the tests pass with this branch, I am not really sure why filterLogs was originally programmed the way it was. Is there some sort of edge case / compatibility thing that I am missing with this change?
Co-authored-by: a <a@a.a>
* add_abigen_error_handle
* add abigen error type test code
* add field timestamp in `eth_getLogs` api
* undo add field timestamp in `eth_getLogs`
* add `erigon_getLogs` api and add field `timestamp`
* feat: add `erigon_getLogs` with timestamp field to erigon rpcdaemon
* fix: issue `4982` roaring out of range
* convert rangeEnd to latest when the range end is a big value that goes out of the range of MaxUint32 (see the clamping sketch after this list)
* add begin condition in case begin is bigger than the latest block
* add annotation to unreachable code
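For the roaring range fixes above, a hedged sketch of one plausible reading,
with hypothetical names (not the actual Erigon code): block indexes live in
uint32-keyed roaring bitmaps, so an oversized range end is treated as latest,
and a begin past the head is capped as well.
```
package logrange

import "math"

// clampRange is a hypothetical helper illustrating the fix: a requested end
// block beyond the chain head, or beyond what a uint32-keyed roaring bitmap
// can address, is converted to "latest"; a begin past the head is capped too.
func clampRange(begin, end, latest uint64) (uint64, uint64) {
	if end > latest || end > math.MaxUint32 {
		end = latest
	}
	if begin > latest {
		begin = latest
	}
	return begin, end
}
```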