Case Study: How GetBlock Helped Revert Finance Run Deterministic Analytics on Base

GETBLOCK

November 25, 2025

19 min read

At GetBlock, we build the plumbing that powers Web3 applications and services – fast, reliable infrastructure and the engineering muscle to run it at scale.

Revert Finance approached GetBlock because their analytics services required trustworthy, replayable Base on-chain data. The existing infrastructure, however, did not deliver the consistency they needed – exactly the kind of challenge we take on.

So we did what we do best: rolled up our sleeves and shipped a production node that solved the customer’s problem. This post explains how we approached the challenge and the solution we built along the way.

TL;DR: The problem and solution

1. When Revert Finance came to GetBlock, they needed to build transparent analytics for Aerodrome liquidity pools on the Base blockchain.

2. The required overlay RPC methods (overlay_getLogs, overlay_callConstructor, etc.) were only available in Erigon variants, but the existing op-erigon fork didn’t port overlay behavior cleanly.

3. GetBlock coordinated with the op-erigon maintainers to address some of the issues, then implemented additional fixes in our fork to deploy a custom node that satisfied customer needs – an archive, overlay-enabled Base node.

Revert Finance: Customer background

Revert Finance builds enterprise-grade analytics and tools for AMMs like Uniswap, SushiSwap, PancakeSwap, and others. Their product requires exact, replayable on-chain data so that user-facing insights are deterministic.

When Revert set out to support the Base ecosystem, they integrated with Aerodrome — a next-generation AMM on Base. During integration, the Revert team discovered a critical practical gap: Aerodrome’s contract implementation did not expose the complete event/log coverage that Revert’s analytics rely on.

Rather than accept opaque gaps, Revert’s approach was to reproduce the chain’s state and transactions locally and insert a custom bytecode overlay to restore the missing observability.

The challenge 

The capability the customer needed existed in Erigon’s overlay RPC namespace. It allows state and bytecode overrides, then replays transactions and blocks against that overridden state, making it possible to extract the required logs and debug outputs for analytics.

So the concrete product requirement from Revert was a Base node running Erigon (or a compatible client) with the overlay namespace enabled and archive history available.

Overlay namespace — what it is and why it’s powerful

The overlay namespace is an Erigon-specific JSON-RPC extension. It lets you temporarily override account state (bytecode, storage, balance) and then re-execute transactions or entire blocks against that modified state so you can inspect the results (logs, return data, traces) as if the override had existed on-chain.

It’s a powerful tool, especially for:

Debugging and forensic replays where you need to see what would have happened with a different bytecode or state (e.g., to add instrumentation or events to contracts that didn’t emit them).

Analytics teams that require deterministic, auditable traces computed from a hypothetical state, without modifying on-chain data.

It is not part of the canonical Ethereum JSON-RPC spec. That’s why projects that need overlay behavior either run Erigon or a fork that has it ported.
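
To make this concrete, here is a hedged sketch of an overlay_getLogs call that swaps in replacement bytecode for a single contract. The address, block range, and bytecode are placeholders, and the second parameter follows the geth-style stateOverride shape that the overlay methods accept:

curl -X POST https://go.getblock.us/<Access-Token> \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "overlay_getLogs",
    "params": [
      {
        "fromBlock": "0x1000000",
        "toBlock": "0x1000100",
        "address": "0xPoolAddressPlaceholder"
      },
      {
        "0xPoolAddressPlaceholder": {
          "code": "0xInstrumentedRuntimeBytecodePlaceholder"
        }
      }
    ]
  }'

The node replays the requested range with the overridden bytecode in place and returns whatever logs the instrumented code would have emitted.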

About GetBlock: Why we were the right team for this challenge

GetBlock is one of the leading Web3 infrastructure providers, trusted by developers, startups, and enterprises worldwide. Our core strengths are:

Node infrastructure at scale — fleets of full and archive nodes tailored to a customer’s exact use cases, serving millions of daily requests.

Cross-chain expertise — GetBlock supports 100+ blockchains, from Bitcoin and Ethereum to emerging chains under one roof. The team has hands-on experience with diverse client implementations and stacks.

Custom engineering — optimized builds, advanced RPC tooling, operational engineering.

Developer-first support — working hand-in-hand with customers from proof-of-concept to production, ensuring deterministic results.

Revert’s need was specific and technically deep. Because this problem sat at the intersection of client behavior and RPC ergonomics, our work combined upstream collaboration and targeted client patches. 

Below is how the engagement unfolded – the timeline, the failures we reproduced, and the exact fixes we implemented.

Phase 1 – Pilot deployment 

There was no official Erigon client shipping with op-stack (Base) support. The community fork op-erigon aimed to bridge that gap – and it exposed overlay methods.

The op-erigon node sync took longer than expected – operational friction that is typical when running non-mainline client forks. The setup also required coordinating custom flags for archive and overlay workloads, as sketched below.
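
For illustration, the launch shape looked roughly like this. The chain identifier and exact flag set vary across op-erigon versions, so treat it as a sketch rather than our production configuration:

# Illustrative only – not the exact production flag set
op-erigon \
  --chain=base-mainnet \
  --datadir=/data/base-archive \
  --http --http.api=eth,debug,erigon,overlay
# Archive history: run without prune flags so historical state is retained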

When GetBlock provisioned an op-erigon node for Revert, the test node revealed several failure classes that prevented the solution from working end-to-end. We investigated these one at a time.

Phase 2 – First challenge: runtime panic 

Initial attempts to call overlay_getLogs on op-erigon produced a runtime crash. 

{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32000,
    "message": "method handler crashed"
  }
}

// logs

[ERROR] RPC method overlay_getLogs crashed: runtime error: invalid memory address or nil pointer dereference
[service.go:219 panic.go:770 panic.go:261 signal_unix.go:881 overlay_api.go:343 value.go:596 value.go:380 service.go:224 handler.go:532 handler.go:482 handler.go:421 handler.go:241 handler.go:334 asm_amd64.s:1695] 

GetBlock filed an issue with the op-erigon maintainers to address this symptom.

Root cause: when the overlay codepath was exercised on an op-stack (Base) configuration, certain execution-context functions — specifically the L1/Operator cost-related function pointers — were never set, so the overlay code dereferenced nil pointers.

The testinprod/op-erigon fix in branch #259 added explicit initialization of the L1CostFunc and OperatorCostFunc in the overlay execution path (initially for overlay_getLogs and then for overlay_callConstructor). 

After applying that branch, overlay_getLogs started returning the expected logs when the request included all required parameters:

1. a filter (fromBlock/toBlock/address/topics etc.), and

2. a stateOverride object that describes bytecode/state overrides to apply.

Example:

curl -X POST https://go.getblock.us/<Access-Token> \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "overlay_getLogs",
    "params": [
      {
        "fromBlock": "latest",
        "toBlock": "latest"
      },
      {}
    ]
  }'

Note: even if you don’t want to override any account state, the second parameter (the empty {} above) must still be passed. The handler expects it to exist, and passing the empty object avoids the nil dereference.

Phase 3 – New challenge: Overlay behavior gaps

Revert exercised more overlay methods and encountered further edge cases. 

The overlay_callConstructor RPC did not return the created bytecode. Instead, it returned:

null results for certain payloads;

runtime errors referencing op-stack parameters (blob/Cancun-related internal failures).

{"jsonrpc":"2.0","id":1,"result":null}

{"jsonrpc":"2.0","id":1,"error":{"code":-32000,"message":"internal failure: Cancun is active but ExcessBlobGas is nil"}}

These errors meant the overlay execution either failed or didn’t expose the created bytecode.
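
For context, here is a hedged sketch of the request shape that triggered those responses. Per our reading of Erigon’s overlay API, overlay_callConstructor takes the contract address and the replacement creation bytecode, and a healthy node should answer with the resulting deployed code (both values below are placeholders):

curl -X POST https://go.getblock.us/<Access-Token> \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "overlay_callConstructor",
    "params": [
      "0xContractAddressPlaceholder",
      "0xReplacementCreationBytecodePlaceholder"
    ]
  }'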

At this point, after the upstream patches, the immediate crashes for both overlay_getLogs and overlay_callConstructor were resolved, but the remaining symptoms persisted. Fixing them required deeper changes, which GetBlock implemented in a follow-up patch.

Client alternatives and why we stayed with op-erigon

Overlay is an Erigon-specific extension, and equivalent functionality wasn’t available on other well-supported clients like Geth and Reth. 
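
For comparison, calling an overlay method on a client that doesn’t implement the namespace fails with a standard method-not-found error rather than returning data (illustrative response):

{"jsonrpc":"2.0","id":1,"error":{"code":-32601,"message":"the method overlay_getLogs does not exist/is not available"}}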

Running shadow-reth looked like a possible way to get overlay-style behavior without Erigon: it wraps a Reth node and produces “shadowed,” instrumented executions without changing the chain.

However, the project was not maintained to a production standard and didn’t properly recognize op-stack hardforks and configs. Attempts to launch shadow-reth for Base produced instability, so this path was deprioritized.

GetBlock focused on stabilizing op-erigon instead. 

Final solution – In-house op-erigon patch

After upstream work addressed the initial nil-pointer crashes, we still saw two persistent failure modes. We treated the investigation as a reproduce-and-isolate exercise.

First, we re-ran Revert’s exact payloads against test instances, captured RPC inputs and stack traces, and mapped failures to the overlay execution path. 

That investigation showed two focus areas: 

1. Some op-stack-specific execution context fields were not being passed into the overlay EVM;

2. The CREATE path did not reliably write and expose the deployed bytecode to the RPC response.

Fixing that meant touching two layers – the overlay execution context and the overlay CREATE/tracer flow – which guided a small, coordinated set of changes across the overlay API, tracer, op-stack helpers, and node runtime configuration.

Concretely, our patch set included the following.

1) Fixing the execution context: inject op-stack semantics

On the execution-context side, we ensured overlay replays run with the same op-stack semantics as the live Base chain. The overlay BlockContext now carries op-stack hooks (L1 and Operator cost functions).

We also made sure it safely populates blob/excess-blob fields when Cancun semantics are active. In practice, this resolved the nil-parameter and hardfork panics that previously aborted replays.

// Illustrative: the overlay BlockContext now carries the op-stack hooks
blockCtx := evmtypes.BlockContext{
    // ... existing fields ...
    BlobBaseFee:      blobBaseFee, // populated when Cancun semantics are active
    L1CostFunc:       opstack.NewL1CostFunc(chainConfig, stateDB),
    OperatorCostFunc: opstack.NewOperatorCostFunc(chainConfig, stateDB),
}

We also made overlay message construction preserve op-stack blob fields (for example MaxFeePerBlobGas) from the original transaction so the simulated execution uses matching gas/accounting parameters.

// Illustrative: default excess blob gas to zero when Cancun is active
// but the parent header doesn't carry the field
var excessBlobGas *uint64
zero := uint64(0)
if parent.ExcessBlobGas != nil {
    excessBlobGas = parent.ExcessBlobGas
} else if chainConfig.IsCancun(parent.Time) {
    excessBlobGas = &zero
}

2) Correctly detect and treat CREATE transactions

The code now sets an `isCreateTx` flag and adapts the message when the overlay is supposed to simulate a contract creation. 

// A creation tx has no recipient; the derived contract address must match the target
isCreateTx := creationTx.GetTo() == nil && contractAddr == address

This lets the overlay execution produce the actual deployed code for CREATE transactions.

3) Making CREATE semantics visible: read deployed code from intra-block state

The CREATE path could finish without the RPC ever returning the created bytecode. To fix that, after executing a CREATE/CREATE2 overlay, we explicitly read the deployed code from the EVM intra-block state and populated the RPC result’s code field so it no longer shows null.

if isCreateTx {
    deployed := evm.IntraBlockState().GetCode(contractAddr)
    if len(deployed) > 0 {
        result.Code = hexutil.Encode(deployed)
        return result, nil
    }
}

// Fallback: tracer-captured code 
if tracerResult != nil {
    result.Code = hexutil.Encode(tracerResult)
}

4) Tracer integration: write injected code and propagate errors

We enhanced the tracer so that when it injects or replaces code, it writes that code into intra-block state (SetCode), and sets a tracer result that the overlay handler can read. We also made tracer errors propagate up the RPC layer, improving debuggability and avoiding masked failures.

tracer.evm.IntraBlockState().SetCode(tracer.contractAddr, tracer.injectedCode)
tracer.resultCode = tracer.injectedCode
if tracer.err != nil {
    return nil, tracer.err // propagate to RPC
}

Iterative testing

With the code changes in place we ran iterative tests using Revert’s real payloads: constructor overlays that had previously returned null, block ranges that had triggered Cancun errors, and realistic overlay_getLogs requests. Each iteration removed a failure mode.
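
As a rough illustration of that loop, a minimal replay harness looks like this (the endpoint and payload files are placeholders; the real harness also diffed each response against an expected output):

# Replay saved JSON-RPC payloads against a candidate node build
ENDPOINT="https://go.getblock.us/<Access-Token>"
for payload in payloads/*.json; do
  echo "== $payload =="
  curl -s -X POST "$ENDPOINT" \
    -H "Content-Type: application/json" \
    --data @"$payload"
  echo
done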

Deployment

We packaged the fixes into a hardened op-erigon image and deployed it for the customer. Operational work included tuning archive/history flags to avoid shard/history errors, and adding close monitoring to detect head-lag or state regressions early.
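
As one example of that monitoring, a head-lag check compares the patched node’s latest block against a reference endpoint (both URLs are placeholders; in production this feeds a metrics pipeline rather than running as a one-off script):

# Compare chain head between the patched node and a reference node
get_head() {
  curl -s -X POST "$1" -H "Content-Type: application/json" \
    -d '{"jsonrpc":"2.0","id":1,"method":"eth_blockNumber","params":[]}' \
  | sed -E 's/.*"result":"0x([0-9a-f]+)".*/\1/'
}
LOCAL=$((16#$(get_head "https://patched-base-node.example/rpc")))
REF=$((16#$(get_head "https://reference-base-node.example/rpc")))
echo "head lag: $((REF - LOCAL)) blocks"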

Business & product impact 

Revert moved from trial to production using GetBlock’s custom Base node. They gained the ability to run deterministic, auditable replays of Aerodrome activity on Base. This enabled accurate analytics and debugging without changing on-chain data.

The outcome 

This was the kind of engineering problem GetBlock exists to solve – a real customer need that off-the-shelf infrastructure couldn’t meet. Specifically, in this case we:

Deployed an archive Base node with a patched op-erigon client that properly exposes and implements the overlay RPC namespace for op-stack configurations.

Collaborated with community maintainers and the customer team to fix the immediate issues, then added further patches of our own.

Provided a custom node image that delivered the fastest, lowest-risk route to the functionality our client required.

Delivered, after iterative testing, a working node and ongoing operational support as the customer moved from trial to production.

As a lasting benefit, this work allows us to contribute improvements back to the community.

This case study shows that solving low-level node and RPC correctness issues can be the decisive factor in delivering a product. GetBlock’s work turned an infrastructure gap into a reliable foundation for production-grade Web3 services.

Want this for your project? Reach out, and we’ll help you reproduce, isolate, and fix the infra problems blocking your use case.