Historical data streams
for 100+ blockchains
GetBlock indexes 100+ blockchains into a decentralized data lake: subscribe to any chain, filter blocks in real time, transform with the TypeScript SDK, and store the output wherever you need it. No nodes. No RPC limits. No DevOps.
Retrieve pre-indexed data directly from the data lake
A conventional RPC indexer processes blocks sequentially — one eth_getLogs at a time. GetBlock bypasses this entirely. The data is already parsed and stored. Indexing is reading from a data lake, not polling a node.
Data is pre-indexed at ingestion time — not on request. When you query, you're reading from a structured store, not triggering an RPC call. No timeouts, no sequential bottlenecks, no eth_getLogs loops.
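The difference can be sketched in a few lines. This is an illustrative model, not the GetBlock API: logs are indexed once at ingestion, so a query is a lookup against a structured store rather than a loop of per-block RPC calls.

```typescript
// Illustrative sketch only — all names here are hypothetical, not the GetBlock SDK.

type Log = { block: number; address: string; topic0: string }

// The "data lake": logs parsed and bucketed once, at ingestion time.
const lake = new Map<string, Log[]>()

function ingest(log: Log): void {
  // Index by (address, topic0) so later queries are lookups, not scans.
  const key = `${log.address}:${log.topic0}`
  const bucket = lake.get(key) ?? []
  bucket.push(log)
  lake.set(key, bucket)
}

// Query = read from the structured store; no sequential eth_getLogs round trips.
function queryLogs(address: string, topic0: string, from: number, to: number): Log[] {
  const bucket = lake.get(`${address}:${topic0}`) ?? []
  return bucket.filter((l) => l.block >= from && l.block <= to)
}

ingest({ block: 10, address: '0xPool', topic0: '0xSwap' })
ingest({ block: 20, address: '0xPool', topic0: '0xSwap' })
ingest({ block: 30, address: '0xOther', topic0: '0xSwap' })

const hits = queryLogs('0xPool', '0xSwap', 0, 25)
```

The query cost depends on the size of the matching bucket, not on how many blocks the range spans.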
GetBlock tracks chain finality per network and handles reorganizations automatically. Your pipeline receives only canonical data — no rollback logic needed on your side.
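The idea behind finality-aware delivery can be sketched as follows. This is a simplified model, not the GetBlock internals: only blocks at least `finalityDepth` behind the chain head are emitted, so consumers never see data that could still be rolled back.

```typescript
// Hypothetical sketch of finality-aware streaming; names are illustrative.

type Block = { height: number; hash: string }

// Emit only blocks deep enough behind the head to be considered canonical.
function canonicalOnly(blocks: Block[], head: number, finalityDepth: number): Block[] {
  return blocks.filter((b) => b.height <= head - finalityDepth)
}

const pending: Block[] = [
  { height: 98, hash: '0xaa' },
  { height: 99, hash: '0xbb' },
  { height: 100, hash: '0xcc' }, // current head — still subject to reorgs
]

const safe = canonicalOnly(pending, 100, 2)
```

Real finality rules vary per network (probabilistic depth on some chains, explicit finality gadgets on others), which is why tracking it per network matters.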
Pre-indexed. Decentralized. Streamed on demand
Index
GetBlock continuously listens to new blocks across 100+ supported chains, parses them into granular components — transactions, logs, traces, state diffs — and stores everything in a decentralized data lake.
Stream
GetBlock runs the infrastructure that retrieves data for your chains from the decentralized data lake. The Portal locates the right chunks across distributed worker nodes and streams them directly to you, with native filtering, block ranges, and reorg handling built in.
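Chunk location can be pictured with a small sketch. The shapes and names below are assumptions for illustration, not the Portal API: the lake is split into block-range chunks held by worker nodes, and a requested range maps to the workers holding the overlapping chunks.

```typescript
// Illustrative sketch of chunk lookup across distributed workers; not the real API.

type Chunk = { from: number; to: number; worker: string }

// Each worker node holds a contiguous block-range slice of the data lake.
const chunks: Chunk[] = [
  { from: 0, to: 999, worker: 'worker-a' },
  { from: 1000, to: 1999, worker: 'worker-b' },
  { from: 2000, to: 2999, worker: 'worker-c' },
]

// Find every worker whose chunk overlaps the requested block range.
function locate(from: number, to: number): string[] {
  return chunks.filter((c) => c.to >= from && c.from <= to).map((c) => c.worker)
}

const workers = locate(1500, 2200)
```

A request for blocks 1500–2200 touches two chunks, so the stream is assembled from two workers; the client sees one ordered stream either way.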
Transform
The TypeScript SDK handles ETL. Subscribe to on-chain events, decode them with auto-generated type-safe bindings, and apply any transformation logic — filters, aggregations, your own data model — all in TypeScript.
processor
  .addLog({
    address: ['0x1f98…'],
    topic0: [events.Swap.topic],
  })
  .run(db, async (ctx) => {
    for (let log of ctx.logs) {
      const e = events.Swap.decode(log)
      await db.save({ ...e, block: ctx.block.height })
    }
  })
Load
Connect any database that fits your architecture. GetBlock supports pluggable data sinks out of the box — Postgres with auto-generated GraphQL, BigQuery for analytics, S3 or local files for data pipelines.
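A pluggable sink boils down to one interface with interchangeable implementations. The sketch below uses hypothetical names, not the GetBlock SDK, with an in-memory sink standing in for Postgres, BigQuery, or S3; the point is that transform code writes to the interface and never changes when the destination does.

```typescript
// Hypothetical sketch of a pluggable data sink; names are illustrative.

interface Sink {
  save(rows: Record<string, unknown>[]): void
}

// In-memory stand-in for a real destination. A Postgres, BigQuery, or S3
// sink would implement the same interface; the caller stays identical.
class MemorySink implements Sink {
  rows: Record<string, unknown>[] = []
  save(rows: Record<string, unknown>[]): void {
    this.rows.push(...rows)
  }
}

const sink = new MemorySink()
sink.save([
  { block: 1, amount: '100' },
  { block: 2, amount: '250' },
])
```

Swapping destinations then means swapping the constructor, not rewriting the pipeline.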
One platform. Every use case.
Whether you're a solo dev shipping fast or an enterprise data team, you get the same infrastructure, the same SDK, and the same reliability.
Skip node setup, RPC polling, and reorg handling. Get pre-indexed chain data via TypeScript SDK — filter events, decode with ABI types, write to any sink. Focus on product logic, not infra.
Your on-chain data pipeline
starts here.
Tell us what you're building — we'll help you pick the right setup, estimate costs, and get you live fast.