
Streams Billing

Updated on Feb 01, 2026

Overview

As of February 1, 2026, Streams uses a new API credit–based billing model, where usage is determined by the data processed by your streams.

Each stream consumes API credits based on the following factors:


  • Network — The blockchain network you're streaming from
  • Dataset — The type of data streamed (blocks, transactions, logs, receipts, traces, etc.)
  • Blocks processed — The total number of blocks your Stream processes

This document explains how Streams API credits are calculated, how to use the calculator to estimate API credit usage for both tip streaming (ongoing data) and backfill (historical data), and best practices for optimizing stream configurations to manage costs effectively.


Account-Level Billing

Your Streams billing model is determined at the account level. If you created any Streams before February 1, 2026, your account remains on the legacy GB-based billing model, including for any new Streams created after that date.

To migrate your account to the new API credit–based billing model, contact support@quicknode.com or open a support ticket.

API Credits Calculator

(An interactive calculator is available in the online version of this page.) As an example, a dataset covering block headers and metadata carries a 20× credit multiplier. For Ethereum, tip streaming at that multiplier costs about 4,320,000 credits/month, based on 216,000 blocks per month; the backfill total depends on the network's current block height.

Understanding the Metrics

The calculator displays three key metrics to help estimate your Streams usage.

Credit Multiplier
The number of API credits consumed per block. This value is determined by the blockchain network and dataset type.
Example: A 20× multiplier means each block consumes 20 credits; processing 1,000 blocks consumes 20,000 credits.

Tip Streaming (Monthly)
The ongoing cost to stream newly produced blocks in real time.
Formula: Monthly Blocks × Multiplier = Monthly Credits
Example: Ethereum produces ~216,000 blocks/month. With a 20× multiplier: 216,000 × 20 = 4,320,000 credits/month.

Backfill (Total)
The one-time cost to stream all historical blocks from genesis to the current block height.
Formula: Current Block Height × Multiplier = Backfill Credits
Example: Ethereum has 24,227,108 total blocks. With a 20× multiplier: 24,227,108 × 20 = 484,542,160 credits.
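The two formulas above can be sketched in a few lines of JavaScript. The figures below are the Ethereum examples from this page; your actual multiplier and block counts depend on your network and dataset:

```javascript
// Tip streaming: ongoing monthly cost for newly produced blocks.
function tipStreamingCredits(monthlyBlocks, multiplier) {
  return monthlyBlocks * multiplier;
}

// Backfill: one-time cost to stream every block from genesis to the tip.
function backfillCredits(currentBlockHeight, multiplier) {
  return currentBlockHeight * multiplier;
}

// Ethereum example figures from the metrics above.
const multiplier = 20;
console.log(tipStreamingCredits(216_000, multiplier));  // 4,320,000 credits/month
console.log(backfillCredits(24_227_108, multiplier));   // 484,542,160 credits
```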
Network Support

Most EVM-compatible networks support full historical backfilling, though support varies by network. See Backfill Data by Ecosystem to check if your network is supported.

Optimizing Your Streams Usage

Choose the Right Dataset Type

Different dataset types have different credit costs. Selecting the most specific dataset for your needs helps optimize credit usage.

Available dataset types:

  • Basic datasets — Block, Transactions, Logs, Receipts
  • Combined datasets — Block with Receipts
  • Trace datasets — Debug Trace, Trace Block, Block with Receipts + Traces

Credit efficiency:

  • Block uses approximately 50% fewer credits than Block with Receipts
  • Basic datasets use significantly fewer credits than Trace datasets

Start with basic datasets

Begin with basic datasets and upgrade to combined or trace datasets only when you need the additional data. Reserve trace datasets for cases requiring detailed transaction execution information.

Learn more about the available dataset types and their structures.
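To see how dataset choice plays out in credits, here is a rough sketch. The multiplier values below are hypothetical placeholders — only the relationship that Block costs roughly half of Block with Receipts comes from the text above; use the calculator for your network's real values:

```javascript
// Hypothetical per-block multipliers, for illustration only.
const DATASET_MULTIPLIERS = {
  block: 20,             // basic dataset
  blockWithReceipts: 40, // combined dataset (~2x the basic Block cost)
  traceBlock: 80,        // trace dataset (significantly more expensive)
};

// Estimate credits for streaming `blocks` blocks of a given dataset.
function estimateCredits(dataset, blocks) {
  return DATASET_MULTIPLIERS[dataset] * blocks;
}

// For 1,000 blocks, the basic Block dataset costs half as much as
// Block with Receipts under these assumed multipliers.
console.log(estimateCredits("block", 1_000));             // 20,000
console.log(estimateCredits("blockWithReceipts", 1_000)); // 40,000
```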

Use JavaScript Filters

Filters let you customize your stream's payload before it reaches your destination. You can match specific patterns, transform data, or filter out irrelevant information.

While filters don't reduce API credit consumption (you're billed per block processed), they let you control precisely which data reaches your destination, so you only pay downstream for what you need:


  • Bandwidth — Transfer only relevant data to your destination
  • Storage — Store only the data you need
  • Processing — Process only the data that matters to your application
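For example, a filter body might keep only the log entries emitted by a contract you care about. The payload shape below (an array of log objects with an `address` field) is a simplified assumption for illustration — see the Filters documentation for the exact payload your dataset delivers:

```javascript
// Keep only logs emitted by a specific contract address
// (payload shape simplified for illustration).
function filterLogsByAddress(logs, targetAddress) {
  const target = targetAddress.toLowerCase();
  return logs.filter((log) => log.address.toLowerCase() === target);
}

const logs = [
  { address: "0xAbC0000000000000000000000000000000000001", topics: [] },
  { address: "0xDeF0000000000000000000000000000000000002", topics: [] },
];

// Only the first log matches the target address.
const kept = filterLogsByAddress(logs, "0xabc0000000000000000000000000000000000001");
console.log(kept.length); // 1
```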

Use Traces Only When Required

Trace datasets consume significantly more credits than basic datasets. Use them only when you need detailed execution data, internal transactions, or contract call hierarchies. For most use cases, basic or combined datasets are sufficient.

Test Before Scaling

Start with a small block range (1,000–10,000 blocks) to validate your configuration before committing to large backfills or long-running streams. This helps you verify filters, confirm data structure, and estimate costs before processing millions of blocks.
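One way to sanity-check a planned backfill is to price a small trial range first. The sketch below uses an inclusive block range; the 20× multiplier is illustrative:

```javascript
// Credits for an inclusive block range at a given multiplier.
function creditsForRange(startBlock, endBlock, multiplier) {
  return (endBlock - startBlock + 1) * multiplier;
}

// A 10,000-block trial at a 20x multiplier...
const trial = creditsForRange(0, 9_999, 20);     // 200,000 credits
// ...versus a full ~24.2M-block Ethereum backfill.
const full = creditsForRange(0, 24_227_107, 20); // 484,542,160 credits
console.log(trial, full);
```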

Frequently Asked Questions

  • Do JavaScript filters reduce my credit consumption?
  • Am I charged for failed block deliveries or retries?
  • Why do debug_trace and trace_block datasets cost more?
  • Can I start with backfill and transition to tip streaming mid-stream?
  • If I pause and resume a stream, do I pay for the blocks I missed?
  • Do Streams and RPC share the same credit pool?
  • How do I calculate costs for multiple concurrent streams?

We ❤️ Feedback!

If you have any feedback or questions about this documentation, let us know. We'd love to hear from you!
