Blockchain throughput vs latency
Tags: blockchain throughput, TPS, blockchain
TL;DR: Throughput and latency are two different measurements of blockchain performance that are frequently confused. Throughput is capacity: how many transactions the network can process per second (TPS). Latency is speed: how long it takes for an individual transaction to be confirmed and finalized. A highway analogy helps clarify the distinction. Throughput is how many cars the highway can carry per hour. Latency is how long it takes one car to drive from entrance to exit. A highway can have high throughput (many cars) but high latency (slow speed due to congestion), or low throughput (few cars) but low latency (each car moves fast). The best blockchains optimize for both, but tradeoffs are real.
The Simple Explanation
When people ask "which blockchain is the fastest?" they usually mean latency: how quickly does my transaction get confirmed? But the marketing materials for most blockchain projects emphasize throughput: how many transactions per second the network can handle. These are related but distinct metrics, and confusing them leads to poor architectural decisions and unrealistic performance expectations.
Throughput, measured in transactions per second (TPS), tells you how much total work the network can do. Ethereum processes roughly 15-30 TPS. Solana processes hundreds to thousands of TPS in practice. Bitcoin processes 5-7 TPS. These numbers represent the aggregate capacity of the network, the total number of transactions from all users that can be included in blocks per second.
Latency, measured in seconds or minutes, tells you how long your individual transaction takes from submission to finalization. On Ethereum, a transaction is typically included in a block within 12-24 seconds but is not considered final until roughly 12-13 minutes (two epochs of finality). On Bitcoin, a transaction usually appears in a block within 10-60 minutes and is considered reliably settled after six confirmations (roughly 60 minutes). On Solana, a transaction is typically confirmed within 400 milliseconds to a few seconds.
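The finality figures above follow directly from each chain's protocol parameters. A quick back-of-the-envelope check (parameter values as of this writing; they can change with protocol upgrades):

```python
# Ethereum: finality takes roughly two full epochs,
# each epoch being 32 slots of 12 seconds.
ETH_SLOT_SECONDS = 12
ETH_SLOTS_PER_EPOCH = 32
eth_finality_s = 2 * ETH_SLOTS_PER_EPOCH * ETH_SLOT_SECONDS
print(f"Ethereum finality: ~{eth_finality_s / 60:.1f} minutes")  # ~12.8 minutes

# Bitcoin: six confirmations at ~10 minutes per block.
btc_finality_s = 6 * 10 * 60
print(f"Bitcoin settlement: ~{btc_finality_s / 60:.0f} minutes")  # ~60 minutes
```

This is why "included in a block" and "final" are such different moments on probabilistic-finality chains: inclusion happens in one block time, but finality is a multiple of it.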
A blockchain can have high throughput but high latency (processing many transactions per second, but each one takes a long time to finalize), or low throughput but low latency (processing few transactions per second, but each one finalizes quickly). The ideal is high throughput and low latency, but achieving both simultaneously without sacrificing decentralization or security is the core engineering challenge of blockchain design.
Why the Distinction Matters for Developers
The throughput-latency distinction has direct practical implications for application design. If your application is a payment system, latency is the primary concern. Users waiting at a checkout counter care about how quickly their individual payment confirms, not how many other payments the network is processing simultaneously. A blockchain with 100 TPS and 2-second finality provides a better payment experience than one with 10,000 TPS and 60-second finality.
If your application is a high-volume data platform (like a DEX aggregator, a gaming backend, or an analytics service), throughput matters more. You need the network to handle a large number of concurrent operations without the system becoming congested and degrading performance for all users. A blockchain with 10,000 TPS and 10-second finality may be preferable to one with 100 TPS and 1-second finality if your application generates thousands of transactions per second.
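The tradeoff in the two paragraphs above can be made concrete with a rough model: a single payment only feels the finality time, while a bulk workload feels queueing time (transactions divided by TPS) plus finality. The chain figures below are hypothetical, chosen only to illustrate the crossover:

```python
def workload_time_s(n_tx: int, tps: float, finality_s: float) -> float:
    """Rough time to submit n_tx transactions and see them all final."""
    return n_tx / tps + finality_s

# Chain A: 100 TPS, 2 s finality. Chain B: 10,000 TPS, 60 s finality.
# Single payment: latency dominates, so Chain A feels faster.
print(workload_time_s(1, 100, 2))       # ~2 s
print(workload_time_s(1, 10_000, 60))   # ~60 s

# 100,000-transaction batch: throughput dominates, so Chain B wins.
print(workload_time_s(100_000, 100, 2))      # ~1,002 s
print(workload_time_s(100_000, 10_000, 60))  # ~70 s
```

The crossover point depends on your transaction volume, which is why "which chain is faster?" has no answer independent of the workload.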
For infrastructure providers and node operators, the distinction affects how they architect their systems. High-throughput chains produce more data per second, requiring more bandwidth, storage, and processing power from nodes. High-latency chains with probabilistic finality require more sophisticated reorg handling in data pipelines. Understanding the throughput and latency profile of each chain you support determines your hardware requirements, your data pipeline design, and your cost model.
What Affects Throughput
Throughput is primarily determined by three protocol parameters: block size (or gas limit), block time, and transaction complexity.
Block size defines the maximum amount of data or computation that can fit in a single block. Ethereum's gas limit caps the total computation per block. Bitcoin's weight limit caps the total data per block. Larger blocks can include more transactions, increasing throughput, but they also take longer to propagate across the network (increasing the risk of forks) and require more resources from nodes (potentially reducing decentralization).
Block time is how frequently new blocks are produced. Ethereum produces blocks every 12 seconds. Bitcoin every 10 minutes. Solana every 400 milliseconds. Shorter block times increase throughput because the network processes blocks more frequently, but they also increase the communication overhead between nodes and can introduce stability challenges.
Transaction complexity varies by chain and by transaction type. A simple ETH transfer uses 21,000 gas, while a complex DeFi interaction might use 500,000 gas or more. Chains that handle simpler transactions (or that optimize execution through parallelism, as Solana, Sui, and Monad do) can achieve higher TPS for a given block size and block time.
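The three parameters above combine into a simple TPS ceiling: block capacity divided by per-transaction cost, divided by block time. A sketch using Ethereum's gas model (a 30M gas limit and 12-second blocks are current mainnet values, but real throughput is lower because blocks mix transaction types):

```python
def max_tps(block_gas_limit: int, avg_tx_gas: int, block_time_s: float) -> float:
    """Upper bound on TPS for a gas-limited chain."""
    return block_gas_limit / avg_tx_gas / block_time_s

# Blocks filled entirely with simple 21,000-gas ETH transfers:
print(max_tps(30_000_000, 21_000, 12))   # ~119 TPS ceiling

# Blocks filled with 200,000-gas DeFi interactions:
print(max_tps(30_000_000, 200_000, 12))  # 12.5 TPS ceiling
```

This also shows why quoted TPS numbers are hard to compare across chains: a chain's "TPS" depends heavily on what kind of transaction is being counted.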
Consensus mechanism overhead also impacts throughput. Proof-of-work systems like Bitcoin devote significant resources to mining, limiting how quickly blocks can be produced safely. Proof-of-stake systems reduce this overhead but still require communication rounds between validators. BFT-style consensus (used by chains like Cosmos, Aptos, and Sui) can achieve faster finality but typically requires a smaller validator set, which has implications for decentralization.
What Affects Latency
Latency for an individual transaction has several components: mempool wait time (how long the transaction sits in the queue before being selected for a block), block production time (how frequently new blocks are created), propagation time (how long it takes the new block to reach all nodes), and finality time (how many additional blocks or epochs must pass before the transaction is considered irreversible).
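The components above are additive, which makes end-to-end latency easy to model. The figures below are illustrative placeholders, not measurements of any real chain:

```python
def end_to_end_latency(mempool_wait_s: float, block_time_s: float,
                       propagation_s: float, finality_s: float) -> float:
    """Sum the latency components for one transaction."""
    # Once selected, a transaction waits on average about half a
    # block time for the next block, on top of any queueing delay.
    inclusion = mempool_wait_s + block_time_s / 2
    return inclusion + propagation_s + finality_s

# Uncongested fast-finality chain: near-zero queue, quick finality.
print(end_to_end_latency(0.1, 1.0, 0.2, 1.0))  # ~1.8 s

# Congested probabilistic-finality chain: queueing and finality dominate.
print(end_to_end_latency(120, 12, 2, 768))     # ~896 s
```

Note how different components dominate in each case: under congestion, mempool wait and finality dwarf everything else, which is why fee bidding and finality model matter more than raw block time.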
Under normal conditions, mempool wait time is minimal because block capacity exceeds demand. During congestion, mempool wait time dominates latency as transactions compete for limited block space. Transactions with higher fees are selected faster; transactions with lower fees may wait through multiple block production cycles.
The finality model is the biggest latency differentiator between chains. Chains with instant (deterministic) finality, like those using Tendermint-based consensus, confirm transactions in a single round of consensus with no possibility of reversion. Chains with probabilistic finality, like Bitcoin and Ethereum, require multiple confirmations before a transaction is considered settled. The difference is dramatic: a transaction on a Tendermint chain is final in seconds, while a Bitcoin transaction is not reliably final for an hour.
Layer 2 networks add complexity to the latency picture. A transaction on Arbitrum or Base confirms quickly on the L2 (sub-second to seconds), but the underlying data is not settled on Ethereum L1 until the rollup posts its proof or batch, which can take minutes to hours. Depending on the security requirements of the application, the relevant latency might be L2 confirmation time (fast) or L1 settlement time (slow).
How QuickNode Helps Developers Navigate Both
QuickNode's infrastructure is optimized for both throughput-intensive and latency-sensitive workloads. The Core API delivers low-latency RPC access through a globally distributed node network across 80+ chains, with response times 2.5x faster than competitors. This directly reduces the infrastructure-contributed latency for every RPC call your application makes. For chains where latency is especially critical (like Solana's 400ms slots), QuickNode supports Yellowstone gRPC, which uses binary Protocol Buffers instead of JSON for faster serialization and lower overhead.
For throughput-intensive workloads, QuickNode Streams handles high-volume data ingestion regardless of the underlying chain's TPS. Whether you are indexing Ethereum at 15 TPS or Solana at thousands of TPS, Streams delivers complete block data to your destination with guaranteed delivery and configurable batching for optimal throughput. Dedicated Clusters provide isolated infrastructure for applications that need predictable performance under variable network conditions, ensuring that congestion-driven throughput spikes do not impact your data pipeline's reliability.