What causes blockchain congestion?
Tags: blockchain congestion, high gas fees
TL;DR: Blockchain congestion occurs when more transactions are submitted to the network than can fit in the next block. Every blockchain has a maximum amount of computation or data it can process per block, and when demand exceeds that limit, a backlog forms in the mempool (the waiting area for unconfirmed transactions). Congestion leads to higher transaction fees (as users compete for limited block space), longer confirmation times, and degraded user experience. The root causes include fixed block capacity, demand spikes from popular events, fee market dynamics, and the fundamental design tradeoffs that prioritize security and decentralization over raw throughput.
The Simple Explanation
A blockchain processes transactions in batches called blocks. Each block has a capacity limit. On Ethereum, this limit is expressed as a gas limit (currently around 30 million gas per block). On Bitcoin, it is a weight limit (4 million weight units per block). On Solana, it is a combination of compute units and account locks per slot. Regardless of the specific mechanism, every chain has a ceiling on how much work it can do per block.
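These ceilings translate directly into a hard cap on transactions per block. A minimal sketch of the arithmetic, using the Ethereum figures cited above (protocol parameters change over time, so treat them as illustrative):

```python
# Back-of-the-envelope block capacity using the figures cited above.
# These protocol parameters change over time; treat them as illustrative.

ETH_GAS_LIMIT = 30_000_000   # gas per Ethereum block (approximate)
ETH_TRANSFER_GAS = 21_000    # gas for a simple ETH transfer

max_transfers_per_block = ETH_GAS_LIMIT // ETH_TRANSFER_GAS
print(max_transfers_per_block)  # 1428 simple transfers fit in one block
```

Real blocks mix transfers with much heavier contract calls, so the practical count per block is far lower than this best case.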
When the number of transactions being submitted to the network is lower than the block capacity, everything runs smoothly. Transactions are included in the next block, fees are low, and confirmation is fast. When the number of transactions exceeds the block capacity, a queue forms. This queue is the mempool, a holding area maintained by each node where valid but unconfirmed transactions wait for inclusion in a future block.
The mempool is not first-come, first-served. It is a fee-based priority queue. Block producers (miners or validators) select the transactions that pay the highest fees because that maximizes their revenue. When the mempool is full, users who want their transactions confirmed quickly must outbid everyone else by offering a higher fee. This bidding war is what causes gas prices to spike during congestion. Users who are unwilling or unable to pay the elevated fees have their transactions stuck in the mempool for minutes, hours, or sometimes days.
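The fee-based priority queue described above can be modeled in a few lines. This is a toy sketch, not any client's actual mempool implementation; the class name, fee units, and tie-breaking rule are invented for illustration:

```python
import heapq

# Toy model of a fee-priority mempool: block producers pop the
# highest-fee transactions first. Names and numbers are illustrative.

class Mempool:
    def __init__(self):
        self._heap = []          # max-heap via negated fee
        self._counter = 0        # FIFO tie-break for equal fees

    def submit(self, tx_id, fee_gwei):
        heapq.heappush(self._heap, (-fee_gwei, self._counter, tx_id))
        self._counter += 1

    def build_block(self, capacity):
        """Take the `capacity` highest-fee transactions."""
        block = []
        while self._heap and len(block) < capacity:
            neg_fee, _, tx_id = heapq.heappop(self._heap)
            block.append((tx_id, -neg_fee))
        return block

pool = Mempool()
pool.submit("alice", 20)
pool.submit("bob", 55)
pool.submit("carol", 12)
pool.submit("dave", 40)

print(pool.build_block(capacity=2))  # [('bob', 55), ('dave', 40)]
print(pool.build_block(capacity=2))  # [('alice', 20), ('carol', 12)]
```

Note how carol's low-fee transaction only confirms once higher bidders have cleared: that is exactly the "stuck for minutes, hours, or days" behavior users see during congestion.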
Demand Spikes
The most visible cause of congestion is a sudden surge in demand for block space. These spikes are often triggered by specific events. A popular NFT mint can generate tens of thousands of transactions within minutes as collectors rush to secure limited-edition tokens. A DeFi protocol launch or airdrop claim attracts a flood of users interacting with new smart contracts simultaneously. A major market crash triggers a wave of liquidations, position closures, and panic selling as DeFi participants scramble to adjust their positions.
On Ethereum, some of the worst congestion events in the network's history were caused by specific applications. The CryptoKitties craze in December 2017 clogged the network for days. Yuga Labs' Otherside NFT mint in May 2022 caused gas prices to exceed 8,000 gwei, with some users paying thousands of dollars in fees for transactions that ultimately failed. Airdrop claims for tokens like Uniswap's UNI and Arbitrum's ARB created sustained congestion lasting hours as millions of eligible wallets rushed to claim.
Meme token trading creates recurring congestion on chains like Solana and Base, where a viral token launch can generate millions of swap transactions within hours. Unlike planned events (where users at least know congestion is coming), meme token surges are unpredictable and can overwhelm network capacity with little warning.
Fixed Block Capacity and Production Rate
The deeper cause of congestion is that block capacity and production rate are fixed by the protocol's design, while demand is variable and sometimes explosive. Ethereum produces a block every 12 seconds with a gas limit of roughly 30 million gas. A simple ETH transfer costs 21,000 gas. A Uniswap swap costs 150,000-300,000 gas. A complex DeFi interaction might cost 500,000 gas or more. Under ideal conditions, Ethereum can process roughly 15-30 transactions per second depending on the complexity of those transactions. When thousands of users are simultaneously trying to mint an NFT, swap a token, or claim an airdrop, 15-30 TPS is nowhere near enough.
Bitcoin faces the same fundamental constraint with even tighter limits. Blocks are produced every 10 minutes (compared to Ethereum's 12 seconds), and each block can hold approximately 2,000-3,000 transactions. This gives Bitcoin a throughput of roughly 5-7 TPS. During periods of high demand, Bitcoin's mempool can accumulate hundreds of thousands of unconfirmed transactions, with low-fee transactions waiting days for confirmation.
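The Ethereum TPS range quoted above falls out of three numbers: the block gas limit, the block time, and the average gas cost per transaction. A minimal sketch of that relationship (figures illustrative):

```python
# Rough TPS ceiling for Ethereum given the block gas limit, block time,
# and an assumed average gas cost per transaction (illustrative figures).

GAS_LIMIT = 30_000_000   # gas per block
BLOCK_TIME_S = 12        # seconds per block

def max_tps(avg_gas_per_tx):
    return GAS_LIMIT / avg_gas_per_tx / BLOCK_TIME_S

print(round(max_tps(21_000)))    # ~119 TPS if every tx were a simple transfer
print(round(max_tps(100_000)))   # ~25 TPS for a mixed workload
print(round(max_tps(300_000)))   # ~8 TPS if every tx were a heavy swap
```

The 15-30 TPS figure corresponds to a realistic mix of transaction complexities, not the theoretical best case of pure transfers.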
These capacity limits are not arbitrary. They exist to maintain decentralization. If blocks were larger or produced faster, fewer nodes could keep up with the data processing requirements, which would centralize the network around operators with expensive hardware. The tradeoff between throughput and decentralization is at the heart of blockchain design and is often called the blockchain trilemma (the difficulty of simultaneously achieving scalability, security, and decentralization).
MEV and Priority Gas Auctions
Maximal Extractable Value (MEV) contributes to congestion in a less obvious but significant way. MEV refers to the profit that block producers and searchers can extract by reordering, inserting, or censoring transactions within a block. When a profitable MEV opportunity appears (such as an arbitrage between two DEXes), multiple searchers compete to capture it by submitting transactions with progressively higher fees. This priority gas auction consumes block space and drives up fees for everyone, even users whose transactions have nothing to do with the MEV opportunity.
Sandwich attacks are a specific MEV pattern that directly impacts regular users. A searcher detects a large swap transaction in the mempool, places a buy order before it (front-running) and a sell order after it (back-running), profiting from the price impact the victim's transaction creates. These front-run and back-run transactions consume additional block space and increase congestion beyond what organic user demand would produce.
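The economics of a sandwich can be shown with a simplified constant-product (x*y = k) pool. This sketch ignores swap fees, slippage limits, and gas costs, and all reserve and trade sizes are invented for illustration, so real profits would be smaller:

```python
# Simplified constant-product (x*y = k) pool showing why a sandwich is
# profitable. Ignores swap fees and gas costs; all numbers are illustrative.

def swap_exact_in(reserve_in, reserve_out, amount_in):
    """Return (amount_out, new_reserve_in, new_reserve_out)."""
    k = reserve_in * reserve_out
    new_in = reserve_in + amount_in
    new_out = k / new_in
    return reserve_out - new_out, new_in, new_out

eth, usdc = 1_000.0, 2_000_000.0   # pool reserves: 1 ETH = 2,000 USDC

# 1. Front-run: attacker buys ETH with 50,000 USDC before the victim.
atk_eth, usdc, eth = swap_exact_in(usdc, eth, 50_000)

# 2. Victim's large 200,000 USDC buy pushes the ETH price up further.
_, usdc, eth = swap_exact_in(usdc, eth, 200_000)

# 3. Back-run: attacker sells the ETH from step 1 at the inflated price.
atk_usdc, eth, usdc = swap_exact_in(eth, usdc, atk_eth)

profit = atk_usdc - 50_000
print(f"attacker profit: {profit:.0f} USDC")  # positive (~10,089 here)
```

The attacker's profit comes entirely out of the victim's execution price, and all three transactions occupy block space that organic demand did not create.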
Layer 2 Solutions and Congestion Relief
Layer 2 networks (like Arbitrum, Base, Optimism, and zkSync) were designed in part to relieve congestion on Ethereum L1. By processing transactions on a separate chain and posting compressed summaries back to Ethereum, L2s dramatically increase the total throughput available to users. A transaction that costs $5 in gas on Ethereum L1 during congestion might cost $0.01 on an L2. However, L2s can also experience their own congestion during extreme demand spikes, and they inherit some limitations from the L1 they settle to.
Solana takes a different approach to congestion management with its local fee markets. Instead of a single global fee market where all transactions compete for the same block space, Solana attempts to isolate congestion to specific "hot" accounts. Transactions interacting with a congested contract (like a popular DEX pool) pay elevated fees, while transactions interacting with unrelated contracts are unaffected. This design reduces the collateral damage of congestion spikes, though it does not eliminate them entirely.
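The intuition behind local fee markets can be sketched as fee pricing keyed to per-account demand rather than global demand. The pricing rule below is invented for illustration and is not Solana's actual scheduler or fee formula:

```python
from collections import defaultdict

# Toy model of per-account ("local") fee markets: the priority fee needed
# to land a transaction scales with the backlog on the specific account it
# touches, not with global demand. The pricing rule is invented for
# illustration; it is not Solana's actual scheduler.

BASE_FEE = 5_000  # illustrative base fee in lamport-like units

def required_priority_fee(pending_by_account, account):
    """Fee grows with the queue on this account alone."""
    pending = pending_by_account[account]
    return BASE_FEE * (1 + pending)

pending = defaultdict(int)
pending["hot_dex_pool"] = 40    # congested meme-token pool
pending["quiet_program"] = 0    # unrelated program, no backlog

print(required_priority_fee(pending, "hot_dex_pool"))   # 205000
print(required_priority_fee(pending, "quiet_program"))  # 5000
```

Users touching the hot account bid against each other, while users of unrelated programs keep paying the base fee: that isolation is the "reduced collateral damage" described above.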
How Congestion Impacts Developers Building on QuickNode
For developers, congestion affects both transaction submission and data ingestion. On the submission side, transactions with insufficient gas fees get stuck in the mempool or are dropped entirely. Applications need robust fee estimation, retry logic, and nonce management to handle congested conditions gracefully. QuickNode's Core API provides reliable transaction submission with consistent performance even during peak congestion, and enhanced API methods help applications estimate appropriate gas prices based on current network conditions.
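One common retry pattern is geometric fee escalation: each resubmission bumps the fee enough to replace the stuck transaction, up to a safety cap. The 12.5% bump, attempt cap, and function name below are illustrative policy choices, not a QuickNode API; the actual send and estimate calls are left to your client library:

```python
# Sketch of fee-escalation retry logic for stuck transactions. The 12.5%
# bump per attempt and the cap are illustrative policy choices, not a
# QuickNode API; plug in your own send/estimate calls around this.

def escalate_fee(base_fee_wei, attempt, bump=0.125, cap_wei=500 * 10**9):
    """Fee for retry `attempt` (0-based), bumped geometrically and capped."""
    fee = int(base_fee_wei * (1 + bump) ** attempt)
    return min(fee, cap_wei)

# Example: 30 gwei starting fee escalated across four attempts.
start = 30 * 10**9
for attempt in range(4):
    print(attempt, escalate_fee(start, attempt))
```

A bump of at least ~10% per replacement matters in practice because many nodes reject a replacement transaction whose fee is not meaningfully higher than the original; the cap prevents runaway spending during a prolonged fee spike.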
On the data ingestion side, congestion means more transactions per block, larger blocks, more events to process, and higher data volumes. Polling-based architectures struggle during congestion because the volume of data per block increases while the polling interval remains fixed, creating a processing backlog. QuickNode Streams handles congestion gracefully because its push-based architecture scales with block size. Whether a block contains 100 transactions or 1,000 transactions, Streams delivers the complete data to your destination with the same guaranteed delivery and ordering. Configurable batching and compression help manage the increased data volume during sustained congestion periods.