
Monitor High-Volume Liquidity Pools with CoinPaprika and Yellowstone Geyser

Updated on May 30, 2025 · 15 min read

Overview

Tracking activity in high-volume liquidity pools can be essential for automated trading strategies on Solana. By identifying pools with significant trading volume and monitoring their transaction flow in real-time, traders can spot opportunities for arbitrage, market making, or trend following. This guide demonstrates how to combine QuickNode's CoinPaprika Price & Market Data API with the Yellowstone Geyser gRPC add-on to build a powerful pool monitoring system.

What You Will Do

We'll build a Rust application that:

  • Fetches the highest volume pools for a specific token
  • Subscribes to real-time transaction updates for those pools
  • Processes incoming transactions for analysis

What You Will Need

Dependency    Version
rustc         1.85.0
cargo         1.85.0
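You can verify your installed toolchain versions with:

rustc --version && cargo --version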

Understanding the Architecture

Before diving into the code, let's understand how these services work together:

CoinPaprika API provides comprehensive market data including:

  • Pool volume and liquidity information
  • Token price data
  • Historical trading metrics
  • DEX-specific pool data

Yellowstone Geyser gRPC offers:

  • Real-time streaming of Solana blockchain data
  • Low-latency transaction notifications
  • Filtered subscriptions based on accounts
  • Commitment level control

By combining these services, we can identify important pools and monitor their activity with minimal latency.

Let's build!

Project Setup

First, create a new Rust project and add the required dependencies:

cargo new pool-monitor && cd pool-monitor

Update your Cargo.toml with the following dependencies:

[package]
name = "pool-monitor"
version = "0.1.0"
edition = "2021"

[dependencies]
reqwest = { version = "0.12", features = ["json"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
tokio = { version = "1.0", features = ["full"] }
anyhow = "1.0"
dotenv = "0.15"
bs58 = "0.5.1"
futures = "0.3"
log = "0.4"
env_logger = "0.11"
tonic = { version = "0.12", features = ["tls", "tls-roots"] }
yellowstone-grpc-client = "6.1.0"
yellowstone-grpc-proto = "6.1.0"

Create a .env file in your project root:

# QuickNode endpoint with CoinPaprika add-on enabled
QUICKNODE_URL=https://your-endpoint-name.solana-mainnet.quiknode.pro/your-token

# Yellowstone Geyser credentials
GEYSER_ENDPOINT=https://your-endpoint-name.solana-mainnet.quiknode.pro:10000
GEYSER_AUTH_TOKEN=your-auth-token

# Configuration
TARGET_TOKEN_ADDRESS=DezXAZ8z7PnrnRJjz3wXBoRgixCa6xjnB7YaB1pPB263
MONITORED_POOL_COUNT=3

Replace the placeholder values with your actual QuickNode endpoint URL, Yellowstone gRPC endpoint, and authentication token. You can find information on configuring your endpoint in our docs.
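The sample TARGET_TOKEN_ADDRESS above is the BONK token mint (the same address we'll hardcode as a fallback constant below), and MONITORED_POOL_COUNT controls how many of the top pools the monitor subscribes to.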

Building the Pool Monitor

Let's break down the implementation into manageable sections.

Importing Dependencies

First, we need to import the necessary crates and modules in our main.rs file:

use {
    anyhow::{Context, Result},
    bs58,
    dotenv::dotenv,
    futures::{sink::SinkExt, stream::StreamExt},
    log::{error, info, warn},
    reqwest,
    serde::{Deserialize, Serialize},
    std::{collections::HashMap, env},
    tokio,
    tonic::{service::Interceptor, transport::ClientTlsConfig, Status},
    yellowstone_grpc_client::GeyserGrpcClient,
    yellowstone_grpc_proto::{
        geyser::SubscribeUpdate,
        prelude::{
            subscribe_update::UpdateOneof, CommitmentLevel, SubscribeRequest,
            SubscribeRequestFilterTransactions,
        },
    },
};

Define Constants

Add constants for the default log level and the BONK token address:

const RUST_LOG_LEVEL: &str = "info";
const BONK_TOKEN_ADDRESS: &str = "DezXAZ8z7PnrnRJjz3wXBoRgixCa6xjnB7YaB1pPB263";

Setup Logging

Next, we need to set up logging to capture important events and errors:

fn setup_logging() {
    // Only set a default if RUST_LOG isn't already set, so the
    // environment variable can still override the log level.
    if env::var("RUST_LOG").is_err() {
        env::set_var("RUST_LOG", RUST_LOG_LEVEL);
    }
    env_logger::init();
}

This function initializes the logger with a default log level, which can be overridden by the RUST_LOG environment variable.
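For example, running RUST_LOG=debug cargo run enables more verbose output for a single run without changing the code.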

Data Structures

Next, we'll define the data structures to represent pool information from the CoinPaprika API:

#[derive(Debug, Deserialize)]
pub struct PoolsResponse {
    pub pools: Vec<Pool>,
}

#[derive(Debug, Deserialize, Clone)]
pub struct Pool {
    pub id: String,
    pub dex_name: String,
    pub volume_usd: f64,
    pub tokens: Vec<Token>,
}

#[derive(Debug, Deserialize, Clone)]
pub struct Token {
    pub symbol: String,
}

#[derive(Debug, Serialize)]
pub struct PoolsQuery {
    pub limit: u32,
    pub sort: String,
    pub order_by: String,
}

These structures map directly to the CoinPaprika API response format. The Pool struct contains:

  • id: The pool's on-chain address
  • dex_name: Which DEX hosts this pool (Raydium, Orca, etc.)
  • volume_usd: 24-hour trading volume in USD
  • tokens: The token pair in this pool

Source: DexPaprika Docs

Next, let's define methods for the Pool struct to help us format the token pair and manage pool metadata:

impl Pool {
    pub fn token_pair(&self) -> String {
        if self.tokens.len() >= 2 {
            format!("{}/{}", self.tokens[0].symbol, self.tokens[1].symbol)
        } else {
            "Unknown pair".to_string()
        }
    }
}

#[derive(Debug, Clone)]
pub struct PoolMetadata {
    pools: Vec<Pool>,
}

impl PoolMetadata {
    pub fn new(pools: Vec<Pool>) -> Self {
        Self { pools }
    }

    pub fn get_pool_ids(&self) -> Vec<String> {
        self.pools.iter().map(|p| p.id.clone()).collect()
    }
}
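As a quick sanity check that these types line up, here is a minimal sketch that deserializes a hand-written payload (the field values are illustrative, not real API output) and exercises the helpers:

// Illustrative test fixture; values are made up for demonstration only.
fn demo_parse() -> anyhow::Result<()> {
    let sample = r#"{
        "pools": [{
            "id": "11111111111111111111111111111111",
            "dex_name": "Orca",
            "volume_usd": 1675994.0,
            "tokens": [{"symbol": "SOL"}, {"symbol": "Bonk"}]
        }]
    }"#;

    let parsed: PoolsResponse = serde_json::from_str(sample)?;
    assert_eq!(parsed.pools[0].token_pair(), "SOL/Bonk");

    let metadata = PoolMetadata::new(parsed.pools);
    assert_eq!(metadata.get_pool_ids().len(), 1);
    Ok(())
}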

QuickNode Client for CoinPaprika

Next, we'll create a client to interact with the CoinPaprika API:

#[derive(Debug, Clone)]
pub struct QuickNodeClient {
    client: reqwest::Client,
    base_url: String,
}

impl QuickNodeClient {
    pub fn from_env() -> Result<Self> {
        let base_url = env::var("QUICKNODE_URL").context("Missing QUICKNODE_URL")?;
        Ok(Self {
            client: reqwest::Client::new(),
            base_url,
        })
    }

    pub async fn get_top_pools_by_volume(
        &self,
        token_address: &str,
        limit: u32,
    ) -> Result<Vec<Pool>> {
        let query = PoolsQuery {
            limit,
            sort: "desc".to_string(),
            order_by: "volume_usd".to_string(),
        };

        let url = format!(
            "{}/addon/912/networks/solana/tokens/{}/pools",
            self.base_url, token_address
        );

        for attempt in 1..=3 {
            let response = self.client.get(&url).query(&query).send().await;

            match response {
                Ok(resp) if resp.status().is_success() => {
                    let json_text = resp.text().await.context("Failed to read response body")?;
                    let pools_response: PoolsResponse = serde_json::from_str(&json_text)
                        .context("Failed to parse JSON response")?;
                    return Ok(pools_response.pools);
                }
                Ok(resp) if attempt < 3 => {
                    let status = resp.status().as_u16();
                    warn!(
                        "Request failed with status {}, retrying in {}s... (attempt {}/3)",
                        status, attempt, attempt
                    );
                    tokio::time::sleep(tokio::time::Duration::from_secs(attempt)).await;
                    continue;
                }
                Ok(resp) => {
                    anyhow::bail!("API error: {}", resp.status());
                }
                Err(e) if attempt < 3 => {
                    warn!("Network error, retrying: {}", e);
                    tokio::time::sleep(tokio::time::Duration::from_secs(attempt)).await;
                    continue;
                }
                Err(e) => return Err(e).context("Request failed"),
            }
        }

        unreachable!()
    }
}

This client:

  • Constructs the proper API endpoint URL
  • Implements retry logic with a simple linear backoff (sleeping 1s, then 2s, between attempts)
  • Queries pools sorted by volume in descending order to fetch the highest-volume pools
  • Returns parsed pool data, or an error if all attempts fail
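If you'd prefer true exponential backoff, a small hypothetical helper like this (2s, 4s, 8s, ... rather than 1s, 2s) is an easy drop-in for the sleep calls above:

// Hypothetical exponential-backoff helper: sleeps 2^attempt seconds.
async fn backoff(attempt: u64) {
    let delay = tokio::time::Duration::from_secs(2u64.pow(attempt as u32));
    tokio::time::sleep(delay).await;
}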

Fetching High-Volume Pools

Now let's implement the logic to fetch and display top pools:

async fn fetch_pools() -> Result<PoolMetadata> {
    let target_address =
        env::var("TARGET_TOKEN_ADDRESS").unwrap_or_else(|_| BONK_TOKEN_ADDRESS.to_string());
    let pool_count: u32 = env::var("MONITORED_POOL_COUNT")
        .unwrap_or_else(|_| "3".to_string())
        .parse()
        .unwrap_or(3);

    info!("Fetching top {} target pools by volume...", pool_count);

    let client = QuickNodeClient::from_env().context("Failed to create QuickNode client")?;
    let pools = client
        .get_top_pools_by_volume(&target_address, pool_count)
        .await
        .context("Failed to fetch pools")?;

    if pools.is_empty() {
        anyhow::bail!("No pools found for target token");
    }

    for (i, pool) in pools.iter().enumerate() {
        info!(
            "{}. {} - {} (${:.0})",
            i + 1,
            pool.dex_name,
            pool.token_pair(),
            pool.volume_usd
        );
    }

    Ok(PoolMetadata::new(pools))
}

This function:

  • Reads configuration from environment variables
  • Fetches the top N pools by volume (3 by default, configurable via MONITORED_POOL_COUNT)
  • Displays pool information for verification
  • Returns pool metadata for further processing

Setting Up Yellowstone Geyser

With our pools identified, we need to set up the Geyser client for real-time monitoring:

async fn create_geyser_client() -> Result<GeyserGrpcClient<impl Interceptor>> {
    let endpoint = env::var("GEYSER_ENDPOINT").context("Missing GEYSER_ENDPOINT")?;
    let auth_token = env::var("GEYSER_AUTH_TOKEN").context("Missing GEYSER_AUTH_TOKEN")?;

    info!("Connecting to gRPC endpoint...");

    let client = GeyserGrpcClient::build_from_shared(endpoint)?
        .x_token(Some(auth_token))?
        .tls_config(ClientTlsConfig::new().with_native_roots())?
        .connect()
        .await?;

    Ok(client)
}

The Geyser client setup:

  • Uses TLS for secure connections
  • Authenticates with your QuickNode token
  • Returns a connected client ready for subscriptions

If you are new to Yellowstone, check out our Yellowstone Geyser documentation or Yellowstone gRPC (Rust) Guide for more details on how to configure and use it.

Subscribing to Pool Transactions

Now we'll subscribe to transactions involving our target pools:

async fn subscribe_to_pools(
    client: &mut GeyserGrpcClient<impl Interceptor>,
    pool_ids: Vec<String>,
) -> Result<impl StreamExt<Item = Result<SubscribeUpdate, Status>>> {
    let (mut tx, rx) = client.subscribe().await?;

    info!("Setting up filters for {} pools", pool_ids.len());

    let mut accounts_filter = HashMap::new();
    accounts_filter.insert(
        "bonk_monitor".to_string(),
        SubscribeRequestFilterTransactions {
            account_include: pool_ids,
            account_exclude: vec![],
            account_required: vec![],
            vote: Some(false),
            failed: Some(false),
            signature: None,
        },
    );

    tx.send(SubscribeRequest {
        transactions: accounts_filter,
        commitment: Some(CommitmentLevel::Processed as i32),
        ..Default::default()
    })
    .await?;

    Ok(rx)
}

This subscription:

  • Filters for transactions that include our pool addresses
  • Excludes vote and failed transactions to reduce noise
  • Uses "Processed" commitment for faster updates
  • Returns a stream of transaction updates
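If you'd rather trade a little latency for stronger finality guarantees, you can subscribe at Confirmed commitment instead. A minimal variation of the request above:

// Confirmed commitment: updates arrive slightly later than Processed, but are
// far less likely to belong to a fork that later gets dropped.
tx.send(SubscribeRequest {
    transactions: accounts_filter,
    commitment: Some(CommitmentLevel::Confirmed as i32),
    ..Default::default()
})
.await?;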

Processing Transaction Stream

Finally, let's process the incoming transaction stream:

async fn process_transaction_stream(
    mut stream: impl StreamExt<Item = Result<SubscribeUpdate, Status>> + Unpin,
) -> Result<()> {
    while let Some(message) = stream.next().await {
        match message {
            Ok(msg) => handle_update(msg),
            Err(e) => {
                error!("Stream error: {:?}", e);
                break;
            }
        }
    }
    Ok(())
}

fn handle_update(update: SubscribeUpdate) {
    if let Some(UpdateOneof::Transaction(transaction_update)) = update.update_oneof {
        if let Some(tx_info) = &transaction_update.transaction {
            let tx_id = bs58::encode(&tx_info.signature).into_string();
            info!("Pool transaction: {}", tx_id);

            // Here you would implement your trading logic:
            // - Parse instruction data
            // - Calculate trade amounts
            // - Check for arbitrage opportunities
            // - Execute counter-trades
        }
    }
}

The stream processor:

  • Continuously receives transaction updates
  • Extracts transaction signatures
  • Provides a hook for implementing trading logic
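The hook in handle_update is where your own logic goes. As a hedged starting point (this assumes the standard yellowstone-grpc-proto transaction layout; adjust the field access if your proto version differs), here is a sketch that determines which monitored pools a transaction touches by scanning its static account keys:

// Sketch: find which monitored pools appear in a transaction's static account
// keys. Accounts loaded via address lookup tables won't show up here, so treat
// this as a heuristic rather than an exhaustive check.
fn pools_touched(update: &SubscribeUpdate, pool_ids: &[String]) -> Vec<String> {
    let mut touched = Vec::new();
    if let Some(UpdateOneof::Transaction(tx_update)) = &update.update_oneof {
        let keys = tx_update
            .transaction
            .as_ref()
            .and_then(|info| info.transaction.as_ref())
            .and_then(|tx| tx.message.as_ref())
            .map(|msg| msg.account_keys.as_slice())
            .unwrap_or(&[]);
        for key in keys {
            let key_b58 = bs58::encode(key).into_string();
            if pool_ids.contains(&key_b58) {
                touched.push(key_b58);
            }
        }
    }
    touched
}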

Putting It All Together

Here's the main function that orchestrates everything:

#[tokio::main]
async fn main() -> Result<()> {
    dotenv().ok();
    setup_logging();

    info!("🚀 Starting Pool Monitor");

    // Step 1: Fetch high-volume pools
    let pool_metadata = fetch_pools().await?;

    // Step 2: Connect to Geyser
    let mut client = create_geyser_client().await?;

    // Step 3: Subscribe to pool transactions
    let pool_ids = pool_metadata.get_pool_ids();
    let stream = subscribe_to_pools(&mut client, pool_ids).await?;

    info!("👂 Listening for transactions...");

    // Step 4: Process transactions
    process_transaction_stream(stream).await?;

    Ok(())
}
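One caveat: process_transaction_stream returns as soon as the stream errors or closes, so main exits. For a long-running monitor, you might wrap steps 2-4 in a simple reconnect loop, sketched here:

// Sketch: reconnect-and-resubscribe loop for long-running monitors.
loop {
    let mut client = create_geyser_client().await?;
    let stream = subscribe_to_pools(&mut client, pool_metadata.get_pool_ids()).await?;
    process_transaction_stream(stream).await?;
    warn!("Stream closed; reconnecting in 5s...");
    tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
}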

Running the Monitor

To run your pool monitor, execute the following command in your terminal:

cargo run

Ensure your .env file is correctly configured with your QuickNode endpoint and Yellowstone Geyser credentials. The monitor will fetch the top BONK pools, subscribe to their transaction streams, and log updates as they occur. If all is set up correctly, you should see output similar to:

[2025-05-29T17:32:11Z INFO pool_monitor] 🚀 Starting Pool Monitor
[2025-05-29T17:32:11Z INFO pool_monitor] Fetching top 3 target pools by volume...
[2025-05-29T17:32:13Z INFO pool_monitor] 1. Orca - SOL/Bonk ($1675994)
[2025-05-29T17:32:13Z INFO pool_monitor] 2. Raydium CLMM - SOL/Bonk ($1373222)
[2025-05-29T17:32:13Z INFO pool_monitor] 3. Meteora - Bonk/SOL ($1316065)
[2025-05-29T17:32:13Z INFO pool_monitor] Connecting to gRPC endpoint...
[2025-05-29T17:32:13Z INFO pool_monitor] Setting up filters for 3 pools
[2025-05-29T17:32:13Z INFO pool_monitor] 👂 Listening for transactions...
[2025-05-29T17:32:22Z INFO pool_monitor] Pool transaction: 4ZMQyEM...
[2025-05-29T17:32:23Z INFO pool_monitor] Pool transaction: 3ztQfCE...
[2025-05-29T17:32:23Z INFO pool_monitor] Pool transaction: rKkM21m...

Nice work!

Extending the Monitor

This basic monitor provides a foundation for more sophisticated trading strategies. Here are some ideas for enhancements you could implement:

1. Transaction Analysis

Parse instruction data to determine:

  • Trade direction (buy/sell)
  • Trade size and price impact
  • Slippage tolerance
  • Fee structure

Check out our guides on parsing Solana instructions to make this easier.

2. Arbitrage Detection

Compare prices across pools to identify:

  • Cross-DEX arbitrage opportunities
  • Triangular arbitrage paths
  • Flash loan opportunities
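For instance, a toy cross-DEX check (prices and threshold are illustrative) could flag a pool pair when quoted prices diverge enough to cover fees:

// Toy spread check: returns true when the relative price difference between
// two pools exceeds a threshold (e.g., 0.005 for 0.5%).
fn spread_exceeds(price_a: f64, price_b: f64, threshold: f64) -> bool {
    let mid = (price_a + price_b) / 2.0;
    (price_a - price_b).abs() / mid > threshold
}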

3. Market Making

Implement strategies for:

  • Providing liquidity at optimal price ranges
  • Rebalancing positions based on flow
  • Dynamic fee adjustment

4. Data Persistence

Store transaction data for:

  • Historical analysis
  • Strategy backtesting
  • Performance tracking
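As a minimal sketch (a production system would likely use a proper database), you could append each observed signature with a timestamp to a local CSV file:

use std::{fs::OpenOptions, io::Write, time::{SystemTime, UNIX_EPOCH}};

// Append one observed transaction signature to a local CSV log.
fn record_transaction(tx_id: &str) -> anyhow::Result<()> {
    let ts = SystemTime::now().duration_since(UNIX_EPOCH)?.as_secs();
    let mut file = OpenOptions::new()
        .create(true)
        .append(true)
        .open("transactions.csv")?;
    writeln!(file, "{},{}", ts, tx_id)?;
    Ok(())
}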

Best Practices

When building production trading systems:

  1. Error Handling: Implement comprehensive error recovery
  2. Rate Limiting: Understand your endpoint's Rate Limits and implement client-side throttling to avoid hitting them (see the sketch after this list)
  3. Monitoring: Track system health and performance metrics
  4. Security: Never expose API keys in code or logs
  5. Testing: Thoroughly test on devnet before mainnet deployment. Start with small amounts when deploying real trading strategies, and monitor closely for unexpected behavior/edge cases.
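For item 2, a crude client-side limiter is easy to build on tokio's interval timer (a sketch; dedicated rate-limiting crates offer more sophisticated policies):

use tokio::time::{interval, Duration, Interval};

// Crude client-side rate limiter: each acquire() awaits the next timer tick,
// so calls are spaced at least `period` apart.
struct RateLimiter {
    ticker: Interval,
}

impl RateLimiter {
    fn new(period: Duration) -> Self {
        Self { ticker: interval(period) }
    }

    async fn acquire(&mut self) {
        self.ticker.tick().await;
    }
}

For example, RateLimiter::new(Duration::from_millis(200)) spaces calls at roughly five requests per second.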

Conclusion

By combining CoinPaprika's Solana Token Price & Liquidity Pools API and QuickNode's Yellowstone Geyser add-ons, you can build powerful pool monitoring systems with minimal infrastructure overhead. This approach provides:

  • Real-time visibility into high-volume pools
  • Low-latency transaction notifications
  • Reliable data feeds for trading decisions

Whether you're building arbitrage bots, market makers, or analytics tools, this foundation gives you the data pipeline needed for sophisticated trading strategies on Solana.


We ❤️ Feedback!

Let us know if you have any feedback or requests for new topics. We'd love to hear from you.
