Overview
Getting structured market data from Hyperliquid (volume rankings, liquidation summaries, funding rate snapshots) usually means building and maintaining your own indexing pipeline. SQL Explorer gives you direct SQL access to billions of rows of indexed Hyperliquid data and lets you call any query as a REST endpoint. In this guide, you'll build a Python bot that queries Hyperliquid tables, composes the results into a market digest, and delivers it to a Telegram channel on a daily schedule.
- SQL Explorer turns any SQL query into a REST endpoint: write SQL, POST it, get JSON back. No indexer, no pipeline, no additional infrastructure
- This guide uses some of the query patterns behind Quicknode's weekly Hyperliquid recap on X
- Full source code (Python and TypeScript) available in qn-guide-examples
What You Will Learn
- How SQL Explorer's REST API works and how to call it programmatically
- How to choose the right Hyperliquid tables for different analytical questions
- Key query patterns: time-window filtering, numeric casting with toFloat64(), snapshot subqueries, and deduplication filters like user = liquidated_user
- How to compose multi-table query results into an automated report
What You Will Need
- A Quicknode account with an API key that has SQL Explorer access (available on paid plans)
- Python 3.10+ or Node.js 20+ installed locally
- (Optional) A Telegram bot token and chat ID (create a bot via @BotFather, add it to a channel, and retrieve the chat ID from the getUpdates endpoint)
- Familiarity with SQL and REST APIs
If you use AI coding agents, point them at the SQL Explorer documentation and schema reference. You can also install the Quicknode Skill, which provides knowledge of every Quicknode product, including SQL Explorer.
With table schemas and query patterns as context, your agent can generate custom SQL queries for any Hyperliquid analysis.
What is SQL Explorer?
SQL Explorer is a REST API that indexes onchain data into queryable tables and exposes a single endpoint that accepts any SQL query. Every table, column, and pre-built query is documented in the schema reference and pre-built queries pages.
The endpoint is:
POST https://api.quicknode.com/sql/rest/v1/query
Every request takes the same shape:
{
"query": "SELECT ...",
"clusterId": "hyperliquid-core-mainnet"
}
An example cURL request:
curl -X POST 'https://api.quicknode.com/sql/rest/v1/query' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-H 'x-api-key: YOUR_API_KEY' \
-d '{
"query": "SELECT ...",
"clusterId": "hyperliquid-core-mainnet"
}'
The response includes your data, column metadata, and performance stats:
{
"data": [
{
"day": "2026-04-07",
"total_volume_usd": "24100000000",
"total_fills": "12453211",
"active_traders": "60142"
}
],
"meta": [
{ "name": "day", "type": "Date" },
{ "name": "total_volume_usd", "type": "Decimal(38, 18)" }
],
"rows": 7,
"statistics": {
"elapsed": 0.034,
"rows_read": 7,
"bytes_read": 1456
}
}
Key things to note: data is an array of row objects; all numeric values come back as strings, so you will need to cast them in your code; and statistics reports how long the query took and how much data it scanned, which lets you monitor query cost and optimize over time.
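Since numeric fields arrive as strings, a typical first step in application code is to cast them before doing arithmetic. A minimal sketch (the sample row below mirrors the response above and is illustrative):

```python
from decimal import Decimal

# Example row as returned by SQL Explorer: every numeric value is a string
row = {
    "day": "2026-04-07",
    "total_volume_usd": "24100000000",
    "total_fills": "12453211",
}

# Use Decimal for money values to avoid floating-point precision loss
volume = Decimal(row["total_volume_usd"])
fills = int(row["total_fills"])

print(f"{volume / Decimal('1e9'):.1f}B USD across {fills:,} fills")
```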
Note that this guide focuses on the REST API for programmatic access. If you want to explore the data interactively first, open the SQL Explorer dashboard and try running the queries below before putting them in code.
Querying the Hyperliquid Dataset
SQL Explorer currently indexes 35+ Hyperliquid tables covering trading data, market snapshots, funding, liquidations, bridge activity, staking, and more. The bot in this guide uses four of them, but you can mix and match any tables to answer your own questions. Understanding how to approach these tables matters more than the specific bot: once you know the patterns, you can adapt them to any Hyperliquid analysis.
Here are some of the most commonly used tables:
| Table | What it's used for |
|---|---|
| hyperliquid_metrics_overview | Pre-aggregated daily platform totals: volume, fills, active traders, fees, liquidation count |
| hyperliquid_trades | Every individual trade; use for volume rankings and per-coin activity |
| hyperliquid_fills | Every fill including liquidations; filter with is_liquidation = 1 for forced closures |
| hyperliquid_perpetual_market_contexts | Point-in-time market snapshots: funding rate, open interest, mark price, oracle price |
| hyperliquid_funding_summary_hourly | Pre-aggregated hourly funding rates per coin; faster than scanning raw fill data |
| hyperliquid_clearinghouse_states | Account-level position and margin snapshots; use for portfolio and PnL tracking |
Platform Overview
Table: hyperliquid_metrics_overview | Sort key: day
This is a pre-aggregated daily metrics table. Instead of computing volume by scanning billions of rows in hyperliquid_trades, this table gives you platform-wide totals per day: volume, fills, active traders, fees, liquidation count, and liquidation volume. For any "big picture" question, start here.
SELECT
day,
total_volume_usd,
total_fills,
active_traders,
total_fees,
liquidation_count,
liquidation_volume_usd
FROM hyperliquid_metrics_overview
WHERE day >= today() - INTERVAL 7 DAY
ORDER BY day ASC
In the weekly recap, this query powers the opening summary: total weekly volume, peak daily traders, and total liquidation figures. You can change the interval to INTERVAL 30 DAY for a monthly view or filter to a specific date range to analyze a particular event.
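In application code, the seven daily rows can then be reduced into the weekly headline figures. A hypothetical sketch with made-up sample rows shaped like the query result:

```python
# Sample rows shaped like the hyperliquid_metrics_overview result (values illustrative)
rows = [
    {"day": "2026-04-07", "total_volume_usd": "15200000000", "active_traders": "60142"},
    {"day": "2026-04-08", "total_volume_usd": "13800000000", "active_traders": "58990"},
]

# Weekly volume: sum the daily totals (cast from string first)
weekly_volume = sum(float(r["total_volume_usd"]) for r in rows)
# Peak daily traders: the maximum across days, not the sum
peak_traders = max(int(r["active_traders"]) for r in rows)

print(f"Volume: ${weekly_volume / 1e9:.1f}B, peak traders: {peak_traders:,}")
```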
Top Assets by Volume
Table: hyperliquid_trades | Sort keys: block_number, trade_id
This is the raw trades table, every individual trade on Hyperliquid. The query aggregates by coin to rank assets by 24-hour USD volume. The toFloat64() casts are required because price and size are stored as high-precision Decimal types, and arithmetic operations need explicit conversion.
SELECT
coin,
count() AS trades,
round(sum(toFloat64(price) * toFloat64(size)), 2) AS volume_usd,
countDistinct(buyer_address) AS unique_buyers
FROM hyperliquid_trades
WHERE timestamp >= now() - INTERVAL 24 HOUR
GROUP BY coin
ORDER BY volume_usd DESC
LIMIT 10
The countDistinct(buyer_address) column gives a rough measure of market participation. To isolate HIP-3 assets specifically, add AND (coin LIKE 'xyz:%' OR coin LIKE 'cash:%') to the WHERE clause.
Time-window filters like timestamp >= now() - INTERVAL 24 HOUR enable partition pruning. Queries that filter on time columns run significantly faster than full table scans.
Liquidation Recap
Table: hyperliquid_fills | Sort keys: block_number, tid, user
Every fill on Hyperliquid is recorded here, including liquidations. The is_liquidation = 1 flag marks forced closures. The critical filter is user = liquidated_user: each liquidation creates two fill records (one for the liquidated party, one for the counterparty who took the other side of the trade). Without this condition, you double-count every liquidation event.
SELECT
coin,
countDistinct(liquidated_user) AS users_rekt,
count() AS liq_count,
round(sum(toFloat64(price) * toFloat64(size)), 2) AS liq_volume_usd
FROM hyperliquid_fills
WHERE is_liquidation = 1
AND user = liquidated_user
AND time >= now() - INTERVAL 24 HOUR
GROUP BY coin
ORDER BY liq_volume_usd DESC
LIMIT 10
To break liquidations down by direction, add side to the SELECT and GROUP BY. You can also query individual liquidation events (remove the GROUP BY, add liquidated_user and liquidation_mark_price to the SELECT) to find the largest single liquidations.
Funding Rate Extremes
Table: hyperliquid_perpetual_market_contexts | Sort keys: coin, polled_at
This is a snapshot table that captures the state of every perpetual market at regular intervals: funding rate, mark price, oracle price, and open interest. The subquery SELECT max(polled_at) retrieves the latest snapshot, giving you a point-in-time view across all markets.
SELECT
coin,
round(toFloat64(funding) * 8760 * 100, 2) AS annualized_rate_pct,
round(toFloat64(open_interest) * toFloat64(mark_px), 2) AS oi_usd,
round(toFloat64(mark_px), 4) AS mark_price
FROM hyperliquid_perpetual_market_contexts
WHERE polled_at = (
SELECT max(polled_at)
FROM hyperliquid_perpetual_market_contexts
)
ORDER BY abs(toFloat64(funding)) DESC
LIMIT 10
The annualized rate formula: the raw funding field is a per-hour rate, so multiplying by 8,760 (hours per year) and 100 converts it to a yearly percentage. Sorting by abs(funding) catches both extremes: a large negative rate means longs are paying shorts (bearish positioning), while a large positive rate means shorts are paying longs.
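As a quick sanity check of the annualization, a per-hour funding rate of 0.0001 (0.01% per hour) works out as follows:

```python
funding_per_hour = 0.0001   # raw per-hour funding rate, as stored in the table
hours_per_year = 8760       # 24 hours * 365 days

# Same formula as the SQL: funding * 8760 * 100, rounded to 2 decimals
annualized_pct = round(funding_per_hour * hours_per_year * 100, 2)
print(annualized_pct)  # 87.6
```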
Always pair funding rates with open interest (open_interest * mark_px for USD value). A -800% annualized rate on $50K of open interest is noise. The same rate on $13M of open interest (like the recap's TRUMP finding) is a meaningful signal.
To track how funding evolves over time instead of viewing a single snapshot, use the hyperliquid_funding_summary_hourly table, which provides pre-aggregated hourly funding rate averages per coin.
Building the Intelligence Bot
The queries above are the analytical core. The bot itself is straightforward: it calls each query through the REST API, formats the results, and sends them to a messaging platform such as Telegram or Discord. Clone the repository to get the full implementation:
git clone https://github.com/quiknode-labs/qn-guide-examples.git
cd qn-guide-examples/sql-explorer/hyperliquid-intel-bot
python3 -m venv .venv && source .venv/bin/activate # use "python" if python3 is not found on your system
pip install -r requirements.txt
cp .env.example .env
Fill in your .env with your Quicknode API key, Telegram bot token, and chat ID:
- QUICKNODE_API_KEY: Copy from dashboard.quicknode.com/api-keys
- TELEGRAM_BOT_TOKEN: Create a bot via @BotFather and copy the token it provides
- TELEGRAM_CHAT_ID: This must be a numeric ID. Add your bot to the channel or group (or send it a DM), send any message to it, then open https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates in your browser and find "chat": {"id": ...} in the response. That number is your chat ID (e.g. -1001234567890 for a channel/group)
QUICKNODE_API_KEY=your_quicknode_api_key
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_CHAT_ID=your_telegram_chat_id
Never commit your .env file to version control. Add it to .gitignore.
hyperliquid-intel-bot/
├── /typescript/        # TypeScript version of the bot with identical functionality
├── bot.py              # entry point, Telegram delivery
├── queries.py          # SQL definitions and REST API calls
├── formatter.py        # digest formatting
├── .env.example
└── requirements.txt
The SQL-to-REST Bridge
The key function in the project is run_query() in queries.py. This is the bridge between SQL and your application: it sends any SQL string to the REST API and returns the parsed result rows.
import os
import requests
from dotenv import load_dotenv
load_dotenv()
SQL_EXPLORER_URL = "https://api.quicknode.com/sql/rest/v1/query"
QUICKNODE_API_KEY = os.environ.get("QUICKNODE_API_KEY")
if not QUICKNODE_API_KEY:
raise ValueError("Set QUICKNODE_API_KEY in your .env file before running the bot.")
CLUSTER_ID = "hyperliquid-core-mainnet"
def run_query(sql: str) -> list[dict]:
    """Send a SQL query to Quicknode SQL Explorer, return the result rows."""
    resp = requests.post(
        SQL_EXPLORER_URL,
        headers={
            "x-api-key": QUICKNODE_API_KEY,
            "Content-Type": "application/json",
        },
        json={"query": sql, "clusterId": CLUSTER_ID},
        timeout=30,  # fail fast instead of hanging on network issues
    )
    resp.raise_for_status()
    result = resp.json()
    print(f" {result['rows']} rows in {result['statistics']['elapsed']:.2f}s")
    return result["data"]
Every query function in the project is a thin wrapper that passes a SQL string through run_query(). The SQL is the same as what you tested in the dashboard:
def get_platform_overview() -> list[dict]:
return run_query("""
SELECT day, total_volume_usd, total_fills, active_traders,
total_fees, liquidation_count, liquidation_volume_usd
FROM hyperliquid_metrics_overview
WHERE day >= today() - INTERVAL 7 DAY
ORDER BY day ASC
""")
# get_top_assets(), get_liquidations(), get_funding_extremes()
# follow the same pattern with the SQL from the sections above
bot.py is the entry point that ties everything together:
- Fetch: calls each query function in queries.py, which sends the SQL to the REST API and returns the result rows
- Format: passes the raw rows to formatter.py, which composes a structured text digest
- Send: posts the digest to Telegram (or prints it to the console in --dry-run mode)
The full implementation is in the repository.
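The Format step can be sketched as a function that turns raw result rows into digest lines. A hypothetical, simplified version of what formatter.py does (field names taken from the top-assets query above; the real implementation in the repository is more complete):

```python
def format_top_assets(rows: list[dict]) -> str:
    """Render the 24h volume ranking as numbered digest lines."""
    lines = ["Top Assets by Volume (24h)"]
    for i, row in enumerate(rows, start=1):
        volume = float(row["volume_usd"])  # numeric fields arrive as strings
        lines.append(f"{i}. {row['coin']} ${volume / 1e9:.1f}B")
    return "\n".join(lines)

# Illustrative sample rows shaped like the hyperliquid_trades query result
sample = [
    {"coin": "BTC", "volume_usd": "2400000000"},
    {"coin": "ETH", "volume_usd": "1000000000"},
]
print(format_top_assets(sample))
```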
Running the Bot
python3 bot.py # fetch data and send to Telegram
python3 bot.py --dry-run # print the digest to console without sending
Before configuring Telegram, run python3 bot.py --dry-run to confirm your API key is working and the queries return data. The digest will print to the console without sending anything.
Expected output:
2026-04-14 08:00:01 INFO Fetching digest data from SQL Explorer...
2026-04-14 08:00:01 INFO Platform overview: 7 rows in 0.03s
2026-04-14 08:00:02 INFO Top assets: 10 rows in 0.41s
2026-04-14 08:00:02 INFO Liquidations: 10 rows in 0.38s
2026-04-14 08:00:03 INFO Funding extremes: 10 rows in 0.12s
# ...formatted digest output...
The Telegram message:
Hyperliquid Daily Digest
April 14, 2026
Overview (7d)
Volume: $105.2B
Fills: 58.6M
Active traders: 60,247
Fees: $16.2M
Top Assets by Volume (24h)
1. BTC $2.4B
2. xyz:CL $1.2B
3. ETH $1.0B
4. xyz:CB $580M
5. xyz:SP500 $270M
Liquidations (24h)
Total: $78.5M
Longs: $31.4M | Shorts: $47.1M
Addresses liquidated: 2,293
Funding Rate Extremes
0G: -871% ann. OI $2.1M
MAVIA: +733% ann. OI $890K
AXS: -260% ann. OI $3.4M
TRUMP: -135% ann. OI $13M
PONKE: +98% ann. OI $1.2M
Powered by Quicknode SQL Explorer
A TypeScript version of this bot is also available in the qn-guide-examples repository. The REST API calls are identical; only the language wrapper differs.
Running on a Schedule
The bot is a standalone script, so any scheduler works. A cron job is the simplest option:
0 8 * * * cd /path/to/hyperliquid-intel-bot && /path/to/.venv/bin/python3 bot.py
For something more robust, deploy it as a scheduled task on a cloud provider (AWS Lambda with EventBridge, Google Cloud Functions with Cloud Scheduler, or a simple VM with cron). The script has no server dependencies, just Python and outbound HTTPS. Ensure QUICKNODE_API_KEY, TELEGRAM_BOT_TOKEN, and TELEGRAM_CHAT_ID are available in the execution environment via .env or your cloud provider's secrets manager.
Conclusion
You went from SQL queries to a scheduled bot that delivers a Hyperliquid market digest to Telegram. The pattern is simple: write SQL, call the REST endpoint, do something with the JSON. But the interesting part is what you can build on top of it.
These are the same query patterns powering Quicknode's weekly Hyperliquid recap on X. The dataset covers every trade, liquidation, funding payment, bridge event, and market snapshot on Hyperliquid. All of it is queryable through a single endpoint.
Next Steps
- Build a research dashboard. Feed the query results into a charting library (Plotly, D3, Recharts) and build an interactive dashboard that updates on each visit.
- Power an AI agent. Give an LLM access to run_query() as a tool and let it answer natural-language questions about Hyperliquid markets. "Which coins had the most liquidations this week?" becomes a function call.
- Create a portfolio tracker. Use the hyperliquid_fills and hyperliquid_clearinghouse_states tables to build a PnL tracker for specific addresses, with historical position snapshots and funding payment history.
- Extend the queries. The pre-built query library has 40+ templates covering whale trades, builder leaderboards, HIP-3 vs. native crypto breakdowns, bridge flows, validator rewards, and more.
Frequently Asked Questions
What chains does SQL Explorer support?
SQL Explorer currently supports Hyperliquid (HyperCore Mainnet) with additional chains on the way. The REST API interface is the same regardless of chain, so code you write today will work with new chains by changing the clusterId parameter.
How much does SQL Explorer cost?
SQL Explorer uses a credit-based billing model. Each query costs a minimum of 10 credits, calculated based on execution time and data scanned.
Is there a row limit on query results?
Yes, the REST API returns a maximum of 1,000 rows per request. For larger result sets, use LIMIT and OFFSET clauses to paginate. The response includes a rows_before_limit_at_least field that tells you the total number of matching rows.
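Pagination can be implemented by appending LIMIT/OFFSET to the query and stopping when a page comes back short. A sketch assuming a run_query()-style helper (stubbed out here so the paging logic is self-contained and the 2,500-row dataset is fabricated for illustration):

```python
PAGE_SIZE = 1000  # SQL Explorer's per-request row cap

def fetch_all(base_sql: str, run_query) -> list[dict]:
    """Page through a result set larger than 1,000 rows."""
    rows, offset = [], 0
    while True:
        page = run_query(f"{base_sql} LIMIT {PAGE_SIZE} OFFSET {offset}")
        rows.extend(page)
        if len(page) < PAGE_SIZE:  # a short page means we've reached the end
            return rows
        offset += PAGE_SIZE

# Stub standing in for the real REST call, to demonstrate the control flow
def fake_run_query(sql: str) -> list[dict]:
    offset = int(sql.rsplit("OFFSET", 1)[1])
    return [{"n": i} for i in range(offset, min(offset + PAGE_SIZE, 2500))]

print(len(fetch_all("SELECT n FROM t ORDER BY n", fake_run_query)))  # 2500
```

Note that OFFSET pagination is only stable when the query has a deterministic ORDER BY.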
Can I use SQL Explorer for real-time alerts?
SQL Explorer is designed for analytical queries over historical and aggregated data, not real-time event streaming. For real-time notifications (like instant liquidation alerts), use Quicknode Streams or gRPC endpoints. SQL Explorer is the right tool for scheduled reports, dashboards, and research queries.
Why do numeric values come back as strings in the API response?
SQL Explorer runs on ClickHouse, which uses high-precision Decimal types for financial data. These are serialized as strings in JSON to avoid floating-point precision loss. Cast them to float or Decimal in your application code as needed.
We ❤️ Feedback!
Let us know if you have any feedback or requests for new topics. We'd love to hear from you.