Streams Destinations
Streams offer various destinations for sending blockchain data, tailored to different needs. For real-time applications, a Webhook destination is ideal. For archiving or managing large datasets, consider using an object storage solution like S3.
Before setting up your destination, you must first configure your Stream settings. These include selecting the blockchain chain and network, determining batch size, specifying the date range, and setting up reorg handling, among others. Proper reorg handling is crucial for effective data management. For more information on managing reorgs, refer to the Reorg handling section.
Streams processes blocks sequentially and will not proceed to the next block or batch until receiving confirmation that the current block or batch was successfully delivered to your destination. This ensures data consistency but means your destination must be able to process and acknowledge receipt of each block or batch within a reasonable timeframe.
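Because delivery blocks on acknowledgment, a common pattern is to acknowledge each batch as soon as it is safely queued and do the heavy work asynchronously. A minimal sketch of that pattern (the handler name and payload shape are illustrative, not part of the Streams API):

```python
import queue
import threading

# In-memory buffer for this sketch; a production setup would use a
# persistent queue (e.g. Redis, SQS) so acknowledged batches survive restarts.
work_queue = queue.Queue()

def handle_delivery(batch: dict) -> int:
    """Accept a delivered batch and acknowledge immediately.

    Returning 200 quickly keeps the stream advancing; the expensive
    processing happens later on a worker thread.
    """
    work_queue.put(batch)
    return 200  # the acknowledgment Streams waits for before the next batch

def worker():
    while True:
        batch = work_queue.get()
        # ... expensive processing (decode, index, notify) goes here ...
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()
```

This keeps acknowledgment latency independent of processing time, at the cost of needing a durable queue in real deployments.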
Multiple Destinations
Streams supports configuring multiple destinations per stream, allowing you to send the same real-time blockchain data to several endpoints simultaneously from a single pipeline. Instead of creating and managing parallel streams for each environment or integration, you route data once and deliver it everywhere it needs to go.
With multiple destinations you can:
- Unify environments: Feed dev, staging, and prod from a single stream with no config drift or duplicated costs.
- Combine real-time and archival workflows: Route to a webhook for live processing and to S3 for long-term storage and backfills at the same time.
- Power multiple systems in parallel: Deliver data to your backend, internal dashboards, and customer-facing API from one pipeline.
- Reduce infrastructure complexity: Eliminate duplicate streams and the operational overhead of keeping them in sync.
Plan Limits
The number of destinations you can attach to a single stream depends on your QuickNode plan:
| Plan | Destinations per Stream |
|---|---|
| Free & Build | 1 |
| Scale & Business | Up to 4 |
| Enterprise | Up to 6 |
Delivery Behavior
When a stream has multiple destinations, each batch must be delivered to all destinations before the stream advances to the next batch. If any destination fails after retries, the entire stream stops. On restart, the last batch is resent to all destinations, which means some destinations may receive duplicates. Make sure every destination can handle or deduplicate re-delivered batches.
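Since a restart re-sends the last batch to every destination, receivers should be idempotent. One simple approach is to key each batch by its block range and skip anything already seen (the field names and in-memory set are illustrative; a real receiver would persist the keys):

```python
processed = set()  # in production, a persistent store rather than a set

def process_batch(batch: dict) -> bool:
    """Process a batch exactly once, keyed by its block range.

    Returns True if the batch was processed, False if it was a
    duplicate re-delivered after a stream restart.
    """
    key = (batch["startBlock"], batch["endBlock"])  # illustrative field names
    if key in processed:
        return False  # duplicate: already handled before the restart
    processed.add(key)
    # ... handle the batch here ...
    return True
```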
Webhooks
A Webhook destination is ideal for lightweight applications that need real-time responses. Data is available as soon as it is processed, and you can point the stream at any endpoint that accepts webhook payloads. This destination type is usually not ideal for storing large amounts of data. Other benefits include:
- Real-Time Data Handling: Webhooks are ideal for scenarios requiring real-time data processing. They enable immediate reaction to incoming data streams, which is crucial for applications needing instant updates.
- Direct Integration with Services: Webhooks provide a straightforward way to integrate Streams with various third-party services and custom applications. They can directly push data to services that can accept webhook payloads.
- Sequential Processing: Each block or batch is processed in order, with delivery confirmation required before proceeding to the next block or batch.
- Flexible Retry Configuration: Configure retry attempts and intervals to handle temporary delivery issues while maintaining data consistency.
- Flexibility and Custom Workflows: Webhooks might offer more flexibility in handling data. They allow users to create custom workflows and processing logic tailored to their specific needs, which might not be as straightforward with predefined destinations like S3 or PostgreSQL.
- Simplicity and Ease of Use: For some users, setting up a webhook endpoint might be simpler than configuring integration with cloud storage or databases, especially if they already have a system in place to handle webhook calls.
To set up Webhooks as a destination, learn more here.
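As a sketch, a webhook endpoint can be as small as a standard-library HTTP server that reads the JSON body and returns 200 to acknowledge. The payload shape and port here are illustrative, and a production endpoint would add authentication, TLS, and durable storage:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # batches we have accepted, kept here for demonstration

class StreamWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the delivered batch (payload shape is illustrative).
        length = int(self.headers.get("Content-Length", 0))
        batch = json.loads(self.rfile.read(length) or b"{}")
        received.append(batch)
        # Respond 200 promptly: this is the acknowledgment Streams waits for.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the demo quiet
        pass

# Port 0 lets the OS pick a free port for this sketch.
server = HTTPServer(("127.0.0.1", 0), StreamWebhook)
threading.Thread(target=server.serve_forever, daemon=True).start()
```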
S3-Compatible Storage
Object storage destinations like S3 are well suited to processing and archiving large amounts of data in batches. The reliable and scalable nature of S3 storage provides data durability and integration with data lakes and other big data tools. Other benefits include:
- Large Data Storage: S3 offers virtually unlimited storage, making it suitable for handling massive amounts of data that webhooks might not efficiently process.
- Data Durability and Reliability: S3 provides high durability and secure storage options, ensuring data is safely stored and readily available for future analysis.
- Cost-Effective for Large Data: For substantial data volumes, S3 can be more cost-effective due to its pricing model based on storage and access.
- Ease of Data Analysis Integration: Data stored in S3 can be seamlessly integrated with various analytics tools, simplifying the data analysis process.
- Scalability: S3 scales automatically to accommodate data growth, which is beneficial for applications with increasing data streaming needs.
These factors make S3 a preferred choice for scenarios involving large-scale data storage, analytics, and applications requiring robust data backup and retrieval capabilities. To set up S3 storage as a destination, learn more here.
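When batches land in object storage, a predictable key layout makes later backfills and data-lake queries much easier. A sketch of one possible partitioning scheme (the prefix and fields are assumptions, not a layout Streams mandates):

```python
def object_key(network: str, start_block: int, end_block: int,
               padding: int = 12) -> str:
    """Build a sortable object key for a block-range batch.

    Zero-padding block numbers keeps lexicographic order equal to
    numeric order, so prefix listings return batches in chain order.
    """
    return (
        f"streams/{network}/"
        f"{start_block:0{padding}d}-{end_block:0{padding}d}.json"
    )
```

Listing the `streams/<network>/` prefix then yields batches in block order, which is convenient for replaying history into analytics tools.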
Azure Blob Storage
Azure Blob Storage provides a cloud-based object storage solution that's ideal for storing large amounts of unstructured data from your Streams. It offers enterprise-grade features and seamless integration with Microsoft's ecosystem. Key benefits include:
- Enterprise-Grade Storage: Built-in redundancy, high availability, and compliance features for enterprise requirements.
- Cost Optimization: Multiple storage tiers (Hot, Cool, Archive) to optimize costs based on data access patterns.
- Global Reach: Data centers worldwide for low-latency access and compliance with regional data requirements.
- Integration Ecosystem: Seamless integration with Azure services like Data Factory, Synapse Analytics, and Power BI.
- Security Features: Advanced security features including encryption at rest and in transit, and fine-grained access control.
To set up Azure Blob Storage as a destination, learn more here.
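To make the tiering point concrete, a common policy is to assign older batches to cheaper access tiers based on how recently they are read. A sketch of one such policy (the thresholds are illustrative choices, not Azure defaults):

```python
def storage_tier(age_days: int) -> str:
    """Pick an Azure Blob access tier from data age.

    Hot for recent data read frequently, Cool for occasional access,
    Archive for long-term retention where rehydration delay is acceptable.
    """
    if age_days <= 30:
        return "Hot"
    if age_days <= 180:
        return "Cool"
    return "Archive"
```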
Functions
Functions let you run custom serverless code against your stream's data, making them a flexible destination for transformation-heavy workflows. Benefits include:
- Data Transformation: Allows parsing and custom transformation of blockchain data according to your criteria.
- Scalability: Automatically scales your Function to meet your Streams' data requirements, ensuring optimal performance.
- Cost Efficiency: Ensures you only pay for the resources your Function uses, optimizing your expenses.
- Integration Flexibility: Facilitates easy integration with additional services like IPFS and Streams, enhancing functionality.
To set up Functions as a destination, learn more here.
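As a sketch of the transformation step, a Function-style handler might filter a batch down to just the transactions you care about. The payload fields and helper name here are illustrative, not the exact Functions API:

```python
def filter_batch(batch: dict, watched: str) -> list:
    """Return only the transactions addressed to `watched`.

    `batch` is assumed to carry a "transactions" list of dicts with a
    "to" field; both are illustrative of a typical block payload.
    """
    return [
        tx for tx in batch.get("transactions", [])
        # `to` can be absent or null (e.g. contract creation), so default to ""
        if (tx.get("to") or "").lower() == watched.lower()
    ]
```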
PostgreSQL
PostgreSQL serves as a robust relational database destination for Streams, offering structured storage and powerful querying capabilities. It's particularly well-suited for applications requiring complex data relationships and SQL-based analysis. Benefits include:
- Structured Data Storage: Enables organizing blockchain data in a relational format with defined schemas and relationships.
- Advanced Querying: Powerful SQL querying capabilities for complex data analysis and reporting.
- ACID Compliance: Ensures data integrity and consistency through transactions.
- Performance: Optimized for both read and write operations with support for indexing and materialized views.
- Extensibility: Rich ecosystem of extensions and tools for enhanced functionality.
To set up PostgreSQL as a destination, learn more here.
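A relational destination also makes idempotency easy: a primary-key constraint plus `INSERT ... ON CONFLICT DO NOTHING` silently absorbs the re-delivered batches described under Delivery Behavior. A sketch of one possible schema (table and column names are assumptions; the SQL is PostgreSQL-compatible, demonstrated here on SQLite for portability, and a Postgres driver would use `%s` placeholders instead of `?`):

```python
import json
import sqlite3

# Illustrative schema: one row per block, keyed by block number so
# duplicate deliveries after a stream restart are silently ignored.
SCHEMA = """
CREATE TABLE IF NOT EXISTS blocks (
    block_number BIGINT PRIMARY KEY,
    payload      TEXT NOT NULL
)
"""
UPSERT = (
    "INSERT INTO blocks (block_number, payload) VALUES (?, ?) "
    "ON CONFLICT (block_number) DO NOTHING"
)

def store_block(conn, block_number: int, payload: dict) -> None:
    """Store a block idempotently; duplicates are no-ops."""
    conn.execute(UPSERT, (block_number, json.dumps(payload)))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
```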