Overview

Updated on Sep 4, 2024

Streams Destinations

Streams offer various destinations for sending blockchain data, tailored to different needs. For real-time applications, a Webhook destination is ideal. For archiving or managing large datasets, consider using an Object storage solution like S3. Functions serve well for scenarios requiring the execution of arbitrary code based on stream data.

Before setting up your destination, you must first configure your Stream settings. These include selecting the chain and network, determining batch size, specifying the date range, and setting up reorg handling, among others. Proper reorg handling is crucial for effective data management. For more information on managing reorgs, refer to the Reorg handling section.
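As a rough illustration of the settings listed above, the sketch below models them as a plain dictionary with a basic sanity check. The field names and values are assumptions for this example, not the actual Streams configuration schema.

```python
# Hypothetical Stream settings -- field names are illustrative assumptions,
# not the real Streams API schema.
stream_settings = {
    "chain": "ethereum",         # blockchain to stream from
    "network": "mainnet",        # network of that chain
    "batch_size": 10,            # blocks delivered per payload
    "start_block": 19_000_000,   # beginning of the range
    "end_block": None,           # None = stream continuously
    "reorg_handling": "latest",  # how reorged blocks are re-delivered
}

def validate_settings(settings: dict) -> bool:
    """Sanity check: required fields present and batch size positive."""
    required = {"chain", "network", "batch_size", "reorg_handling"}
    return required.issubset(settings) and settings["batch_size"] > 0
```

A check like this is worth running before attaching any destination, since batch size and reorg handling affect every destination type equally.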

Webhooks

A Webhook destination is ideal for lightweight applications that require real-time responses. Its advantages include real-time data availability and the flexibility to use any endpoint that accepts Webhook calls; however, this destination type is usually not ideal for storing large amounts of data. Other benefits include:


  • Real-Time Data Handling: Webhooks are ideal for scenarios requiring real-time data processing. They enable immediate reaction to incoming data streams, which is crucial for applications needing instant updates.
  • Direct Integration with Services: Webhooks provide a straightforward way to integrate Streams with various third-party services and custom applications. They can directly push data to services that can accept webhook payloads.
  • Flexibility and Custom Workflows: Webhooks might offer more flexibility in handling data. They allow users to create custom workflows and processing logic tailored to their specific needs, which might not be as straightforward with predefined destinations like S3 or PostgreSQL.
  • Simplicity and Ease of Use: For some users, setting up a webhook endpoint might be simpler than configuring integration with cloud storage or databases, especially if they already have a system in place to handle webhook calls.
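A minimal webhook receiver can be sketched with Python's standard library alone. The payload shape (a JSON array of block objects) is an assumption for illustration; consult the Streams payload documentation for the exact schema.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_stream_payload(raw: bytes) -> int:
    """Parse one delivered batch and return how many items it contained.
    The list-of-blocks payload shape is an assumption for this sketch."""
    payload = json.loads(raw)
    for block in payload:
        pass  # react to each block in real time here
    return len(payload)

class StreamWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        handle_stream_payload(self.rfile.read(length))
        self.send_response(200)  # acknowledge quickly so delivery succeeds
        self.end_headers()

# To run the receiver (blocking):
# HTTPServer(("0.0.0.0", 8080), StreamWebhook).serve_forever()
```

Keeping the handler fast and returning 200 promptly matters here: webhook senders generally treat slow or non-2xx responses as failed deliveries.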

S3-Compatible Storage

Object storage destinations like S3 are well suited to processing and archiving large amounts of data in batches. The reliable and scalable nature of S3 storage provides data durability and integrates with data lakes and other big data tools. Other benefits include:


  • Large Data Storage: S3 offers virtually unlimited storage, making it suitable for handling massive amounts of data that webhooks might not efficiently process.
  • Data Durability and Reliability: S3 provides high durability and secure storage options, ensuring data is safely stored and readily available for future analysis.
  • Cost-Effective for Large Data: For substantial data volumes, S3 can be more cost-effective due to its pricing model based on storage and access.
  • Ease of Data Analysis Integration: Data stored in S3 can be seamlessly integrated with various analytics tools, simplifying the data analysis process.
  • Scalability: S3 scales automatically to accommodate data growth, which is beneficial for applications with increasing data streaming needs.

These factors make S3 a preferred choice for scenarios involving large-scale data storage, analytics, and applications requiring robust data backup and retrieval capabilities.
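For batch archiving, a common pattern is to compress each delivered batch and write it under a deterministic key so downstream analytics tools can locate ranges by name. The key layout below (chain/network/block range) is an illustrative convention, not a Streams default.

```python
import gzip
import json

def archive_batch(blocks: list) -> tuple:
    """Compress a batch of blocks and compute a deterministic object key.
    Assumes each block dict carries a 'number' field (an illustrative shape)."""
    start, end = blocks[0]["number"], blocks[-1]["number"]
    key = f"ethereum/mainnet/blocks_{start:09d}-{end:09d}.json.gz"
    body = gzip.compress(json.dumps(blocks).encode())
    return key, body

# With an S3-compatible client (e.g. boto3), the upload would then be roughly:
# s3.put_object(Bucket="my-stream-archive", Key=key, Body=body)
```

Zero-padded block numbers keep keys lexicographically sorted, which makes prefix listing and range scans straightforward in any S3-compatible store.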

Functions

Using Functions as a Stream destination enables developers to filter, transform, and enrich the data they receive from their Stream. Functions automatically take care of deployment and scaling, ensuring your functions perform optimally at all times. Functions are compatible with multiple programming languages, including Node.js, Python, Ruby, and more, offering flexibility in development. Note that Functions are in private beta; fill out the form here to apply. Other benefits include:


  • Data Transformation: Allows parsing and custom transformation of blockchain data according to your criteria.
  • Scalability: Automatically scale your Function to meet your Streams' data requirements, ensuring optimal performance.
  • Cost Efficiency: Ensures you only pay for the resources your Function uses, optimizing your expenses.
  • Programming Language Flexibility: Supports deployment in various programming languages, offering development flexibility.
  • Integration Flexibility: Facilitates easy integration with additional services like IPFS and Streams, enhancing functionality.
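The filter-and-transform pattern described above can be sketched as a small Python function. The entry-point name, payload shape, and watched address below are all assumptions for illustration, not the actual Functions interface.

```python
# Example address to watch -- a placeholder, not a real deployment value.
WATCHED = "0x000000000000000000000000000000000000dead"

def main(params: dict) -> dict:
    """Hypothetical Function entry point: keep only transactions touching a
    watched address and convert their hex-encoded wei values to ether.
    The payload shape is an assumption, not the exact Streams schema."""
    matches = []
    for tx in params.get("transactions", []):
        if WATCHED in (tx.get("from", "").lower(), tx.get("to", "").lower()):
            matches.append({
                "hash": tx["hash"],
                "ether": int(tx["value"], 16) / 1e18,  # hex wei -> ether
            })
    return {"matches": matches}
```

Because the function only returns the transactions it cares about, downstream consumers receive a much smaller, already-normalized payload instead of raw block data.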