Cryo MCP

Provides an Ethereum blockchain data extraction and analysis interface built on Cryo and DuckDB, enabling efficient SQL-based querying of on-chain datasets with filtering by block range and contract address.

Skills

Explore the skills and capabilities of this skillset.

query_sql

Run a SQL query against downloaded blockchain data files.

IMPORTANT WORKFLOW: Use this function after calling query_dataset to download data, and pass the file paths returned by query_dataset as input.

Workflow steps:
1. Download data: result = query_dataset('transactions', blocks='1000:1010', output_format='parquet')
2. Get file paths: files = result.get('files', [])
3. Execute SQL using either:
   - Direct table references: query_sql("SELECT * FROM transactions", files=files)
   - read_parquet(): query_sql("SELECT * FROM read_parquet('/path/to/file.parquet')", files=files)

To see the schema of a file, use get_sql_table_schema(file_path) before writing your query.

DuckDB supports both approaches:
1. Direct table references (simpler): "SELECT * FROM blocks"
2. The read_parquet() function (explicit): "SELECT * FROM read_parquet('/path/to/file.parquet')"

Args:
- query: SQL query to execute; can use simple table names or read_parquet()
- files: List of parquet file paths to query (typically from query_dataset results)
- include_schema: Whether to include schema information in the result

Returns: Query results and metadata
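
A minimal end-to-end sketch of this workflow, assuming the Cryo MCP tools are callable as Python functions (as in the docstring examples above); the block range is illustrative and the from_address column name is an assumption to verify with get_sql_table_schema:

```
# Sketch: download a small slice of transactions, inspect it, then query it.
result = query_dataset('transactions', blocks='1000:1010', output_format='parquet')
files = result.get('files', [])

# Optional: confirm column names before writing SQL.
schema = get_sql_table_schema(files[0])

# Direct table reference (DuckDB resolves 'transactions' to the downloaded files).
top_senders = query_sql(
    "SELECT from_address, COUNT(*) AS tx_count FROM transactions "  # from_address: assumed column
    "GROUP BY from_address ORDER BY tx_count DESC LIMIT 10",
    files=files,
)

# Equivalent explicit form using read_parquet() on one of the returned paths.
sample = query_sql(f"SELECT * FROM read_parquet('{files[0]}') LIMIT 5", files=files)
```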

list_datasets

Return a list of all available cryo datasets.
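
For example, a quick availability check before choosing a dataset (a sketch; it assumes, per the description above, that the tool returns a simple list of dataset names):

```
# List every dataset cryo can extract and check that the ones we plan to use exist.
datasets = list_datasets()
print(f"{len(datasets)} datasets available")
for name in ('blocks', 'transactions', 'logs'):
    print(name, name in datasets)
```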

query_dataset

Download blockchain data and return the file paths where the data is stored.

IMPORTANT WORKFLOW NOTE: When running SQL queries, use this function first to download data, then pass the returned file paths to query_sql() to execute SQL on those files.

Example workflow for SQL:
1. Download data: result = query_dataset('transactions', blocks='1000:1010', output_format='parquet')
2. Get file paths: files = result.get('files', [])
3. Run SQL query: query_sql("SELECT * FROM read_parquet('/path/to/file.parquet')", files=files)

DATASET-SPECIFIC PARAMETERS: For datasets that require specific address parameters (such as 'balances' or 'erc20_transfers'), ALWAYS use the 'contract' parameter to pass ANY Ethereum address. For example:
- For the 'balances' dataset, use the contract parameter for the address you want balances for: query_dataset('balances', blocks='1000:1010', contract='0x123...')
- For 'logs' or 'erc20_transfers', use the contract parameter for the contract address: query_dataset('logs', blocks='1000:1010', contract='0x123...')

To check what parameters a dataset requires, always use lookup_dataset() first:
lookup_dataset('balances')  # Will show required parameters

Args:
- dataset: The name of the dataset to query (e.g., 'logs', 'transactions', 'balances')
- blocks: Block range specification as a string (e.g., '1000:1010')
- start_block: Start block number as an integer (alternative to blocks)
- end_block: End block number as an integer (alternative to blocks)
- use_latest: If True, query the latest block
- blocks_from_latest: Number of blocks before the latest to include (e.g., 10 = latest-10 to latest)
- contract: Contract address to filter by. IMPORTANT: Use this parameter for ALL address-based filtering, regardless of the parameter name in the native cryo command (address, contract, etc.)
- output_format: Output format (json, csv, parquet); use 'parquet' for SQL queries
- include_columns: Columns to include alongside the defaults
- exclude_columns: Columns to exclude from the defaults

Returns: Dictionary containing file paths where the downloaded data is stored
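
A short sketch of the address-filtered case described above; the contract address is a placeholder and 'erc20_transfers' is used as the example dataset:

```
# Download ERC-20 transfers involving one contract over the 10 most recent blocks.
# Note: the address goes through the `contract` parameter even though cryo's native
# flag for some datasets is named differently (e.g. `address`).
result = query_dataset(
    'erc20_transfers',
    blocks_from_latest=10,
    contract='0x123...',          # placeholder address
    output_format='parquet',      # parquet output is what query_sql() expects
)
files = result.get('files', [])  # file paths to hand to query_sql()
```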

lookup_dataset

Look up a specific dataset and return detailed information about it.

IMPORTANT: Always use this function before querying a new dataset to understand its required parameters and schema.

The returned information includes:
1. Required parameters for the dataset (important for datasets like 'balances' that need an address)
2. Schema details showing available columns and data types
3. Example queries for the dataset

When the dataset requires specific parameters like 'address' (as 'balances' does), ALWAYS use the 'contract' parameter in query_dataset() to pass these values. For example, lookup_dataset('balances') will show that it requires an 'address' parameter; you should then query it with:
query_dataset('balances', blocks='1000:1010', contract='0x1234...')

Args:
- name: The name of the dataset to look up
- sample_start_block: Optional start block for sample data (integer)
- sample_end_block: Optional end block for sample data (integer)
- use_latest_sample: If True, use the latest block for sample data
- sample_blocks_from_latest: Number of blocks before the latest to include in sample data

Returns: Detailed information about the dataset, including schema and available fields
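
A sketch of the recommended lookup-then-query pattern; the exact shape of the returned information is not specified here, so it is only printed for inspection:

```
# Discover what 'balances' requires (it needs an address) and preview its schema
# using a small sample block range.
info = lookup_dataset('balances', sample_start_block=1000, sample_end_block=1005)
print(info)

# The address parameter is then supplied through `contract` when downloading.
result = query_dataset('balances', blocks='1000:1010', contract='0x1234...')
```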

get_sql_examples

Get example SQL queries for different blockchain datasets, written in DuckDB SQL.

WORKFLOW TIPS:
1. First download data: result = query_dataset('dataset_name', blocks='...', output_format='parquet')
2. Inspect the schema: schema = get_sql_table_schema(result['files'][0])
3. Run SQL: query_sql("SELECT * FROM read_parquet('/path/to/file.parquet')", files=result['files'])

Or use the combined approach:
query_blockchain_sql(sql_query="SELECT * FROM read_parquet('...')", dataset='blocks', blocks='...')

Returns: Dictionary of example queries categorized by dataset type and workflow patterns
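
A small sketch of how the examples might be browsed; the dictionary keys are not documented here, so they are listed rather than assumed:

```
# Fetch the example queries and see which dataset types and workflow patterns are covered.
examples = get_sql_examples()
for category, queries in examples.items():
    print(category, '->', queries)
```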

get_sql_table_schema

Get the schema and sample data for a specific parquet file.

WORKFLOW NOTE: Use this function to explore the structure of parquet files before writing SQL queries against them. It shows you:
1. All available columns and their data types
2. Sample data from the file
3. The total row count

Usage example:
1. Get the list of files: files = list_available_sql_tables()
2. For a specific file: schema = get_sql_table_schema(files[0]['path'])
3. Use the columns in your SQL: query_sql("SELECT column1, column2 FROM read_parquet('/path/to/file.parquet')")

Args:
- file_path: Path to the parquet file (from list_available_sql_tables or query_dataset)

Returns: Table schema information including columns, data types, and sample data
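
A sketch of the explore-then-query loop, assuming at least one parquet file has already been downloaded; the selected columns are placeholders to be replaced with names reported by the schema:

```
# Pick an already-downloaded file, inspect its structure, then query it.
files = list_available_sql_tables()
path = files[0]['path']

schema = get_sql_table_schema(path)
print(schema)  # columns, data types, sample rows, row count

# Replace column1/column2 with real column names from the schema output.
rows = query_sql(f"SELECT column1, column2 FROM read_parquet('{path}') LIMIT 20")
```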

query_blockchain_sql

Download blockchain data and run a SQL query in a single step.

CONVENIENCE FUNCTION: This combines query_dataset and query_sql into one call. You can write SQL queries using either approach:
1. Simple table references: "SELECT * FROM blocks LIMIT 10"
2. Explicit read_parquet: "SELECT * FROM read_parquet('/path/to/file.parquet') LIMIT 10"

DATASET-SPECIFIC PARAMETERS: For datasets that require specific address parameters (such as 'balances' or 'erc20_transfers'), ALWAYS use the 'contract' parameter to pass ANY Ethereum address. For example, for the 'balances' dataset, use the contract parameter for the address you want balances for:
query_blockchain_sql(sql_query="SELECT * FROM balances", dataset="balances", blocks='1000:1010', contract='0x123...')  # Address you want balances for

Examples:
```
# Using a simple table name
query_blockchain_sql(
    sql_query="SELECT * FROM blocks LIMIT 10",
    dataset="blocks",
    blocks_from_latest=100
)

# Using read_parquet() (the path will be automatically replaced)
query_blockchain_sql(
    sql_query="SELECT * FROM read_parquet('/any/path.parquet') LIMIT 10",
    dataset="blocks",
    blocks_from_latest=100
)
```

ALTERNATIVE WORKFLOW (more control): If you need more control, you can separate the steps:
1. Download data: result = query_dataset('blocks', blocks_from_latest=100, output_format='parquet')
2. Inspect the schema: schema = get_sql_table_schema(result['files'][0])
3. Run the SQL query: query_sql("SELECT * FROM blocks", files=result['files'])

Args:
- sql_query: SQL query to execute, using table names or read_parquet()
- dataset: The specific dataset to query (e.g., 'transactions', 'logs', 'balances'); if None, it will be extracted from the SQL query
- blocks: Block range specification as a string (e.g., '1000:1010')
- start_block: Start block number (alternative to blocks)
- end_block: End block number (alternative to blocks)
- use_latest: If True, query the latest block
- blocks_from_latest: Number of blocks before the latest to include
- contract: Contract address to filter by. IMPORTANT: Use this parameter for ALL address-based filtering, regardless of the parameter name in the native cryo command (address, contract, etc.)
- force_refresh: Force download of new data even if it exists
- include_schema: Include schema information in the result

Returns: SQL query results and metadata
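
One more hedged sketch beyond the examples above, combining the latest-blocks shortcut with an aggregate; gas_used is an assumed column name for the blocks dataset and should be confirmed via include_schema=True or get_sql_table_schema:

```
# Average gas used over the 100 most recent blocks, in a single call.
result = query_blockchain_sql(
    sql_query="SELECT AVG(gas_used) AS avg_gas_used FROM blocks",  # gas_used: assumed column
    dataset="blocks",
    blocks_from_latest=100,
    include_schema=True,   # return schema info alongside the results
)
```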

get_transaction_by_hash

Get detailed information about a transaction by its hash.

Args:
- tx_hash: The transaction hash to look up

Returns: Detailed information about the transaction
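
A minimal sketch; the hash is a placeholder:

```
# Fetch full details for a single transaction by hash.
tx = get_transaction_by_hash('0xabc...')  # placeholder hash
print(tx)
```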

get_latest_ethereum_block

Get information about the latest Ethereum block.

Returns: Information about the latest block, including the block number
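
A sketch of using the latest block to anchor a recent range; the 'block_number' key is inferred from the description above and should be verified against the actual response:

```
# Find the chain head, then download the 10 blocks leading up to it.
latest = get_latest_ethereum_block()
head = latest['block_number']   # key name assumed from the description
result = query_dataset('blocks', start_block=head - 10, end_block=head,
                       output_format='parquet')
```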

list_available_sql_tables

List all available parquet files that can be queried with SQL.

USAGE NOTES:
- This function lists parquet files that have already been downloaded
- Each file can be queried using read_parquet('/path/to/file.parquet') in your SQL
- For each file, it returns the file path, dataset type, and other metadata
- Use these file paths in your SQL queries with query_sql()

Returns: List of available files and their metadata
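
A sketch that enumerates the cached files and counts rows in the first one; the 'path' key matches the usage shown for get_sql_table_schema above:

```
# See what has already been downloaded, then run a quick row count on one file.
tables = list_available_sql_tables()
for t in tables:
    print(t['path'])   # each entry also carries the dataset type and other metadata

if tables:
    count = query_sql(f"SELECT COUNT(*) AS n FROM read_parquet('{tables[0]['path']}')")
```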

Configuration

Customize the skillset to fit your needs.

MCP Server: connect to the Cryo MCP server.
