Cryo MCP

Provides an Ethereum blockchain data extraction and analysis interface built on Cryo and DuckDB, enabling SQL queries over on-chain datasets with filtering by block range and contract address.

Skills

Explore the skills and capabilities of this skillset.

query_sql

Run a SQL query against downloaded blockchain data files.

IMPORTANT WORKFLOW: Use this function after calling query_dataset to download data, and pass the file paths returned by query_dataset as input.

Workflow steps:
1. Download data: result = query_dataset('transactions', blocks='1000:1010', output_format='parquet')
2. Get file paths: files = result.get('files', [])
3. Execute SQL using either:
   - Direct table references: query_sql("SELECT * FROM transactions", files=files)
   - read_parquet(): query_sql("SELECT * FROM read_parquet('/path/to/file.parquet')", files=files)

To see the schema of a file, use get_sql_table_schema(file_path) before writing your query.

DuckDB supports both approaches:
1. Direct table references (simpler): "SELECT * FROM blocks"
2. The read_parquet function (explicit): "SELECT * FROM read_parquet('/path/to/file.parquet')"

Args:
    query: SQL query to execute; can use simple table names or read_parquet()
    files: List of parquet file paths to query (typically from query_dataset results)
    include_schema: Whether to include schema information in the result

Returns: Query results and metadata
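Putting the three steps together, here is a minimal sketch of the workflow, written in the same Python call style as the examples above. The block range is arbitrary, and the from_address and gas_used columns are assumptions about the transactions dataset; confirm them with get_sql_table_schema before relying on them.

```
# 1. Download transaction data for a small block range as parquet.
result = query_dataset('transactions', blocks='18000000:18000010', output_format='parquet')
files = result.get('files', [])

# 2. Inspect the schema of the first file (columns, types, sample rows).
schema = get_sql_table_schema(files[0])
print(schema)

# 3. Run SQL against the downloaded files using a direct table reference.
#    The column names below (from_address, gas_used) are assumed -- verify
#    them against the schema printed above.
rows = query_sql(
    """
    SELECT from_address, COUNT(*) AS tx_count, SUM(gas_used) AS total_gas
    FROM transactions
    GROUP BY from_address
    ORDER BY total_gas DESC
    LIMIT 10
    """,
    files=files,
)
```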

list_datasets

Return a list of all available cryo datasets
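A quick discovery sketch in the same call style as the other examples on this page; 'logs' is just an arbitrary dataset name.

```
# See every dataset cryo can extract, then inspect one before querying it.
datasets = list_datasets()
print(datasets)

info = lookup_dataset('logs')  # 'logs' is an arbitrary example
```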

query_dataset

Download blockchain data and return the file paths where the data is stored.

IMPORTANT WORKFLOW NOTE: When running SQL queries, use this function first to download data, then use the returned file paths with query_sql() to execute SQL on those files.

Example workflow for SQL:
1. First download data: result = query_dataset('transactions', blocks='1000:1010', output_format='parquet')
2. Get file paths: files = result.get('files', [])
3. Run SQL query: query_sql("SELECT * FROM read_parquet('/path/to/file.parquet')", files=files)

DATASET-SPECIFIC PARAMETERS: For datasets that require specific address parameters (such as 'balances', 'erc20_transfers', etc.), ALWAYS use the 'contract' parameter to pass ANY Ethereum address. For example:
- For 'balances': use the contract parameter for the address you want balances for:
  query_dataset('balances', blocks='1000:1010', contract='0x123...')
- For 'logs' or 'erc20_transfers': use the contract parameter for the contract address:
  query_dataset('logs', blocks='1000:1010', contract='0x123...')

To check what parameters a dataset requires, always use lookup_dataset() first:
lookup_dataset('balances')  # Will show required parameters

Args:
    dataset: The name of the dataset to query (e.g., 'logs', 'transactions', 'balances')
    blocks: Block range specification as a string (e.g., '1000:1010')
    start_block: Start block number as an integer (alternative to blocks)
    end_block: End block number as an integer (alternative to blocks)
    use_latest: If True, query the latest block
    blocks_from_latest: Number of blocks before the latest to include (e.g., 10 = latest-10 to latest)
    contract: Contract address to filter by. IMPORTANT: use this parameter for ALL address-based filtering, regardless of the parameter name in the native cryo command (address, contract, etc.)
    output_format: Output format (json, csv, parquet); use 'parquet' for SQL queries
    include_columns: Columns to include alongside the defaults
    exclude_columns: Columns to exclude from the defaults

Returns: Dictionary containing the file paths where the downloaded data is stored
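A hedged sketch of the address-filtered case described above; the contract address is a placeholder, and the call style mirrors the docstring examples.

```
# Check what parameters the dataset requires before downloading.
lookup_dataset('logs')

# Download logs emitted by a specific contract over the 100 most recent
# blocks. '0x123...' is a placeholder address; parquet output keeps the
# files usable with query_sql().
result = query_dataset(
    'logs',
    blocks_from_latest=100,
    contract='0x123...',
    output_format='parquet',
)
files = result.get('files', [])
```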

lookup_dataset

Look up a specific dataset and return detailed information about it.

IMPORTANT: Always use this function before querying a new dataset to understand its required parameters and schema. The returned information includes:
1. Required parameters for the dataset (IMPORTANT for datasets like 'balances' that need an address)
2. Schema details showing available columns and data types
3. Example queries for the dataset

When the dataset requires specific parameters like 'address' (for 'balances'), ALWAYS use the 'contract' parameter in query_dataset() to pass these values.

Example: For the 'balances' dataset, lookup_dataset('balances') will show it requires an 'address' parameter. You should then query it using:
query_dataset('balances', blocks='1000:1010', contract='0x1234...')

Args:
    name: The name of the dataset to look up
    sample_start_block: Optional start block for sample data (integer)
    sample_end_block: Optional end block for sample data (integer)
    use_latest_sample: If True, use the latest block for sample data
    sample_blocks_from_latest: Number of blocks before the latest to include in the sample

Returns: Detailed information about the dataset, including schema and available fields
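A short sketch of the balances example above, in the same call style; the address is a placeholder.

```
# Inspect the 'balances' dataset: required parameters, schema, examples.
info = lookup_dataset('balances')
print(info)

# It needs an address; pass it via the 'contract' parameter as described
# above. '0x1234...' is a placeholder.
result = query_dataset(
    'balances',
    blocks_from_latest=5,
    contract='0x1234...',
    output_format='parquet',
)
```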

get_sql_examples

Get example SQL queries for different blockchain datasets with DuckDB SQL.

WORKFLOW TIPS:
1. First download data: result = query_dataset('dataset_name', blocks='...', output_format='parquet')
2. Inspect schema: schema = get_sql_table_schema(result['files'][0])
3. Run SQL: query_sql("SELECT * FROM read_parquet('/path/to/file.parquet')", files=result['files'])

OR use the combined approach:
- query_blockchain_sql(sql_query="SELECT * FROM read_parquet('...')", dataset='blocks', blocks='...')

Returns: Dictionary of example queries categorized by dataset type and workflow patterns
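A brief sketch of browsing the examples before writing your own query; the dictionary keys are not specified above, so they are treated as opaque here.

```
# Fetch the example queries and print them grouped by category.
examples = get_sql_examples()
for category, queries in examples.items():
    print(category)
    print(queries)
```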

get_sql_table_schema

Get the schema and sample data for a specific parquet file.

WORKFLOW NOTE: Use this function to explore the structure of parquet files before writing SQL queries against them. This will show you:
1. All available columns and their data types
2. Sample data from the file
3. The total row count

Usage example:
1. Get the list of files: files = list_available_sql_tables()
2. For a specific file: schema = get_sql_table_schema(files[0]['path'])
3. Use the columns in your SQL: query_sql("SELECT column1, column2 FROM read_parquet('/path/to/file.parquet')")

Args:
    file_path: Path to the parquet file (from list_available_sql_tables or query_dataset)

Returns: Table schema information including columns, data types, and sample data
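The inspection step from the usage example above, as a small sketch; it assumes at least one parquet file has already been downloaded and that each listing entry exposes a 'path' key, as shown in that example.

```
# List previously downloaded parquet files and inspect the first one.
files = list_available_sql_tables()
schema = get_sql_table_schema(files[0]['path'])

# Columns, data types, sample rows, and row count come back here; use
# them to decide which columns to SELECT in your query.
print(schema)
```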

query_blockchain_sql

Download blockchain data and run a SQL query in a single step.

CONVENIENCE FUNCTION: This combines query_dataset and query_sql into one call. You can write SQL queries using either approach:
1. Simple table references: "SELECT * FROM blocks LIMIT 10"
2. Explicit read_parquet: "SELECT * FROM read_parquet('/path/to/file.parquet') LIMIT 10"

DATASET-SPECIFIC PARAMETERS: For datasets that require specific address parameters (such as 'balances', 'erc20_transfers', etc.), ALWAYS use the 'contract' parameter to pass ANY Ethereum address. For example, for the 'balances' dataset, use the contract parameter for the address you want balances for:

query_blockchain_sql(
    sql_query="SELECT * FROM balances",
    dataset="balances",
    blocks='1000:1010',
    contract='0x123...'  # Address you want balances for
)

Examples:

```
# Using a simple table name
query_blockchain_sql(
    sql_query="SELECT * FROM blocks LIMIT 10",
    dataset="blocks",
    blocks_from_latest=100
)

# Using read_parquet() (the path will be automatically replaced)
query_blockchain_sql(
    sql_query="SELECT * FROM read_parquet('/any/path.parquet') LIMIT 10",
    dataset="blocks",
    blocks_from_latest=100
)
```

ALTERNATIVE WORKFLOW (more control): If you need more control, you can separate the steps:
1. Download data: result = query_dataset('blocks', blocks_from_latest=100, output_format='parquet')
2. Inspect schema: schema = get_sql_table_schema(result['files'][0])
3. Run SQL query: query_sql("SELECT * FROM blocks", files=result['files'])

Args:
    sql_query: SQL query to execute, using table names or read_parquet()
    dataset: The specific dataset to query (e.g., 'transactions', 'logs', 'balances'). If None, it will be extracted from the SQL query
    blocks: Block range specification as a string (e.g., '1000:1010')
    start_block: Start block number (alternative to blocks)
    end_block: End block number (alternative to blocks)
    use_latest: If True, query the latest block
    blocks_from_latest: Number of blocks before the latest to include
    contract: Contract address to filter by. IMPORTANT: use this parameter for ALL address-based filtering, regardless of the parameter name in the native cryo command (address, contract, etc.)
    force_refresh: Force download of new data even if it already exists
    include_schema: Include schema information in the result

Returns: SQL query results and metadata
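One more hedged, one-step sketch: an aggregate over recent blocks. The gas_used and base_fee_per_gas column names are assumptions about the blocks dataset; include_schema=True returns the actual schema alongside the results so you can confirm them.

```
result = query_blockchain_sql(
    sql_query="""
        SELECT AVG(gas_used) AS avg_gas_used,
               AVG(base_fee_per_gas) AS avg_base_fee
        FROM blocks
    """,
    dataset="blocks",
    blocks_from_latest=100,
    include_schema=True,  # also return schema info to verify column names
)
```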

get_transaction_by_hash

Get detailed information about a transaction by its hash.

Args:
    tx_hash: The transaction hash to look up

Returns: Detailed information about the transaction

get_latest_ethereum_block

Get information about the latest Ethereum block.

Returns: Information about the latest block, including the block number
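A tiny sketch that pairs this with a relative block range; no specific keys in the return value are assumed, it is simply printed.

```
# Check where the chain head is, then pull full data for the 10 most
# recent blocks without hard-coding block numbers.
latest = get_latest_ethereum_block()
print(latest)

result = query_dataset('blocks', blocks_from_latest=10, output_format='parquet')
```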

list_available_sql_tables

List all available parquet files that can be queried with SQL.

USAGE NOTES:
- This function lists parquet files that have already been downloaded
- Each file can be queried using read_parquet('/path/to/file.parquet') in your SQL
- For each file, this returns the file path, dataset type, and other metadata
- Use these file paths in your SQL queries with query_sql()

Returns: List of available files and their metadata
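A closing sketch that ties the file listing back to query_sql; it assumes each entry exposes a 'path' key, as in the get_sql_table_schema usage example above.

```
# Enumerate previously downloaded files and query one of them directly
# via read_parquet(), using its path from the listing.
tables = list_available_sql_tables()
for entry in tables:
    print(entry)

path = tables[0]['path']
rows = query_sql(f"SELECT * FROM read_parquet('{path}') LIMIT 5", files=[path])
```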

Configuration

Customize the skillset to fit your needs.
MCP Server

Connect this skillset to the Cryo MCP server to enable the skills listed above.
