Phoenix MCP

Provides a unified interface to Arize Phoenix's capabilities for managing prompts, exploring datasets, and running experiments across different LLM providers.
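Because the server speaks standard MCP, any MCP-capable client can drive these skills programmatically. Below is a minimal sketch using the Python `mcp` SDK over stdio; the npm package name, the `--baseUrl`/`--apiKey` flags, the local Phoenix URL, and the API key are assumptions to adjust for your deployment. If you save it as `phoenix_mcp_client.py`, the later sketches on this page can import its `call_phoenix_tool` helper.

```python
# phoenix_mcp_client.py -- minimal sketch of an MCP client for Phoenix.
# Assumptions: the server package name, the --baseUrl/--apiKey flags, and the
# Phoenix URL/API key below are illustrative; adjust them for your deployment.
import asyncio
import json

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

SERVER = StdioServerParameters(
    command="npx",
    args=[
        "-y", "@arizeai/phoenix-mcp@latest",
        "--baseUrl", "http://localhost:6006",  # your Phoenix instance
        "--apiKey", "your-phoenix-api-key",    # omit if auth is disabled
    ],
)


async def call_phoenix_tool(name: str, arguments: dict | None = None) -> list[str]:
    """Open a session, invoke one tool, and return its text content blocks."""
    async with stdio_client(SERVER) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(name, arguments=arguments or {})
            # Tool results arrive as content blocks; text blocks usually carry JSON.
            return [block.text for block in result.content if block.type == "text"]


async def main() -> None:
    # List the skills the server exposes, to confirm the connection works.
    async with stdio_client(SERVER) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])


if __name__ == "__main__":
    asyncio.run(main())
```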

Skills

Explore the skills and capabilities of this skillset.

get-spans

Get spans from a project with filtering criteria. Spans represent individual operations or units of work within a trace. They contain timing information, attributes, and context about the operation being performed. Example usage: Get recent spans from project "my-project" Get spans in a time range from project "my-project" Expected return: Object containing spans array and optional next cursor for pagination. Example: { "spans": [ { "id": "span123", "name": "http_request", "context": { "trace_id": "trace456", "span_id": "span123" }, "start_time": "2024-01-01T12:00:00Z", "end_time": "2024-01-01T12:00:01Z", "attributes": { "http.method": "GET", "http.url": "/api/users" } } ], "nextCursor": "cursor_for_pagination" }
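As an illustration, here is a sketch of fetching recent spans, reusing the `call_phoenix_tool` helper from the connection sketch above; the `project_name`, `start_time`, `end_time`, and `limit` argument names are assumptions rather than the tool's confirmed schema.

```python
# Sketch: fetch spans from a project in a time range (argument names assumed).
import asyncio, json
from phoenix_mcp_client import call_phoenix_tool  # helper from the sketch above

async def main() -> None:
    texts = await call_phoenix_tool("get-spans", {
        "project_name": "my-project",           # assumed parameter name
        "start_time": "2024-01-01T00:00:00Z",   # assumed ISO-8601 filters
        "end_time": "2024-01-02T00:00:00Z",
        "limit": 50,
    })
    payload = json.loads(texts[0])
    for span in payload["spans"]:
        print(span["name"], span["start_time"])
    if payload.get("nextCursor"):
        print("more spans available; pass the cursor to the next call")

asyncio.run(main())
```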

list-prompts

Get a list of all the prompts. Prompts (templates, prompt templates) are versioned templates for input messages to an LLM. Each prompt includes not only the input messages but also the model and invocation parameters to use when generating outputs. Returns a list of prompt objects with their IDs, names, and descriptions. Example usage: List all available prompts Expected return: Array of prompt objects with metadata. Example: [{ "name": "article-summarizer", "description": "Summarizes an article into concise bullet points", "source_prompt_id": null, "id": "promptid1234" }]
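A minimal sketch of calling this tool with no arguments, again reusing the `call_phoenix_tool` helper and assuming the prompt list arrives as a single JSON text block.

```python
# Sketch: list all prompts and print their names (no arguments required).
import asyncio, json
from phoenix_mcp_client import call_phoenix_tool  # helper from the sketch above

async def main() -> None:
    texts = await call_phoenix_tool("list-prompts")
    for prompt in json.loads(texts[0]):
        print(prompt["id"], prompt["name"], "-", prompt.get("description", ""))

asyncio.run(main())
```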

list-datasets

Get a list of all datasets. Datasets are collections of 'dataset examples', where each example includes an input, an (expected) output, and optional metadata. They are primarily used as inputs for experiments. Example usage: Show me all available datasets Expected return: Array of dataset objects with metadata. Example: [ { "id": "RGF0YXNldDox", "name": "my-dataset", "description": "A dataset for testing", "metadata": {}, "created_at": "2024-03-20T12:00:00Z", "updated_at": "2024-03-20T12:00:00Z" } ]

list-projects

Get a list of all projects. Projects are containers for organizing traces, spans, and other observability data. Each project has a unique name and can contain traces from different applications or experiments. Example usage: Show me all available projects Expected return: Array of project objects with metadata. Example: [ { "id": "UHJvamVjdDox", "name": "default", "description": "Default project for traces" }, { "id": "UHJvamVjdDoy", "name": "my-experiment", "description": "Project for my ML experiment" } ]

upsert-prompt

Create or update a prompt with its template and configuration. Creates a new prompt and its initial version with specified model settings. Example usage: Create a new prompt named 'email_generator' with a template for generating emails Expected return: A confirmation message of successful prompt creation
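A hedged sketch of creating a prompt follows; every argument name here (name, description, model settings, template messages) is an assumption about the tool's input schema, so inspect the schema reported by the server before relying on it.

```python
# Sketch: create (or update) a prompt. All argument names below are assumptions
# about the tool's schema -- check the server-reported input schema first.
import asyncio
from phoenix_mcp_client import call_phoenix_tool  # helper from the sketch above

async def main() -> None:
    texts = await call_phoenix_tool("upsert-prompt", {
        "name": "email_generator",
        "description": "Drafts a short outreach email",
        "model_provider": "OPENAI",
        "model_name": "gpt-4o-mini",
        "template": [
            {"role": "system", "content": "You write concise, friendly emails."},
            {"role": "user", "content": "Write an email about {{topic}} to {{recipient}}."},
        ],
    })
    print(texts[0])  # confirmation message on success

asyncio.run(main())
```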

phoenix-support

Get help with Phoenix and OpenInference, including: tracing AI applications via OpenInference and OpenTelemetry; Phoenix datasets, experiments, and prompt management; and Phoenix evals and annotations. Use this tool when you need assistance with Phoenix features, troubleshooting, or best practices. Expected return: Expert guidance about how to use and integrate Phoenix

get-latest-prompt

Get the latest version of a prompt. Returns the prompt version with its template, model configuration, and invocation parameters. Example usage: Get the latest version of a prompt named 'article-summarizer' Expected return: Prompt version object with template and configuration. Example: { "description": "Initial version", "model_provider": "OPENAI", "model_name": "gpt-3.5-turbo", "template": { "type": "chat", "messages": [ { "role": "system", "content": "You are an expert summarizer. Create clear, concise bullet points highlighting the key information." }, { "role": "user", "content": "Please summarize the following {{topic}} article: {{article}}" } ] }, "template_type": "CHAT", "template_format": "MUSTACHE", "invocation_parameters": { "type": "openai", "openai": {} }, "id": "promptversionid1234" }
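A sketch that fetches the latest version and fills the MUSTACHE-style `{{variables}}` with a naive string substitution; the `prompt_identifier` argument name and the single-JSON-block response shape are assumptions.

```python
# Sketch: fetch the latest prompt version and fill in its {{variables}}.
# The "prompt_identifier" argument name is assumed.
import asyncio, json
from phoenix_mcp_client import call_phoenix_tool  # helper from the sketch above

async def main() -> None:
    texts = await call_phoenix_tool("get-latest-prompt",
                                    {"prompt_identifier": "article-summarizer"})
    version = json.loads(texts[0])
    variables = {"topic": "security", "article": "…article text…"}
    for message in version["template"]["messages"]:
        content = message["content"]
        for key, value in variables.items():  # minimal MUSTACHE-style substitution
            content = content.replace("{{" + key + "}}", value)
        print(message["role"].upper(), ":", content)

asyncio.run(main())
```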

get-prompt-version

Get a specific version of a prompt using its version ID. Returns the prompt version with its template, model configuration, and invocation parameters. Example usage: Get a specific prompt version with ID 'promptversionid1234' Expected return: Prompt version object with template and configuration. Example: { "description": "Initial version", "model_provider": "OPENAI", "model_name": "gpt-3.5-turbo", "template": { "type": "chat", "messages": [ { "role": "system", "content": "You are an expert summarizer. Create clear, concise bullet points highlighting the key information." }, { "role": "user", "content": "Please summarize the following {{topic}} article: {{article}}" } ] }, "template_type": "CHAT", "template_format": "MUSTACHE", "invocation_parameters": { "type": "openai", "openai": {} }, "id": "promptversionid1234" }

add-dataset-examples

Add examples to an existing dataset. This tool adds one or more examples to an existing dataset. Each example includes an input, output, and metadata. The metadata will automatically include information indicating that these examples were synthetically generated via MCP. When calling this tool, check existing examples using the "get-dataset-examples" tool to ensure that you are not adding duplicate examples and that you follow existing patterns for how data is structured. Example usage: Analyze the examples in "my-dataset" and augment them with new examples to cover relevant edge cases Expected return: Confirmation of successful addition of examples to the dataset. Example: { "dataset_name": "my-dataset", "message": "Successfully added examples to dataset" }

get-dataset-examples

Get examples from a dataset. Dataset examples are returned as an array of objects, each including an input, an (expected) output, and optional metadata. These examples typically represent input to an application or model (e.g. prompt template variables, a code file, or an image) and are used to test or benchmark changes. Example usage: Show me all examples from dataset RGF0YXNldDox Expected return: Object containing dataset ID, version ID, and array of examples. Example: { "dataset_id": "datasetid1234", "version_id": "datasetversionid1234", "examples": [ { "id": "exampleid1234", "input": { "text": "Sample input text" }, "output": { "text": "Expected output text" }, "metadata": {}, "updated_at": "YYYY-MM-DDTHH:mm:ssZ" } ] }
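The two dataset tools are naturally used together: read the existing examples first, then add only genuinely new ones. The sketch below assumes `dataset_id`, `dataset_name`, and `examples` as argument names.

```python
# Sketch: inspect existing examples, then append only new ones that follow the
# same shape. The "dataset_id", "dataset_name", and "examples" argument names
# are assumptions about the tools' schemas.
import asyncio, json
from phoenix_mcp_client import call_phoenix_tool  # helper from the sketch above

async def main() -> None:
    texts = await call_phoenix_tool("get-dataset-examples",
                                    {"dataset_id": "RGF0YXNldDox"})
    existing = json.loads(texts[0])["examples"]
    seen_inputs = {json.dumps(ex["input"], sort_keys=True) for ex in existing}

    candidates = [{"input": {"text": "Empty request body"},
                   "output": {"text": "Return HTTP 400 with a validation error"},
                   "metadata": {"case": "edge"}}]
    new = [ex for ex in candidates
           if json.dumps(ex["input"], sort_keys=True) not in seen_inputs]

    if new:
        confirm = await call_phoenix_tool("add-dataset-examples",
                                          {"dataset_name": "my-dataset",
                                           "examples": new})
        print(confirm[0])

asyncio.run(main())
```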

get-experiment-by-id

Get an experiment by its ID. The tool returns experiment metadata in the first content block and a JSON object with the experiment data in the second. The experiment data contains both the results of each experiment run and the annotations made by an evaluator to score or label the results, for example, comparing the output of an experiment run to the expected output from the dataset example. Example usage: Show me the experiment results for experiment RXhwZXJpbWVudDo4 Expected return: Object containing experiment metadata and results. Example: { "metadata": { "id": "experimentid1234", "dataset_id": "datasetid1234", "dataset_version_id": "datasetversionid1234", "repetitions": 1, "metadata": {}, "project_name": "Experiment-abc123", "created_at": "YYYY-MM-DDTHH:mm:ssZ", "updated_at": "YYYY-MM-DDTHH:mm:ssZ" }, "experimentResult": [ { "example_id": "exampleid1234", "repetition_number": 0, "input": "Sample input text", "reference_output": "Expected output text", "output": "Actual output text", "error": null, "latency_ms": 1000, "start_time": "2025-03-20T12:00:00Z", "end_time": "2025-03-20T12:00:01Z", "trace_id": "trace-123", "prompt_token_count": 10, "completion_token_count": 20, "annotations": [ { "name": "quality", "annotator_kind": "HUMAN", "label": "good", "score": 0.9, "explanation": "Output matches expected format", "trace_id": "trace-456", "error": null, "metadata": {}, "start_time": "YYYY-MM-DDTHH:mm:ssZ", "end_time": "YYYY-MM-DDTHH:mm:ssZ" } ] } ] }
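A sketch of reading the result, assuming both content blocks carry JSON (metadata first, full experiment data last) and that the argument is named `experiment_id`.

```python
# Sketch: fetch an experiment and summarize each run's latency and scores.
# Assumes the metadata block arrives first and the full data block last.
import asyncio, json
from phoenix_mcp_client import call_phoenix_tool  # helper from the sketch above

async def main() -> None:
    texts = await call_phoenix_tool("get-experiment-by-id",
                                    {"experiment_id": "RXhwZXJpbWVudDo4"})
    metadata = json.loads(texts[0])
    data = json.loads(texts[-1])
    print("experiment on dataset", metadata.get("dataset_id"))
    for run in data.get("experimentResult", []):
        scores = [a["score"] for a in run.get("annotations", [])
                  if a.get("score") is not None]
        print(run["example_id"], "latency_ms:", run["latency_ms"], "scores:", scores)

asyncio.run(main())
```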

get-span-annotations

Get span annotations for a list of span IDs. Span annotations provide additional metadata, scores, or labels for spans. They can be created by humans, LLMs, or code and help in analyzing and categorizing spans. Example usage: Get annotations for spans ["span1", "span2"] from project "my-project" Get quality score annotations for span "span1" from project "my-project" Expected return: Object containing annotations array and optional next cursor for pagination. Example: { "annotations": [ { "id": "annotation123", "span_id": "span1", "name": "quality_score", "result": { "label": "good", "score": 0.95, "explanation": null }, "annotator_kind": "LLM", "metadata": { "model": "gpt-4" } } ], "nextCursor": "cursor_for_pagination" }
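A sketch of fetching annotations for two spans; the `project_name` and `span_ids` argument names are assumptions.

```python
# Sketch: fetch annotations for a couple of spans (argument names assumed).
import asyncio, json
from phoenix_mcp_client import call_phoenix_tool  # helper from the sketch above

async def main() -> None:
    texts = await call_phoenix_tool("get-span-annotations", {
        "project_name": "my-project",
        "span_ids": ["span1", "span2"],
    })
    for ann in json.loads(texts[0])["annotations"]:
        result = ann.get("result", {})
        print(ann["span_id"], ann["name"], result.get("label"), result.get("score"))

asyncio.run(main())
```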

list-prompt-versions

Get a list of all versions for a specific prompt. Returns versions with pagination support. Example usage: List all versions of a prompt named 'article-summarizer' Expected return: Array of prompt version objects with IDs and configuration. Example: [ { "description": "Initial version", "model_provider": "OPENAI", "model_name": "gpt-3.5-turbo", "template": { "type": "chat", "messages": [ { "role": "system", "content": "You are an expert summarizer. Create clear, concise bullet points highlighting the key information." }, { "role": "user", "content": "Please summarize the following {{topic}} article: {{article}}" } ] }, "template_type": "CHAT", "template_format": "MUSTACHE", "invocation_parameters": { "type": "openai", "openai": {} }, "id": "promptversionid1234" } ]

add-prompt-version-tag

Add a tag to a specific prompt version. The operation returns no content on success (204 status code). Example usage: Tag prompt version 'promptversionid1234' with the name 'production' Expected return: Confirmation message of successful tag addition
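A sketch of tagging a version as 'production'; the `prompt_version_id` and `name` argument names are assumptions, and because the underlying API returns 204 No Content, the tool may return only a short confirmation message (or nothing).

```python
# Sketch: tag a prompt version as "production" (argument names assumed).
import asyncio
from phoenix_mcp_client import call_phoenix_tool  # helper from the sketch above

async def main() -> None:
    texts = await call_phoenix_tool("add-prompt-version-tag", {
        "prompt_version_id": "promptversionid1234",  # assumed parameter name
        "name": "production",
    })
    # The API returns 204 No Content on success, so expect little or no output.
    print(texts[0] if texts else "tag added")

asyncio.run(main())
```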

get-dataset-experiments

List experiments run on a dataset. Example usage: Show me all experiments run on dataset RGF0YXNldDox Expected return: Array of experiment objects with metadata. Example: [ { "id": "experimentid1234", "dataset_id": "datasetid1234", "dataset_version_id": "datasetversionid1234", "repetitions": 1, "metadata": {}, "project_name": "Experiment-abc123", "created_at": "YYYY-MM-DDTHH:mm:ssZ", "updated_at": "YYYY-MM-DDTHH:mm:ssZ" } ]

get-prompt-by-identifier

Get a prompt's latest version by its identifier (name or ID). Returns the prompt version with its template, model configuration, and invocation parameters. Example usage: Get the latest version of a prompt with name 'article-summarizer' Expected return: Prompt version object with template and configuration. Example: { "description": "Initial version", "model_provider": "OPENAI", "model_name": "gpt-3.5-turbo", "template": { "type": "chat", "messages": [ { "role": "system", "content": "You are an expert summarizer. Create clear, concise bullet points highlighting the key information." }, { "role": "user", "content": "Please summarize the following {{topic}} article: {{article}}" } ] }, "template_type": "CHAT", "template_format": "MUSTACHE", "invocation_parameters": { "type": "openai", "openai": {} }, "id": "promptversionid1234" }

list-prompt-version-tags

Get a list of all tags for a specific prompt version. Returns tag objects with pagination support. Example usage: List all tags associated with prompt version 'promptversionid1234' Expected return: Array of tag objects with names and IDs. Example: [ { "name": "staging", "description": "The version deployed to staging", "id": "promptversionid1234" }, { "name": "development", "description": "The version deployed for development", "id": "promptversionid1234" } ]

get-prompt-version-by-tag

Get a prompt version by its tag name. Returns the prompt version with its template, model configuration, and invocation parameters. Example usage: Get the 'production' tagged version of prompt 'article-summarizer' Expected return: Prompt version object with template and configuration. Example: { "description": "Initial version", "model_provider": "OPENAI", "model_name": "gpt-3.5-turbo", "template": { "type": "chat", "messages": [ { "role": "system", "content": "You are an expert summarizer. Create clear, concise bullet points highlighting the key information." }, { "role": "user", "content": "Please summarize the following {{topic}} article: {{article}}" } ] }, "template_type": "CHAT", "template_format": "MUSTACHE", "invocation_parameters": { "type": "openai", "openai": {} }, "id": "promptversionid1234" }
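A sketch of resolving whichever version currently carries the 'production' tag; the `prompt_identifier` and `tag_name` argument names are assumptions.

```python
# Sketch: fetch the prompt version tagged "production" (argument names assumed).
import asyncio, json
from phoenix_mcp_client import call_phoenix_tool  # helper from the sketch above

async def main() -> None:
    texts = await call_phoenix_tool("get-prompt-version-by-tag", {
        "prompt_identifier": "article-summarizer",
        "tag_name": "production",
    })
    version = json.loads(texts[0])
    print(version["id"], version["model_provider"], version["model_name"])

asyncio.run(main())
```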

list-experiments-for-dataset

Get a list of all the experiments run on a given dataset. Experiments are collections of experiment runs; each experiment run corresponds to a single dataset example. The dataset example is passed to an implied `task`, which in turn produces an output. Example usage: Show me all the experiments I've run on dataset RGF0YXNldDox Expected return: Array of experiment objects with metadata. Example: [ { "id": "experimentid1234", "dataset_id": "datasetid1234", "dataset_version_id": "datasetversionid1234", "repetitions": 1, "metadata": {}, "project_name": "Experiment-abc123", "created_at": "YYYY-MM-DDTHH:mm:ssZ", "updated_at": "YYYY-MM-DDTHH:mm:ssZ" } ]

Configuration

Customize the skillset to fit your needs.
MCP Server

