Axiom MCP Server

Integrates with Axiom to execute APL queries and list datasets, enabling log analysis, anomaly detection, and data-driven decision making.

Skills

Explore the skills and capabilities of this skillset.

queryApl

# Instructions

1. Query Axiom datasets using Axiom Processing Language (APL). The query must be a valid APL query string.
2. ALWAYS get the schema of the dataset before running queries rather than guessing. You can do this by getting a single event and projecting all fields.
3. Keep in mind that there's a maximum row limit of 65000 rows per query.
4. Prefer aggregations over non-aggregating queries when possible to reduce the amount of data returned.
5. Be selective in what you project in each query (unless otherwise needed, like for discovering the schema). It's expensive to project all fields.
6. ALWAYS restrict the time range of the query to the smallest possible range that meets your needs. This will reduce the amount of data scanned and improve query performance.
7. NEVER guess the schema of the dataset. If you don't know where something is, use search first to find in which fields it appears.

# Examples

Basic:
- Filter: ['logs'] | where ['severity'] == "error" or ['duration'] > 500ms
- Time range: ['logs'] | where ['_time'] > ago(2h) and ['_time'] < now()
- Project rename: ['logs'] | project-rename responseTime=['duration'], path=['url']

Aggregations:
- Count by: ['logs'] | summarize count() by bin(['_time'], 5m), ['status']
- Multiple aggs: ['logs'] | summarize count(), avg(['duration']), max(['duration']), p95=percentile(['duration'], 95) by ['endpoint']
- Dimensional: ['logs'] | summarize dimensional_analysis(['isError'], pack_array(['endpoint'], ['status']))
- Histograms: ['logs'] | summarize histogram(['responseTime'], 100) by ['endpoint']
- Distinct: ['logs'] | summarize dcount(['userId']) by bin_auto(['_time'])

Search & Parse:
- Search all: search "error" or "exception"
- Parse logs: ['logs'] | parse-kv ['message'] as (duration:long, error:string) with (pair_delimiter=",")
- Regex extract: ['logs'] | extend errorCode = extract("error code ([0-9]+)", 1, ['message'])
- Contains ops: ['logs'] | where ['message'] contains_cs "ERROR" or ['message'] startswith "FATAL"

Data Shaping:
- Extend & Calculate: ['logs'] | extend duration_s = ['duration']/1000, success = ['status'] < 400
- Dynamic: ['logs'] | extend props = parse_json(['properties']) | where ['props.level'] == "error"
- Pack/Unpack: ['logs'] | extend fields = pack("status", ['status'], "duration", ['duration'])
- Arrays: ['logs'] | where ['url'] in ("login", "logout", "home") | where array_length(['tags']) > 0

Advanced:
- Make series: ['metrics'] | make-series avg(['cpu']) default=0 on ['_time'] step 1m by ['host']
- Join: ['errors'] | join kind=inner (['users'] | project ['userId'], ['email']) on ['userId']
- Union: union ['logs-app*'] | where ['severity'] == "error"
- Fork: ['logs'] | fork (where ['status'] >= 500 | as errors) (where ['status'] < 300 | as success)
- Case: ['logs'] | extend level = case(['status'] >= 500, "error", ['status'] >= 400, "warn", "info")

Time Operations:
- Bin & Range: ['logs'] | where ['_time'] between(datetime(2024-01-01)..now())
- Multiple time bins: ['logs'] | summarize count() by bin(['_time'], 1h), bin(['_time'], 1d)
- Time shifts: ['logs'] | extend prev_hour = ['_time'] - 1h

String Operations:
- String funcs: ['logs'] | extend domain = tolower(extract("://([^/]+)", 1, ['url']))
- Concat: ['logs'] | extend full_msg = strcat(['level'], ": ", ['message'])
- Replace: ['logs'] | extend clean_msg = replace_regex("(password=)[^&]*", "\1***", ['message'])

Common Patterns:
- Error analysis: ['logs'] | where ['severity'] == "error" | summarize error_count=count() by ['error_code'], ['service']
- Status codes: ['logs'] | summarize requests=count() by ['status'], bin_auto(['_time']) | where ['status'] >= 500
- Latency tracking: ['logs'] | summarize p50=percentile(['duration'], 50), p90=percentile(['duration'], 90) by ['endpoint']
- User activity: ['logs'] | summarize user_actions=count() by ['userId'], ['action'], bin(['_time'], 1h)
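As a rough illustration of these rules, here is a minimal TypeScript sketch that composes query strings a client could pass to queryApl: a schema-discovery query first, then a time-bounded aggregation. The dataset name ("logs"), field names, and helper function are hypothetical placeholders, not part of the skill's contract.

```typescript
// Minimal sketch: composing APL query strings that follow the rules above.
// The dataset ("logs") and field names are placeholders; adjust them to the
// schemas returned by getDatasetInfoAndSchema.

// Rules 2 and 7: discover the schema first by fetching a single event.
const schemaDiscoveryQuery = `['logs'] | take 1`;

// Rules 4-6: aggregate, project selectively, and restrict the time range.
function errorCountByEndpoint(lookback: string = "1h"): string {
  return [
    `['logs']`,
    `where ['_time'] > ago(${lookback})`,
    `where ['severity'] == "error"`,
    `summarize error_count=count() by bin(['_time'], 5m), ['endpoint']`,
  ].join("\n| ");
}

console.log(schemaDiscoveryQuery);
console.log(errorCountByEndpoint("30m"));
```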

listDatasets

List all available Axiom datasets

getDatasetInfoAndSchema

Get dataset info and schema
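Taken together, the three skills support a schema-first workflow: list datasets, fetch a schema, then run a bounded query. Bika.ai connects to the server through its own UI, but if you were calling the same server from a generic MCP client, the workflow might look like the sketch below, which uses the @modelcontextprotocol/sdk TypeScript package. The server launch command, environment variable, and tool argument names are assumptions for illustration; check the server's own documentation.

```typescript
// Sketch of the listDatasets -> getDatasetInfoAndSchema -> queryApl workflow
// from a generic MCP client. Command, env var, and argument names below are
// assumptions, not documented values.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  const transport = new StdioClientTransport({
    command: "axiom-mcp-server",                            // hypothetical launch command
    env: { AXIOM_TOKEN: process.env.AXIOM_TOKEN ?? "" },    // hypothetical env var
  });
  const client = new Client({ name: "axiom-demo", version: "0.1.0" }, { capabilities: {} });
  await client.connect(transport);

  // 1. Discover which datasets exist.
  const datasets = await client.callTool({ name: "listDatasets", arguments: {} });
  console.log(datasets);

  // 2. Fetch the schema before writing any query (rule 2 above).
  const schema = await client.callTool({
    name: "getDatasetInfoAndSchema",
    arguments: { dataset: "logs" },                         // argument name is an assumption
  });
  console.log(schema);

  // 3. Run a time-bounded aggregation rather than a raw scan (rules 4-6).
  const result = await client.callTool({
    name: "queryApl",
    arguments: {
      query: `['logs'] | where ['_time'] > ago(1h) | summarize count() by ['status']`,
    },
  });
  console.log(result);

  await client.close();
}

main().catch(console.error);
```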

Configuration

Customize the skillset to fit your needs.
MCP Server

Use "Connect to MCP Server" to link this skillset to the Axiom MCP Server.
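The connection itself is configured through Bika.ai's UI. For reference, many MCP hosts describe servers with an "mcpServers"-style entry; the sketch below shows one such entry as a TypeScript object. The command name, environment variable names, and token value are placeholders, not documented values for this integration.

```typescript
// Hypothetical "mcpServers"-style entry for an Axiom MCP server, expressed as
// a TypeScript object. Field names mirror the common MCP host config shape;
// the command and environment variables are placeholders.
interface McpServerEntry {
  command: string;
  args?: string[];
  env?: Record<string, string>;
}

const axiomServer: McpServerEntry = {
  command: "axiom-mcp-server",           // placeholder launch command
  env: {
    AXIOM_TOKEN: "xaat-your-api-token",  // Axiom API token (placeholder)
    AXIOM_URL: "https://api.axiom.co",   // Axiom API endpoint (cloud default)
  },
};

console.log(JSON.stringify({ mcpServers: { axiom: axiomServer } }, null, 2));
```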

Requirements Document Writing Assistant
Tell me your product or feature idea, and I will help you create a comprehensive, detailed requirements document covering user stories, acceptance criteria, technical specifications, and more.
AI Web Engineer
AI Programmer is an AI agent that turns your raw release notes into a polished, publish-ready HTML page.
Stock News Reporter
This AI agent monitors and analyzes major U.S. stock news in real time and generates structured investment reports with key insights, market reactions, and sector-level summaries.
X/Twitter Assistant
An AI-powered Twitter assistant that helps content creators turn AI product experiences into viral tweets, with automatic polishing, smart research, and one-click publishing.
Office Document Assistant
An AI virtual administrative assistant designed for internal company operations. It helps you quickly create high-quality internal documents such as announcements, meeting minutes, summaries, forms, processes, and HR records.
AI Writing Assistant
Tell me about your AI product or brand, and I will write engaging marketing copy, articles, and social media posts tailored to your brand voice and product details, complete with relevant links and illustrations.
Community Activity Analyst
Analyzes screenshots of community activity and reports engagement trends and discussion highlights. Upload screenshots of community interactions and the agent generates a clear Markdown report summarizing engagement levels, key discussion topics, and notable highlights, well suited to community managers, marketers, and product teams.
Discourse Community Manager
The Discourse community manager assistant helps you quickly generate clear, friendly, well-structured replies to users, making community management easier and more professional.
Google Analyst
A step-by-step guide to connecting a Google Analytics 4 (GA4) property to the Google Analyst agent. It covers creating a Google Cloud service account, enabling the Analytics Data API, granting GA4 Viewer access, and configuring the agent to report metrics such as sessions, users, bounce rate, and conversions. Ideal for quickly setting up GA4 data reporting in Bika.ai.

Frequently Asked Questions

In one sentence: what is Bika.ai?
What makes Bika.ai unique?
What does the acronym "BIKA" stand for?
How does Bika.ai get work done with AI automation?
Is Bika.ai free to use?
How is Bika.ai different from AI assistants like ChatGPT and Gemini?
How is Bika.ai different from multidimensional table tools?
Will Bika.ai slow down once a single table holds large volumes of data and many linked references, e.g. tens or hundreds of thousands of rows?
What is a "Space" in Bika.ai?
How many paid Spaces do I get after paying?
What are "resources"?
How does the Bika.ai team "eat its own dog food"?
How does Bika.ai help improve work efficiency?
What are the key features of Bika.ai's AI automation?
What are automation templates in Bika.ai?
Does Bika.ai support team collaboration and permission features?
