Axiom MCP Server

Integrates with Axiom for executing APL queries and listing datasets, enabling log analysis, anomaly detection, and data-driven decision making.

Skills

Explore the skills and capabilities of this skillset.

queryApl

# Instructions

1. Query Axiom datasets using Axiom Processing Language (APL). The query must be a valid APL query string.
2. ALWAYS get the schema of the dataset before running queries rather than guessing. You can do this by getting a single event and projecting all fields.
3. Keep in mind that there is a maximum limit of 65,000 rows per query.
4. Prefer aggregations over non-aggregating queries when possible to reduce the amount of data returned.
5. Be selective in what you project in each query (unless otherwise needed, such as when discovering the schema). Projecting all fields is expensive.
6. ALWAYS restrict the time range of the query to the smallest range that meets your needs. This reduces the amount of data scanned and improves query performance.
7. NEVER guess the schema of the dataset. If you don't know where something is, use search first to find which fields it appears in.

# Examples

Basic:
- Filter: ['logs'] | where ['severity'] == "error" or ['duration'] > 500ms
- Time range: ['logs'] | where ['_time'] > ago(2h) and ['_time'] < now()
- Project rename: ['logs'] | project-rename responseTime=['duration'], path=['url']

Aggregations:
- Count by: ['logs'] | summarize count() by bin(['_time'], 5m), ['status']
- Multiple aggs: ['logs'] | summarize count(), avg(['duration']), max(['duration']), p95=percentile(['duration'], 95) by ['endpoint']
- Dimensional: ['logs'] | summarize dimensional_analysis(['isError'], pack_array(['endpoint'], ['status']))
- Histograms: ['logs'] | summarize histogram(['responseTime'], 100) by ['endpoint']
- Distinct: ['logs'] | summarize dcount(['userId']) by bin_auto(['_time'])

Search & Parse:
- Search all: search "error" or "exception"
- Parse logs: ['logs'] | parse-kv ['message'] as (duration:long, error:string) with (pair_delimiter=",")
- Regex extract: ['logs'] | extend errorCode = extract("error code ([0-9]+)", 1, ['message'])
- Contains ops: ['logs'] | where ['message'] contains_cs "ERROR" or ['message'] startswith "FATAL"

Data Shaping:
- Extend & calculate: ['logs'] | extend duration_s = ['duration']/1000, success = ['status'] < 400
- Dynamic: ['logs'] | extend props = parse_json(['properties']) | where ['props.level'] == "error"
- Pack/unpack: ['logs'] | extend fields = pack("status", ['status'], "duration", ['duration'])
- Arrays: ['logs'] | where ['url'] in ("login", "logout", "home") | where array_length(['tags']) > 0

Advanced:
- Make series: ['metrics'] | make-series avg(['cpu']) default=0 on ['_time'] step 1m by ['host']
- Join: ['errors'] | join kind=inner (['users'] | project ['userId'], ['email']) on ['userId']
- Union: union ['logs-app*'] | where ['severity'] == "error"
- Fork: ['logs'] | fork (where ['status'] >= 500 | as errors) (where ['status'] < 300 | as success)
- Case: ['logs'] | extend level = case(['status'] >= 500, "error", ['status'] >= 400, "warn", "info")

Time Operations:
- Bin & range: ['logs'] | where ['_time'] between(datetime(2024-01-01)..now())
- Multiple time bins: ['logs'] | summarize count() by bin(['_time'], 1h), bin(['_time'], 1d)
- Time shifts: ['logs'] | extend prev_hour = ['_time'] - 1h

String Operations:
- String funcs: ['logs'] | extend domain = tolower(extract("://([^/]+)", 1, ['url']))
- Concat: ['logs'] | extend full_msg = strcat(['level'], ": ", ['message'])
- Replace: ['logs'] | extend clean_msg = replace_regex("(password=)[^&]*", "\1***", ['message'])

Common Patterns:
- Error analysis: ['logs'] | where ['severity'] == "error" | summarize error_count=count() by ['error_code'], ['service']
- Status codes: ['logs'] | summarize requests=count() by ['status'], bin_auto(['_time']) | where ['status'] >= 500
- Latency tracking: ['logs'] | summarize p50=percentile(['duration'], 50), p90=percentile(['duration'], 90) by ['endpoint']
- User activity: ['logs'] | summarize user_actions=count() by ['userId'], ['action'], bin(['_time'], 1h)
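Putting instructions 6 and 7 into practice: first locate where a value appears with search, then aggregate over the narrowest useful window. The sketch below is illustrative only; the dataset name ['http-logs'] and the field names are hypothetical placeholders, not part of any real schema.

```
// Step 1: find which fields mention the value before filtering on it
// (run as its own query; the take keeps the sample cheap).
['http-logs']
| where ['_time'] > ago(1h)
| search "timeout"
| take 5

// Step 2: once the relevant field is known, aggregate over a tight window,
// grouping only by the keys you actually need.
['http-logs']
| where ['_time'] > ago(15m) and ['message'] contains "timeout"
| summarize timeouts = count() by bin(['_time'], 1m), ['endpoint']
```

Run the two queries separately: the first reveals where the value lives, the second is the cheap, scoped follow-up that respects the row limit and time-range guidance above.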

listDatasets

List all available Axiom datasets
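Once the available dataset names are known, a quick volume check over a short window can confirm that a discovered dataset is actively receiving events before investing in deeper queries. A minimal sketch; ['http-logs'] is again a hypothetical name:

```
// Confirm the dataset is receiving events before heavier analysis.
['http-logs']
| where ['_time'] > ago(10m)
| summarize events = count() by bin(['_time'], 1m)
```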

getDatasetInfoAndSchema

Get dataset info and schema
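If this tool is unavailable, the queryApl instructions above recommend approximating the schema by sampling a single event with all of its fields intact. A one-line sketch with the same hypothetical dataset name:

```
// Fallback schema discovery: one full event shows every populated field.
['http-logs'] | take 1
```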

Configuration

Customize the skillset to fit your needs.
MCP Server

Connect to the Axiom MCP Server.

Related Templates

Customer Support Documentation Assistant
An AI assistant that helps customer support teams create high-quality support documentation, including FAQs, ticket replies, apology letters, and standard operating procedures. It guides you through creating both internal resources and customer-facing materials.
GitHub Issues Assistant
An AI agent that streamlines GitHub issue management, simplifying the creation, tracking, and prioritization of bugs, tasks, and feature requests directly in a repository. Well suited to teams: it ensures consistent formatting, automates repetitive steps, and integrates with development pipelines.
Brand Designer
A brand-marketing AI assistant built for early-stage digital products, helping you quickly generate promotional materials for platforms such as Product Hunt and AppSumo, covering visual concepts, taglines, brand voice, and selling-point messaging.
X/Twitter Assistant
An AI-powered Twitter assistant that helps content creators turn AI product experiences into viral tweets, with automatic polishing, smart research, and one-click publishing.
Google Analyst
A step-by-step guide to connecting a Google Analytics 4 (GA4) property to the Google Analyst agent. It covers creating a Google Cloud service account, enabling the Analytics Data API, granting GA4 viewer access, and configuring the agent to report metrics such as sessions, users, bounce rate, and conversions. Ideal for quickly setting up GA4 data reporting in Bika.ai.
Discourse Community Manager
An assistant that helps you quickly generate clear, friendly, well-structured replies to users, making community management easier and more professional.
AI Writing Assistant
Tell me about your AI product or brand, and I will write engaging marketing copy, articles, and social media posts tailored to your brand voice and product details, complete with relevant links and illustrations.
AI Web Engineer
AI Programmer turns your raw release notes into a sleek, publish-ready HTML page.
Ticket Manager
Collects, analyzes, and manages support tickets from forms and databases, helping you track, prioritize, and respond to them efficiently.

Frequently Asked Questions

In one sentence: what is Bika.ai?
What makes Bika.ai unique?
What does the acronym "BIKA" stand for?
How does Bika.ai use AI to automate work?
Is Bika.ai free to use?
How does Bika.ai differ from AI assistants such as ChatGPT and Gemini?
How does Bika.ai differ from multidimensional-table tools?
Does Bika.ai slow down once a single table holds tens or hundreds of thousands of rows with many linked references?
What is a "Space" in Bika.ai?
How many paid Spaces do I get after paying?
What is a "resource"?
How does the Bika.ai team "eat its own dog food"?
How does Bika.ai help improve productivity?
What distinguishes Bika.ai's AI automation features?
What are automation templates in Bika.ai?
Does Bika.ai support team collaboration and permissions?
