Axiom MCP Server

Integrates with Axiom for executing APL queries and listing datasets, enabling log analysis, anomaly detection, and data-driven decision making.

Skills

Explore the skills and capabilities of this skillset.

queryApl

# Instructions

1. Query Axiom datasets using Axiom Processing Language (APL). The query must be a valid APL query string.
2. ALWAYS get the schema of the dataset before running queries rather than guessing. You can do this by getting a single event and projecting all fields.
3. Keep in mind that there's a maximum row limit of 65,000 rows per query.
4. Prefer aggregations over non-aggregating queries when possible to reduce the amount of data returned.
5. Be selective in what you project in each query (unless otherwise needed, like for discovering the schema). It's expensive to project all fields.
6. ALWAYS restrict the time range of the query to the smallest possible range that meets your needs. This reduces the amount of data scanned and improves query performance.
7. NEVER guess the schema of the dataset. If you don't know where something is, use search first to find which fields it appears in.

# Examples

Basic:
- Filter: ['logs'] | where ['severity'] == "error" or ['duration'] > 500ms
- Time range: ['logs'] | where ['_time'] > ago(2h) and ['_time'] < now()
- Project rename: ['logs'] | project-rename responseTime=['duration'], path=['url']

Aggregations:
- Count by: ['logs'] | summarize count() by bin(['_time'], 5m), ['status']
- Multiple aggs: ['logs'] | summarize count(), avg(['duration']), max(['duration']), p95=percentile(['duration'], 95) by ['endpoint']
- Dimensional: ['logs'] | summarize dimensional_analysis(['isError'], pack_array(['endpoint'], ['status']))
- Histograms: ['logs'] | summarize histogram(['responseTime'], 100) by ['endpoint']
- Distinct: ['logs'] | summarize dcount(['userId']) by bin_auto(['_time'])

Search & Parse:
- Search all: search "error" or "exception"
- Parse logs: ['logs'] | parse-kv ['message'] as (duration:long, error:string) with (pair_delimiter=",")
- Regex extract: ['logs'] | extend errorCode = extract("error code ([0-9]+)", 1, ['message'])
- Contains ops: ['logs'] | where ['message'] contains_cs "ERROR" or ['message'] startswith "FATAL"

Data Shaping:
- Extend & calculate: ['logs'] | extend duration_s = ['duration']/1000, success = ['status'] < 400
- Dynamic: ['logs'] | extend props = parse_json(['properties']) | where ['props.level'] == "error"
- Pack/unpack: ['logs'] | extend fields = pack("status", ['status'], "duration", ['duration'])
- Arrays: ['logs'] | where ['url'] in ("login", "logout", "home") | where array_length(['tags']) > 0

Advanced:
- Make series: ['metrics'] | make-series avg(['cpu']) default=0 on ['_time'] step 1m by ['host']
- Join: ['errors'] | join kind=inner (['users'] | project ['userId'], ['email']) on ['userId']
- Union: union ['logs-app*'] | where ['severity'] == "error"
- Fork: ['logs'] | fork (where ['status'] >= 500 | as errors) (where ['status'] < 300 | as success)
- Case: ['logs'] | extend level = case(['status'] >= 500, "error", ['status'] >= 400, "warn", "info")

Time Operations:
- Bin & range: ['logs'] | where ['_time'] between(datetime(2024-01-01)..now())
- Multiple time bins: ['logs'] | summarize count() by bin(['_time'], 1h), bin(['_time'], 1d)
- Time shifts: ['logs'] | extend prev_hour = ['_time'] - 1h

String Operations:
- String funcs: ['logs'] | extend domain = tolower(extract("://([^/]+)", 1, ['url']))
- Concat: ['logs'] | extend full_msg = strcat(['level'], ": ", ['message'])
- Replace: ['logs'] | extend clean_msg = replace_regex("(password=)[^&]*", "\1***", ['message'])

Common Patterns:
- Error analysis: ['logs'] | where ['severity'] == "error" | summarize error_count=count() by ['error_code'], ['service']
- Status codes: ['logs'] | summarize requests=count() by ['status'], bin_auto(['_time']) | where ['status'] >= 500
- Latency tracking: ['logs'] | summarize p50=percentile(['duration'], 50), p90=percentile(['duration'], 90) by ['endpoint']
- User activity: ['logs'] | summarize user_actions=count() by ['userId'], ['action'], bin(['_time'], 1h)
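The workflow these instructions describe — discover the schema first, then run a tightly scoped aggregation — can be driven from any MCP client. Below is a minimal sketch using the MCP TypeScript SDK, assuming the server is launched as an `axiom-mcp` command over stdio and that queryApl takes its APL string under a `query` argument key; both names are assumptions, so check the server's tool listing for the actual command and parameter names.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Assumed: the server runs over stdio as an `axiom-mcp` command.
  const transport = new StdioClientTransport({ command: "axiom-mcp" });
  const client = new Client({ name: "apl-example", version: "1.0.0" });
  await client.connect(transport);

  // Step 1: never guess the schema -- fetch a single event and inspect
  // its fields. ['http-logs'] is a hypothetical dataset name.
  const sample = await client.callTool({
    name: "queryApl",
    arguments: { query: "['http-logs'] | limit 1" },
  });
  console.log(sample.content);

  // Step 2: run a tightly scoped aggregation -- restricted time range,
  // aggregation instead of raw rows, selective grouping columns.
  const errors = await client.callTool({
    name: "queryApl",
    arguments: {
      query: `['http-logs']
        | where ['_time'] > ago(1h) and ['status'] >= 500
        | summarize errors = count() by bin(['_time'], 5m), ['endpoint']`,
    },
  });
  console.log(errors.content);

  await client.close();
}

main().catch(console.error);
```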

listDatasets

List all available Axiom datasets

getDatasetInfoAndSchema

Get dataset info and schema
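These two tools support the discovery step that the queryApl instructions require: enumerate datasets, then pull one dataset's info and schema before writing any APL against it. A companion sketch under the same assumptions as above (the `axiom-mcp` command name, a `dataset` argument key, and the "http-logs" dataset are all hypothetical):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function discover() {
  const transport = new StdioClientTransport({ command: "axiom-mcp" }); // assumed binary name
  const client = new Client({ name: "dataset-discovery", version: "1.0.0" });
  await client.connect(transport);

  // Enumerate the datasets available to this token.
  const datasets = await client.callTool({ name: "listDatasets", arguments: {} });
  console.log(datasets.content);

  // Fetch info and schema for one dataset; "dataset" is an assumed
  // argument key and "http-logs" a hypothetical dataset name.
  const info = await client.callTool({
    name: "getDatasetInfoAndSchema",
    arguments: { dataset: "http-logs" },
  });
  console.log(info.content);

  await client.close();
}

discover().catch(console.error);
```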

Configuration

Customize the skillset to fit your needs.
MCP Server

Connect to the Axiom MCP Server.

AI Writing Assistant
Tell me about your AI product or brand, and I'll write engaging marketing copy, articles, and social media posts tailored to your brand voice and product details, complete with relevant links and illustrations.
Ticket Administrator
Collects, analyzes, and manages support tickets from forms and databases, helping you track, prioritize, and respond efficiently.
Customer Support Documentation Assistant
An AI assistant that helps support teams create high-quality support documentation, including FAQs, ticket replies, apology letters, and standard operating procedures. Guides you through creating internal resources and customer-facing materials.
Google Analyst
A step-by-step guide to connecting a Google Analytics 4 (GA4) property to the Google Analyst agent. Covers creating a Google Cloud service account, enabling the Analytics Data API, granting GA4 Viewer access, and configuring the agent to support metrics such as sessions, users, bounce rate, and conversions. Ideal for quickly setting up GA4 data reporting in Bika.ai.
X/Twitter Assistant
An AI-powered Twitter assistant that helps content creators turn AI product experiences into viral tweets, with automatic polishing, smart research, and one-click publishing.
Community Activity Analyst
Analyzes screenshots of community activity and reports on engagement trends and discussion highlights. Upload screenshots of community interactions, and the agent generates a clear markdown report summarizing engagement levels, key discussion topics, and notable highlights — ideal for community managers, marketers, and product teams.
Stock News Reporter
This AI agent monitors and analyzes major US stock news in real time, generating structured investment reports with key insights, market reactions, and sector-level summaries.
AI Web Engineer
AI Programmer is an AI page that turns your raw release notes into a polished, publish-ready HTML page.
Brand Designer
A brand-marketing AI assistant designed for early-stage digital products, helping you quickly generate online promotional materials for platforms such as Product Hunt and AppSumo, covering visual concepts, taglines, brand voice, and selling-point messaging.

Frequently Asked Questions

Is Bika.ai free to use?
What makes Bika.ai unique?
In one sentence: what is Bika.ai?
What does the acronym "BIKA" stand for?
How does Bika.ai automate tasks with AI?
How does Bika.ai differ from AI assistants like Kimi and ChatGPT?
How does Bika.ai differ from multidimensional table tools?
Does Bika.ai slow down once a single table grows to tens or hundreds of thousands of rows with many linked references?
What is a "Space" in Bika.ai?
How many paid Spaces do I get after paying?
What are "Resources"?
How does the Bika.ai team "eat its own dog food" (use its own product)?
How does Bika.ai help improve work efficiency?
What are the key features of Bika.ai's AI automation?
What are automation templates in Bika.ai?
Does Bika.ai support team collaboration and permissions?
Is Bika.ai only suitable for individual use, or does it also fit enterprise teams?
