FireCrawl MCP Server

Integration with FireCrawl to provide advanced web scraping capabilities for extracting structured data from complex websites.

Skills

Explore the skills and capabilities of this skillset.

firecrawl_map

Map a website to discover all indexed URLs on the site. **Best for:** Discovering URLs on a website before deciding what to scrape; finding specific sections of a website. **Not recommended for:** When you already know which specific URL you need (use scrape or batch_scrape); when you need the content of the pages (use scrape after mapping). **Common mistakes:** Using crawl to discover URLs instead of map. **Prompt Example:** "List all URLs on example.com." **Usage Example:** ```json { "name": "firecrawl_map", "arguments": { "url": "https://example.com" } } ``` **Returns:** Array of URLs found on the site.

firecrawl_crawl

Starts an asynchronous crawl job on a website and extracts content from all pages. **Best for:** Extracting content from multiple related pages, when you need comprehensive coverage. **Not recommended for:** Extracting content from a single page (use scrape); when token limits are a concern (use map + batch_scrape); when you need fast results (crawling can be slow). **Warning:** Crawl responses can be very large and may exceed token limits. Limit the crawl depth and number of pages, or use map + batch_scrape for better control. **Common mistakes:** Setting limit or maxDepth too high (causes token overflow); using crawl for a single page (use scrape instead). **Prompt Example:** "Get all blog posts from the first two levels of example.com/blog." **Usage Example:** ```json { "name": "firecrawl_crawl", "arguments": { "url": "https://example.com/blog/*", "maxDepth": 2, "limit": 100, "allowExternalLinks": false, "deduplicateSimilarURLs": true } } ``` **Returns:** Operation ID for status checking; use firecrawl_check_crawl_status to check progress.

firecrawl_scrape

Scrape content from a single URL with advanced options. This is the most powerful, fastest and most reliable scraper tool, if available you should always default to using this tool for any web scraping needs. **Best for:** Single page content extraction, when you know exactly which page contains the information. **Not recommended for:** Multiple pages (use batch_scrape), unknown page (use search), structured data (use extract). **Common mistakes:** Using scrape for a list of URLs (use batch_scrape instead). If batch scrape doesnt work, just use scrape and call it multiple times. **Prompt Example:** "Get the content of the page at https://example.com." **Usage Example:** ```json { "name": "firecrawl_scrape", "arguments": { "url": "https://example.com", "formats": ["markdown"], "maxAge": 3600000 } } ``` **Performance:** Add maxAge parameter for 500% faster scrapes using cached data. **Returns:** Markdown, HTML, or other formats as specified.

firecrawl_search

Search the web and optionally extract content from search results. This is the most powerful search tool available, and if available you should always default to using this tool for any web search needs. **Best for:** Finding specific information across multiple websites, when you don't know which website has the information; when you need the most relevant content for a query. **Not recommended for:** When you already know which website to scrape (use scrape); when you need comprehensive coverage of a single website (use map or crawl). **Common mistakes:** Using crawl or map for open-ended questions (use search instead). **Prompt Example:** "Find the latest research papers on AI published in 2023." **Usage Example:** ```json { "name": "firecrawl_search", "arguments": { "query": "latest AI research papers 2023", "limit": 5, "lang": "en", "country": "us", "scrapeOptions": { "formats": ["markdown"], "onlyMainContent": true } } } ``` **Returns:** Array of search results (with optional scraped content).

firecrawl_extract

Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction. **Best for:** Extracting specific structured data like prices, names, details. **Not recommended for:** When you need the full content of a page (use scrape); when you're not looking for specific structured data. **Arguments:** - urls: Array of URLs to extract information from - prompt: Custom prompt for the LLM extraction - systemPrompt: System prompt to guide the LLM - schema: JSON schema for structured data extraction - allowExternalLinks: Allow extraction from external links - enableWebSearch: Enable web search for additional context - includeSubdomains: Include subdomains in extraction **Prompt Example:** "Extract the product name, price, and description from these product pages." **Usage Example:** ```json { "name": "firecrawl_extract", "arguments": { "urls": ["https://example.com/page1", "https://example.com/page2"], "prompt": "Extract product information including name, price, and description", "systemPrompt": "You are a helpful assistant that extracts product information", "schema": { "type": "object", "properties": { "name": { "type": "string" }, "price": { "type": "number" }, "description": { "type": "string" } }, "required": ["name", "price"] }, "allowExternalLinks": false, "enableWebSearch": false, "includeSubdomains": false } } ``` **Returns:** Extracted structured data as defined by your schema.

firecrawl_deep_research

Conduct deep web research on a query using intelligent crawling, search, and LLM analysis. **Best for:** Complex research questions requiring multiple sources, in-depth analysis. **Not recommended for:** Simple questions that can be answered with a single search; when you need very specific information from a known page (use scrape); when you need results quickly (deep research can take time). **Arguments:** - query (string, required): The research question or topic to explore. - maxDepth (number, optional): Maximum recursive depth for crawling/search (default: 3). - timeLimit (number, optional): Time limit in seconds for the research session (default: 120). - maxUrls (number, optional): Maximum number of URLs to analyze (default: 50). **Prompt Example:** "Research the environmental impact of electric vehicles versus gasoline vehicles." **Usage Example:** ```json { "name": "firecrawl_deep_research", "arguments": { "query": "What are the environmental impacts of electric vehicles compared to gasoline vehicles?", "maxDepth": 3, "timeLimit": 120, "maxUrls": 50 } } ``` **Returns:** Final analysis generated by an LLM based on research. (data.finalAnalysis); may also include structured activities and sources used in the research process.

firecrawl_generate_llmstxt

Generate a standardized llms.txt (and optionally llms-full.txt) file for a given domain. This file defines how large language models should interact with the site. **Best for:** Creating machine-readable permission guidelines for AI models. **Not recommended for:** General content extraction or research. **Arguments:** - url (string, required): The base URL of the website to analyze. - maxUrls (number, optional): Max number of URLs to include (default: 10). - showFullText (boolean, optional): Whether to include llms-full.txt contents in the response. **Prompt Example:** "Generate an LLMs.txt file for example.com." **Usage Example:** ```json { "name": "firecrawl_generate_llmstxt", "arguments": { "url": "https://example.com", "maxUrls": 20, "showFullText": true } } ``` **Returns:** LLMs.txt file contents (and optionally llms-full.txt).

firecrawl_check_crawl_status

Check the status of a crawl job. **Usage Example:** ```json { "name": "firecrawl_check_crawl_status", "arguments": { "id": "550e8400-e29b-41d4-a716-446655440000" } } ``` **Returns:** Status and progress of the crawl job, including results if available.

Configuration

Customize the skillset to fit your needs.
MCP Server

Connect to MCP Server

FireCrawl MCP Server

Email 营销助手
自动寻找潜在客户并发送为期3天的跟进邮件序列。
Discourse 社区管理员
Discourse 社区管理员助手帮助您快速生成清晰、友好且结构良好的用户回复,使社区管理变得更轻松和专业。
X/Twitter 助手
一个 AI 驱动的 Twitter 助手,帮助内容创作者将 AI 产品体验转化为病毒式推文 - 具有自动润色、智能研究和一键发布功能。
办公文档助手
一个专为公司内部运营设计的 AI 虚拟行政助理。帮助您快速创建高质量的内部文档,如公告、会议记录、摘要、表格、流程和人力资源记录。
AI 写作助手
告诉我有关 AI 产品或品牌的信息 - 我将撰写吸引人的营销文案、文章和社交媒体帖子,根据您的品牌声音和产品细节量身定制,并附上相关链接和插图。
Github issues 助手
Github Issues 助手是一个 AI 智能体,用于简化 GitHub issues的管理。它可以直接在存储库中简化创建、跟踪和优先处理错误、任务或功能请求的过程。非常适合团队使用,确保一致的格式,自动化重复步骤,并与开发管道集成。
客服文档助手
AI 助手协助客服团队创建高质量的支持文档,包括常见问题、工单回复、道歉信和标准操作程序。引导您创建内部资源和面向客户的材料。
Google 分析师
逐步指南,教您如何将 Google Analytics 4 (GA4) 属性连接到 Google 分析师代理。涵盖创建 Google Cloud 服务账户、启用 Analytics Data API、授予 GA4 查看者访问权限,以及配置代理以支持会话、用户、跳出率、转换等指标。非常适合快速在 Bika.ai 中设置 GA4 数据报告。
工单管理员
收集、分析和管理来自表单和数据库的支持工单,帮助您高效地跟踪、优先处理和回应。

Frequently Asked Questions

Bika.ai是免费使用的吗?
是什么让 Bika.ai 如此独特?
一句话快速介绍:什么是Bika.ai?
"BIKA" 这个缩写单词代表什么意思?
Bika.ai是怎么做到AI自动化做事的?
Bika.ai与Kimi、ChatGPT等AI助手有什么区别?
Bika.ai与多维表格有什么区别?
Bika.ai在单表数据量、关联引用变多后,如几万行、几十万行,会卡吗?
Bika.ai中的"空间站"是什么?
付款后我拥有多少个付费空间?
什么是"资源"?
Bika.ai的团队是怎样”吃自己的狗粮“(应用自己的产品)的?
Bika.ai如何帮助提高工作效率?
Bika.ai 的AI自动化功能有哪些特点?
Bika.ai 中的自动化模板是什么?
Bika.ai 是否支持团队协作及权限功能?
Bika.ai是否只适合个人使用?企业团队会不适合?

Embark on Your AI Automation

FireCrawl MCP Server | Bika.ai