FireCrawl MCP Server

Integration with FireCrawl to provide advanced web scraping capabilities for extracting structured data from complex websites.

Skills

Explore the skills and capabilities of this skillset.

firecrawl_map

Map a website to discover all indexed URLs on the site.

**Best for:** Discovering URLs on a website before deciding what to scrape; finding specific sections of a website.

**Not recommended for:** When you already know which specific URL you need (use scrape or batch_scrape); when you need the content of the pages (use scrape after mapping).

**Common mistakes:** Using crawl to discover URLs instead of map.

**Prompt Example:** "List all URLs on example.com."

**Usage Example:**

```json
{
  "name": "firecrawl_map",
  "arguments": {
    "url": "https://example.com"
  }
}
```

**Returns:** Array of URLs found on the site.
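Where mapping pays off is as the first step of the map-then-scrape workflow described above. Below is a minimal sketch using the MCP TypeScript SDK; it assumes an already-connected `client` (see the Configuration section), a companion `firecrawl_batch_scrape` tool, and that the URL list comes back as JSON text in the first content block. All three are assumptions to verify against your server version.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Sketch: discover URLs with firecrawl_map, then scrape only the ones needed.
// Assumptions: `client` is an already-connected MCP Client (see Configuration),
// a firecrawl_batch_scrape tool exists, and the map result is a JSON array of
// URL strings in the first text block.
async function mapThenScrape(client: Client, site: string) {
  const mapResult = await client.callTool({
    name: "firecrawl_map",
    arguments: { url: site },
  });
  const blocks =
    (mapResult as { content?: Array<{ type: string; text?: string }> }).content ?? [];
  const urls: string[] = JSON.parse(blocks[0]?.text ?? "[]");

  // Scrape only the section we care about, instead of crawling the whole site.
  const blogPages = urls.filter((u) => u.includes("/blog/")).slice(0, 5);
  return client.callTool({
    name: "firecrawl_batch_scrape",
    arguments: { urls: blogPages, options: { formats: ["markdown"] } },
  });
}
```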

firecrawl_crawl

Starts an asynchronous crawl job on a website and extracts content from all pages.

**Best for:** Extracting content from multiple related pages when you need comprehensive coverage.

**Not recommended for:** Extracting content from a single page (use scrape); when token limits are a concern (use map + batch_scrape); when you need fast results (crawling can be slow).

**Warning:** Crawl responses can be very large and may exceed token limits. Limit the crawl depth and number of pages, or use map + batch_scrape for better control.

**Common mistakes:** Setting limit or maxDepth too high (causes token overflow); using crawl for a single page (use scrape instead).

**Prompt Example:** "Get all blog posts from the first two levels of example.com/blog."

**Usage Example:**

```json
{
  "name": "firecrawl_crawl",
  "arguments": {
    "url": "https://example.com/blog/*",
    "maxDepth": 2,
    "limit": 100,
    "allowExternalLinks": false,
    "deduplicateSimilarURLs": true
  }
}
```

**Returns:** Operation ID for status checking; use firecrawl_check_crawl_status to check progress (see the polling sketch under that skill).

firecrawl_scrape

Scrape content from a single URL with advanced options. This is the most powerful, fastest, and most reliable scraper tool; when it is available, default to it for any single-page scraping need.

**Best for:** Single-page content extraction when you know exactly which page contains the information.

**Not recommended for:** Multiple pages (use batch_scrape); unknown pages (use search); structured data (use extract).

**Common mistakes:** Using scrape for a list of URLs (use batch_scrape instead); if batch_scrape is unavailable or fails, fall back to calling scrape once per URL.

**Prompt Example:** "Get the content of the page at https://example.com."

**Usage Example:**

```json
{
  "name": "firecrawl_scrape",
  "arguments": {
    "url": "https://example.com",
    "formats": ["markdown"],
    "maxAge": 3600000
  }
}
```

**Performance:** Add the maxAge parameter for up to 500% faster scrapes using cached data.

**Returns:** Markdown, HTML, or other formats as specified.
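MCP tools deliver their payload as an array of content blocks rather than a bare string, so callers usually need to unwrap the markdown from the result. A minimal sketch, assuming the scraped markdown arrives as the first `text` block of the tool result (the exact shape may vary by server version):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Sketch: scrape one page and unwrap the markdown from the MCP tool result.
// Assumption: the markdown is returned as the first `text` content block.
async function scrapeMarkdown(client: Client, url: string): Promise<string> {
  const result = await client.callTool({
    name: "firecrawl_scrape",
    arguments: {
      url,
      formats: ["markdown"],
      maxAge: 3_600_000, // accept cached data up to 1 hour old for faster scrapes
    },
  });
  const blocks =
    (result as { content?: Array<{ type: string; text?: string }> }).content ?? [];
  const text = blocks.find((b) => b.type === "text")?.text;
  if (text === undefined) throw new Error("firecrawl_scrape returned no text block");
  return text;
}
```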

firecrawl_search

Search the web and optionally extract content from search results. This is the most powerful search tool available; when it is available, default to it for any web search need.

**Best for:** Finding specific information across multiple websites when you don't know which website has it; when you need the most relevant content for a query.

**Not recommended for:** When you already know which website to scrape (use scrape); when you need comprehensive coverage of a single website (use map or crawl).

**Common mistakes:** Using crawl or map for open-ended questions (use search instead).

**Prompt Example:** "Find the latest research papers on AI published in 2023."

**Usage Example:**

```json
{
  "name": "firecrawl_search",
  "arguments": {
    "query": "latest AI research papers 2023",
    "limit": 5,
    "lang": "en",
    "country": "us",
    "scrapeOptions": {
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}
```

**Returns:** Array of search results (with optional scraped content).
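To show how the scrapeOptions above combine with the returned results, here is a hedged sketch that runs a search and skims the scraped markdown of each hit. It assumes the results arrive as JSON text whose items carry url, title, and markdown fields; verify this shape against your server's actual output.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Sketch: search with scraping enabled, then skim each result's markdown.
// Assumption: results are JSON text with url/title/markdown fields per item.
async function searchAndSkim(client: Client, query: string) {
  const result = await client.callTool({
    name: "firecrawl_search",
    arguments: {
      query,
      limit: 5,
      scrapeOptions: { formats: ["markdown"], onlyMainContent: true },
    },
  });
  const blocks =
    (result as { content?: Array<{ text?: string }> }).content ?? [];
  const items: Array<{ url: string; title?: string; markdown?: string }> =
    JSON.parse(blocks[0]?.text ?? "[]");
  for (const item of items) {
    console.log(item.title ?? item.url, "->", (item.markdown ?? "").slice(0, 200));
  }
}
```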

firecrawl_extract

Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.

**Best for:** Extracting specific structured data like prices, names, and details.

**Not recommended for:** When you need the full content of a page (use scrape); when you're not looking for specific structured data.

**Arguments:**

- urls: Array of URLs to extract information from
- prompt: Custom prompt for the LLM extraction
- systemPrompt: System prompt to guide the LLM
- schema: JSON schema for structured data extraction
- allowExternalLinks: Allow extraction from external links
- enableWebSearch: Enable web search for additional context
- includeSubdomains: Include subdomains in extraction

**Prompt Example:** "Extract the product name, price, and description from these product pages."

**Usage Example:**

```json
{
  "name": "firecrawl_extract",
  "arguments": {
    "urls": ["https://example.com/page1", "https://example.com/page2"],
    "prompt": "Extract product information including name, price, and description",
    "systemPrompt": "You are a helpful assistant that extracts product information",
    "schema": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "price": { "type": "number" },
        "description": { "type": "string" }
      },
      "required": ["name", "price"]
    },
    "allowExternalLinks": false,
    "enableWebSearch": false,
    "includeSubdomains": false
  }
}
```

**Returns:** Extracted structured data as defined by your schema.
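Because extraction is schema-driven, it pairs naturally with a matching type on the caller's side. A minimal sketch mirroring the schema from the usage example; it assumes the extracted data comes back as JSON text in the first content block (an assumption, not a documented guarantee):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Matches the JSON schema in the usage example above.
interface Product {
  name: string;
  price: number;
  description?: string;
}

// Sketch: schema-driven extraction with a typed result.
// Assumption: extracted data is returned as JSON in the first text block.
async function extractProducts(client: Client, urls: string[]): Promise<Product[]> {
  const result = await client.callTool({
    name: "firecrawl_extract",
    arguments: {
      urls,
      prompt: "Extract product information including name, price, and description",
      schema: {
        type: "object",
        properties: {
          name: { type: "string" },
          price: { type: "number" },
          description: { type: "string" },
        },
        required: ["name", "price"],
      },
    },
  });
  const blocks =
    (result as { content?: Array<{ text?: string }> }).content ?? [];
  const data = JSON.parse(blocks[0]?.text ?? "[]");
  return Array.isArray(data) ? (data as Product[]) : [data as Product];
}
```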

firecrawl_deep_research

Conduct deep web research on a query using intelligent crawling, search, and LLM analysis.

**Best for:** Complex research questions requiring multiple sources and in-depth analysis.

**Not recommended for:** Simple questions that can be answered with a single search; when you need very specific information from a known page (use scrape); when you need results quickly (deep research can take time).

**Arguments:**

- query (string, required): The research question or topic to explore.
- maxDepth (number, optional): Maximum recursive depth for crawling/search (default: 3).
- timeLimit (number, optional): Time limit in seconds for the research session (default: 120).
- maxUrls (number, optional): Maximum number of URLs to analyze (default: 50).

**Prompt Example:** "Research the environmental impact of electric vehicles versus gasoline vehicles."

**Usage Example:**

```json
{
  "name": "firecrawl_deep_research",
  "arguments": {
    "query": "What are the environmental impacts of electric vehicles compared to gasoline vehicles?",
    "maxDepth": 3,
    "timeLimit": 120,
    "maxUrls": 50
  }
}
```

**Returns:** Final analysis generated by an LLM based on the research (data.finalAnalysis); may also include structured activities and sources used in the research process.

firecrawl_generate_llmstxt

Generate a standardized llms.txt (and optionally llms-full.txt) file for a given domain. This file defines how large language models should interact with the site.

**Best for:** Creating machine-readable permission guidelines for AI models.

**Not recommended for:** General content extraction or research.

**Arguments:**

- url (string, required): The base URL of the website to analyze.
- maxUrls (number, optional): Max number of URLs to include (default: 10).
- showFullText (boolean, optional): Whether to include llms-full.txt contents in the response.

**Prompt Example:** "Generate an llms.txt file for example.com."

**Usage Example:**

```json
{
  "name": "firecrawl_generate_llmstxt",
  "arguments": {
    "url": "https://example.com",
    "maxUrls": 20,
    "showFullText": true
  }
}
```

**Returns:** llms.txt file contents (and optionally llms-full.txt).

firecrawl_check_crawl_status

Check the status of a crawl job started with firecrawl_crawl.

**Usage Example:**

```json
{
  "name": "firecrawl_check_crawl_status",
  "arguments": {
    "id": "550e8400-e29b-41d4-a716-446655440000"
  }
}
```

**Returns:** Status and progress of the crawl job, including results if available.
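Putting firecrawl_crawl and this tool together, a caller typically starts the job, captures the operation ID, and polls until the job finishes. A minimal sketch; the `id` and `status` field names on the JSON text blocks are assumptions to verify against your server's responses.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Sketch: start a crawl, then poll firecrawl_check_crawl_status until done.
// Assumptions: the crawl result carries an `id` and the status result a
// `status` field, both delivered as JSON text blocks.
async function crawlAndWait(client: Client, url: string) {
  const started = await client.callTool({
    name: "firecrawl_crawl",
    arguments: { url, maxDepth: 2, limit: 100 },
  });
  // Helper to read the first text block of a tool result as JSON source.
  const firstText = (r: unknown) =>
    ((r as { content?: Array<{ text?: string }> }).content ?? [])[0]?.text ?? "{}";
  const { id } = JSON.parse(firstText(started)) as { id: string };

  for (;;) {
    const polled = await client.callTool({
      name: "firecrawl_check_crawl_status",
      arguments: { id },
    });
    const status = JSON.parse(firstText(polled)) as { status: string };
    if (status.status === "completed") return status; // includes results if available
    if (status.status === "failed") throw new Error(`Crawl ${id} failed`);
    await new Promise((resolve) => setTimeout(resolve, 5_000)); // poll every 5 s
  }
}
```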

Configuration

Customize the skillset to fit your needs.
MCP Server

Connect this skillset to the FireCrawl MCP Server.
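For programmatic use, an MCP client can launch and connect to the server over stdio. Below is a minimal sketch using the official @modelcontextprotocol/sdk TypeScript package; it assumes the server is distributed as the firecrawl-mcp npm package and reads a FIRECRAWL_API_KEY environment variable (both are assumptions; adjust to your deployment). The `client` it returns is the one assumed by the sketches in the Skills section.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Sketch: launch the server as a child process and connect over stdio.
// Assumptions: the server is published as `firecrawl-mcp` on npm and
// authenticates via FIRECRAWL_API_KEY; verify against your setup.
export async function connectFirecrawl(): Promise<Client> {
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["-y", "firecrawl-mcp"],
    env: { FIRECRAWL_API_KEY: process.env.FIRECRAWL_API_KEY ?? "" },
  });
  const client = new Client({ name: "firecrawl-example", version: "0.1.0" });
  await client.connect(transport); // performs the MCP initialize handshake
  return client;
}
```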
