Built by Metorial, the integration platform for agentic AI.

Server Summary

  • Scrape web pages
  • Crawl entire websites
  • Extract structured data
  • Automate browser interactions

Firecrawl MCP Server

The Firecrawl MCP Server provides powerful web scraping and crawling capabilities through the Model Context Protocol. It enables you to extract content from web pages in multiple formats, perform advanced browser automation tasks, and crawl entire websites with sophisticated filtering and control options. Whether you need to scrape a single page or systematically harvest data from an entire domain, this server offers the tools to get structured, clean data from the web.

Overview

Firecrawl is a comprehensive web scraping solution that goes beyond simple HTML retrieval. It handles JavaScript-heavy sites, performs browser automation, extracts structured data using AI, and provides multiple output formats including markdown, HTML, screenshots, and custom JSON schemas. The server supports both single-page scraping and large-scale website crawling with features like proxy rotation, ad-blocking, mobile emulation, and intelligent content extraction.

Tools

scrape_url

Scrape a single URL and return its content in various formats. This is your primary tool for extracting data from individual web pages. A minimal usage sketch follows the parameter list below.

Parameters:

  • url (required): The URL to scrape
  • formats: Output formats for the scraped content. Options include:
    • markdown: Clean markdown representation of the page
    • html: Cleaned HTML content
    • rawHtml: Original unprocessed HTML
    • links: All links found on the page
    • screenshot: Visual capture of the page
    • summary: AI-generated summary of the content
    • json: Structured data extraction using a custom schema with optional prompt
  • actions: Sequence of browser automation steps to perform before scraping:
    • wait: Pause for a specified duration or until a selector appears
    • click: Click on elements matching a CSS selector
    • write: Type text into input fields
    • press: Press keyboard keys
    • scroll: Scroll the page up or down
    • screenshot: Capture a screenshot at this point
    • executeJavascript: Run custom JavaScript code
    • scrape: Extract content at this point
    • pdf: Generate a PDF of the page
  • proxy: Proxy configuration for the request (basic, stealth, or auto)
  • mobile: Emulate a mobile device
  • maxAge: Return a cached version if it is younger than the specified age in milliseconds (default: 2 days)
  • headers: Custom HTTP headers including cookies and user-agent
  • timeout: Request timeout in milliseconds
  • waitFor: Delay in milliseconds to wait before fetching the page content
  • blockAds: Enable ad-blocking and cookie popup removal
  • location: Geographic settings with country code and languages
  • excludeTags: HTML tags to exclude from output
  • includeTags: HTML tags to include in output
  • onlyMainContent: Extract only the main content, excluding navigation and footers
  • removeBase64Images: Strip base64-encoded images from output
  • skipTlsVerification: Skip TLS certificate verification
  • storeInCache: Store the page in Firecrawl's cache
  • zeroDataRetention: Enable zero data retention mode for privacy
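
To make these parameters concrete, here is a minimal TypeScript sketch that calls scrape_url through the MCP TypeScript SDK. The tool name and argument fields come from the list above; the launch command and client name are assumptions about your particular deployment.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Assumed launch command; substitute however your Firecrawl MCP server
// is actually started or hosted.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["firecrawl-mcp"],
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Scrape one page as markdown plus its links, keeping only main content.
const result = await client.callTool({
  name: "scrape_url",
  arguments: {
    url: "https://example.com",
    formats: ["markdown", "links"],
    onlyMainContent: true,
    blockAds: true,
    timeout: 30000,
  },
});

console.log(result.content);
```

The remaining sketches in this document reuse this connected client and omit the setup.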

start_crawl

Start a crawl job to systematically spider an entire website or domain. This tool initiates a background job that discovers and scrapes multiple pages according to your specifications. A usage sketch follows the parameter list below.

Parameters:

  • url (required): The starting URL for the crawl
  • limit: Maximum number of pages to crawl (default: 10000)
  • includePaths: Array of regex patterns for URL paths to include
  • excludePaths: Array of regex patterns for URL paths to exclude
  • maxDiscoveryDepth: Maximum depth to crawl based on link discovery order
  • allowSubdomains: Follow links to subdomains of the starting domain
  • crawlEntireDomain: Follow links to sibling and parent URLs, not just child pages
  • allowExternalLinks: Follow links to external websites
  • ignoreQueryParameters: Treat URLs with different query parameters as the same page
  • sitemap: Sitemap strategy (include to seed the crawl from sitemap.xml, or skip to discover pages from links alone)
  • delay: Delay in seconds between individual page scrapes
  • maxConcurrency: Maximum number of pages to scrape simultaneously
  • webhook: Webhook configuration for receiving real-time crawl events:
    • url: Webhook endpoint URL
    • events: Events to subscribe to (started, page, completed, failed)
    • headers: Custom headers for webhook requests
    • metadata: Additional metadata to include in webhook payloads
  • scrapeOptions: Scraping options (any of the scrape_url parameters above) applied to each crawled page
  • prompt: Natural language prompt to automatically generate crawler options
  • zeroDataRetention: Enable zero data retention mode
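
As a sketch, starting a bounded documentation crawl might look like the following. The path patterns and webhook endpoint are illustrative, and `client` is the connected client from the scrape_url sketch.

```typescript
// Assumes `client` is connected as in the scrape_url sketch above.
const crawl = await client.callTool({
  name: "start_crawl",
  arguments: {
    url: "https://example.com/docs",
    limit: 200,
    includePaths: ["^/docs/.*"],      // regex patterns, per the list above
    excludePaths: [".*/changelog.*"],
    maxDiscoveryDepth: 3,
    delay: 1,                         // seconds between page scrapes
    maxConcurrency: 5,
    scrapeOptions: { formats: ["markdown"], onlyMainContent: true },
    webhook: {
      url: "https://example.com/hooks/firecrawl", // illustrative endpoint
      events: ["started", "page", "completed", "failed"],
    },
  },
});
// The response carries the job ID that get_crawl_status expects as crawlId.
```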

get_crawl_status

Check the status and retrieve results from an ongoing or completed crawl job.

Parameters:

  • crawlId (required): The crawl job ID returned from start_crawl
  • limit: Maximum number of results to return per page
  • next: Pagination cursor from a previous response to retrieve additional results

Returns: Current status, progress metrics, credits used, and scraped data from all pages.
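
A status check is a plain tool call, and pagination works by feeding the cursor from one response into the next request. The exact response shape is not specified here, so treat the logging below as a placeholder.

```typescript
// Assumes `client` is connected. The job ID comes from start_crawl;
// the value below is a placeholder.
const crawlId = "your-crawl-job-id";

const status = await client.callTool({
  name: "get_crawl_status",
  arguments: { crawlId, limit: 100 },
});
console.log(status.content); // status, progress metrics, credits, page data

// If the response includes a pagination cursor, pass it back to fetch more:
// arguments: { crawlId, limit: 100, next: cursorFromPreviousResponse }
```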

cancel_crawl

Stop a crawl job that is currently in progress.

Parameters:

  • crawlId (required): The crawl job ID to cancel
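
Cancelling is symmetric; as a sketch, with `client` and `crawlId` as in the status example:

```typescript
// Stops an in-progress crawl job.
await client.callTool({
  name: "cancel_crawl",
  arguments: { crawlId },
});
```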

Resource Templates

The Firecrawl MCP Server provides resource templates for accessing scraped content and crawl job data through a URI-based interface.

scraped-page

Access the scraped content of a specific URL.

URI Template: firecrawl://scraped/{url}

Use this resource to retrieve previously scraped content for a given URL. The URL should be properly encoded.

crawl-job

Access information about a specific crawl job including its status and metadata.

URI Template: firecrawl://crawl/{crawlId}

Retrieve comprehensive information about a crawl job, including its current state, configuration, and summary statistics.

crawl-pages

Access all pages discovered and scraped during a crawl job.

URI Template: firecrawl://crawl/{crawlId}/pages

Get the complete collection of pages from a crawl job, including their content in the requested formats.

crawl-page

Access a specific page from a crawl job by its index position.

URI Template: firecrawl://crawl/{crawlId}/page/{pageIndex}

Retrieve an individual page from a crawl job's results using its zero-based index.
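
With the MCP TypeScript SDK, these templates are read via readResource. The URIs below follow the templates exactly; URL-encoding the target page is a precaution rather than a documented requirement, and `crawlId` is assumed from an earlier start_crawl call.

```typescript
// Assumes `client` is connected as in the earlier sketches.

// Previously scraped content for one URL (encoded to be safe):
const page = await client.readResource({
  uri: `firecrawl://scraped/${encodeURIComponent("https://example.com")}`,
});

// All pages from a crawl job, then a single page by zero-based index:
const pages = await client.readResource({
  uri: `firecrawl://crawl/${crawlId}/pages`,
});
const first = await client.readResource({
  uri: `firecrawl://crawl/${crawlId}/page/0`,
});
```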

Key Features

Multiple Output Formats

Extract web content in the format that best suits your needs. Convert web pages to clean markdown for LLM consumption, preserve HTML structure for parsing, capture visual screenshots, or extract structured data using custom JSON schemas with AI-powered extraction.
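
Requesting several formats in one scrape_url call avoids fetching the page repeatedly; a sketch, with format names taken from the parameter list above:

```typescript
// One request, several representations of the same page.
const res = await client.callTool({
  name: "scrape_url",
  arguments: {
    url: "https://example.com/pricing",
    formats: ["markdown", "rawHtml", "screenshot", "summary"],
  },
});
```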

Advanced Browser Automation

Perform complex interactions with web pages before scraping. Click buttons, fill forms, scroll to load dynamic content, wait for elements to appear, and execute custom JavaScript. These actions enable scraping of JavaScript-heavy applications and content behind interactions.
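
For example, a sketch of an action sequence that dismisses a cookie banner, runs a search, and captures a screenshot. The selectors are invented, and the action field names (milliseconds, selector, text, key, direction) are assumptions to verify against the server's tool schema.

```typescript
const res = await client.callTool({
  name: "scrape_url",
  arguments: {
    url: "https://example.com/search",
    formats: ["markdown"],
    actions: [
      { type: "wait", milliseconds: 1500 },           // let the page settle
      { type: "click", selector: "#accept-cookies" }, // invented selector
      { type: "click", selector: "input[name=q]" },
      { type: "write", text: "mcp servers" },
      { type: "press", key: "ENTER" },
      { type: "scroll", direction: "down" },
      { type: "screenshot" },
    ],
  },
});
```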

Intelligent Content Extraction

Use AI-powered extraction to get only the content you need. The onlyMainContent option removes navigation, footers, and sidebars automatically. Custom JSON schemas with prompts allow you to extract specific structured data points using natural language instructions.
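
For instance, extracting product fields with a JSON schema plus a prompt. The nesting of schema and prompt inside the json format entry is an assumption based on the format description above; the URL is illustrative.

```typescript
const res = await client.callTool({
  name: "scrape_url",
  arguments: {
    url: "https://example.com/product/123",
    formats: [
      {
        type: "json",
        prompt: "Extract the product's name, price, and availability.",
        schema: {
          type: "object",
          properties: {
            name: { type: "string" },
            price: { type: "number" },
            inStock: { type: "boolean" },
          },
          required: ["name", "price"],
        },
      },
    ],
  },
});
```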

Large-Scale Crawling

Systematically crawl entire websites with sophisticated control over what gets scraped. Use path filtering with regex patterns to include or exclude specific sections. Control crawl depth, handle subdomains, and manage concurrency for efficient data collection. Real-time webhook notifications keep you informed of progress.

Privacy and Performance Options

Choose your proxy type based on needs: basic for speed, stealth for reliability, or auto for automatic fallback. Enable zero data retention for sensitive operations. Use caching to avoid redundant requests. Block ads and cookie popups for cleaner extraction and faster processing.
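
In practice these options compose in a single call; for example, a cached-first scrape with proxy fallback (values illustrative):

```typescript
const res = await client.callTool({
  name: "scrape_url",
  arguments: {
    url: "https://example.com",
    formats: ["markdown"],
    proxy: "auto",     // automatic fallback, per the description above
    maxAge: 3600000,   // accept a cached copy up to one hour old
    blockAds: true,
    storeInCache: true,
  },
});
```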

Geographic and Device Emulation

Scrape as if you're browsing from different countries with location settings. Emulate mobile devices to see mobile-optimized content. Set custom headers to match specific browser configurations.
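
A sketch combining the emulation options. The location field names and the header value are assumptions for illustration:

```typescript
const res = await client.callTool({
  name: "scrape_url",
  arguments: {
    url: "https://example.com",
    formats: ["markdown"],
    mobile: true,
    location: { country: "DE", languages: ["de-DE"] }, // field names assumed
    headers: { "User-Agent": "MyCrawler/1.0" },        // illustrative value
  },
});
```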

Use Cases

This MCP server excels at research and data collection tasks. Use it to monitor competitor websites, aggregate news and articles, extract product information from e-commerce sites, collect real estate listings, gather job postings, archive web content, validate web page changes, or build datasets for machine learning. The combination of single-page scraping and site-wide crawling makes it suitable for both targeted extraction and comprehensive data harvesting operations.

The structured data extraction with custom schemas is particularly powerful for transforming unstructured web content into clean, typed data that can be directly used in applications or analysis pipelines. The browser automation capabilities enable scraping of modern single-page applications that traditional scrapers cannot handle.