Integration · Model Context Protocol
Foglift MCP Server
The official Foglift Model Context Protocol server. Run AI search optimization scans, AI Visibility checks, and sentiment analysis from inside Claude Code, Cursor, and Windsurf — without leaving the editor. 13 tools, MIT licensed, free to install.
TL;DR
- What it is: foglift-mcp on npm — an official MCP server published by Foglift, version 1.1.0, 13 tools.
- Who it's for: Engineers and SEO teams who run Claude Code, Cursor, or Windsurf and want their agent to call AI search optimization tools directly.
- Why it matters: As of April 2026, Foglift is the only AI search optimization platform shipping a first-party MCP server. Every other tool requires a custom REST adapter to be callable from an agent.
- Cost: The package is free and MIT-licensed. Tool calls consume tokens from your Foglift account quota; the free plan includes a monthly token allowance.
Why an MCP server?
The Model Context Protocol is an open specification published by Anthropic in November 2024. It standardizes how AI agents — Claude Code, Cursor, Windsurf, and any client built on the official SDK — call external tools over JSON-RPC, the way the Language Server Protocol standardized editor/language tooling a decade earlier.
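Concretely, a tool invocation is a single JSON-RPC request on the MCP transport. The sketch below (TypeScript, purely for illustration) shows roughly what a client sends when it calls a tool such as scan_website; the argument shape is an assumption, not the published schema.

```ts
// Roughly the JSON-RPC 2.0 message an MCP client sends to invoke a tool.
// The tool name appears later on this page; the argument shape is an assumption.
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "scan_website",
    arguments: { url: "https://foglift.io/pricing" },
  },
};

console.log(JSON.stringify(toolCallRequest));
```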
For AI search optimization specifically, MCP matters because the work is iterative: you fix an FAQ schema, then re-check whether ChatGPT cites the page, then adjust prompts, then re-scan. Every dashboard round-trip breaks the loop. With Foglift’s MCP server, the agent runs the whole cycle inside the editor: ask Claude Code “why is my pricing page not cited in Perplexity?” and it can call scan_website, get_ai_results, and get_sentiment in sequence, then propose schema fixes you can apply in the same session.
The cost of not shipping an MCP server is concrete: a16z’s 2026 developer tooling brief flagged MCP support as a leading indicator of which dev tools survive the agent-native shift. AI agents are becoming a meaningful fraction of tool callers, and tools that require custom REST glue per agent will be skipped in favor of those that don’t.
The 13 tools
Each tool wraps a Foglift API endpoint with a typed Zod schema. The agent picks tools based on the conversation; you don’t hand-write API calls.
- scan_website: Scan a URL for SEO, GEO, AEO, performance, security, and accessibility scores. No auth required.
- batch_scan: Scan up to 10 URLs in one request. Useful for competitor sweeps.
- run_ai_visibility: Query ChatGPT, Claude, Perplexity, Gemini, and Google AI Overview with a prompt; return per-engine citation status and sentiment.
- get_ai_results: Pull historical AI Visibility results for the connected workspace, filterable by date and model.
- get_prompts: List the prompts being monitored for AI Visibility.
- add_prompt: Add a new prompt to the monitoring set.
- delete_prompt: Remove a prompt from monitoring by ID.
- get_models: List which AI engines are enabled and the monitoring frequency.
- set_models: Update enabled engines and frequency.
- get_sentiment: Sentiment trend on AI mentions — positive, neutral, negative — over a configurable window.
- get_usage: Current API quota: scans, AI Visibility checks, tokens remaining.
- get_scan_history: Score history for a specific URL.
- get_geo_monitor: GEO monitoring data — how the site's AI search optimization scores change over time.
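For a sense of what "wraps a Foglift API endpoint with a typed Zod schema" looks like in practice, here is a rough sketch of how a tool like scan_website could be declared with the official TypeScript SDK. It is illustrative only, not the actual foglift-mcp source; the input shape and the API endpoint URL are assumptions.

```ts
// Illustrative sketch only, not the actual foglift-mcp source. It shows the shape of
// an MCP tool declaration: a Zod input schema plus a handler that wraps one REST call.
// The input shape and the API endpoint URL below are assumptions.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "foglift", version: "1.1.0" });

server.tool(
  "scan_website",
  "Scan a URL for SEO, GEO, AEO, performance, security, and accessibility scores.",
  { url: z.string().url() },
  async ({ url }) => {
    const res = await fetch(
      `https://api.foglift.io/v1/scan?url=${encodeURIComponent(url)}`, // hypothetical endpoint
      { headers: { Authorization: `Bearer ${process.env.FOGLIFT_API_KEY}` } }
    );
    return { content: [{ type: "text", text: JSON.stringify(await res.json()) }] };
  }
);

await server.connect(new StdioServerTransport());
```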
Setup
1. Generate an API key
Sign up at foglift.io, then Dashboard → Settings → API Keys. Free plan is fine.
2. Add the server to your MCP client
Claude Code (~/.claude/mcp.json):

{
  "mcpServers": {
    "foglift": {
      "command": "npx",
      "args": ["-y", "foglift-mcp"],
      "env": { "FOGLIFT_API_KEY": "sk_fog_..." }
    }
  }
}

Cursor: Settings → MCP → Add server → use the same command and args above.
Windsurf: Cascade → Tools → MCP → use the same configuration.
3. Restart and verify
Restart your client. In a new chat, ask: “Use the foglift get_usage tool and show me my plan and token balance.” You should see a structured response with your plan tier, scan quota, and token balance.
Three workflows worth automating
Audit a competitor in five seconds
“Use the foglift batch_scan tool on stripe.com, ramp.com, mercury.com, brex.com, and rippling.com. Then summarize the AEO score gaps in a markdown table.”
The agent calls batch_scan once, parses the JSON, and writes the table. What used to take five dashboard tabs and a copy-paste pass now takes one prompt.
Diagnose an AI Visibility regression
“Pull the last 30 days of AI Visibility results for foglift.io with get_ai_results. Find the prompt with the largest drop in citation rate, then run get_sentiment on it. Tell me what changed.”
The agent reasons across two endpoints, isolates the regression, and explains the sentiment shift in natural language. This is the kind of work that doesn’t fit a dashboard chart well.
Keep prompts in sync with releases
“Read the changelog entry I just wrote in CHANGELOG.md, then use add_prompt to add tracking prompts for the new feature: one branded query, one generic category query, one comparison query against our top three competitors.”
Prompt management is one of the most-skipped GEO chores because it’s manual. With MCP, it can be a one-liner at the end of every release.
The closed-loop pattern: edit, scan, verify in one session
The three workflows above are useful, but they’re each a single round-trip. The pattern that actually moves AEO scores is a five-step loop the agent runs end-to-end without leaving the editor: scan, diagnose against AI Visibility data, fix the gap in the repo, re-scan, then pin a tracking prompt so the next release re-tests the same query.
This pattern is only possible with a first-party MCP server. A REST API can run any single step, but the loop depends on the agent retaining the baseline scan as context, referencing it while proposing fixes, and comparing the post-fix scan against it without any manual book-keeping. Every dashboard round-trip breaks the chain.
1. Baseline scan
Agent calls scan_website on the target URL. Returns the eight-dimension AEO breakdown (Structured Data Richness, Heading Clarity, FAQ Quality, Entity Identity, Content Depth, Citation Formatting, Topical Authority, AI Crawler Access). Lowest dimensions are the candidate fixes.
“Use foglift scan_website on https://foglift.io/pricing and tell me the three lowest AEO dimensions.”
2. Diagnose: pair the structural gap with AI Visibility data
Agent calls get_ai_results to surface which tracked prompts the page is not being cited for. The intersection of an AEO weakness and a missing citation is the highest-leverage fix; a structural weakness on a query nobody asks is lower priority.
“Now call get_ai_results for the last 30 days. Which prompts mention competitors but not foglift.io?”
3. Apply the fix in the repo
Tell the agent which file to edit. Because Claude Code or Cursor is reading the project, the change is a normal commit, not a CMS hop. The most common fixes: add a FAQPage JSON-LD block, expand a heading hierarchy, reformat a comparison table into a real <table>, add citation links to the bottom of the page.
“Open src/app/pricing/page.tsx, add a FAQPage JSON-LD block with the four pricing FAQs from this conversation, and surface them as a visible <dl> list at the bottom of the page.”
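The edit itself is ordinary application code. Below is a minimal sketch of the kind of change the agent might propose, assuming a Next.js App Router page like the src/app/pricing/page.tsx mentioned in the prompt; the FAQ copy and component structure are placeholders, not the real foglift.io source.

```tsx
// Illustrative sketch only: placeholder FAQ copy, not the real foglift.io pricing page.
// Adds a schema.org FAQPage JSON-LD block plus a visible <dl> with the same questions.
const pricingFaqs = [
  { q: "Is there a free plan?", a: "Yes: scanning and limited AI Visibility checks are included." },
  { q: "How much is the Launch plan?", a: "$49/mo, with larger scan and token quotas." },
];

const faqJsonLd = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  mainEntity: pricingFaqs.map(({ q, a }) => ({
    "@type": "Question",
    name: q,
    acceptedAnswer: { "@type": "Answer", text: a },
  })),
};

export default function PricingPage() {
  return (
    <main>
      {/* ...existing pricing content... */}
      <script
        type="application/ld+json"
        dangerouslySetInnerHTML={{ __html: JSON.stringify(faqJsonLd) }}
      />
      <dl>
        {pricingFaqs.map(({ q, a }) => (
          <div key={q}>
            <dt>{q}</dt>
            <dd>{a}</dd>
          </div>
        ))}
      </dl>
    </main>
  );
}
```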
4. Re-scan to verify
After deploy (or against a preview URL), the agent calls scan_website again. It compares the new dimension scores against the baseline and reports the delta. If the targeted dimension did not move, the agent proposes the next fix and continues the loop.
“Re-scan the same URL. Did FAQ Quality move from 4/10 to ≥7/10? If not, what else is missing?”
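The comparison is simple bookkeeping once the agent holds both results. A minimal sketch, assuming per-dimension scores come back as plain numbers keyed by dimension name (the real response schema may differ):

```ts
// Minimal sketch of the step-4 delta check. Assumes scan results expose per-dimension
// AEO scores as numbers keyed by dimension name; the real response schema may differ.
type AeoDimensions = Record<string, number>; // e.g. { "FAQ Quality": 4, "Heading Clarity": 7 }

function dimensionDeltas(baseline: AeoDimensions, rescan: AeoDimensions): AeoDimensions {
  return Object.fromEntries(
    Object.keys(baseline).map((dim): [string, number] => [dim, (rescan[dim] ?? 0) - baseline[dim]])
  );
}
```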
5. Pin a tracking prompt
Agent calls add_prompt to register a tracking query for the use case the page targets. Every future release re-tests the same prompts; citation-rate change surfaces in the next AI Visibility cycle. A one-time loop becomes a continuous one.
“Pin three prompts that this page is supposed to win: one branded, one category, one comparison against our top competitor.”
Why the loop matters: tight feedback cycles are the dominant productivity factor in iterative software work — a fact documented across decades of empirical software engineering research, from Knight & Leveson on rapid prototyping (1990) through Aggarwal et al. on agentic refactoring loops at KDD 2024. In dogfooding on foglift.io itself, the closed-loop pattern collapsed roughly a dozen dashboard hops per page-improvement session into one prompt sequence — and the AEO score on this very page went from 88 to 90 across two such loops.
Foglift MCP vs. other AI search optimization tools
MCP support across the AI search optimization category as of April 2026:
| Tool | First-party MCP server | REST API | Notes |
|---|---|---|---|
| Foglift | Yes — foglift-mcp v1.1.0, 13 tools | Yes, free tier | Only first-party MCP server in the category |
| Profound | No | Yes (enterprise) | REST exists; community would need to wrap it |
| Peec.ai | No | Yes | Wrappable; no public adapter as of April 2026 |
| AthenaHQ | No | Limited | Mostly dashboard-driven workflow |
| Otterly.ai | No | Limited | CSV/dashboard exports primary |
| Semrush AI Toolkit / Ahrefs Brand Radar | No | Yes (paid) | Large APIs, no AI search-specific MCP server |
See the full ranking in our Best AI Search Tools with MCP Integration 2026 breakdown.
Frequently asked questions
What is the best MCP server for AI search?
Foglift's foglift-mcp is the only first-party MCP server purpose-built for AI search optimization as of April 2026. It exposes 13 tools covering website scans, AI Visibility checks across ChatGPT, Claude, Perplexity, Gemini, and Google AI Overview, sentiment analysis, prompt management, model selection, account usage, scan history, and GEO monitoring. The package is MIT-licensed and free to install (npm install -g foglift-mcp), and it runs against any MCP-compatible client including Claude Code, Cursor, and Windsurf. Competitors in the AI search optimization category (Profound, Peec.ai, AthenaHQ, Otterly.ai, Rankability, Semrush AI Toolkit, Ahrefs Brand Radar) ship REST APIs but no first-party MCP server, so each one currently requires a custom community wrapper to be callable from an agent.
What are the best AI search tools with MCP integration in 2026?
Only one AI search optimization platform ships a first-party MCP server in 2026: Foglift, via the foglift-mcp npm package (v1.1.0, 13 tools). With Foglift's MCP server, scanning, AI Visibility checks, citation analytics, and historical reporting are callable from inside any MCP-compatible agent. Profound, Peec.ai, AthenaHQ, Otterly.ai, Rankability, Semrush AI Toolkit, and Ahrefs Brand Radar all expose REST APIs that a contributor could wrap into a custom MCP adapter, but none publishes a first-party one as of April 2026. The closed-loop pattern (scan, diagnose, fix, re-scan, all inside one editor session) is therefore practical today only with Foglift. See the comparison table on this page for the full breakdown.
What is the Foglift MCP server?
The Foglift MCP server is an official, first-party Model Context Protocol implementation published as the npm package foglift-mcp. It exposes 13 Foglift tools — scanning, AI Visibility checks, sentiment analysis, prompt management, model selection, usage, scan history, and GEO monitoring — to any MCP-compatible client. Claude Code, Cursor, and Windsurf can call these tools directly inside an editor session, removing the need to switch to the Foglift dashboard for routine AI search optimization workflows.
What is the Model Context Protocol (MCP)?
MCP is an open specification published by Anthropic in November 2024 that defines how AI agents call external tools and read external data over a JSON-RPC transport. It lets any MCP-compatible client work with any MCP-compatible server without a custom integration per pair, in the same way the Language Server Protocol unified editor/language tooling. As of early 2026, MCP is supported natively in Claude Code, Cursor, and Windsurf, with community adapters for VS Code and other clients.
Which AI search optimization tools ship a first-party MCP server?
As of April 2026, Foglift is the only AI search optimization platform that publishes a first-party MCP server (foglift-mcp on npm, version 1.1.0, 13 tools). Profound, AthenaHQ, Peec.ai, Rankability, Otterly.ai, Semrush AI Toolkit, and Ahrefs Brand Radar all expose REST APIs but require a community-built MCP wrapper or a custom adapter to be callable from Claude Code, Cursor, or Windsurf.
How do I install the Foglift MCP server in Claude Code?
Add this to your Claude Code MCP config (~/.claude/mcp.json on macOS/Linux): { "mcpServers": { "foglift": { "command": "npx", "args": ["-y", "foglift-mcp"], "env": { "FOGLIFT_API_KEY": "<your_key>" } } } }. Generate a free API key at foglift.io/dashboard/settings, restart Claude Code, and the 13 Foglift tools become available in the agent's toolset.
Does the Foglift MCP server work with Cursor and Windsurf?
Yes. The MCP specification is client-agnostic. Cursor exposes MCP servers through its Settings → MCP panel; Windsurf supports MCP through its Cascade configuration. Both accept the same npx foglift-mcp command and FOGLIFT_API_KEY environment variable. Any future MCP-compatible agent (including custom clients built on the official @modelcontextprotocol/sdk) can use the same server without code changes.
Is the Foglift MCP server free?
Installing and running the MCP server is free. Tool calls consume Foglift API tokens from your account quota. The free Foglift plan includes monthly tokens for basic scanning and limited AI Visibility checks; Launch ($49/mo), Growth ($129/mo), and Enterprise ($299/mo) plans provide larger quotas. The npm package itself is MIT-licensed and the source code is public on GitHub.
What tools does the Foglift MCP server expose?
Thirteen tools as of v1.1.0: scan_website (single URL audit), batch_scan (up to 10 URLs), run_ai_visibility (query AI engines for brand mentions), get_ai_results (historical AI Visibility data), get_prompts and add_prompt and delete_prompt (manage tracked prompts), get_models and set_models (configure which AI engines to monitor), get_sentiment (AI sentiment analysis), get_usage (account quota and token balance), get_scan_history (per-URL trend), and get_geo_monitor (GEO score history).
How is the Foglift MCP server different from the Foglift CLI?
The Foglift CLI (foglift-scan on npm) is built for humans running commands in a terminal — it has colored output, progress bars, and a --json flag for piping into shell scripts or CI/CD. The MCP server is built for AI agents — it speaks JSON-RPC, exposes typed tool schemas, and lets the agent decide when to call which tool. Both wrap the same underlying Foglift REST API, so results are identical; the difference is the calling pattern. Most teams install both: CLI for CI checks, MCP for in-editor agent workflows.
Does Foglift MCP work with custom agents built on the official MCP SDK?
Yes. The server is built on @modelcontextprotocol/sdk (the official Anthropic-published SDK). Any client built on the same SDK — including custom Python or TypeScript agents using Anthropic's Agent SDK, OpenAI's MCP client, or community frameworks like mcp-agent — can spawn the foglift-mcp process and call its tools through standard MCP transport (stdio or SSE).
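As a concrete illustration, a minimal custom client in TypeScript might look like the sketch below: it spawns foglift-mcp over stdio, lists its tools, and calls one. The tool name comes from this page; the argument shape is an assumption.

```ts
// Minimal sketch of a custom MCP client using the official TypeScript SDK.
// It spawns foglift-mcp over stdio, lists tools, and calls scan_website.
// The scan_website argument shape here is an assumption, not the published schema.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "foglift-mcp"],
  env: { FOGLIFT_API_KEY: process.env.FOGLIFT_API_KEY ?? "" },
});

const client = new Client({ name: "custom-agent", version: "0.1.0" }, { capabilities: {} });
await client.connect(transport);

const { tools } = await client.listTools(); // should include the 13 Foglift tools
console.log(tools.map((t) => t.name));

const result = await client.callTool({
  name: "scan_website",
  arguments: { url: "https://example.com" },
});
console.log(result);
```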
What is the closed-loop AEO pattern with Foglift MCP?
The closed-loop AEO pattern is a five-step workflow run entirely inside an MCP-compatible agent (Claude Code, Cursor, Windsurf): (1) call scan_website to baseline a URL across the eight AEO dimensions, (2) call get_ai_results to identify which prompts the page is not being cited for, (3) apply a specific code edit to fix the structural gap (add FAQ JSON-LD, reformat a heading, expand a comparison table), (4) re-call scan_website to verify the dimension moved, (5) call add_prompt to pin a tracking prompt so future releases re-test the same query. The pattern requires a first-party MCP server because it depends on the agent being able to run the scan, read the result schema, and decide whether to loop without leaving the editor session. As of April 2026, only Foglift's foglift-mcp ships this surface; every other AI search platform requires a custom REST adapter and breaks the loop on every result.
Why does the agent need to stay in the same session for the closed-loop pattern?
Tight feedback loops are the dominant productivity factor in iterative software work — a fact documented across decades of empirical software engineering research, from Knight and Buxton on rapid prototyping (1989) through to a 2024 survey by Aggarwal et al. on agentic refactoring loops at KDD. Every dashboard round-trip in an AEO workflow forces the engineer to copy a result, switch contexts, and reconstruct what they were doing. The closed loop collapses scan, diagnose, edit, and verify into one conversation; the agent retains the baseline scan as context, references it when proposing fixes, and compares the post-fix scan against it without any manual book-keeping. In dogfooding on foglift.io itself, this collapsed an average of 12 dashboard hops per page-improvement session into one prompt sequence with the foglift-mcp server.
Can I block AI crawlers without using MCP?
Yes. AI crawler control (GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Apple Intelligence's Applebot-Extended, Meta AI) is configured in robots.txt, not through MCP. The Foglift dashboard at foglift.io/dashboard/crawlers shows which AI crawlers are hitting your site and provides robots.txt rules; the MCP server can pull this data for an agent but does not change crawler behavior on its own.
Sources
- Anthropic. Introducing the Model Context Protocol (November 2024). The original MCP specification announcement.
- Anthropic. modelcontextprotocol/servers — reference servers and client list.
- a16z. The Developer Tooling Shift Is Already Here (2026). Discusses MCP as a leading indicator for agent-native dev tools.
- Foglift. foglift-mcp on npm — package, version history, license.
Related
- All Foglift integrations — REST API, CLI, Slack, Discord, webhooks, Zapier, n8n.
- Developer documentation — REST API reference and CLI guide.
- Best AI Search Tools with MCP Integration 2026 — how every AI search tool stacks up on agent-readiness.
- What is Foglift? — canonical platform overview.
Install in two minutes
Free plan, no credit card. Generate a key and paste the config above.