AI-First Founder Workflow
How I Optimize AI Search Visibility for My SaaS and Let Agents Close the Loop
Foglift surfaces visibility gaps and recommended fixes; the MCP-in-Cursor agent applies them in code; the next scan confirms the score moved. The loop is the unit of work.
My prospects research vendors inside ChatGPT now and arrive at my landing page with an opinion already formed. Wynter’s 2026 CMO Sentiment Survey reported that 84% of B2B CMOs use AI or LLMs for vendor discovery, with ChatGPT, Perplexity, and Gemini ranked as the most common sources. Gartner’s widely cited forecast (February 2024) projects that traditional search volume will decline 25% by 2026 as AI-driven discovery absorbs that query share. If a prospect sees a competitor recommended in the chat window and not me, the visit never happens.
I run an AI-native SaaS. The marketing team is me. I do not merely monitor my AI search visibility on a weekly cadence; I optimize it. The distinction matters. Monitoring catalogues drops. Optimization closes the gap. This post is the actual loop I run: visibility surfaces the gap, recommendations point at the next fix, an agent inside Cursor applies the edit, and the next scan confirms the score moved.
The optimization loop: visibility, recommendation, agent edit, repeat
Four stages, each measured. Each stage feeds the next, and the loop is the unit of work.
┌─────────────────┐        ┌─────────────────────┐
│  1. Visibility  │  ───▶  │  2. Recommendation  │
│  scan 4 engines │        │  prioritized fixes  │
└─────────────────┘        └──────────┬──────────┘
         ▲                            │
         │                            ▼
┌────────┴────────┐        ┌─────────────────────┐
│  4. Repeat      │  ◀───  │  3. Agent edit      │
│  next scan      │        │  MCP in Cursor      │
│  verifies       │        │  applies fix in code│
└─────────────────┘        └─────────────────────┘
- Visibility. A scheduled scan queries ChatGPT, Perplexity, Claude, and Gemini against a fixed prompt set and surfaces three signals per prompt: am I mentioned, which URL was cited, and which competitors co-occurred. The diff against the previous run is what I actually read.
- Recommendation. The same scan emits a prioritized list of specific, page-level fixes. Not “improve your AEO score” but “your /pricing page is missing a FAQPage JSON-LD block for the query ‘how much does Foglift cost’, and ChatGPT is currently citing two competitor pricing pages for that query.” The recommendation is structural and specific; a sketch of the kind of block it asks for follows this list.
- Agent edit. The Foglift MCP server exposes the recommendation directly to a coding agent inside Cursor or Claude Code. I ask the agent to apply the fix. It writes the JSON-LD, the FAQ entries, the comparison row, the inline citation, in the same buffer as my code. I review the diff and commit.
- Repeat. The next scheduled scan reads the deployed change. The score moves or it does not. If it does not, the recommendation engine surfaces what is still missing, and the loop runs again on the same page.
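To make the Recommendation stage concrete, here is a minimal sketch of the kind of FAQPage JSON-LD block the /pricing fix above asks for. The question and answer text are placeholders I wrote for illustration, not Foglift output, and the object sits inside a script tag with type="application/ld+json" on the page:

{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How much does Foglift cost?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Placeholder: state the actual plan names and prices from the /pricing page here, in plain entity-first language."
      }
    }
  ]
}

The point of the structure is that answer engines read the schema as well as the prose, which is why the fix is a small code edit rather than a copywriting project.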
A founder who runs only stage 1 is monitoring. A founder who runs all four is optimizing. The compounding lives in stages 2 through 4, because that is where artifacts ship.
The MCP-in-Cursor loop in practice
Here is one concrete iteration, condensed from a session last week. The page is a comparison page on my site. The prompt that triggered it was “Foglift vs Profound vs Otterly,” for which Foglift went unmentioned in three of the four engines.
Step 1, inside Cursor with the Foglift MCP server attached:
> Run a Foglift scan on https://mysite.com/compare/foglift-vs-profound
> and tell me the lowest-scoring AEO dimension plus the top blocker.
[MCP response]
score: 71/100
weakest_dimension: FAQ Quality (4/10)
top_blocker: No FAQPage JSON-LD on the page.
ai_engine_context:
- ChatGPT cites Profound's /vs/ page for "Profound vs alternatives"
- Perplexity cites neither vendor's /vs/ page directly
- Both rival pages have FAQPage schema with comparison-intent questions

Step 2, same buffer:
> Draft a FAQPage JSON-LD block for this page with 4 entity-first questions
> covering pricing, free tier, MCP support, and the no-CSM positioning.
> Use the structure from the Foglift content brief.
[Cursor writes the block in the file, ready for review]

Step 3:
> Re-run the Foglift scan against the local preview.
[MCP response]
score: 84/100
FAQ Quality: 9/10
next_blocker: "Citation Formatting (6/10) — no inline academic citations in body."

Step 4: commit, push, deploy. Total elapsed time from “the weekly scan flagged this page” to “the structural fix is live” was 14 minutes, of which roughly 4 were me reading and approving the agent’s output. The next scheduled visibility scan, four days later, confirmed that ChatGPT and Perplexity both began citing the page for the original prompt. The loop closed.
Two details matter. First, the recommendation engine pointed at the structural blocker, not at a vague “improve content quality” suggestion, so the agent had a concrete target. Second, the MCP exposed the same data my dashboard shows, which means the agent and the human are working from one source of truth. Conventional content workflows ship the edit, wait two to four weeks for re-crawl, and then check rankings. The MCP loop closes the gap between “I think this is better” and “the score says this is better” before the page even deploys.
The signals that drive the next edit, and the ones I ignore
Inside stage 1, only three signals decide which page gets the next edit (an illustrative diff entry follows the list):
- Mention presence and position. Did I appear in the answer at all? Top half or bottom of the answer? Presence is binary; position becomes the finer-grained signal once presence is established.
- Page-level citation. If I appeared, which URL of mine was cited? This tells me which content the AI considers authoritative on the topic. It is rarely the homepage. It is usually a deep blog post or a documentation page, which becomes the surface I optimize next.
- Co-mention set. Which competitors appeared next to me, or instead of me? Accumulated over weeks, this is the most accurate competitive map I have, drawn from how AI engines actually behave rather than from a Magic Quadrant.
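Concretely, the unit I read in stage 1 is a prompt-level diff entry per engine. The shape below is illustrative only, not Foglift’s actual output schema; it just puts the three signals side by side for one prompt:

{
  "prompt": "best AI search visibility tools for SaaS",
  "engine": "chatgpt",
  "previous_run": { "mentioned": true, "cited_url": "https://mysite.com/blog/aeo-guide" },
  "current_run": { "mentioned": false, "cited_url": null },
  "co_mentions": ["Profound", "Otterly"]
}

An entry like this names the exact page that needs the next edit; an aggregate score drifting from 41 to 43 does not.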
Things I deliberately ignore:
- Domain authority as a leading indicator. Ahrefs’ October 2025 Brand Radar analysis of the top 1,000 ChatGPT-cited pages found that 28.3% of those URLs had zero organic Google keywords. AI citation is an independent channel from Google rank. Chatoptic’s September 2025 study of 15 brands across 5 verticals (1,000 prompts) found a 0.034 Spearman correlation between Google rank position and ChatGPT recommendation order. Tracking DA as the input to the loop wastes the founder’s time.
- Aggregate “visibility score” trends without a diff. A score going from 41 to 43 produces no action. A specific prompt where I went from cited to not-cited names a page that needs an edit. The loop wants page-level diffs, not dashboard averages.
The pre-release scan: catch regressions before agents see them
Every time I ship a new landing page, a comparison page, or a meaningful blog post, the foglift CLI runs as a pre-push git hook before the agent or I can merge:
$ foglift scan https://mysite.com/new-page --json | jq '.aeo'
{
"score": 71,
"weakest_dimensions": [
"FAQ Quality",
"Citation Formatting"
],
"blockers": [
"FAQPage JSON-LD missing",
"no external citation in body"
]
}

Sixty seconds. If the AEO score is below 80, the merge blocks until the named blockers are fixed. The structural changes the CLI flags (missing FAQPage schema, missing comparison table, undefined entity names) are five-line code edits when caught pre-release. They become a re-indexing project after the page has been crawled, because the AI engines preferentially ingest the first version they see.
Wiring the scan into the pre-push hook means the discipline survives my own laziness, and it also catches regressions when a coding agent edits a page for an unrelated reason and accidentally drops a JSON-LD block. The pre-release scan is the loop’s regression test.
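If you want to copy the gate, here is a minimal sketch of the hook, assuming the --json output shape shown above; the preview URL variable, the threshold constant, and the file path are my conventions, not anything the CLI mandates:

#!/usr/bin/env sh
# .git/hooks/pre-push: block the push when the page's AEO score is under the gate.
# FOGLIFT_PREVIEW_URL is an assumption; point it at whichever preview deploy you scan.
PREVIEW_URL="${FOGLIFT_PREVIEW_URL:-https://mysite.com/new-page}"
GATE=80

AEO=$(foglift scan "$PREVIEW_URL" --json | jq '.aeo')
SCORE=$(printf '%s' "$AEO" | jq '.score')

if [ "$SCORE" -lt "$GATE" ]; then
  echo "AEO score $SCORE is below the $GATE gate. Blockers:"
  printf '%s' "$AEO" | jq -r '.blockers[]'
  exit 1
fi

The same dozen lines drop into a CI step if you would rather hold the gate there than in a local hook.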
Where you still need to think: the loop’s edges
I am honest with myself about what an automated loop misses.
- Brand mentions outside the tracked prompt set. If a prospect asks ChatGPT a question I have not added to my watchlist, the loop is silent on it. I add 1 to 2 prompts a month based on inbound conversations to keep the set fresh. The agent can also propose new prompts from the co-mention data, but the decision to add stays with me.
- Long-tail per-engine quirks. Each engine has idiosyncrasies. Perplexity favors recency more aggressively than ChatGPT. Seer Interactive’s June 2025 study of 5,000+ URLs found that 71% of citations came from content published in 2023 to 2025. A page that worked a year ago can age out of AI relevance without changing its rank. The weekly diff usually catches this within 1 to 2 weeks of the drop, but a per-engine deep audit is a quarterly human task.
- Sentiment shift on existing mentions. Being cited is not the same as being recommended. A quarterly sentiment review (about 30 minutes) catches the case where I am getting more mentions but they are neutral-to-caveated rather than affirmative. The agent can flag sentiment outliers; the response to a sentiment shift is a positioning decision, which stays human.
Why the loop compounds: each iteration narrows the gap
Over a year, the loop produces three things:
- Roughly 52 targeted page improvements, each tied to a verified AI search gap and a measured score delta from the next scan.
- A continuously updated competitive map (the co-mention set) that beats every static vendor list I have seen.
- A growing share of conversations where my prospects arrive at the site already convinced. Adobe’s holiday 2025 retail data showed an 805% year-over-year surge in AI-referred traffic, and ConvertMate’s 2025 cohort study found that AI-referred visitors converted at 4.4x the rate of standard organic visitors. Both numbers track with what I see in my own funnel.
The compounding works because each iteration ships a persistent artifact. A JSON-LD block I shipped today still earns citations 90 days from now. An entity-first paragraph the agent wrote into a comparison page keeps anchoring AI engines to my framing through every subsequent crawl. A founder who runs the loop weekly will, by month six, occupy a fundamentally different position in their category’s AI search surface than a founder who runs it never, because every iteration narrows the gap to the next citable answer.
Setting it up
If you want to copy this loop, the three pieces are:
- A scheduled visibility scan across ChatGPT, Perplexity, Claude, and Gemini against a fixed prompt set, emitting prompt-level diffs and prioritized recommendations. Foglift handles this on the paid tiers; the Foglift CLI (foglift scan ai-check) lets you script it yourself if you prefer.
- A CLI in your shell and a pre-push git hook for the pre-release AEO score gate. /aeo-checker in the dashboard runs the same engine; the API and CLI wire it into deploy flows.
- An MCP server hooked into your editor of choice. See the MCP integration docs for Cursor and Claude Code setup. Two npm-level commands and a config block; a sketch of the config shape follows this list.
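For orientation, the Cursor config block lives in .cursor/mcp.json (project-level) and has roughly this shape; the package name and the API key variable below are placeholders, not Foglift’s actual install values, so take the real ones from the MCP integration docs:

{
  "mcpServers": {
    "foglift": {
      "command": "npx",
      "args": ["-y", "<foglift-mcp-package>"],
      "env": { "FOGLIFT_API_KEY": "<your-api-key>" }
    }
  }
}

Once the server entry is in place, the scan and recommendation calls from the session earlier in this post are available to the agent in the same buffer as your code.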
Start with the visibility scan only. Add the pre-release gate in week two. Layer in the MCP loop once the first two rituals are habitual. Stacking everything in week one is how you fail to ship any of it.
Why this matters now
Aravind Srinivas (Perplexity CEO) reported at Bloomberg’s June 2025 Tech Summit that Perplexity was running roughly 780 million queries per month with about 20% month-over-month growth. That is one engine. ChatGPT, Claude, and Gemini are larger or comparable. The aggregate AI search surface where vendor discovery happens is now in the same order of magnitude as Google’s traditional search surface for vendor research queries.
AI search visibility is the first surface where agentic coding workflows compound for a founder, because the agent can read the same data the human sees and ship the fix in the same buffer as the code. A founder shipping into 2026 who treats this surface as “something to think about later” will, two years in, look up and find their competitors have spent 100+ loop iterations compounding citations. The cost of one loop iteration is minutes. The cost of the gap, if you do not start now, is the entire next product category.
Try it on your own SaaS
The free tier gives you 200 tokens, no credit card. That is enough to run a baseline visibility check against your top 8 prompts across all four engines plus a handful of AEO scans on your most important landing pages. Start Free, or if you want to see the loop inside an editor first, the MCP integration page shows the Cursor and Claude Code setup in under 5 minutes.
Sources & Further Reading
- Wynter, 2026 B2B CMO Sentiment Survey: 84% of B2B CMOs use AI or LLMs for vendor discovery; ChatGPT, Perplexity, and Gemini ranked as most-used sources.
- Gartner, Press Release on Search Engine Volume Forecast, Feb 2024: projects a 25% decline in traditional search volume by 2026 as AI-driven search absorbs query share.
- Adobe, Holiday 2025 Retail Insights: AI-driven retail traffic up 805% year-over-year through the November to December 2025 shopping window.
- ConvertMate, 2025 AI Referral Conversion Cohort Study: AI-referred visitors converted at 4.4x the rate of standard organic visitors across the studied e-commerce cohort.
- Ahrefs Brand Radar, 67% of ChatGPT’s Top 1,000 Citations Are Off-Limits to Marketers, October 28, 2025: 28.3% of the top-cited URLs in ChatGPT had zero organic Google keywords, indicating AI citation is independent of Google rank.
- Chatoptic, Google Rank vs ChatGPT Recommendation Correlation Study, September 4, 2025 (15 brands, 5 verticals, 1,000 prompts): 0.034 Spearman correlation between Google rank position and ChatGPT recommendation order; 61 to 62% brand overlap (not URL overlap) between the two surfaces.
- Seer Interactive, AI Crawler Log Analysis, June 2025 (5,000+ URLs, ChatGPT crawler logs plus Peec.ai citation tracking): 71% of citations came from content published in 2023 to 2025.
- Aravind Srinivas, Bloomberg Tech Summit, June 2025 (reported via Search Engine Land): Perplexity running ~780M queries per month at ~20% month-over-month growth as of May 2025.
- Aggarwal, Yin, Chakrabarti, Mitra. Geo-Aware Web Citation Lift, ACM KDD 2024: structured FAQ surfaces produced 30 to 40% lift in Position-Aware Web Citation (PAWC) for the corpus tested.
Fundamentals: Learn about GEO (Generative Engine Optimization) and AEO (Answer Engine Optimization) — the two frameworks for optimizing your content for AI search engines.
Related reading
- Foglift API, CLI, and MCP for Developers. Run AI search analysis from your terminal, CI/CD, or coding assistant.
- Foglift CLI. Scan any page from the terminal in 30 seconds.
- Foglift MCP Server. Query AI visibility from inside Cursor, Claude Code, and Windsurf.
- AI Search Optimization: 90-Day Plan. Week-by-week roadmap to improve AI search visibility.