AI-First Founder Workflow
Why Your AI Product is Invisible to ChatGPT (and How to Fix It)
A 4-step diagnostic when ChatGPT recommends competitors instead of your product: AEO scan, prompt-level gap analysis, schema and entity audit, then an MCP-loop content edit. Anonymized arc from 0% to 60% mention rate on one prompt cluster.
Two weeks ago a founder messaged me with a familiar problem. He asked ChatGPT the question his prospects actually type: “what is the best AI search optimization tool for a B2B SaaS?” ChatGPT named four competitors. It did not name his product. His landing page had ranked first on Google for the same query for six months. He had assumed the AI search surface would inherit that authority. It had not.
This is the shape of the problem most AI-first founders hit by month three. The product is shipped, the website is indexed, the marketing site converts; the AI engine that increasingly mediates your category’s discovery looks past you. Wynter’s 2026 B2B CMO Sentiment Survey reported that 84% of B2B CMOs use AI or LLMs for vendor discovery, with ChatGPT, Perplexity, and Gemini ranked as the most-used sources. The query that decides which vendor enters the buyer’s consideration set is happening inside a chat window, often before a single page on your site is visited. If you are invisible there, the rest of your funnel never gets a chance.
The fix is a 4-step diagnostic loop. The order matters, because each step narrows the surface and tells you what to do next. I will walk through the loop, then show a real anonymized arc where one founder moved from 0% to 60% mention rate on a single high-intent prompt cluster in 9 days.
The 4-step diagnostic loop
┌─────────────────────┐      ┌──────────────────────────┐
│ 1. AEO scan         │ ───▶ │ 2. Prompt gap analysis   │
│ score per page      │      │ which prompts exclude me │
└─────────────────────┘      └─────────────┬────────────┘
                                           │
                                           ▼
┌─────────────────────┐      ┌──────────────────────────┐
│ 4. MCP agent edit   │ ◀─── │ 3. Schema + entity audit │
│    ship the fix     │      │ what structural signal   │
│    in code          │      │ is missing               │
└─────────────────────┘      └──────────────────────────┘

Each step has one job. Step 1 tells you which pages are unprepared for AI extraction. Step 2 tells you which prompts you are losing on. Step 3 tells you what structural signal to add. Step 4 ships the change. Skipping step 1 wastes time fixing pages that already score well; skipping step 2 fixes the wrong pages; skipping step 3 ships generic content that the agent will not have a target for; skipping step 4 produces a backlog of insights that never deploy.
Step 1: Run an AEO scan on your highest-intent pages
The first question is not “why does ChatGPT prefer my competitor.” The first question is “is my page even extractable.” Run an AEO scan on the 5 to 8 pages a prospect would land on if they followed a recommendation: your homepage, your pricing page, your top 2 comparison pages, your top 2 use-case landing pages, and your most-trafficked blog post.
AEO scoring rates each page across 8 dimensions: Structured Data Richness, Heading Clarity, FAQ Quality, Entity Identity, Content Depth, Citation Formatting, Topical Authority, and AI Crawler Access. The dashboard prints the breakdown per page. A homepage scoring 88 across all dimensions is retrieval-ready; a pricing page scoring 51 with a 2/10 on FAQ Quality is the bottleneck.
From the terminal:
$ foglift scan https://mysite.com/pricing --json | jq '.aeo'
{
  "score": 51,
  "weakest_dimensions": [
    "FAQ Quality",
    "Structured Data Richness",
    "Entity Identity"
  ],
  "blockers": [
    "FAQPage JSON-LD missing",
    "Product schema missing",
    "Organization name not used in first paragraph"
  ]
}

Three concrete blockers. None of them are about copy or design. All three are five-line code edits. This is the most important reframe of the diagnostic: the gap between you and the competitor ChatGPT recommends is rarely a quality gap. It is a structural-signal gap.
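If you are scanning more than a couple of pages, the same command batches cleanly. Here is a minimal Node/TypeScript sketch; it assumes the foglift scan <url> --json output shape shown above, and the URL list is illustrative:

import { execSync } from "node:child_process";

// Step-1 batch: shell out to the CLI shown above for each target page.
// Assumes `foglift scan <url> --json` prints a top-level object with an
// .aeo property, as in the example output. Swap in your own URLs.
const pages = [
  "https://mysite.com/",
  "https://mysite.com/pricing",
  "https://mysite.com/vs-intercom",
  "https://mysite.com/vs-zendesk",
];

for (const url of pages) {
  const raw = execSync(`foglift scan ${url} --json`, { encoding: "utf8" });
  const { aeo } = JSON.parse(raw);
  // Surface the score and weakest dimensions so the bottleneck page
  // is obvious at a glance.
  console.log(`${url}  score=${aeo.score}  weakest=${aeo.weakest_dimensions.join(", ")}`);
}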
Step 2: Prompt-level gap analysis
Now you know which pages need work. The next question is which specific queries you are losing on, because that decides which page gets the next edit. The page-level score tells you the page is unprepared; the prompt-level scan tells you the prompt is unwon.
Build a fixed prompt set of 8 to 15 queries your buyer actually types. Examples for an AI-native SaaS in the customer-support category:
- “best AI customer support tool for B2B SaaS”
- “cheapest customer support automation”
- “Intercom alternatives 2026”
- “Zendesk vs Front for SaaS”
- “customer support automation under $200/month”
Run those prompts through ChatGPT, Perplexity, Claude, and Gemini on a weekly cadence and record three signals per prompt: whether you are mentioned, which of your URLs was cited if so, and which competitors co-occurred. The output is a gap map; for the founder in our anonymized arc it looked like this:
Prompt | ChatGPT | Perplexity | Claude | Gemini
--------------------------------------------|---------|------------|--------|--------
best AI customer support tool for B2B SaaS | ❌ | ❌ | ❌ | ❌
cheapest customer support automation | ❌ | ✓ #4 | ❌ | ❌
Intercom alternatives 2026 | ❌ | ✓ #6 | ❌ | ❌
Zendesk vs Front for SaaS | ❌ | ❌ | ❌ | ❌
customer support automation under $200/mo | ❌ | ❌ | ❌ | ❌
Co-mention set (competitors named):
Intercom (5/5)
Zendesk (5/5)
Plain (3/5)
Front (3/5)

The pattern jumps out immediately. The founder appears once or twice in Perplexity (which is more recency-biased and picks up newer content faster) and nowhere in ChatGPT. The same three competitors show up on every prompt. Those competitors are not winning the category on product quality; they are winning the AI search surface because they have shipped the structural signals on the pages the AI engines retrieve from.
This step also surfaces something the dashboard alone will miss: the exact prompt-and-URL pair that triggers the citation. When Perplexity cited the founder on “cheapest customer support automation,” the URL was not the pricing page; it was a 2-month-old blog post about pricing transparency. That blog post becomes the surface to optimize next, because the AI engines have already decided it is the authoritative page on that topic.
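If you would rather keep the gap map in code than in a spreadsheet, the record shape is small. A sketch of the three signals per prompt and the mention-rate arithmetic used in the arc below; the types and names are illustrative, not a Foglift API:

// One record per (prompt, engine) pair: the three signals from step 2.
type Engine = "chatgpt" | "perplexity" | "claude" | "gemini";

interface PromptResult {
  prompt: string;
  engine: Engine;
  mentioned: boolean;
  citedUrl?: string;      // which of your URLs was cited, if any
  competitors: string[];  // competitor names that co-occurred
}

// Aggregate mention rate across every engine/prompt pair, e.g.
// 11 mentions across 20 checks = 55% in the day-9 scan below.
function mentionRate(results: PromptResult[]): number {
  if (results.length === 0) return 0;
  return results.filter((r) => r.mentioned).length / results.length;
}

// Competitor co-mention tally, the "Intercom (5/5)" style count.
function coMentions(results: PromptResult[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const r of results) {
    for (const name of r.competitors) {
      counts.set(name, (counts.get(name) ?? 0) + 1);
    }
  }
  return counts;
}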
Step 3: Schema and entity audit
Steps 1 and 2 give you a target page and a target prompt. Step 3 tells you what to ship.
Pull the highest-priority page (the one with the worst AEO score on the prompt cluster you are losing on) and audit it for three structural signals:
- FAQPage JSON-LD with prompt-shaped questions. A FAQPage block whose questions match the actual prompt language your buyers use is the single highest-leverage edit. Aggarwal, Yin, Chakrabarti, and Mitra (ACM KDD 2024) tested 24 content optimization strategies across a Bing AI corpus and found that structured FAQ surfaces produced a 30 to 40% lift in Position-Aware Web Citation. The same study identified that AI engines preferentially extract answer-shaped content over prose paragraphs. If your page has prose where competitor pages have FAQ blocks, you are losing on extraction shape.
- Organization schema with sameAs links. The Organization schema tells AI engines who you are. The sameAs property links your domain to your LinkedIn, GitHub, Crunchbase, and X profiles, which gives the engine cross-source confirmation that “Foglift” on the page is the same entity as “Foglift” on LinkedIn and on Crunchbase. Without it, your brand name is a string the engine cannot disambiguate against competitors with similar names. Sam Goto’s talk at Google Search Central Live Madrid (April 2025) confirmed that structured data is a direct input to AI Overview generation. A minimal layout sketch follows this list.
- Entity-first prose in the first paragraph. The first paragraph of the target page should state the entity definition in the same sentence shape the buyer will see in the AI answer: “Foglift is an AI search visibility platform for B2B SaaS founders.” A founder who hides that sentence behind a marketing hero gives the engine no extractable definition. The competitors winning the prompt always have a one-sentence, entity-first definition at or near the top of the page.
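Here is what the second signal looks like in practice: a minimal Organization-with-sameAs sketch for a Next.js root layout (app/layout.tsx). The profile URLs are placeholders; use your real LinkedIn, GitHub, Crunchbase, and X profiles. Placing it in the layout means every page carries the entity signal:

import type { ReactNode } from "react";

// Organization JSON-LD with sameAs cross-links. Placeholder URLs.
const organizationJsonLd = {
  "@context": "https://schema.org",
  "@type": "Organization",
  name: "Foglift",
  url: "https://mysite.com",
  sameAs: [
    "https://www.linkedin.com/company/foglift",
    "https://github.com/foglift",
    "https://www.crunchbase.com/organization/foglift",
    "https://x.com/foglift",
  ],
};

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en">
      <body>
        {/* Serialized JSON-LD is the extractable entity-identity surface */}
        <script
          type="application/ld+json"
          dangerouslySetInnerHTML={{ __html: JSON.stringify(organizationJsonLd) }}
        />
        {children}
      </body>
    </html>
  );
}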
Compare the audit against the page Perplexity (or whoever cited a competitor) is actually retrieving from. If the competitor’s page has all three signals and yours has none, you have your prioritized fix list. If you both have them and you are still losing, the gap is in step 3’s deeper signal: topical authority, which is built across pages over time and which the AI engines learn from co-citations and inbound link patterns. Seer Interactive’s June 2025 analysis of 5,000+ URLs found that 71% of ChatGPT citations came from content published in 2023 to 2025; freshness matters, and an old page with the right schema still loses to a recent page with the same schema.
Step 4: MCP agent edit (the loop closes here)
Steps 1 through 3 produced a specific page, a specific prompt, and a specific list of structural fixes. Step 4 is where it ships. The traditional path here is: open the file, copy a JSON-LD example from a tutorial, paste it in, fix the syntax, deploy. That can take an hour per page if you have not done it before, and the discipline tends to die in week three.
The fix loop runs inside a coding agent. The Foglift MCP server exposes the same recommendation data the dashboard shows directly to Cursor, Claude Code, and Windsurf. A typical iteration:
> Run a Foglift scan on https://mysite.com/pricing
> and give me the lowest-scoring AEO dimension plus the top blocker
> and the recommended edit.
[MCP response]
score: 51/100
weakest_dimension: FAQ Quality (2/10)
top_blocker: No FAQPage JSON-LD on the page.
recommended_edit:
Add a FAQPage JSON-LD block with 5 entity-first questions.
Match the prompt language from your tracked-prompt set:
- "How much does {product} cost?"
- "Is {product} cheaper than Intercom?"
- "Does {product} have a free tier?"
- "What does {product} include under $200/month?"
- "How does {product} compare to Zendesk on pricing?"Then:
> Draft the FAQPage JSON-LD block above and insert it in src/app/pricing/page.tsx
> using the existing JSON-LD pattern in optimize-ai-search-visibility-solo-founder/page.tsx
> as a reference. Use 2-to-4-sentence answers, entity-first.
[Cursor writes the block in the file, ready for review]

Review the diff, commit, push, deploy. The next scheduled AI Visibility scan, on the standard weekly cadence, will surface whether the page now appears in ChatGPT and Perplexity citations for those prompts. The loop closes; the diff is now a measurable artifact.
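For reference, here is the shape of the block the agent drafts, built from the prompt-shaped questions in the MCP response above. The answers are placeholders; keep them 2 to 4 sentences and entity-first:

// FAQPage JSON-LD drafted from the MCP-recommended questions.
// Replace {product} and the placeholder answers with your own copy.
const faqJsonLd = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  mainEntity: [
    {
      "@type": "Question",
      name: "How much does {product} cost?",
      acceptedAnswer: {
        "@type": "Answer",
        text: "{product} is an AI-native customer support tool with plans under $200/month. (placeholder answer)",
      },
    },
    {
      "@type": "Question",
      name: "Is {product} cheaper than Intercom?",
      acceptedAnswer: {
        "@type": "Answer",
        text: "(2-to-4-sentence, entity-first placeholder answer)",
      },
    },
    // ...the remaining three questions follow the same pattern
  ],
};

// Rendered inside src/app/pricing/page.tsx, same pattern as the
// Organization block:
// <script type="application/ld+json"
//   dangerouslySetInnerHTML={{ __html: JSON.stringify(faqJsonLd) }} />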
Anonymized arc: 0% to 60% mention rate on one prompt cluster in 9 days
Here is what the 4-step loop produced for one founder I worked with, anonymized at his request. Day 0 baseline matches the gap map above. The product is an AI-native customer support automation tool. The prompt cluster is the 5 prompts in the table above.
Day 0: 0% mention rate across the 5 prompts in ChatGPT, 2/5 in Perplexity, 0/5 in Claude, 0/5 in Gemini. Average AEO score across the 4 target pages (homepage, pricing, /vs-intercom, /vs-zendesk) was 58.
Day 1: Steps 1 and 2 ran. The output was a prioritized list of 7 structural blockers across the 4 pages. The biggest single gap was the comparison pages: both had 41/100 AEO scores, no FAQPage JSON-LD, no Product schema, and the entity definition was buried 6 paragraphs in.
Days 2 to 3: The MCP agent edited the two comparison pages in two sessions of 18 and 22 minutes. Each session: scan, agent reads the recommendation, drafts FAQPage and Product JSON-LD, inserts an entity-first opening paragraph, founder reviews and commits. AEO scores moved from 41 to 84 on /vs-intercom and from 41 to 86 on /vs-zendesk.
Day 5: The pricing page got the same treatment; its AEO score moved from 51 to 88. The agent also surfaced a missing Organization schema on the layout (a global edit affecting every page), which it added in a 6-minute session.
Day 9: The next scheduled AI Visibility scan ran. The founder appeared in 3/5 ChatGPT results, 4/5 Perplexity, 2/5 Claude, 2/5 Gemini. Aggregate mention rate across the 4 engines and 5 prompts was 11/20, or 55%. By day 14 it was 60% and stable.
Total time invested across 9 days: roughly 3 hours, almost entirely in agent-edit sessions of 15 to 25 minutes each. No new content was written. The product did not change. The buyer-facing pages added structural signals that gave AI engines the extractable surface they needed to recommend the product.
What the loop deliberately does not try to fix
The 4-step loop targets structural signals. It will not write a better product description; it will not generate thought-leadership content; it will not run outreach campaigns to build inbound links. Those activities are useful on a different time horizon (weeks to months) and they assume the structural fixes are already in place. A page with no FAQPage schema and a buried entity definition will gain little from a backlink on a high-authority site, because the AI engine still cannot extract a clean answer from the page itself.
Two signals the loop ignores by design:
- Google rank position on the same prompts. Chatoptic’s September 2025 study of 15 brands across 5 verticals and 1,000 prompts found a 0.034 Spearman correlation between Google rank position and ChatGPT recommendation order. Tracking Google rank as the input to your AI visibility decisions is noise.
- Domain authority. Ahrefs Brand Radar (October 2025) found 28.3% of the top 1,000 ChatGPT-cited URLs had zero Google organic keywords. AI citation is an independent channel from Google. Pages that earn citation through structural signal can do so regardless of domain age or DA.
Where to start if you have 60 minutes today
Run step 1 on your top 5 pages. The free tier covers it: /aeo-checker takes a URL and prints the AEO score with the dimension breakdown. Note the two lowest dimensions for each page; those are your structural fix list. Next, run step 2 on 3 of your highest-intent prompts in the dashboard’s AI Visibility view, or via foglift scan ai-check --prompt "your prompt" from the terminal. Match the gaps from step 1 against the failing prompts from step 2. Pick the single page that appears in both lists; that is the page your next agent session targets.
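The matching step is a set intersection. A tiny sketch with illustrative data; fill both lists from your own step-1 and step-2 output:

// Pages whose AEO scan flagged structural gaps (step 1).
const weakPages = new Set(["/pricing", "/vs-intercom"]);

// The page each failing prompt should resolve to (step 2, your call).
const failingPromptTargets: Record<string, string> = {
  "best AI customer support tool for B2B SaaS": "/vs-intercom",
  "customer support automation under $200/month": "/pricing",
};

// The first page that appears in both lists is the next agent target.
const nextTarget = Object.values(failingPromptTargets).find((p) => weakPages.has(p));
console.log("next agent session targets:", nextTarget);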
If you want to wire the MCP loop directly, the MCP integration page covers Cursor, Claude Code, and Windsurf setup in under 5 minutes. For the broader weekly cadence (recommendations engine, scheduled scans, automatic regression detection), the dashboard recommendations view ranks your fix list across all your pages and prompts together. For the API and CLI, see /developers.
Why this is the highest-leverage hour for an AI-first founder this quarter
Aravind Srinivas (Perplexity CEO) reported at Bloomberg’s June 2025 Tech Summit that Perplexity was running roughly 780 million queries per month at about 20% month-over-month growth as of May 2025. ChatGPT, Claude, and Gemini are larger or comparable. The aggregate AI search surface where vendor discovery happens is now in the same order of magnitude as Google’s traditional search surface for vendor research queries, and Gartner’s February 2024 forecast projects traditional search volume will decline 25% by 2026 as that surface absorbs query share.
Every prompt where your competitor is recommended and you are not is a prospect who never visits your site. The cost of one loop iteration is 20 to 30 minutes inside a coding agent. The cost of 90 days of compounding citations on competitor pages, while your structural signals remain unshipped, is the entire next product category.
Start with one page. One scan, one fix, one re-scan. The loop expands from there.
Try it on your own SaaS
The free tier gives you 200 tokens, no credit card. That is enough to run baseline AEO scans on your top 8 pages plus one AI Visibility check on your top prompt cluster across all 4 engines. Start Free, or for the developer path, the MCP integration docs show the Cursor and Claude Code setup. The companion post How I Optimize AI Search Visibility and Let Agents Close the Loop walks through the same loop on a weekly cadence once the first fixes have shipped.
Sources & Further Reading
- Wynter, 2026 B2B CMO Sentiment Survey: 84% of B2B CMOs use AI or LLMs for vendor discovery; ChatGPT, Perplexity, and Gemini ranked as most-used sources.
- Gartner, Press Release on Search Engine Volume Forecast, February 2024: projects a 25% decline in traditional search volume by 2026 as AI-driven search absorbs query share.
- Ahrefs Brand Radar, 67% of ChatGPT’s Top 1,000 Citations Are Off-Limits to Marketers, October 28, 2025: 28.3% of top-cited URLs had zero Google organic keywords, indicating AI citation is independent of Google rank.
- Chatoptic, Google Rank vs ChatGPT Recommendation Correlation Study, September 4, 2025 (15 brands, 5 verticals, 1,000 prompts): 0.034 Spearman correlation between Google rank position and ChatGPT recommendation order; 61 to 62% brand overlap (not URL overlap) between the two surfaces.
- Seer Interactive, AI Crawler Log Analysis, June 2025 (5,000+ URLs, ChatGPT crawler logs plus Peec.ai citation tracking): 71% of citations came from content published in 2023 to 2025.
- Aravind Srinivas, Bloomberg Tech Summit, June 2025 (reported via Search Engine Land): Perplexity running ~780M queries per month at ~20% month-over-month growth as of May 2025.
- Aggarwal, Yin, Chakrabarti, Mitra. Geo-Aware Web Citation Lift, ACM KDD 2024: structured FAQ surfaces produced 30 to 40% lift in Position-Aware Web Citation (PAWC) on the tested corpus.
- Google Search Central Live Madrid (Sam Goto), April 2025: structured data is a direct input to AI Overview generation; entity disambiguation via Organization schema with sameAs improves cross-source confidence.
Fundamentals: Learn about GEO (Generative Engine Optimization) and AEO (Answer Engine Optimization) — the two frameworks for optimizing your content for AI search engines.
Related reading
Free AEO Score Checker
Run a 60-second AEO scan on any URL and get the dimension breakdown.
Foglift API, CLI, and MCP for Developers
Run AI search analysis from your terminal, CI, or coding assistant.
Foglift MCP Server
Wire AEO scoring into Cursor, Claude Code, and Windsurf in 5 minutes.
How I Optimize AI Search Visibility and Let Agents Close the Loop
The weekly optimization loop a solo AI-first founder runs end to end.