Meta Tag Analyzer for AI Search
See how ChatGPT, Claude, and Perplexity will render your page in their citation panels. Score the meta block for AI pickup, not just Google.
Why meta tags matter for AI pickup
Generic meta-tag tools score for Google. AI engines consume the same tags differently. The four patterns below are where most pages lose AI citations even with a clean Google audit.
Citation panels truncate around 160 to 220 chars
Perplexity source cards and Claude citation panels render the meta description directly. Perplexity cuts around 160 characters and Claude around 200, so long marketing descriptions lose their punchline. Front-load the factual lede.
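The cutoffs can be sketched as a preview helper. The limits below are the approximate values observed in product testing, not published constants, and may shift as the engines update their UIs:

```python
# Approximate citation-panel description limits observed in product
# testing (not published constants; they may shift with UI updates).
PANEL_LIMITS = {"perplexity": 160, "claude": 200, "chatgpt": 220}

def truncation_preview(description: str) -> dict:
    """Return what each citation panel would show of a description."""
    out = {}
    for panel, limit in PANEL_LIMITS.items():
        if len(description) <= limit:
            out[panel] = description          # fits untouched
        else:
            # cut at the limit and close with an ellipsis, as panels do
            out[panel] = description[: limit - 1].rstrip() + "…"
    return out
```

Running a draft description through this before publishing shows immediately whether the punchline survives the tightest panel.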
Open Graph is a structured-summary block
ChatGPT, Claude, and Perplexity parse og:title / og:description / og:image as a supplemental signal alongside body text. A page with full OG coverage gets cited more often than the same page with body-only signals, even when the body is identical.
article:author and og:type tilt source-card confidence
ChatGPT's browse-mode citation card surfaces author and date when present. Pages with these fields get attributed at the source level (the article wrote X) rather than the domain level (the site says X). Source-level attribution gets cited more often.
Title that duplicates H1 verbatim wastes a signal
When the title and H1 carry identical text, AI engines extract one signal where they could have extracted two. Differentiate the title (add a brand suffix or topic clarifier) and let the H1 carry the topical phrase.
How AI engines render meta tags differently
ChatGPT, Claude, Perplexity, and Google AI Overviews each parse the same meta block differently. Most generic meta-tag testers do not surface these differences. The five patterns below are where pages with a clean Google audit still lose AI citations.
- Citation panel character limits. Perplexity source cards truncate the meta description around 160 characters. Claude citation panels around 200. ChatGPT browse-mode source cards around 220. The same description that reads cleanly in a Google SERP can be cut mid-sentence in the citation panel an AI engine actually shows the user. Front-load the factual punchline before character 160 if you want it surfaced everywhere.
- og:image renders directly. AI engines do not resolve relative paths against the page URL the way a browser does. A relative og:image (/img/hero.png) renders as a broken card thumbnail. Use absolute URLs only (https://example.com/img/hero.png). The risk-flag system surfaces this as a high-priority fix because a missing thumbnail visibly degrades the source card a user sees.
- article:author and article:published_time tilt source-card confidence. ChatGPT's browse-mode source card surfaces author and date when present, lifting the citation from domain-level (the site says X) to source-level (the article wrote X). Source-level attribution gets cited more often. Pages without these fields are still ingestible, but they cite at a lower confidence weight.
- Marketing fluff in the description gets deprioritized. AI engines extract factual descriptions over promotional ones. Phrases like best-in-class, ultimate, leading, world-class, cutting-edge, and exclamation marks signal sales copy. The score rewards factual, abstract-style descriptions in the 100 to 200 character range. When two competing pages have similar topical relevance, the one that reads like a factual summary wins the citation.
- Title that duplicates H1 verbatim wastes a signal. AI engines extract title and H1 as two distinct signals. When both carry identical text, one signal is wasted. Differentiate the title with a brand suffix, a category clarifier, or a year. Let the H1 carry the topical phrase. Pages that do this get cited under both keyword variations.
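For illustration, the five patterns can be folded into a single checker over parsed tags. The flag names and the 160-character threshold here are hypothetical stand-ins, not the analyzer's real identifiers:

```python
import re
from urllib.parse import urlparse

# Hypothetical flag names for illustration only; the analyzer's real
# identifiers are not shown on this page.
FLUFF = re.compile(r"best-in-class|ultimate|leading|world-class|cutting-edge|!", re.I)

def risk_flags(tags: dict) -> list:
    """Check parsed meta fields against the five patterns above.

    tags maps field names to values, e.g. {"title": ..., "h1": ...,
    "description": ..., "og:image": ..., "article:author": ...}.
    """
    flags = []
    desc = tags.get("description", "")
    if len(desc) > 160:                       # tightest observed panel cut
        flags.append("description-may-truncate")
    img = tags.get("og:image", "")
    if img and not urlparse(img).scheme:      # relative path, broken thumbnail
        flags.append("relative-og-image")
    if not tags.get("article:author"):        # domain-level attribution only
        flags.append("no-author-signal")
    if FLUFF.search(desc):                    # reads like sales copy
        flags.append("marketing-fluff")
    if tags.get("title") and tags.get("title") == tags.get("h1"):
        flags.append("title-duplicates-h1")   # one signal where two would fit
    return flags
```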
What a high-AI-pickup meta block looks like
The block below scores 95+ on the AI Pickup Score: factual description front-loaded before character 160, absolute og:image URL, article authorship + freshness fields populated, and title differentiated from H1 with a category suffix.
<title>Meta Tag Analyzer for AI Search | Foglift</title>
<meta name="description" content="Free analyzer that scores any URL on how AI engines render its meta tags. Inspects title, description, OG, article authorship, and canonical for ChatGPT, Claude, and Perplexity citation panels." />
<meta property="og:title" content="Meta Tag Analyzer for AI Search" />
<meta property="og:description" content="Score any URL on how AI engines render its meta tags. Free, no signup required." />
<meta property="og:image" content="https://foglift.io/og/meta-tag-analyzer.png" />
<meta property="og:type" content="article" />
<meta property="article:author" content="Foglift" />
<meta property="article:published_time" content="2026-05-09T17:00:00Z" />
<link rel="canonical" href="https://foglift.io/tools/meta-tag-analyzer" />

Two patterns that frequently break this block in practice: a relative og:image path renders as a broken thumbnail in citation cards (use absolute URLs only), and an outdated article:published_time can lower how confidently AI engines surface the page when freshness is part of the query.
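A block like this can be flattened into a dict for checking with Python's stdlib html.parser. This is a minimal sketch, not the analyzer's actual parsing code:

```python
from html.parser import HTMLParser

class MetaBlockParser(HTMLParser):
    """Collect <title>, <meta>, and <link rel="canonical"> into a flat dict."""
    def __init__(self):
        super().__init__()
        self.tags = {}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta":
            # meta uses either name= (description, robots) or property= (og:*, article:*)
            key = a.get("name") or a.get("property")
            if key and "content" in a:
                self.tags[key] = a["content"]
        elif tag == "link" and a.get("rel") == "canonical":
            self.tags["canonical"] = a.get("href")

    def handle_data(self, data):
        if self._in_title:
            self.tags["title"] = data
            self._in_title = False
```

Feeding the example block above through `MetaBlockParser().feed(...)` yields keys like `title`, `description`, `og:image`, and `canonical`, ready for scoring.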
The 5 AI Pickup dimensions
Each dimension scores out of 20, summing to a 0 to 100 AI Pickup Score. The score is local to this analyzer. It is not the same as the Foglift Website Audit's AI Readiness Score, which evaluates the whole page across content, schema, performance, and crawler access.
- Title quality
- Title present (10), 30 to 60 characters (5), and distinct from the H1 (5). Rewards a tight, scannable title that carries different language than the page heading so AI engines extract two signals instead of one.
- Description for AI citation
- Description present (10), 100 to 200 character sweet spot (5), and factual phrasing without marketing fluff or exclamation marks (5). Calibrated to the Perplexity 160-character and Claude 200-character truncation points.
- Open Graph completeness
- Four points each across og:title, og:description, og:image, og:type, and og:url. AI engines parse the OG block as a structured-summary signal alongside body text. A page with full OG coverage gets cited more reliably than the same page with body-only signals.
- Article authorship + freshness (or Type + author signal)
- Adapts to the page shape. Article-shaped pages score article:author (8), article:published_time (6), and article:modified_time (6). Non-article pages score og:type populated (12) and an explicit author signal (8). Product pages, landing pages, and category pages do not get penalized for missing article:* fields.
- Canonical + indexability
- Canonical present (10) and not blocked by robots noindex (10). Drops to 0 if noindex is set, because the page is ineligible for AI citations regardless of how strong the rest of the meta block is. The risk-flag system surfaces noindex as the single highest-priority fix.
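Pulling the five dimensions together, the rubric can be sketched as code. Thresholds are taken from the descriptions above; the analyzer's exact implementation may differ, and the fluff pattern here is a simplified stand-in:

```python
import re

# Simplified fluff pattern for illustration; the real detector is not shown.
FLUFF = re.compile(r"best-in-class|ultimate|world-class|cutting-edge|!", re.I)

def ai_pickup_score(t: dict) -> int:
    """Score parsed tags 0-100 under the five 20-point dimensions above.
    Keys mirror the meta field names; 'noindex' is a bool from the robots tag."""
    s = 0
    # 1. Title: present (10), 30-60 chars (5), distinct from H1 (5)
    title = t.get("title", "")
    if title:
        s += 10 + (5 if 30 <= len(title) <= 60 else 0) \
                + (5 if title != t.get("h1") else 0)
    # 2. Description: present (10), 100-200 chars (5), no fluff (5)
    desc = t.get("description", "")
    if desc:
        s += 10 + (5 if 100 <= len(desc) <= 200 else 0) \
                + (5 if not FLUFF.search(desc) else 0)
    # 3. Open Graph completeness: 4 points per field
    for f in ("og:title", "og:description", "og:image", "og:type", "og:url"):
        s += 4 if t.get(f) else 0
    # 4. Authorship + freshness, branching on page shape
    article = t.get("og:type") == "article" or any(
        t.get(k) for k in ("article:author", "article:published_time"))
    if article:
        s += (8 if t.get("article:author") else 0) \
           + (6 if t.get("article:published_time") else 0) \
           + (6 if t.get("article:modified_time") else 0)
    else:
        s += (12 if t.get("og:type") else 0) + (8 if t.get("author") else 0)
    # 5. Canonical + indexability; noindex zeroes the whole dimension
    if not t.get("noindex"):
        s += (10 if t.get("canonical") else 0) + 10
    return s
```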
Frequently Asked Questions
What is an AI Pickup Score for meta tags?
A 0 to 100 grade across 5 dimensions: title quality, description for AI citation, Open Graph completeness, article authorship and freshness, and canonical or indexability. The score is local to the meta-tag analyzer. It is not the same as the Foglift Website Audit's AI Readiness Score, which weighs the full page across content, schema, performance, and crawler access.
Why a separate score for AI pickup instead of just SEO?
Google scores meta tags for SERP rendering. AI engines consume the same tags differently. Perplexity and Claude truncate descriptions at 160 to 200 characters in their citation panels. ChatGPT's source card weights article:author and og:type heavily. og:image must be an absolute URL or the citation card renders blank. None of these patterns affect a Google audit, so a Google-only score misses them.
Why does the AI Pickup Score deduct points for marketing fluff in the description?
AI engines extract factual descriptions more often than promotional ones. Phrases like "best in class", "ultimate", "leading", "amazing", "world-class", and exclamation marks signal sales copy. In practice, when AI engines choose between two competing pages with similar topical relevance, they prefer the description that reads like a factual abstract over the one that reads like ad copy. The score rewards factual phrasing in the 100 to 200 character range.
Why does the score care about title duplicating H1?
AI engines extract title and H1 as two distinct signals. When both carry the same text verbatim, one of those signals is wasted. Differentiate the title (add a brand suffix, a category clarifier, or a year) and let the H1 carry the topical phrase. Pages that do this well get cited under both keyword variations rather than just one.
What if my page is not an article? Do article:author and og:type=article still apply?
The score detects whether the page is article-shaped (article:author or og:type=article or article:published_time present). If not, dimension 4 switches to a lighter rubric: og:type populated (any value, including website) earns most of the points, with a small bonus for an explicit author signal. Product pages, landing pages, and category pages do not need article:* fields to pass.
Why does og:image have to be an absolute URL?
ChatGPT, Claude, and Perplexity render the og:image directly in their citation cards. They do not resolve relative paths against the page URL the way a browser would. A relative og:image ("/img/hero.png") renders as a broken image. Use absolute URLs ("https://example.com/img/hero.png"). The score flags this as a risk.
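A quick way to check and repair a relative og:image before publishing, using the stdlib urljoin to do what a browser would (a sketch, not part of the analyzer):

```python
from urllib.parse import urljoin, urlparse

def absolutize_og_image(page_url: str, og_image: str) -> str:
    """Resolve a possibly-relative og:image against the page URL,
    as a browser would but citation cards reportedly do not."""
    if urlparse(og_image).scheme in ("http", "https"):
        return og_image  # already absolute, leave as-is
    return urljoin(page_url, og_image)
```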
Why does the score zero the indexability dimension when robots noindex is set?
noindex tells search engines and AI engines to exclude the page from results entirely. The page is not eligible for AI citations regardless of how good the meta block is. The risk flag surfaces this as the highest-priority fix. Remove the directive if the page should be discoverable, or leave it alone if the page is intentionally private (drafts, internal tools, paywalled content).
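Detecting the directive from a robots meta content value is mechanical: directives are comma-separated and case-insensitive, and "none" is shorthand for noindex plus nofollow. A minimal sketch:

```python
from typing import Optional

def is_noindexed(robots_content: Optional[str]) -> bool:
    """True if a robots meta content value carries a noindex directive.
    'none' is shorthand for noindex, nofollow."""
    if not robots_content:
        return False
    directives = {d.strip().lower() for d in robots_content.split(",")}
    return "noindex" in directives or "none" in directives
```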
How is this different from Google's meta tag testing tools?
Google's tools answer: will Google render this meta block correctly in the SERP. This tool answers: will ChatGPT, Claude, and Perplexity pick up this meta block in their citation panels. The two questions overlap on the basics (title and description must exist) but diverge on the patterns that matter for AI ingestion: citation-panel character limits, og:image absolute URL requirements, article:author weighting, and marketing-fluff detection.
Sources & Further Reading
Primary specifications and documentation that ground the dimensions and risk flags above. Truncation lengths, citation-panel rendering, and freshness weighting are observed during product testing and may shift as AI engines update their UI; the underlying field semantics are anchored in the specs below.
- Open Graph Protocol. Canonical specification for og:title, og:description, og:image, og:type, and og:url, including the article subtype with article:author and article:published_time.
- Schema.org Article type. Canonical definition of the author, datePublished, and dateModified properties referenced by AI source cards alongside the OG article fields.
- Google Search Central, “Control your snippets in search results”. Meta description handling, the ~155-character snippet length guidance, and the cases where Google rewrites snippets from body content.
- Google Search Central, “Control your title links in search results”. How Google generates and rewrites title links, including title-versus-H1 selection logic that informs the title-duplicates-H1 risk flag.
- Google, “Robots meta tag, data-nosnippet, and X-Robots-Tag specifications”. Canonical reference for noindex semantics and AI-assistant access (Google-Extended).
- WHATWG HTML Living Standard, §4.2 Document metadata. Canonical specification for the <title> element, meta name="description", and link rel="canonical".
- Foglift product testing. Citation-panel character cuts, og:image rendering, and authorship/freshness weighting described above are observed across ChatGPT browse mode, Claude web search, and Perplexity source cards. Numbers are approximate, not benchmarked against a fixed sample, and may shift as engines update their UI.