Scan my site
GEO · AEO · Answer-Engine Visibility

Measure how ChatGPT, Claude, Gemini & 25+ assistants cite your site — not just how Google ranks it.

Traditional SEO optimises for a list of blue links. AI SEO optimises for the answer itself. We score your site across 10 retrieval clusters — semantic fit, intent coverage, entity authority, trust, citation surface, structure, freshness and more — and verify your visibility across a 30-assistant panel spanning frontier, enterprise, regional and developer-focused models.

Free · no sign-up · ~15s · fully operable without API keys
10 signal clusters
30 AI assistants checked
70+ RAG-readiness checks
0 keyword stuffing
How it works

A decade of SEO playbooks doesn't survive retrieval-augmented generation.

LLMs don't rank — they retrieve, chunk, weigh and cite. We model that pipeline end-to-end and score your site on the signals that actually decide whether you land inside the answer.

01

Intent decomposition

Your brand and domain are expanded into 8–24 natural-language questions a buyer would actually ask — direct, category, comparison, problem, how-to, pricing, objection and long-tail.
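Below is a rough sketch of that expansion in TypeScript. The template set and the expandIntents helper are illustrative placeholders, not the production prompt set; real runs generate 8–24 phrasings across these slots.

// Illustrative only: one template per intent slot (the real expansion
// emits 8–24 questions and varies phrasing per brand and category).
type IntentSlot = 'direct' | 'category' | 'comparison' | 'problem'
                | 'how-to' | 'pricing' | 'objection' | 'long-tail';

const TEMPLATES: Record<IntentSlot, (brand: string, category: string) => string> = {
  direct:      (b)    => `What is ${b}?`,
  category:    (_, c) => `What is the best ${c} tool?`,
  comparison:  (b, c) => `How does ${b} compare to other ${c} tools?`,
  problem:     (_, c) => `How do I fix weak ${c} results?`,
  'how-to':    (b)    => `How do I get started with ${b}?`,
  pricing:     (b)    => `How much does ${b} cost?`,
  objection:   (b)    => `Is ${b} worth it?`,
  'long-tail': (b, c) => `Does ${b} handle ${c} for a small team?`,
};

function expandIntents(brand: string, category: string): string[] {
  return (Object.keys(TEMPLATES) as IntentSlot[])
    .map(slot => TEMPLATES[slot](brand, category));
}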

02

30-assistant visibility panel

Every question hits the full panel: ChatGPT, Claude, Gemini, DeepSeek, Perplexity, Grok, Mistral, Cohere, Qwen, Copilot, Meta AI, Kimi, ERNIE, You.com, Phind and 15 more. Assistants with public APIs get a live call; the rest are estimated from entity and brand signals and clearly labelled.
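For the split between live and estimated checks, a simplified sketch follows; probeLive and estimateFromSignals are hypothetical stand-ins for the tool's actual client code.

// Assistants with a public API get a live probe; the rest fall back to an
// estimate derived from entity and brand signals, labelled as such in the report.
interface PanelResult {
  assistant: string;
  cited: boolean;
  method: 'live' | 'estimated';
}

async function checkPanel(
  question: string,
  panel: { name: string; hasPublicApi: boolean }[],
): Promise<PanelResult[]> {
  return Promise.all(panel.map(async (a) => {
    if (a.hasPublicApi) {
      const cited = await probeLive(a.name, question);    // real API call
      return { assistant: a.name, cited, method: 'live' as const };
    }
    const cited = estimateFromSignals(a.name, question);  // heuristic fallback
    return { assistant: a.name, cited, method: 'estimated' as const };
  }));
}

// Stubs standing in for the real probing and estimation logic.
declare function probeLive(assistant: string, question: string): Promise<boolean>;
declare function estimateFromSignals(assistant: string, question: string): boolean;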

03

10 retrieval clusters

Semantic fit (embeddings), intent coverage, trust & reliability, content quality, entity authority, citation surface, structure, freshness, cross-reference breadth and tool actionability — each weighted by its real retrieval impact.

04

Prioritised remediation list

Every weak signal becomes a fix scored by expected lift (weight × gap). You see the highest-leverage work first — not a 200-item audit that never gets read.
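In code terms, the ranking is just the weight × gap formula applied per cluster and sorted; a minimal sketch (the Fix shape is illustrative):

// Rank fixes by expected lift = cluster weight × (1 − current cluster score).
interface Fix {
  cluster: string;
  weight: number;        // the cluster's share of the composite, e.g. 0.15
  currentScore: number;  // 0..1 score the cluster earned in this scan
  action: string;
}

function prioritise(fixes: Fix[]): (Fix & { lift: number })[] {
  return fixes
    .map(f => ({ ...f, lift: f.weight * (1 - f.currentScore) }))
    .sort((a, b) => b.lift - a.lift);   // highest-leverage work first
}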

The scoring model

A three-layer pipeline, not a single weighted sum.

A composite score alone misses half the picture. LLM retrievers reject pages before they score them, and penalise obvious spam after. We model all three layers.

// LAYER 1 — hard filters (the gate)
if (wordCount < 80
 || semantic < 0.08
 || intent   < 0.10
 || trust    < 0.10) reject()

// LAYER 2 — 10-cluster weighted composite
COMPOSITE =
  Semantic_Relevance    × 0.25
+ Intent_Coverage       × 0.15
+ Trust_Reliability     × 0.15
+ Content_Quality       × 0.10
+ Entity_Authority      × 0.10
+ Citation_Potential    × 0.08
+ Structure_Parsability × 0.07
+ Freshness             × 0.05
+ Cross_Reference       × 0.03
+ Tool_Actionability    × 0.02

// LAYER 3 — multiplicative penalty, then live-probe blend
FINAL = (COMPOSITE × Spam_Multiplier) × 0.7
      + Live_LLM_Blend × 0.3

The ten clusters

What we actually measure, cluster by cluster.

Semantic relevance 0.25

Embedding similarity with buyer-intent queries. Synonym richness, topical focus, Q-to-A alignment, heading-content coherence, long-tail coverage.
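The core operation behind the embedding checks is cosine similarity between embedded buyer queries and embedded page chunks. A generic sketch, with one plausible (not necessarily the tool's exact) aggregation into a cluster score:

// Cosine similarity between an embedded query and an embedded page chunk.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na  += a[i] * a[i];
    nb  += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Take the best-matching chunk per query, then average across queries.
function semanticFit(queryVecs: number[][], chunkVecs: number[][]): number {
  const best = queryVecs.map(q => Math.max(...chunkVecs.map(c => cosine(q, c))));
  return best.reduce((sum, x) => sum + x, 0) / best.length;
}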

Intent coverage 0.15

Direct, category, comparison, how-to, pricing, problem, objection slots. Problem→solution flow, edge cases, persona, decision-stage balance, risk answers, next-question anticipation.

Trust & reliability 0.15

Numbers, sources, confident voice, low hedging, specificity, terminology consistency, no internal contradictions, domain jargon, original-research markers.

Content quality 0.10

Hierarchy, lede, scannability, paragraph length, Flesch, lexical diversity, information density, redundancy, examples, scenarios, logical connectives, cognitive load.
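Of these, the Flesch score is the most formulaic; a sketch with a deliberately naive syllable counter (production heuristics do better):

// Flesch reading ease: 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words).
function fleschReadingEase(text: string): number {
  const sentences = text.split(/[.!?]+/).filter(s => s.trim()).length || 1;
  const words = text.split(/\s+/).filter(Boolean);
  const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0);
  return 206.835 - 1.015 * (words.length / sentences) - 84.6 * (syllables / words.length);
}

// Rough vowel-group heuristic; good enough for a relative signal.
function countSyllables(word: string): number {
  const groups = word.toLowerCase().match(/[aeiouy]+/g);
  return Math.max(1, groups ? groups.length : 1);
}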

Entity authority 0.10

Organization & Person JSON-LD, sameAs, @id, Wikipedia entity page, LLM knowledge recognition, co-citation, niche vocabulary, disambiguation signals.
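A minimal example of the Organization markup this cluster rewards (all names and URLs are placeholders):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://example.com/#org",
  "name": "Example Co",
  "url": "https://example.com",
  "sameAs": [
    "https://en.wikipedia.org/wiki/Example_Co",
    "https://www.linkedin.com/company/example-co",
    "https://github.com/example-co"
  ]
}
</script>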

Citation potential 0.08

Stats, tables, step-lists, FAQ pairs, outbound references, author byline, framework mentions, rule-based phrasing, if-then decision criteria, decisive answers.

Structure & parsability 0.07

FAQPage / HowTo / Article JSON-LD, canonical, sitemap, robots.txt, GPTBot/ClaudeBot policy, llms.txt, heading hierarchy, clean HTML, multilingual reach.
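A minimal robots.txt of the kind this cluster looks for: an explicit policy for the AI crawlers rather than silence (GPTBot and ClaudeBot are the published user-agent tokens for OpenAI's and Anthropic's crawlers; the sitemap URL is a placeholder):

User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

Sitemap: https://example.com/sitemap.xml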

Freshness 0.05

dateModified, Last-Modified, publish recency, current-year references and explicit version-update markers in the copy.
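The two machine-readable markers this cluster reads most directly, shown with placeholder dates: dateModified in the page's JSON-LD and the Last-Modified response header.

<script type="application/ld+json">
{ "@context": "https://schema.org", "@type": "Article",
  "datePublished": "2024-03-02", "dateModified": "2025-06-18" }
</script>

Last-Modified: Wed, 18 Jun 2025 09:41:00 GMT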

Cross-reference 0.03

Mentions on Wikipedia, Reddit, Medium, G2, GitHub, YouTube, Hacker News, Product Hunt. The off-site authority that LLMs weight heavily.

Tool / actionability 0.02

CTAs, pricing surface, interactive calculators, API documentation, TTFB, server-rendered content — LLMs route action queries to pages that can fulfil them.

New

Tell AI assistants your site is worth a fresh look

Just published or updated something? In one click we quietly notify the crawlers we can reach, and we double-check that the AI assistants your customers use (ChatGPT, Claude, Gemini, Perplexity and others) can still open your pages without being blocked.

  • One-click nudge to search partners — no signup, no key fiddling
  • Health check for every major AI crawler — spot silent blocks before they cost you visibility
  • See exactly what we sent and what each server replied — no smoke and mirrors
Ping my site
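
For the health-check half, a simplified sketch of what a crawler-access probe can look like. The user-agent tokens are the commonly published ones and the robots.txt parsing is deliberately minimal, so treat this as an illustration rather than the shipped implementation.

// Illustrative health check: does robots.txt (or the server itself) block an AI crawler?
const AI_CRAWLERS = ['GPTBot', 'ClaudeBot', 'PerplexityBot', 'Google-Extended'];

async function checkCrawlerAccess(origin: string) {
  const robots = await (await fetch(`${origin}/robots.txt`)).text();
  return Promise.all(AI_CRAWLERS.map(async (ua) => {
    const blockedByRobots = isDisallowed(robots, ua, '/');
    const res = await fetch(origin, { headers: { 'User-Agent': ua } });
    return { crawler: ua, blockedByRobots, httpStatus: res.status };
  }));
}

// Minimal robots.txt check: does the matching user-agent group disallow the path?
function isDisallowed(robots: string, ua: string, path: string): boolean {
  let applies = false;
  for (const line of robots.split('\n').map(l => l.trim())) {
    const [key, ...rest] = line.split(':');
    const value = rest.join(':').trim();
    if (/^user-agent$/i.test(key)) {
      applies = value === '*' || value.toLowerCase() === ua.toLowerCase();
    } else if (applies && /^disallow$/i.test(key) && value && path.startsWith(value)) {
      return true;
    }
  }
  return false;
}
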
FAQ

Precise questions, direct answers.

Is this the same as traditional SEO?

No. Traditional SEO optimises for Google's ranked list. AI SEO (GEO / AEO) optimises for the answer itself — what LLMs retrieve, chunk and cite. Different pipeline, different signals, different winners.

Do I need API keys?

No. The tool runs end-to-end without any keys — heuristic fallbacks cover every layer. Drop OPENAI_API_KEY, ANTHROPIC_API_KEY or GEMINI_API_KEY into .env to activate live probing, embedding-based semantics and AI-written fix rewrites.
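For reference, the .env entries in question (any subset works; the values shown are placeholders):

OPENAI_API_KEY=...
ANTHROPIC_API_KEY=...
GEMINI_API_KEY=...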

Why ten clusters and not seven?

Because real retrieval pipelines split signals more finely than classical SEO does. Trust, intent coverage, entity authority and tool actionability each behave differently enough that collapsing them together produces false positives. Separating them surfaces genuine leverage.

What does "expected lift" mean on the fix list?

Each fix is scored weight × (1 − current_score) — the maximum score gain from closing that gap. Fixes are ranked by that lift, so the top item is the highest-ROI change available.
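For example, a cluster weighted 0.15 that scored 0.40 in the scan has an expected lift of 0.15 × (1 − 0.40) = 0.09, while a 0.05-weight cluster scoring 0.20 offers only 0.05 × 0.80 = 0.04, so the first fix ranks higher even though its gap is smaller.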

Can I download the report?

Yes. Every scan exposes a "Download PDF" button at the top of the report — it bakes the whole audit (score ring, 10 clusters, RAG readiness, fix list) into a single file you can share or archive.

Does the score update after a content change?

Yes, typically within minutes — the next scan picks up your updated dateModified, new copy, added tables and rewritten headings. Live LLM probe results lag days to weeks while retrieval indexes refresh.