Intent decomposition
Your brand and domain are expanded into 8–24 natural-language questions a buyer would actually ask — direct, category, comparison, problem, how-to, pricing, objection and long-tail.
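The expansion step can be pictured as template filling — one template per intent slot. This is a minimal illustrative sketch, not the tool's actual prompt set; the template wording and the `decompose` helper are assumptions.

```python
# Illustrative sketch of intent decomposition: expand one brand/domain
# pair into buyer-style questions, one per intent slot. Template
# wording is hypothetical, not the production prompt set.
TEMPLATES = {
    "direct":     "What is {brand}?",
    "category":   "What are the best tools for {domain}?",
    "comparison": "How does {brand} compare to alternatives?",
    "problem":    "How do I fix common {domain} problems?",
    "how-to":     "How do I get started with {brand}?",
    "pricing":    "How much does {brand} cost?",
    "objection":  "Is {brand} worth it?",
    "long-tail":  "Which {domain} option fits a small team on a budget?",
}

def decompose(brand: str, domain: str) -> list[tuple[str, str]]:
    """Return (intent, question) pairs for one brand/domain pair."""
    return [(intent, t.format(brand=brand, domain=domain))
            for intent, t in TEMPLATES.items()]

questions = decompose("Acme Analytics", "product analytics")
```

In practice each slot would expand to multiple phrasings, which is how the count grows from 8 towards 24.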
Traditional SEO optimises for a list of blue links. AI SEO optimises for the answer itself. We score your site across 10 retrieval clusters — semantic fit, intent coverage, entity authority, trust, citation surface, structure, freshness and more — and verify your visibility across a 30-assistant panel spanning frontier, enterprise, regional and developer-focused models.
LLMs don't rank — they retrieve, chunk, weigh and cite. We model that pipeline end-to-end and score your site on the signals that actually decide whether you land inside the answer.
Every question hits the full panel: ChatGPT, Claude, Gemini, DeepSeek, Perplexity, Grok, Mistral, Cohere, Qwen, Copilot, Meta AI, Kimi, ERNIE, You.com, Phind and 15 more. Assistants with public APIs get a live call; the rest are estimated from entity and brand signals and clearly labelled.
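The live-versus-estimated split described above can be sketched as a simple dispatch. The assistant grouping, threshold and return shape here are assumptions for illustration only.

```python
# Sketch of panel dispatch: assistants with a public API get a live
# call; the rest are estimated from entity/brand signals and labelled
# as such. The LIVE_APIS set and 0.5 threshold are illustrative.
LIVE_APIS = {"ChatGPT", "Claude", "Gemini", "Mistral", "Cohere"}

def probe(assistant: str, question: str, entity_score: float) -> dict:
    if assistant in LIVE_APIS:
        # placeholder for a real API call to the assistant
        return {"assistant": assistant, "mode": "live", "visible": None}
    # no public API: estimate visibility from entity/brand signals
    return {"assistant": assistant, "mode": "estimated",
            "visible": entity_score >= 0.5}

result = probe("Kimi", "What is Acme Analytics?", entity_score=0.62)
```

The explicit `mode` field is what lets the report label estimated results clearly.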
Semantic fit (embeddings), intent coverage, trust & reliability, content quality, entity authority, citation surface, structure, freshness, cross-reference breadth and tool actionability — each weighted by its real retrieval impact.
Every weak signal becomes a fix scored by expected lift (weight × gap). You see the highest-leverage work first — not a 200-item audit that never gets read.
A composite score alone misses half the picture. LLM retrievers reject pages before they score them, and penalise obvious spam after scoring. We model all three layers — hard filters, weighted composite, penalty engine.
// LAYER 1 — hard filters (the gate)
if (wordCount < 80
|| semantic < 0.08
|| intent < 0.10
|| trust < 0.10) reject()
// LAYER 2 — 10-cluster weighted composite
COMPOSITE =
Semantic_Relevance × 0.25
+ Intent_Coverage × 0.15
+ Trust_Reliability × 0.15
+ Content_Clarity × 0.10
+ Entity_Authority × 0.10
+ Citation_Potential × 0.08
+ Structure_Parsability × 0.07
+ Freshness × 0.05
+ Cross_Reference × 0.03
+ Tool_Actionability × 0.02
// LAYER 3 — penalty engine (multiplicative), then live blend
FINAL = (COMPOSITE × Spam_Multiplier) × 0.7
      + Live_LLM_Blend × 0.3
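The three layers above can be sketched as one runnable function. Thresholds and weights are taken directly from the formula; the function name, input shape and the assumption that scores are normalised to 0–1 are ours.

```python
# Runnable sketch of the three-layer scoring model. Weights and
# thresholds mirror the formula above; `scores` is assumed to hold
# one 0-1 value per cluster.
WEIGHTS = {
    "semantic": 0.25, "intent": 0.15, "trust": 0.15, "clarity": 0.10,
    "entity": 0.10, "citation": 0.08, "structure": 0.07,
    "freshness": 0.05, "cross_ref": 0.03, "tool": 0.02,
}

def final_score(scores, word_count, spam_multiplier=1.0,
                live_llm_blend=None):
    # LAYER 1 — hard filters: reject before scoring
    if (word_count < 80 or scores["semantic"] < 0.08
            or scores["intent"] < 0.10 or scores["trust"] < 0.10):
        return None  # rejected at the gate
    # LAYER 2 — weighted composite over the 10 clusters
    composite = sum(scores[k] * w for k, w in WEIGHTS.items())
    # LAYER 3 — multiplicative spam penalty, then 70/30 live blend
    penalised = composite * spam_multiplier
    if live_llm_blend is None:
        return penalised
    return penalised * 0.7 + live_llm_blend * 0.3
```

Note the weights sum to 1.0, so a page scoring 1.0 on every cluster with no penalty gets a composite of exactly 1.0.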
Embedding similarity with buyer-intent queries. Synonym richness, topical focus, Q-to-A alignment, heading-content coherence, long-tail coverage.
Direct, category, comparison, how-to, pricing, problem, objection slots. Problem→solution flow, edge cases, persona, decision-stage balance, risk answers, next-question anticipation.
Numbers, sources, confident voice, low hedging, specificity, terminology consistency, no internal contradictions, domain jargon, original-research markers.
Hierarchy, lede, scannability, paragraph length, Flesch reading ease, lexical diversity, information density, redundancy, examples, scenarios, logical connectives, cognitive load.
Organization & Person JSON-LD, sameAs, @id, Wikipedia entity page, LLM knowledge recognition, co-citation, niche vocabulary, disambiguation signals.
Stats, tables, step-lists, FAQ pairs, outbound references, author byline, framework mentions, rule-based phrasing, if-then decision criteria, decisive answers.
FAQPage / HowTo / Article JSON-LD, canonical, sitemap, robots.txt, GPTBot/ClaudeBot policy, llms.txt, heading hierarchy, clean HTML, multilingual reach.
dateModified, Last-Modified, publish recency, current-year references and explicit version-update markers in the copy.
Mentions on Wikipedia, Reddit, Medium, G2, GitHub, YouTube, Hacker News, Product Hunt. The off-site authority LLMs weight heavily.
CTAs, pricing surface, interactive calculators, API documentation, TTFB, server-rendered content — LLMs route action queries to pages that can fulfil them.
Just published or updated something? In one click we ping every crawler we can reach — and verify that the AI assistants your customers use (ChatGPT, Claude, Gemini, Perplexity and others) can still open your pages without being blocked.
No. Traditional SEO optimises for Google's ranked list. AI SEO (GEO / AEO) optimises for the answer itself — what LLMs retrieve, chunk and cite. Different pipeline, different signals, different winners.
No. The tool runs end-to-end without any keys — heuristic fallbacks cover every layer. Drop OPENAI_API_KEY, ANTHROPIC_API_KEY or GEMINI_API_KEY into .env to activate live probing, embedding-based semantics and AI-written fix rewrites.
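The key-detection behaviour described here can be sketched as a small environment check. The variable names come from the answer above; the helper functions themselves are illustrative, not the tool's real internals.

```python
# Sketch of optional-key detection: live probing activates only when
# a provider key is present, otherwise heuristic fallbacks run.
# Key names are from the FAQ; the helpers are hypothetical.
import os

PROVIDER_KEYS = ("OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GEMINI_API_KEY")

def active_providers(env=None):
    env = os.environ if env is None else env
    return [k for k in PROVIDER_KEYS if env.get(k)]

def probe_mode(env=None):
    return "live" if active_providers(env) else "heuristic"
```

With no keys set, `probe_mode` reports `"heuristic"` and every layer still runs.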
Because real retrieval pipelines split signals more finely than classical SEO does. Trust, intent coverage, entity authority and tool actionability each behave differently enough that collapsing them together produces false positives. Separating them surfaces genuine leverage.
Each fix is scored weight × (1 − current_score) — the maximum score gain from closing that gap. Fixes are ranked by that lift, so the top item is the highest-ROI change available.
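Applied to a sample fix list, the lift formula works like this — the cluster names, weights and scores below are illustrative:

```python
# lift = weight × (1 − current_score): the maximum composite gain
# from closing that cluster's gap. Sample inputs are illustrative.
def ranked_fixes(clusters):
    """clusters maps name -> (weight, current_score in 0-1)."""
    lifts = {name: w * (1.0 - s) for name, (w, s) in clusters.items()}
    return sorted(lifts.items(), key=lambda kv: kv[1], reverse=True)

fixes = ranked_fixes({
    "semantic":  (0.25, 0.80),   # lift 0.05
    "trust":     (0.15, 0.40),   # lift 0.09  <- highest ROI
    "freshness": (0.05, 0.10),   # lift 0.045
})
```

Note how a heavily weighted cluster with a small gap ("semantic") loses to a mid-weight cluster with a large gap ("trust") — the ranking rewards leverage, not raw weight.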
Yes. Every scan exposes a "Download PDF" button at the top of the report — it bakes the whole audit (score ring, 10 clusters, RAG readiness, fix list) into a single file you can share or archive.
Yes, typically within minutes — the next scan picks up your updated dateModified, new copy, added tables and rewritten headings. Live LLM probe results lag days to weeks while retrieval indexes refresh.