Overall score
The weighted average of the four section scores: Technical (25%), Content (25%), Experience (20%), and Answer Engine (30%), rounded to the nearest whole number.
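A minimal sketch of that calculation, assuming each section score is already on a 0-100 scale. The weights match the percentages above; the dictionary keys and function name are illustrative, not part of the product.

```python
SECTION_WEIGHTS = {
    "technical": 0.25,
    "content": 0.25,
    "experience": 0.20,
    "answer_engine": 0.30,
}

def overall_score(section_scores: dict[str, float]) -> int:
    """Weighted average of the four section scores, rounded to a whole number."""
    weighted = sum(section_scores[name] * weight for name, weight in SECTION_WEIGHTS.items())
    return round(weighted)

# Example: Technical 80, Content 70, Experience 90, Answer Engine 55
# 0.25*80 + 0.25*70 + 0.20*90 + 0.30*55 = 72.0 -> 72
print(overall_score({"technical": 80, "content": 70, "experience": 90, "answer_engine": 55}))
```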
Technical
Per-check pass rate across crawl, schema, robots, sitemap, redirects, canonical, and internal linking signals. Each check is binary: pass = 1, fail = 0. Score = (passes / total checks) × 100.
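A minimal sketch of the pass-rate formula; the Content score below uses the same logic over its own checks. The check names are illustrative, with True for a pass and False for a fail.

```python
def section_score(checks: dict[str, bool]) -> float:
    """Score = (passes / total checks) x 100."""
    return sum(checks.values()) / len(checks) * 100

technical_checks = {
    "crawl": True,
    "schema": False,
    "robots": True,
    "sitemap": True,
    "redirects": True,
    "canonical": False,
    "internal_linking": True,
}

print(section_score(technical_checks))  # 5 of 7 checks pass -> ~71.4
```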
Content
Per-check pass rate across H1 presence, meta description, readability, and keyword coverage. Same pass/fail logic as Technical.
Answer Engine (GEO)
We send eight standardized prompts about your market and brand to four LLM providers (OpenAI, Anthropic, Gemini, Perplexity), then measure (a) how often your brand is mentioned, (b) whether it's positioned as a recommendation, and (c) whether your canonical URLs appear in the cited sources. The three sub-scores combine into one GEO score.
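A minimal sketch of how the three sub-scores could roll up into one number. The exact combination weights are not stated above, so an equal (one-third each) weighting is assumed here purely for illustration; each sub-score is taken as the share of the 4-model x 8-prompt responses meeting its criterion, on a 0-100 scale.

```python
def geo_score(mention_rate: float, recommendation_rate: float, citation_rate: float) -> float:
    """Combine the mention, recommendation, and citation sub-scores (equal weights assumed)."""
    return (mention_rate + recommendation_rate + citation_rate) / 3

# Example: mentioned in 50% of responses, recommended in 25%, cited in 12.5%
print(geo_score(50.0, 25.0, 12.5))  # ~29.2
```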
Captured citations
Count of source URLs that the LLMs returned across our eight benchmark prompts. We do not yet HEAD-check each URL; that is on the roadmap.
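A minimal sketch of the count. Whether duplicate URLs are collapsed is not stated above, so this sketch counts unique URLs as an assumption, and the response structure shown is illustrative.

```python
def captured_citations(responses: list[dict]) -> int:
    """Count unique source URLs cited across all model responses."""
    urls = {url for response in responses for url in response.get("cited_urls", [])}
    return len(urls)
```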
Semantic gaps
Number of prompts, out of the eight in our benchmark set, where no LLM response mentioned your brand.
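A minimal sketch of the gap count: a prompt is a gap when none of its model responses mentions the brand. The simple case-insensitive substring match below is illustrative; the real detection may be more robust.

```python
def semantic_gaps(responses_by_prompt: dict[str, list[str]], brand: str) -> int:
    """Number of prompts where no LLM response mentioned the brand."""
    return sum(
        1
        for prompt, responses in responses_by_prompt.items()
        if not any(brand.lower() in text.lower() for text in responses)
    )
```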