A standardized diagnostic across 2,000+ curated prompts and six AI engines — GPT-4, Claude, Gemini, Perplexity, Microsoft Copilot and Google AI Overviews. Three core KPIs: citation rate, share of model, sentiment drift (RDI formula). Outcome: a quantified baseline and a prioritized roadmap.

“You can't optimize what you don't measure across all six surfaces.”
2,000+ curated prompts across category, comparison, problem, jobs-to-be-done and brand intents. Designed to surface where your brand appears — and where it doesn't.
GPT-4, Claude, Gemini, Perplexity, Microsoft Copilot, Google AI Overviews. Each model queried in primary languages and locales relevant to your markets.
Citation rate (share of answers citing your brand), share of model (your share vs. competitors), sentiment drift (RDI formula — sentiment weighted by source authority and detection confidence).
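The three KPIs can be sketched in code. This is a minimal illustration, not the audit's actual implementation: the `Answer` structure and field names are assumptions, and the exact RDI weighting is not spelled out here — the sketch simply weights sentiment by source authority times detection confidence, as the description suggests.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    """One model response to one audit prompt (illustrative structure)."""
    cited_brands: list[str]        # brands cited in this answer
    sentiment: float               # -1.0 (negative) .. +1.0 (positive)
    source_authority: float        # 0..1, authority of the citing source
    detection_confidence: float    # 0..1, confidence the citation detection is correct

def citation_rate(answers: list[Answer], brand: str) -> float:
    """Share of answers that cite the brand at all."""
    cited = sum(1 for a in answers if brand in a.cited_brands)
    return cited / len(answers)

def share_of_model(answers: list[Answer], brand: str) -> float:
    """The brand's share of all brand citations across the answer set."""
    total = sum(len(a.cited_brands) for a in answers)
    ours = sum(a.cited_brands.count(brand) for a in answers)
    return ours / total if total else 0.0

def weighted_sentiment(answers: list[Answer], brand: str) -> float:
    """Sentiment weighted by authority x confidence (assumed RDI-style weighting)."""
    relevant = [a for a in answers if brand in a.cited_brands]
    weights = [a.source_authority * a.detection_confidence for a in relevant]
    if not sum(weights):
        return 0.0
    return sum(a.sentiment * w for a, w in zip(relevant, weights)) / sum(weights)
```

Sentiment drift would then be the change in this weighted score between two measurement runs over the same prompt set.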
Wikidata status, Schema @id graph, sameAs cluster, knowledge-panel readiness. Where the entity is broken or fragmented — and what it takes to consolidate.
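Consolidation typically means giving the organization one stable `@id` and a `sameAs` cluster that ties every fragment to the same node. A minimal sketch of such a JSON-LD node, with placeholder URLs and a hypothetical Wikidata item:

```python
import json

# Illustrative Schema.org Organization node. A stable @id plus a sameAs
# cluster pointing at Wikidata and official profiles lets engines merge
# a fragmented entity. All URLs below are placeholder examples.
entity = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "@id": "https://www.example.com/#organization",
    "name": "Example Co",
    "url": "https://www.example.com/",
    "sameAs": [
        "https://www.wikidata.org/wiki/Q0000000",  # hypothetical Wikidata item
        "https://www.linkedin.com/company/example-co",
        "https://github.com/example-co",
    ],
}
print(json.dumps(entity, indent=2))
```

Pages across the site would then reference the same `@id` so every mention resolves to one entity rather than several near-duplicates.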
Citation-share comparison against 3-5 named competitors. Who owns which prompt clusters, where the white spaces are, and where your citation share is defensible.
Findings are grouped into quick wins (4-8 weeks), structural improvements (3-6 months) and strategic shifts (6-12 months). Every recommendation traces to a data signal.
40-60 pages: methodology, citation baseline, share of model, sentiment, entity status, competitive map, prioritized roadmap. Designed for board-level review.
Full prompt-response dataset, sentiment scoring, source attribution. Available as CSV for analysis or in a Looker Studio dashboard for your team.
Half-day workshop with your team: walk-through, Q&A, prioritization session. Recorded and documented. Optional follow-up after 4 weeks.
Quick wins, structural improvements, strategic shifts. With effort estimates, owner suggestions and expected impact ranges per item.
3 to 4 weeks end-to-end. Week 1: prompt-matrix design and kickoff. Week 2: full data collection across the six models. Week 3: analysis and entity diagnosis. Week 4: report, dashboard and workshop.
GPT-4, Claude, Gemini, Perplexity, Copilot, AI Overviews. Default primary languages: English, German. Additional locales (FR, ES, IT, NL, TR) available on request, with additional scope for cross-market analysis.
The audit is a one-off baseline. After it, most clients move into an LLM Citation Monitoring retainer — weekly tracking across the same prompts and models, with monthly board-level reports.
Every finding traces to a concrete signal and is grouped into quick wins, structural improvements and strategic shifts. The roadmap includes effort estimates and impact ranges — so prioritization is straightforward.
30-minute scope call: we define prompts, models and competitors for your audit.