The numbers are well known, but their strategic depth is often missed. According to analyses by SparkToro (Rand Fishkin) and Similarweb, 58-65% of all Google searches in the US and EU now end without a click on an organic or paid result. The user gets the answer in the SERP — inside a featured snippet, a knowledge panel, an AI Overview — and closes the tab.

Classical SEO reporting treats this state as a loss. Clicks fall, CTR falls, organic sessions in GA4 fall. Marketing dashboards turn red. CFOs ask uncomfortable questions. And many SEO teams try to win back the old game with tactics whose effects are no longer measurable inside the existing system.

The structural mismeasurement

The central thinking error: CTR was developed as a proxy for reach in a world where the click was the only proof of successful mediation. In today's SERP, reach is a different quantity. A user who reads the answer in a featured snippet and associates the brand logo with the product is more engaged with the brand than a user who clicks through to a dull page and bounces immediately.

What classical analytics does not measure: zero-click brand exposure in featured snippets and AI Overviews, LLM mentions that arrive without any referrer, the brand searches those exposures trigger later, and "direct" sessions that are in fact search-induced.

"Traffic is not the purpose of search; it is a metric from an era when we had no better proxy for market relevance. In AI search we need more precise measures — and we have them."

The new KPI set: seven metrics that actually matter

Drawing on work with enterprise clients and academic research on AI-search impact, a set of seven metrics has emerged that captures the actual market effect of search comprehensively.

1. Share of Model

The percentage of relevant prompts where your brand is mentioned in LLM answers — compared with the competitor set. Measured through prompt audits against ChatGPT, Claude, Perplexity and Gemini. Benchmark: tier-1 brands in established categories reach 25-45% Share of Model.

2. Prompt Visibility Index (PVI)

A weighted average of mention frequency, positioning in the answer (at the start? main recommendation? secondary mention?) and sentiment context. The PVI condenses prompt-audit data into a single trackable value.

3. Brand Search Lift (BSL)

Monthly growth of brand queries relative to non-branded organic traffic. When SEO measures work but bring no direct clicks, the effect shows up in rising brand searches. Data sources: Google Search Console (brand query filter), Google Trends, Ahrefs Brand Index.
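One way to operationalize BSL: track the brand share of total search clicks month over month and report its growth. A minimal sketch, assuming monthly click counts exported from Google Search Console with a brand-query filter (the function name and data shape are illustrative, not a standard API):

```python
def brand_search_lift(brand_clicks, nonbrand_clicks):
    """Month-over-month Brand Search Lift (BSL).

    brand_clicks / nonbrand_clicks: lists of monthly click counts,
    oldest first (e.g. from a GSC export with a brand-query filter).
    Returns the latest month's growth of the brand share relative
    to total search volume, as a percentage.
    """
    # Brand share of total search clicks per month
    shares = [b / (b + n) for b, n in zip(brand_clicks, nonbrand_clicks)]
    prev, curr = shares[-2], shares[-1]
    return (curr - prev) / prev * 100

# Brand share grows from 4000/12000 = 33.3% to 4880/12400 = 39.4%
lift = brand_search_lift([4000, 4880], [8000, 7520])  # ≈ 18.1% lift
```

Measuring the share rather than the raw brand-click count keeps the metric robust against seasonal swings that move both segments at once.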

4. Citation Rate

The share of a category's top queries where your brand appears as a citation in AI Overviews or LLM answers. Unlike Share of Model, this measures source attribution with a link, not mere mention.

5. SERP-footprint coverage

The percentage of SERP area (measured in pixels) occupied by your brand in the top 10 results — including organic snippets, featured snippets, knowledge panels, sitelinks, news boxes. A visual reach index that avoids CTR blindness.
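Once SERP features have been captured from a rendered-SERP crawl (the capture step itself is tooling-specific and not shown), the footprint share is a simple pixel-area ratio. A sketch with an assumed input shape of (owner, width_px, height_px) per SERP element:

```python
def serp_footprint_share(elements, brand):
    """Percentage of total SERP pixel area occupied by `brand`.

    elements: list of (owner, width_px, height_px) tuples, one per
    SERP feature (organic snippet, featured snippet, knowledge
    panel, sitelinks block, news box, ...).
    """
    total_px = sum(w * h for _, w, h in elements)
    brand_px = sum(w * h for owner, w, h in elements if owner == brand)
    return brand_px / total_px * 100

# Hypothetical SERP: one featured snippet for "acme", two rival results
share = serp_footprint_share(
    [("acme", 600, 180), ("rival", 600, 120), ("rival", 600, 300)],
    "acme",
)  # → 30.0
```

Weighting by pixel area rather than result count is what makes the metric immune to CTR blindness: a featured snippet that earns no click still counts for the area it occupies.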

6. Attribution-adjusted ROAS

A ROAS calculation that attributes indirect traffic (direct type-in, later brand searches, cross-device conversions) to SEO activity — based on controlled incrementality tests over 4-12 week windows.
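The incrementality test behind this metric is typically a geo or cohort holdout evaluated with a difference-in-differences comparison. A minimal sketch of that comparison (conversion counts before/after in a test region vs. an untouched control region; the inputs are illustrative):

```python
def incrementality_lift(test_after, control_after, test_before, control_before):
    """Difference-in-differences lift in percent.

    Scales the test group's baseline by the control group's change,
    then measures how far the test group exceeded that expectation.
    """
    expected = test_before * (control_after / control_before)
    return (test_after - expected) / expected * 100

# Control grew 2.5% on its own; test grew from 1000 to 1150 conversions
lift = incrementality_lift(
    test_after=1150, control_after=820,
    test_before=1000, control_before=800,
)  # ≈ 12.2% incremental lift
```

The resulting lift factor is what gets applied to the indirect-traffic buckets (type-in, later brand searches, cross-device) when computing the adjusted ROAS.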

7. LLM-referred traffic

Session traffic whose referrer is chat.openai.com, perplexity.ai, claude.ai or gemini.google.com. Still small in volume, but with very high conversion rates (typically 3-8× the organic average, because intent is more qualified). Grows 15-35% per quarter depending on industry.
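Segmenting this traffic is a straightforward referrer-host lookup. A sketch (the domain list reflects the referrers named above plus chatgpt.com, which OpenAI also uses; extend it as new engines appear):

```python
from urllib.parse import urlparse

# Known LLM referrer hosts → source label
LLM_REFERRERS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "www.perplexity.ai": "Perplexity",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
}

def classify_session(referrer_url):
    """Map a session's referrer URL to an LLM source, or None."""
    host = urlparse(referrer_url).netloc.lower()
    return LLM_REFERRERS.get(host)

classify_session("https://perplexity.ai/search")  # → "Perplexity"
classify_session("https://www.google.com/")       # → None
```

In GA4/BigQuery the same lookup can be expressed as a CASE over the traffic-source field; the point is to make "LLM-referred" a first-class channel rather than letting it hide inside generic referral traffic.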

65% — zero-click rate in US searches (SparkToro 2025)
3-8× — higher conversion rate of LLM-referred traffic vs. organic
+23% — brand-search lift from SERP-feature dominance (12-month cohort)

Implementation framework: measurement in practice

Phase 1: Establish baseline (month 1) — initial prompt audits across the major models, brand-query share in Search Console, and a SERP-footprint snapshot for the top category queries.

Phase 2: Tracking infrastructure (month 2) — connect the data sources (GA4, GSC, prompt monitoring, CRM) and define the seven KPIs in a BI layer.

Phase 3: Attribution testing (months 3-6) — controlled incrementality tests over 4-12 week windows to calibrate the attribution-adjusted metrics.

Operator Insight

The stakeholder communication that makes the difference

Many SEO teams fail not on the new metrics but on communication with CFO and CMO. CEOs do not want to see seven new KPIs — they want business impact. The solution: one north-star KPI ("Category Share of Mind in AI") with three supporting metrics (Share of Model, Brand Search Lift, LLM-referred revenue). Complexity in the operator dashboard, simplicity in executive reporting.

Why most agencies do not deliver this framework

The honest diagnosis: the classical SEO agency economy is built on deliverables that are easy to bill — keywords, rankings, clicks. The new KPIs require continuous prompt monitoring, attribution modelling and business-intelligence integration. That is more labour-intensive, demands different skills and cannot be poured into standard reporting templates.

Enterprise organizations that keep thinking with their SEO partners in CTR and rankings are subsidizing a measurement philosophy that no longer reflects the reality of search. The consequence is rarely an abrupt collapse — it is a slow divergence between measured and actual market relevance.

The attribution-mathematics chapter: the complete calculation

An example that shows the scope. A B2B SaaS domain measures, in its classical dashboard, a month with 12,400 organic sessions, 183 MQLs, CPL of €112. Pipeline value per sales attribution: €184,000. SEO budget €18,000/month. ROI per finance: 10.2×. A satisfied report.

Real-world calculation with complete attribution:

Directly measured organic sessions:                          12,400
  of which brand queries:                                      4,880
  of which non-brand:                                          7,520

Dark funnel (methodologically estimated):
+ Zero-click impressions with brand exposure (AIO):         38,200
+ LLM-generated mentions (ChatGPT/Claude/Perplexity):      ~14,600
+ Perplexity citation traffic (direct, no referrer):           ~920
+ "Direct" sessions that are LLM-induced:                    ~2,200

Total brand-exposure events:                                68,320

Corrected effective CPM (vs. display benchmark €12):
€18,000 / 68,320 exposures × 1000 = €263 CPM raw
- but: branded-exposure quality ≈ 4.5× display baseline
→ effective CPM: ~€58 (vs. display CPM €12-22)

+ Attribution-corrected MQLs (incl. indirect contribution):    +41
→ Effective MQLs: 224 (not 183)
→ Effective CPL: €80 (not €112)
→ Effective ROI: 12.5× (not 10.2×), at constant pipeline value per MQL

That is the difference between "SEO has become expensive" (classical measurement) and "SEO is under-reported by more than 20%" (full attribution). Feed the executive board ROI figures from a 2019 methodology and you systematically underinvest in the channel with the best leverage.
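The worked example above can be reproduced as a small calculation. A sketch, with the dark-funnel bucket names and the 4.5× exposure-quality multiplier taken from the example (both are modelling assumptions, not measured constants):

```python
def attribution_corrected_kpis(budget_eur, organic_sessions, dark_funnel,
                               measured_mqls, indirect_mqls,
                               quality_multiplier=4.5):
    """Recompute SEO KPIs with dark-funnel exposures included.

    dark_funnel: dict of estimated non-click brand-exposure counts
    (zero-click AIO impressions, LLM mentions, ...).
    quality_multiplier: assumed value of a branded exposure relative
    to a generic display impression.
    """
    exposures = organic_sessions + sum(dark_funnel.values())
    raw_cpm = budget_eur / exposures * 1000
    effective_mqls = measured_mqls + indirect_mqls
    return {
        "exposures": exposures,
        "raw_cpm": round(raw_cpm),
        "effective_cpm": round(raw_cpm / quality_multiplier),
        "effective_mqls": effective_mqls,
        "effective_cpl": round(budget_eur / effective_mqls),
    }

kpis = attribution_corrected_kpis(
    budget_eur=18_000,
    organic_sessions=12_400,
    dark_funnel={"aio_zero_click": 38_200, "llm_mentions": 14_600,
                 "perplexity_direct": 920, "llm_induced_direct": 2_200},
    measured_mqls=183,
    indirect_mqls=41,
)
# exposures: 68,320 · raw CPM ≈ €263 · effective CPL ≈ €80
```

Encoding the correction as code rather than a spreadsheet makes the estimation assumptions (bucket sizes, quality multiplier) explicit and versionable, which matters when finance audits the numbers.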

The Share-of-Model calculation in detail

Share of Model (SoM) is the central leading indicator for brand presence in generative engines. The formula:

SoM = (your brand's mentions / total mentions of all brands in the competitor set) × 100

Prompt set:    100-300 category-relevant prompts
Per prompt:    3-5 repetitions to smooth stochasticity
Per model:     separate calculation (GPT, Claude, Gemini, Perplexity)
Aggregate SoM: weighted average by real user distribution

Weighting (as of 2026, EU):
GPT (OpenAI):     52%
Gemini:           22%
Claude:           11%
Perplexity:       10%
Other:             5%

Benchmark from our portfolio: category leaders have SoM > 35%, healthy challengers 15-25%, invisible brands < 5%. The curve is non-linear: a jump from 8% to 15% is much easier than from 25% to 35%. The last 10 percentage points cost, in our experience, roughly 3× as much content and entity work as the first 10.
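The per-model calculation and user-distribution weighting described above can be sketched as follows (mention counts come from your prompt-audit tooling; the weights are the 2026 EU figures from the table):

```python
# Assumed user-distribution weights (as of 2026, EU, per the table above)
MODEL_WEIGHTS = {"gpt": 0.52, "gemini": 0.22, "claude": 0.11,
                 "perplexity": 0.10, "other": 0.05}

def share_of_model(brand_mentions, total_mentions, weights=MODEL_WEIGHTS):
    """Weighted Share of Model across engines, in percent.

    brand_mentions: your brand's mention count per model over the
    prompt set (each prompt run 3-5 times to smooth stochasticity).
    total_mentions: mentions of all brands in the competitor set
    per model. Models missing from the audit contribute zero.
    """
    som = 0.0
    for model, weight in weights.items():
        total = total_mentions.get(model, 0)
        if total:
            som += weight * brand_mentions.get(model, 0) / total
    return som * 100

som = share_of_model(
    {"gpt": 90, "gemini": 30, "claude": 20, "perplexity": 25},
    {"gpt": 300, "gemini": 150, "claude": 100, "perplexity": 100},
)  # ≈ 24.7% — a healthy challenger per the benchmarks above
```

Keeping the weights in one dict makes the quarterly re-weighting (as real user distribution shifts between engines) a one-line change.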

PVI scoring: the Prompt Visibility calculation

The Prompt Visibility Index (PVI) aggregates three signals per prompt:

PVI_prompt = (0.5 × Mention) + (0.3 × Position) + (0.2 × Sentiment)

where:
Mention  = 1 if brand mentioned, else 0
Position = 1.0 (first mention) / 0.6 (middle) / 0.3 (late)
Sentiment = 1.0 positive / 0.5 neutral / 0.0 negative / -0.5 hedge

PVI_portfolio = Σ(PVI_prompt × prompt_weight) / Σ(prompt_weight)
prompt_weight = business value × search frequency

A PVI_portfolio > 0.55 indicates resilient brand presence. Values under 0.25 should trigger a reputation or content alarm.
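The scoring formula translates directly into code. A sketch, treating position and sentiment as zero when the brand is absent (the label vocabularies mirror the value tables above):

```python
POSITION = {"first": 1.0, "middle": 0.6, "late": 0.3}
SENTIMENT = {"positive": 1.0, "neutral": 0.5, "negative": 0.0, "hedge": -0.5}

def pvi_prompt(mentioned, position, sentiment):
    """PVI for a single prompt run, per the weighted formula above."""
    if not mentioned:
        return 0.0  # no mention → position/sentiment terms are moot
    return 0.5 * 1 + 0.3 * POSITION[position] + 0.2 * SENTIMENT[sentiment]

def pvi_portfolio(results):
    """results: list of (pvi_score, business_value, search_frequency).

    prompt_weight = business_value × search_frequency, as defined above.
    """
    weights = [bv * sf for _, bv, sf in results]
    weighted = sum(p * w for (p, _, _), w in zip(results, weights))
    return weighted / sum(weights)

p1 = pvi_prompt(True, "first", "positive")   # → 1.0
p2 = pvi_prompt(True, "middle", "neutral")   # → 0.78
portfolio = pvi_portfolio([(p1, 3, 2), (p2, 1, 1)])
```

In practice each prompt's score should itself be an average over the 3-5 repetitions before it enters the portfolio aggregation.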

Tutorial: the executive dashboard in four weeks

Week 1 — connect data sources

GA4 with BigQuery export, GSC with BigQuery export, prompt-monitoring tool (Profound/Otterly) via API, brand-search volume via DataForSEO API, CRM export via warehouse pipeline. Orchestrate with a simple Python script or with dbt.
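The orchestration itself can start as a very small script before graduating to dbt. A sketch of the pattern only; every extract function here is a placeholder for a real connector (BigQuery export, vendor API, CRM pipeline), and the names and payloads are purely illustrative:

```python
# Placeholder extractors — swap in real connectors (BigQuery, vendor
# APIs, CRM export). Payloads below are illustrative dummies.
def extract_gsc():
    return [{"query": "acme crm", "clicks": 412}]

def extract_prompt_audit():
    return [{"prompt": "best crm for smb", "mentioned": True}]

PIPELINE = [
    ("gsc", extract_gsc),
    ("prompt_audit", extract_prompt_audit),
]

def run_pipeline(sink):
    """Pull every source and hand its rows to a sink function
    (e.g. a warehouse loader), keyed by source name."""
    for name, extract in PIPELINE:
        sink(name, extract())

rows = {}
run_pipeline(lambda name, data: rows.update({name: data}))
```

The value of even this trivial structure: every KPI in the metric layer can point at exactly one named source, which is what Week 2 requires.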

Week 2 — metric layer

In a BI layer (Looker/Metabase/Tableau) define the seven KPIs as first-class metrics: SoM, PVI, BSL, CiteRate, SERP footprint, ROAS-adj, LLM-ref. Each metric gets a clear definition, data source and refresh cadence.

Week 3 — dashboards

Two layers: an executive dashboard (three main tiles, clear trend arrows, year-on-year comparison) and an operator dashboard (all seven KPIs, segmented by product line/market/query type). Target: 30 seconds to comprehension for the executive, three minutes for the operator.

Week 4 — review cadence

Monthly reviews with a fixed agenda template: KPI movements, root cause, next levers. Integrate into existing marketing-controlling meetings. No separate "SEO meeting" anymore — the metrics belong in overall marketing reporting.

Common mistakes when building the new KPI system

Operator Insight

The 90-day rule for attribution

All the new KPIs stabilize at the earliest after 90 days. The first 6-8 weeks are full of noise: prompt stochasticity, LLM update cycles, training refresh. Adjust panicked inside that window and you destroy the signal. Discipline: a rolling 90-day window for all trend statements, no decisions based on weekly data.
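The discipline described above can be enforced mechanically: suppress any trend value until a full 90-observation window exists. A minimal sketch with synthetic daily SoM readings (in practice these come from the prompt-audit pipeline):

```python
from collections import deque

def rolling_mean(values, window=90):
    """Trailing mean over the last `window` observations.

    Emits None until a full window exists, so partial-window values
    can never leak into a trend report."""
    buf, out = deque(maxlen=window), []
    for v in values:
        buf.append(v)
        out.append(sum(buf) / window if len(buf) == window else None)
    return out

# Synthetic daily SoM readings with a noisy weekly pattern
daily_som = [20 + (i % 7) for i in range(120)]
smoothed = rolling_mean(daily_som)
reportable = [x for x in smoothed if x is not None]  # first 89 days suppressed
```

Wiring the None-suppression into the dashboard layer is what actually prevents panicked adjustments: the operator simply cannot see a trend line before the window fills.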

Conclusion

Zero-click is not a problem to be solved. It is a new reality that requires an adapted measurement system. Brands that modernize their measurement today will know exactly where they stand in two years, while their competitors keep lamenting "declining clicks" and lose sight of the actual market impact of their search strategy.

The question is not: "How do I get the clicks back?" It is: "How do I measure what now moves the market?"