The conversation around AI Overviews is often emotional — caught between panic and indifference. The data now allows a calm view. Based on aggregated Google Search Console (GSC) data from more than 500 enterprise domains across DACH, the EU and the US between June 2024 and February 2026, a clear pattern emerges: informational queries lose between 28 % and 41 % of their organic click volume — while impressions stay constant or even rise, and Position 1 rankings remain unchanged.

This article puts the data in context, shows which query clusters are hit hardest, and outlines a new measurement framework that enterprise brands can use to steer the real impact.

A structural shift — not an algorithm update

The decisive difference from every SEO event of the past 20 years: AI Overviews are not a ranking factor. They are a SERP structure change. Google generates an answer based on several selected sources and places it above the classical blue links. The user gets the answer — and often stops clicking.

A 2024 study by Authoritas (18,000 queries, 160 keywords) and Seer Interactive's 2025 analysis (10,000 keywords) concur: when AI Overviews appear, the click-through rate on Position 1 drops by a median of 34.5 %. In specific query types (How-To, definitions, comparisons) declines of up to 64 % are documented.

47 % of informational queries show AI Overviews (Feb 2026)

−34.5 % average CTR loss on Position 1 when AIO appears

+18 % impression growth despite the click decline

Which query types are most affected

Absorption is not evenly distributed. The strongest drops show up in the clusters already flagged in the CTR studies: How-To queries, definitions and comparison queries — the types for which declines of up to 64 % are documented.

Significantly less affected — or unaffected: transactional queries, brand queries, local queries with map intent and highly specific B2B long-tails. Google shows AIO less often here, because the uncertainty about the right generative answer is too high.

The hidden upside: brand citation instead of a blue link

What gets overlooked in the click panic: AI Overviews cite sources — visibly, with logo and brand name. An analysis by Profound.ai (January 2026, 1.2 million prompts) shows that brands cited in AIO see a substantial brand-recall lift of 23 % to 41 % — measurable through post-exposure brand-lift tests. The click is lost, brand perception is gained.

That is a fundamental shift in the function of search results: from direct traffic source to brand-building channel. Brands cited in AIO gain awareness that classical display campaigns cannot deliver with such precision. Brands not cited disappear from perception — even with a Position 1 ranking intact.

"The classical SEO KPI set increasingly measures the wrong layer. Keep optimizing clicks while competitors invest in AI citations and you lose brand perception — slowly but structurally."

The new KPI framework: four metrics enterprise brands need now

Based on 18 months of client work, SUMAX has established a set of four central KPIs that measure AI-search impact precisely:

1. Citation Rate

The percentage of relevant target queries in which the brand appears as a source inside AI Overviews. Measurable through tools such as Profound and Otterly.ai, or manually via query sampling. Target metric: at least 35 % for Tier-1 brands, 55 %+ for category leaders.
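Measured over a sampled query set, the metric reduces to a simple ratio. A minimal sketch, assuming a hand-collected sample; the dictionary shape and the domains are illustrative, not the output format of any specific tool:

```python
# Minimal sketch of a citation-rate check over a manual query sample.
# `serp_sample` maps each relevant target query to the set of domains
# cited in its AI Overview (empty set = no AIO shown).

def citation_rate(serp_sample: dict[str, set[str]], brand_domain: str) -> float:
    """Share of target queries whose AI Overview cites brand_domain."""
    if not serp_sample:
        return 0.0
    hits = sum(brand_domain in cited for cited in serp_sample.values())
    return hits / len(serp_sample)

sample = {
    "what is attribution lift": {"example.com", "wikipedia.org"},
    "how to measure brand recall": {"competitor.de"},
    "citation rate definition": set(),  # no AI Overview served
}
print(f"{citation_rate(sample, 'example.com'):.0%}")  # → 33%
```

The same function works for the 35 % / 55 % targets above: run it per query tier and compare against the threshold.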

2. Brand Mention Density (BMD)

The share of generative answers in which the brand is mentioned by name — even without a direct citation. LLMs often paraphrase content without linking the source. These "orphan mentions" are invisible to classical SEO but build recognition. Tracking via share-of-voice tools across ChatGPT, Claude, Perplexity and Gemini.

3. Passage Coverage Score

What share of your top content passages is recognized by generative systems as citable? Measured through passage-extraction tests: simulated RAG queries against your own content, scored for semantic density, entity coverage and quotability.

4. Attribution Lift

The measurable branded-search uplift after AIO visibility work. Since direct clicks are lost, impact must be measured through indirect signals: brand-search volume (Ahrefs, Semrush), direct type-in traffic (GA4) and controlled incrementality tests across time windows.

Operator insight

The invisible winners

In our enterprise cohorts a striking minority (around 8 % of domains) more than compensates for the click loss with an outsized branded-search uplift. The common pattern: consistent entity work, structured data such as Schema.org DefinedTerm and FAQPage, and strong authority signals in LLM training sources (Wikipedia, Wikidata, qualified trade media). These brands lose clicks — and gain brand relevance.

What enterprise brands should do now — concretely

The strategic response to AI Overviews is not defensive. It is a realignment of content and measurement systems.

Step 1: run a query-cohort analysis

Segment your top 1,000 keywords by query type (informational, transactional, navigational, commercial). For each cohort, analyse the 12-month CTR trajectory. Clusters with falling CTR despite stable position are your AI Overview losses.
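The "falling CTR despite stable position" test can be automated. A hedged sketch: the drift and drop thresholds below are illustrative defaults, not values from the article.

```python
# Flag cohorts whose CTR falls while average position stays stable,
# the signature of an AI Overview loss. `history` is a list of
# (avg_position, ctr) tuples in month order; thresholds are
# illustrative assumptions, tune them per portfolio.

def is_aio_loss(history: list[tuple[float, float]],
                max_pos_drift: float = 0.5,
                min_ctr_drop: float = 0.20) -> bool:
    """True if position is stable but CTR fell by >= min_ctr_drop (relative)."""
    first_pos, first_ctr = history[0]
    last_pos, last_ctr = history[-1]
    pos_stable = abs(last_pos - first_pos) <= max_pos_drift
    ctr_fell = (first_ctr - last_ctr) / first_ctr >= min_ctr_drop
    return pos_stable and ctr_fell

howto_cohort = [(1.2, 0.31), (1.3, 0.24), (1.1, 0.11)]   # CTR collapses, rank holds
brand_cohort = [(1.0, 0.52), (1.0, 0.51), (1.1, 0.50)]   # both stable
print(is_aio_loss(howto_cohort), is_aio_loss(brand_cohort))  # → True False
```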

Step 2: citation readiness audit

Test every affected top page against LLM citation criteria: passage structure, entity density, schema implementation (especially DefinedTerm, HowTo, FAQPage, Article with author and publisher), author expertise signals and referencing of primary data.

Step 3: rebuild the measurement system

Introduce new KPIs in parallel to the classical reporting chain. Monthly monitoring of citation rate and brand mention density for the 200 most important target queries. Integration into executive dashboards.

Step 4: reshape content prioritization

Do not abandon content for lost informational queries — restructure it. Focus on citable passages, original data and studies, concrete numbers and examples, clear definitions in short paragraphs. LLMs prefer structured, factual, unmoralized information.

The mathematics behind the click loss — an exact calculation

Anyone assessing AI Overview impact seriously needs a precise formula. The common CTR-delta calculation is too coarse — it ignores impression shifts and SERP-feature interactions. We therefore use the following AIO Absorption Rate (AAR):

AAR = 1 − (CTR_post × Impr_post) / (CTR_pre × Impr_pre × Q_norm)

with:
CTR_pre/post   = click-through rate before/after AIO introduction
Impr_pre/post  = impression volume before/after AIO introduction
Q_norm         = query-volume normalization factor (seasonality, trend),
                 computed as Search_Volume_post / Search_Volume_pre
                 of the keyword in Google Trends

A concrete example from an e-commerce domain (Home & Living, February 2026): the keyword "how do I descale a coffee machine" had 48,000 impressions/month pre-AIO at 31 % CTR (Position 1). Post-AIO: 57,000 impressions at 11.2 % CTR. Q_norm via Google Trends: 1.08. Plugged in:

AAR = 1 − (0.112 × 57,000) / (0.31 × 48,000 × 1.08)
    = 1 − 6,384 / 16,070
    = 1 − 0.397
    = 0.603  →  60.3 % absorption

That means: despite a stable Position 1 and rising search volume, 60 % of clicks are lost. For executive-level reporting, AAR is far more robust than simple CTR deltas because it neutralizes growth.
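For reporting at scale, the formula is trivial to encode. This sketch reproduces the worked example above; the function signature is ours, the inputs mirror the article's definitions.

```python
# AIO Absorption Rate (AAR), as defined in the text:
# AAR = 1 - (CTR_post * Impr_post) / (CTR_pre * Impr_pre * Q_norm)

def aar(ctr_pre: float, impr_pre: int,
        ctr_post: float, impr_post: int,
        q_norm: float) -> float:
    """Share of clicks absorbed by the AI Overview, growth-normalized."""
    return 1 - (ctr_post * impr_post) / (ctr_pre * impr_pre * q_norm)

# The "descale a coffee machine" example from the article:
value = aar(ctr_pre=0.31, impr_pre=48_000,
            ctr_post=0.112, impr_post=57_000,
            q_norm=1.08)
print(f"{value:.3f}")  # → 0.603
```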

Industry benchmarks: how hard does it hit whom?

Impact varies considerably by vertical. From our cohort data (N = 512 enterprise domains, June 2024 – February 2026):

54 % — Health & medical (informational)
47 % — Finance & insurance (definitions)
42 % — Home & DIY (How-To)
38 % — B2B SaaS (comparison queries)
19 % — E-commerce (transactional)
8 % — Travel (booking queries)

The spread follows a clear logic: the higher the information need and the lower the transaction risk, the higher the absorption. Google places AIO where answer confidence is high — and even for YMYL topics (Your Money, Your Life) such as health and medical queries, the citation rate stays high because the disclaimer policy favours reputable sources.

Citation readiness: the 12-point checklist

Not every page is citable. We have developed a scoring model that quantifies the likelihood of an AIO citation across 12 signals. Each item is scored 0-10, so the total ranges from 0 to 120. Above 85 points, the empirical citation probability sits above 60 %.

  1. First-passage definition — does the first paragraph contain a clear, citable definition (≤ 60 words)?
  2. Entity density — at least one named entity per 80 words (tool: Google NLP API).
  3. Schema.org coverage — Article, DefinedTerm, FAQPage, HowTo validly implemented.
  4. Author schema — author with sameAs to Wikipedia/Wikidata/LinkedIn.
  5. Primary data — at least one original number, study or survey.
  6. Heading hierarchy — H2/H3 carry question intent (including interrogatives).
  7. Passage length — paragraphs of 40-120 words (RAG retrievers prefer that range).
  8. Date signals — datePublished + dateModified, visible in the frontend.
  9. Reference density — outbound links to authoritative primary sources (.gov, .edu, professional bodies).
  10. Reading grade — Flesch Reading Ease between 55 and 70 (clear but not childish).
  11. Table/list structure — structured comparison elements for comparison intent.
  12. E-E-A-T signals — author bio, credentials, publisher authority, review layer.

The score is validated quarterly against the actual AIO citation rate. In our portfolio it correlates at r = 0.78 — significantly higher than classical SEO scores (authority, content score), which reach only r = 0.31 against AIO citation.
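The scoring mechanics are simple enough to sketch. The signal names below abbreviate the 12-point checklist; the per-page scores are illustrative, not from a real audit.

```python
# Sketch of the 12-signal readiness score: each signal 0-10,
# total 0-120, with the 85-point threshold from the text.

SIGNALS = [
    "first_passage_definition", "entity_density", "schema_coverage",
    "author_schema", "primary_data", "heading_hierarchy",
    "passage_length", "date_signals", "reference_density",
    "reading_grade", "table_list_structure", "eeat_signals",
]

def readiness(scores: dict[str, int]) -> tuple[int, bool]:
    """Total score and whether the page clears the 85-point bar."""
    for name in SIGNALS:
        if not 0 <= scores.get(name, 0) <= 10:
            raise ValueError(f"{name} must be scored 0-10")
    total = sum(scores.get(name, 0) for name in SIGNALS)
    return total, total > 85

page = dict.fromkeys(SIGNALS, 8)   # a uniformly solid page ...
page["primary_data"] = 3           # ... but weak on original data
total, citable = readiness(page)
print(total, citable)  # → 91 True
```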

Tutorial: a five-day path to your first AIO citation baseline

The transition looks big — it is not. A disciplined approach delivers a solid baseline in a single working week. The following sequence has become standard in our client work:

Day 1 — define the query universe

Export the top 500 keywords from GSC by clicks plus the top 200 by impressions. Deduplicate, classify by query type (regex on interrogatives, modifiers). Result: a prioritized keyword set with intent labels.
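The regex classification can be as small as a rule table. A hedged sketch: the patterns below are illustrative English-language rules; a production classifier would be language- and market-specific.

```python
# Day-1 intent labeling via regex on interrogatives and modifiers.
# Rule order matters: transactional and commercial modifiers win
# over interrogatives; everything unmatched falls back to navigational.
import re

RULES = [
    ("transactional", re.compile(r"\b(buy|order|price|cheap|deal)\b")),
    ("commercial",    re.compile(r"\b(best|top|review|vs|compare|comparison)\b")),
    ("informational", re.compile(r"^(how|what|why|when|where|who)\b")),
]

def classify(query: str) -> str:
    q = query.lower().strip()
    for label, pattern in RULES:
        if pattern.search(q):
            return label
    return "navigational"  # fallback: brand/entity lookups

for q in ["how do i descale a coffee machine",
          "best espresso machine 2026",
          "buy delonghi magnifica",
          "delonghi"]:
    print(q, "→", classify(q))
```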

Day 2 — AIO presence scan

Automated query of every keyword through a SERP API (e.g. DataForSEO, ValueSERP) with the ai_overview feature flag. Result: a boolean AIO marker per keyword, plus the list of cited domains.
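The scan loop itself is tool-agnostic. In this sketch, fetch_serp is a placeholder for whichever SERP API you use (the article names DataForSEO and ValueSERP); the response shape shown here is hypothetical, not either vendor's actual schema.

```python
# Day-2 scan loop: one boolean AIO marker per keyword plus the
# cited domains. `fetch_serp` is a stand-in for a real API client;
# its {"ai_overview": ...} response shape is an assumption.

def scan_aio_presence(keywords, fetch_serp):
    """Return {keyword: (has_aio, cited_domains)} for each keyword."""
    results = {}
    for kw in keywords:
        serp = fetch_serp(kw)  # hypothetical response dict
        aio = serp.get("ai_overview")
        if aio:
            results[kw] = (True, set(aio.get("cited_domains", [])))
        else:
            results[kw] = (False, set())
    return results

# Stubbed responses stand in for real API calls:
stub = {
    "what is attribution lift": {"ai_overview": {"cited_domains": ["example.com"]}},
    "delonghi": {"ai_overview": None},
}
out = scan_aio_presence(stub, lambda kw: stub[kw])
print(out["delonghi"])  # → (False, set())
```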

Day 3 — CTR delta analysis

Export 12 months from GSC, segment pre/post AIO show-date. Compute AAR per keyword using the formula above. Filter: every keyword with AAR > 0.35 → loss portfolio.

Day 4 — competitor citation mapping

For the loss portfolio: who is cited? Aggregate at domain level, identify the top 10 citation winners in your topical environment. Analyse common content patterns (passage structure, entity use, schema).

Day 5 — prioritization matrix

Plot every loss query in a 2×2 matrix: AAR (x-axis) × business value (y-axis, EUR/month). Focus set: top 20 by value × AAR. For these pages: compute the citation readiness score. The gap between target (85+) and actual defines the content backlog for the next quarter.
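Ranking the matrix by value × AAR takes a few lines. The portfolio rows below are illustrative numbers, not client data.

```python
# Day-5 prioritization: rank loss queries by business value x AAR
# and keep the focus set. Rows are illustrative.

def focus_set(rows: list[dict], top_n: int = 20) -> list[dict]:
    """rows: [{'query': str, 'aar': float, 'value_eur': float}, ...]"""
    ranked = sorted(rows, key=lambda r: r["aar"] * r["value_eur"], reverse=True)
    return ranked[:top_n]

portfolio = [
    {"query": "descale coffee machine", "aar": 0.60, "value_eur": 4200},
    {"query": "espresso grind size",    "aar": 0.41, "value_eur": 9800},
    {"query": "milk frother cleaning",  "aar": 0.72, "value_eur": 800},
]
for row in focus_set(portfolio, top_n=2):
    print(row["query"], round(row["aar"] * row["value_eur"]))
```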

Operator insight

The third paragraph decides

Our passage-extraction tests show: in 73 % of cases LLMs prioritize a paragraph from the first three sections after the H1 for citation. Hiding the core answer behind a hero intro, a table or a CTA makes you technically invisible — even with an organic Position 1. Aggressive first-fold optimization for users hurts machine readability. The best compromise: a short, citable core statement within the first 200 words, followed by visual treatment.

The attribution side effect: branded-search lag

An often-overlooked consequence: the brand-recall effect of AIO citations does not materialize immediately. Our cohort studies show a typical lag of 14-42 days before AIO citation translates into measurable branded-search growth. Attribution Lift must therefore be measured across rolling 90-day windows, not month-over-month. Monthly reporting misses the effect entirely.

The correct Attribution Lift formula:

AL = (BranSV_t90 − BranSV_pre90) / CiteVol_t90 × 1000

with:
BranSV_t90    = branded search volume in the 90-day following period
BranSV_pre90  = branded search volume in the prior period (equal-length window)
CiteVol_t90   = cumulative citation impressions in the same window
Result        = branded-search lift per 1,000 AIO citation impressions

Portfolio benchmark: an AL > 4.5 signals healthy brand translation. AL < 1.5 means the brand is cited but not remembered — often a sign of weak brand distinctiveness in the content (too little brand name, too interchangeable a voice).
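The AL formula and the two thresholds translate directly into code. The example volumes are illustrative, not from a real cohort.

```python
# Attribution Lift (AL), as defined in the text:
# AL = (BranSV_t90 - BranSV_pre90) / CiteVol_t90 * 1000

def attribution_lift(bran_sv_t90: float, bran_sv_pre90: float,
                     cite_vol_t90: float) -> float:
    """Branded-search lift per 1,000 AIO citation impressions."""
    return (bran_sv_t90 - bran_sv_pre90) / cite_vol_t90 * 1000

al = attribution_lift(bran_sv_t90=26_400, bran_sv_pre90=21_900,
                      cite_vol_t90=820_000)
verdict = "healthy" if al > 4.5 else ("weak" if al < 1.5 else "mid")
print(f"AL = {al:.2f} → {verdict}")  # → AL = 5.49 → healthy
```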

Typical mistakes that amplify the measurement error

Conclusion: the question every CMO must ask now

Not: "How do I win back the lost clicks?" That is the game Google no longer offers. But: "How does my brand become the preferred source when an AI formulates the answer?"

The answer to that question decides which brands dominate the next decade of search — and which slowly disappear from perception, even while their rankings remain stable.