By the third core update of 2023 at the latest, every SEO team finds itself asking the same question: why do we keep losing rankings on YMYL-adjacent topics despite technically clean execution, strong backlinks and topically relevant content? The answer is rarely the article itself: it is the ontology behind it. Google does not evaluate E-E-A-T per document in isolation; it evaluates the article in the context of the responsible author entity and the publishing organization entity. An article without an attributable author with verifiable expertise starts structurally below the signal threshold, no matter how well the content is written.
E-E-A-T as a cumulative author score
The historical confusion starts with the name. E-A-T was originally introduced as a framework in Google's Search Quality Evaluator Guidelines — a document for the human raters who score test SERPs on Google's behalf. Since the 2022 revision, the guidelines make it explicit that Expertise, Authoritativeness and Trustworthiness are primarily evaluated per creator — not per document. An article inherits E-E-A-T from the author. An author accumulates E-E-A-T across their entire publication footprint. That is the structural difference that keeps many technically strong content teams below competitor rankings.
The Experience dimension added in 2022 makes the logic even more explicit: Experience describes whether the author writes from their own practical experience — a signal that can only be judged at the author level, not at the article level. An article on laser eye surgery written by an operating ophthalmologist carries a different Experience weight than the same text written by a content writer without medical practice. Google can only recognize that when the author entity is machine-readably linked to the corresponding credentials, published expert articles and verifiable roles.
- E-E-A-T accumulates per creator, not per article
- A strong author entity lifts the entire content output
- In Health, Finance and Legal, it is a structural ranking gate
The seven layers of a strong author entity
Layer 1 — Stable @id URI. Every author receives a canonical Person schema instance at a stable URI — typically https://brand.com/#author-john-doe or as a dedicated author page /team/john-doe/. The @id is the anchor node every Article schema references through the author property. Without that stable reference, author mentions remain lexical strings without semantic linkage — a structural weakness Google has penalized more sharply since the 2023 Knowledge Graph revision.
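The @id linkage can be sketched as two JSON-LD objects built in Python; domain, slug and author name are placeholder values, not a recommendation for any specific site:

```python
import json

# Hypothetical canonical author node; domain and slug are placeholders.
AUTHOR_ID = "https://brand.com/team/john-doe/#person"

person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "@id": AUTHOR_ID,  # the stable anchor every Article references
    "name": "John Doe",
    "url": "https://brand.com/team/john-doe/",
}

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example article",
    # reference the author by @id instead of repeating the Person object
    "author": {"@id": AUTHOR_ID},
}

print(json.dumps(article["author"], indent=2))
```

The point of the pattern: every Article carries only the @id reference, so all E-E-A-T signals resolve to one node instead of scattering across lexical name strings.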
Layer 2 — Person schema with a complete property set. The schema block carries name, jobTitle, description, url, image (with ImageObject nested plus caption and license), sameAs array (5–15 authoritative third-party profiles), knowsAbout (topic tags for subject-matter expertise), knowsLanguage, worksFor (as an Organization @id reference), alumniOf (also @id-linked) and hasCredential for formal qualifications. Collectively, these properties form the machine-readable biographical signature. The denser and more referenced the property set, the higher the entity-resolution confidence.
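A sketch of the full Layer 2 property set as a single Person object; every value is a placeholder and the sameAs list is deliberately truncated:

```python
import json

# All values below are hypothetical placeholders illustrating the property set.
person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "@id": "https://brand.com/team/john-doe/#person",
    "name": "John Doe",
    "jobTitle": "SEO Strategist",
    "description": "Short biographical description.",
    "url": "https://brand.com/team/john-doe/",
    "image": {
        "@type": "ImageObject",
        "url": "https://brand.com/img/john-doe.jpg",
        "caption": "John Doe",
        "license": "https://brand.com/img/license/",
    },
    "sameAs": [
        "https://www.linkedin.com/in/john-doe/",
        "https://github.com/john-doe",
        # in practice: 5-15 authoritative third-party profiles
    ],
    "knowsAbout": ["Entity SEO", "Structured data"],
    "knowsLanguage": ["en"],
    "worksFor": {"@id": "https://brand.com/#organization"},
    "alumniOf": {"@id": "https://example-university.edu/#organization"},
    "hasCredential": {"@id": "https://brand.com/team/john-doe/#credential-1"},
}

print(json.dumps(person, indent=2))
```

Note that worksFor and alumniOf are @id references to their own Organization nodes, not inline strings; that keeps the graph resolvable.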
Layer 3 — sameAs cluster across authoritative platforms. sameAs is the single most important property. It links the internal @id to external, verifiable identities. For an SEO strategist, that typically means LinkedIn, Crunchbase, GitHub, YouTube channel, speaker-platform profiles (Notist, SpeakerHub), Wikipedia article (where it exists), Wikidata Q-ID. For academic authors, add ORCID, Google Scholar, ResearchGate, Semantic Scholar, Crossref author profiles, VIAF. For journalists, add Muck Rack, author profiles in trade media, ProLit profiles, Mastodon accounts. What matters is not volume but consistency: identical name, identical photo, identical role description, identical company reference, consistent biographical statements across every platform. A single deviation (an incorrect company assignment on one trade-media profile) significantly weakens entity resolution for Google.
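The consistency requirement can be checked mechanically. A minimal sketch: in practice the profile attributes would come from a crawl, here they are a hand-entered dict, and all names and values are hypothetical:

```python
# Master declaration: the attributes every third-party profile must match.
master = {"name": "John Doe", "jobTitle": "SEO Strategist", "worksFor": "Brand GmbH"}

# Crawled profile attributes (hypothetical sample data).
profiles = {
    "linkedin":   {"name": "John Doe", "jobTitle": "SEO Strategist", "worksFor": "Brand GmbH"},
    "crunchbase": {"name": "John Doe", "jobTitle": "Head of SEO",    "worksFor": "Brand GmbH"},
}

def coherence_breaks(master, profiles):
    """Return one (platform, field, found, expected) tuple per deviation."""
    breaks = []
    for platform, attrs in profiles.items():
        for field, expected in master.items():
            found = attrs.get(field)
            if found != expected:
                breaks.append((platform, field, found, expected))
    return breaks

for b in coherence_breaks(master, profiles):
    print(b)  # each tuple is one entity-resolution break to fix
```

Here the outdated Crunchbase role would surface as exactly the kind of single deviation the paragraph above warns about.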
Layer 4 — Wikidata item (where notability allows). For authors with external notability, the Wikidata item with a Q-ID is the structurally most important node — Google's Knowledge Graph draws heavily on Wikidata, and LLMs (ChatGPT, Claude, Gemini) have Wikidata deep in their training data. The item should contain at least 15 maintained properties with external references. For content writers without external notability, work on sameAs and corroboration matters more — a forced Wikidata item without references is deleted and leaves a deletion history.
Layer 5 — Credential provenance with EducationalOccupationalCredential. The decisive lever for YMYL topics. Schema.org credential objects link to the issuing institution, describe the competency type (degree, certification, license) and reference verification URLs. Example: hasCredential on a Person pointing to an EducationalOccupationalCredential of type "MD" from a specific medical school with a recognizedBy reference to a medical board. This chain is machine-readable for Google and is often the difference between visibility and invisibility in YMYL-relevant rankings.
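The credential chain from the example can be sketched as follows; institution names and verification URLs are placeholders, not real records for any author:

```python
import json

# Hypothetical credential object; all names and URLs are placeholders.
credential = {
    "@context": "https://schema.org",
    "@type": "EducationalOccupationalCredential",
    "@id": "https://brand.com/team/jane-roe/#credential-md",
    "credentialCategory": "degree",
    "name": "MD",
    "recognizedBy": {
        "@type": "Organization",
        "name": "Example State Medical Board",
        "url": "https://example-medical-board.org/",
    },
    # verification URL at the issuing institution
    "url": "https://example-medical-school.edu/verify/jane-roe",
}

person = {
    "@type": "Person",
    "@id": "https://brand.com/team/jane-roe/#person",
    "hasCredential": {"@id": credential["@id"]},
}

print(json.dumps(credential, indent=2))
```

The chain Person → hasCredential → EducationalOccupationalCredential → recognizedBy is what makes the qualification verifiable rather than merely claimed in a bio paragraph.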
Layer 6 — Corroboration through trade-media mentions. External, independent confirmation of the author's role matters: an author who is cited as a source in trade media, appears as a speaker at industry conferences, or contributes as a guest author in authoritative publications accumulates corroboration signals. Every external confirmation should link back to the author's own @id, ideally through Schema.org author linkage on the third-party publication; in practice, a consistent name fingerprint plus sameAs coherence often has to carry that linkage.
Layer 7 — Publishing principles and organizational context. Schema.org publishingPrinciples links to a page describing the editorial standards of the publishing organization — fact-checking processes, correction policies, conflict-of-interest rules. For YMYL publishers, this is now essentially mandatory. The author entity is thereby placed in the context of an organization entity with explicit quality standards — a structural E-E-A-T signal that individual articles cannot produce.
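Layer 7 closes the loop between article, author and organization. A sketch with placeholder URLs:

```python
import json

# Hypothetical organization node carrying the editorial-standards reference.
organization = {
    "@type": "Organization",
    "@id": "https://brand.com/#organization",
    "name": "Brand",
    "publishingPrinciples": "https://brand.com/editorial-standards/",
}

article = {
    "@type": "Article",
    "publisher": {"@id": organization["@id"]},
    "publishingPrinciples": organization["publishingPrinciples"],
    "author": {"@id": "https://brand.com/team/john-doe/#person"},
}

print(json.dumps(article, indent=2))
```

With this in place, every article declares both who wrote it and under which editorial standards it was published, both as resolvable references.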
| Layer | Component | Effort | Effect on E-E-A-T |
|---|---|---|---|
| 1 | Stable @id URI | Low (one-time) | Foundation — without it, no entity resolution |
| 2 | Person schema with property set | Low (template) | High — machine-readable signature |
| 3 | sameAs cluster (10–15 profiles) | Medium (consistency) | Very high — core of entity resolution |
| 4 | Wikidata item with Q-ID | High (notability required) | Very high for LLM citations |
| 5 | Credential provenance (hasCredential) | Medium | YMYL-critical — otherwise no signal |
| 6 | Corroboration (trade media) | High (ongoing) | High — external authority validation |
| 7 | Publishing principles + org context | Low (one-time) | Medium–High for YMYL publishers |
How strong are your author entities?
30 minutes of live analysis of your main authors: KG resolution, an LLM test across 20 biographical prompts, sameAs coherence check. Output: a prioritized 90-day list.
Building the author entity: a 120-day protocol
Days 1–15 — audit and name-collision check. Collect every existing mention of the author and identify collisions with namesakes (via Google name search, LinkedIn, ResearchGate). For relevant collisions, define a disambiguation strategy: introduce a middle name, carry an academic title consistently, anchor a role descriptor as a suffix. Document a baseline of every existing third-party profile, including inconsistent data.
Days 16–45 — clean up and extend the sameAs cluster. Establish consistency across every existing profile: identical, high-resolution photo, identical name, identical role description, identical company assignment. Add missing authoritative profiles (ORCID for publishing authors, Crossref author profile for scholarly publications, Muck Rack for journalists). Target: 10–15 consistent third-party profiles as sameAs candidates.
Days 46–75 — schema implementation and @id graph. Implement the Person schema with the complete property set on the brand's own domain, build a canonical author page with an @id anchor, and point the Article schema of every publication to the author @id. Update the sitemap for the author page and strengthen internal linking from every article to the author page.
Days 76–90 — Wikidata item (if notability is given). Preparation: collect 10–15 independent references for the properties to be documented. Create the item with at least 15 properties (P31, P106, P108, P69, P1416, external identifiers). Invite third-party editors to review. Activate a watchlist for drift control.
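The "at least 15 properties with external references" target from this step can be monitored mechanically. A sketch of a drift-control check: the `entity` dict mimics the claims shape of the Wikidata wbgetentities JSON response, and the properties and values shown are a hypothetical sample:

```python
def referenced_property_count(entity: dict) -> int:
    """Count properties that have at least one statement with references."""
    count = 0
    for prop, statements in entity.get("claims", {}).items():
        if any(s.get("references") for s in statements):
            count += 1
    return count

# Hypothetical sample mimicking the wbgetentities "claims" structure.
sample = {
    "claims": {
        "P31":  [{"references": [{"snaks": {}}]}],  # instance of: referenced
        "P106": [{"references": [{"snaks": {}}]}],  # occupation: referenced
        "P108": [{}],                               # employer: unreferenced
    }
}

print(referenced_property_count(sample))  # counts only referenced properties
```

Run quarterly against the live item, a drop in this count is an early warning that third-party edits have stripped references.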
Days 91–120 — corroboration initiative. Targeted placement of author mentions across three to five authoritative trade-media outlets with consistent biographical facts. Document speaking appearances at industry conferences and feed them into the credential chain. Set up and link the organization's publishing principles page.
The author entity in the LLM era
The relevance of the author entity escalates with LLM search. When a user asks ChatGPT, Claude or Perplexity, "Who is the leading expert on topic X?", the models draw on entity-graph signals, sameAs coherence and corroboration density. Authors with a strong structured presence get cited consistently; authors without remain either invisible or hallucinated. This is not an abstract problem: in advisory practice, we regularly see cases where LLMs misattribute authors — assigning one author the articles of another, or misrepresenting their role. The cause is almost always an inconsistent or missing author entity structure.
For content brands, that has strategic consequences. Instead of treating author bylines as a cosmetic feature, authors must be maintained as long-term entity assets. A brand with three strong author entities in its field accumulates E-E-A-T signals that isolated anonymous articles will never reach. The investment in author visibility pays off three times over: Google rankings, AIO citations and LLM citations — all draw from the same entity substrate.
Common mistakes when building the author entity
Six patterns that recur stubbornly in advisory practice and act as structural limits.
First: ghostwriter collisions. When the byline author is not the actual author, the Experience signal breaks. Google and LLMs detect inconsistencies between declared credentials and the actual publication pattern. Transparency (ghostwriter disclosures, co-author structure) beats fiction.
Second: author page without @id anchor. A bio page at /team/john-doe/ without an explicit schema @id anchor works for humans, not for machines. The @id anchor with a #person fragment is the actual node Article schemas must reference.
Third: weak sameAs cluster. Three third-party profiles are not enough; below ten sameAs references, entity resolution stays weak. Moreover, inconsistent third-party profiles (an outdated role on LinkedIn, for example) hurt more than missing additional profiles do.
Fourth: credential chain without references. EducationalOccupationalCredential objects without a recognizedBy or verifiable URL reference are worthless signals. Every credential needs an anchor back to the issuing institution.
Fifth: Wikidata item without maintenance. One-time Wikidata items left unmaintained for years drift through third-party edits and become a liability rather than an asset through outdated facts. Quarterly maintenance is mandatory.
Sixth: missing organizational context. An author entity without a worksFor reference to an Organization entity is inconsistent. Google expects an anchoring to a publisher context with its own E-E-A-T signals.
Measurement: quantifying the strength of an author entity
Four indicators with concrete tests. (a) Google Knowledge Graph Search API: the author is returned as a node with its own @id and confidence score; if not, entity resolution is missing. (b) LLM resolution test: a prompt matrix of 20 biographical questions across ChatGPT, Claude and Gemini; are the answers consistent, correct and specific? (c) sameAs coherence audit: an automated crawl across every declared third-party profile, comparing attributes against the master declaration; deviations are reported as breaks. (d) Citation rate in the author's own field: 200–500 field-specific prompts in a multi-model tracker, measuring author citations over time.
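Test (a) can be sketched against the Knowledge Graph Search API. The request itself is left to the caller so the sketch stays offline; the API key, author name and response values are placeholders:

```python
from urllib.parse import urlencode

def kg_search_url(name: str, api_key: str) -> str:
    """Build the entities:search request URL for a Person query."""
    params = {"query": name, "types": "Person", "limit": 1, "key": api_key}
    return "https://kgsearch.googleapis.com/v1/entities:search?" + urlencode(params)

def resolve(response: dict):
    """Extract (@id, resultScore) from a KG API response, or None if absent."""
    items = response.get("itemListElement", [])
    if not items:
        return None  # the author is not resolved as an entity
    result = items[0]
    return result["result"]["@id"], result.get("resultScore")

# Hypothetical response in the shape the API returns.
sample = {"itemListElement": [{"result": {"@id": "kg:/m/0abc12"}, "resultScore": 31.4}]}
print(resolve(sample))
```

A `None` result from `resolve` is the failure case described above: the author exists as text on the web but not as a node in the graph.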
Together, these four metrics form the author-entity strength score, which we track by default as a sub-metric for subject-matter authors inside LLM Citation Monitoring.
Conclusion: authors are assets, not attributes
The structural shift of the last three years is not that E-E-A-T became more important — it always was. The shift is that E-E-A-T has become machine-readably testable. Google reads author entities from schema, LLMs read them from structured data and entity graphs, AIO draws from both. Brands that treat authors as attributes of articles build content infrastructure on a fragile foundation. Brands that understand authors as standalone, long-term entity assets accumulate E-E-A-T signals that radiate across the entire content output and compound across classical search and LLM citations alike.
The operational consequence: author-entity build-out belongs as a dedicated workstream in every serious SEO program. Not as an SEO detail, but as a strategic investment with a three-year return. The effort per author is clearly quantifiable: 120 days of structured work plus quarterly maintenance. The return shows up in higher YMYL rankings, a higher AIO citation rate, consistent LLM citation, and a brand whose subject-matter expertise is machine-readably verifiable.