56 tools across 8 categories — what I actually use, with versions, purpose, frequency of use and reason for selection. No affiliate links. No sponsored mentions. No marketing claims. The list is revised quarterly; outdated tools come off.
When I boot up in the morning, this set opens automatically. Without any one of them, my work would be substantially impaired. The order is deliberate — by frequency and strategic weight.
The authoritative data source for impressions, clicks, rankings and indexing. API export into BigQuery.
The standard crawler for technical audits. Local, scriptable, custom extraction.
The backlink index with unmatched depth. Essential for competitor delta analysis.
The data warehouse for everything: GSC, GA4, crawler logs, LLM visibility runs.
An in-house Python pipeline across GPT/Claude/Gemini/Perplexity — monthly cross-model audits.
The foundation of every audit. Locally executable tools for raw-data integrity. No cloud-only solutions where exact traceability matters.
The standard crawler for technical audits. Configurable to the smallest detail: JavaScript rendering, JSON-LD extraction, custom search, custom extraction via XPath/CSS, API integration with GSC & GA4.
A complement to Screaming Frog, especially for JavaScript-heavy sites. Rendering analysis and the hint system surface blind spots.
For enterprise crawls > 10M URLs, where local tools hit limits. Scheduled crawls, cross-domain reports, change monitoring.
Render tree, network waterfall, Lighthouse audits, performance profiling. Irreplaceable for debugging sessions.
Precise performance analysis across different regions, devices and throttling profiles. Filmstrip visualisation for LCP debugging.
For automated CWV monitoring. Aggregated lab + field data via CrUX, scriptable across whole site sets.
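A scripted CrUX check can be as small as one request per origin. The sketch below only builds the request for the public CrUX API `queryRecord` endpoint — authentication key, origin and the actual send are left to your scheduler; `API_KEY` is a placeholder, not a real credential.

```python
import json
import urllib.request

# Public CrUX API endpoint; API_KEY is a placeholder you must supply.
CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"
API_KEY = "YOUR_API_KEY"

def build_crux_request(origin: str, form_factor: str = "PHONE") -> urllib.request.Request:
    """Build a CrUX queryRecord request for one origin (no network call here)."""
    body = json.dumps({
        "origin": origin,
        "formFactor": form_factor,
        "metrics": [
            "largest_contentful_paint",
            "interaction_to_next_paint",
            "cumulative_layout_shift",
        ],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{CRUX_ENDPOINT}?key={API_KEY}",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_crux_request("https://example.com")
```

Loop `build_crux_request` over a site set and you have the "scriptable across whole site sets" part; the field data in the response is the CrUX aggregate, not a lab run.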
Where standard tools stop and bespoke setups begin. Cross-model testing in 2026 is mandatory, not optional.
An in-house Python stack (OpenAI, Anthropic, Google Gemini, Perplexity Sonar APIs). Automated brand-query evaluation across 500–2,000 curated prompts, five runs per model. Output to BigQuery with an NER + sentiment pipeline.
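The pipeline itself is in-house, but the aggregation step is simple enough to sketch. Assume each model's answers arrive as plain text, one string per run; the brand match here is a naive case-insensitive substring check — the production pipeline uses NER for exactly this reason. The sample data is invented for illustration.

```python
def brand_mention_rate(runs: dict[str, list[str]], brand: str) -> dict[str, float]:
    """Share of runs per model whose answer mentions the brand.

    `runs` maps model name -> list of answer texts (e.g. five runs per model).
    Matching is a naive case-insensitive substring check; an NER pass would
    be needed to avoid false positives on ambiguous brand names.
    """
    rates = {}
    for model, answers in runs.items():
        hits = sum(1 for a in answers if brand.lower() in a.lower())
        rates[model] = hits / len(answers) if answers else 0.0
    return rates

# Hypothetical five-run sample for two models:
sample = {
    "gpt": ["Acme is a leading vendor", "Try Acme", "No clear leader",
            "Acme again", "acme tools"],
    "claude": ["Several vendors exist", "Acme stands out",
               "Unclear", "Unclear", "Unclear"],
}
rates = brand_mention_rate(sample, "Acme")
# rates["gpt"] == 0.8, rates["claude"] == 0.2
```

Five runs per model is the variance floor: a single run tells you what one sample said, not what the model tends to say.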
A commercial platform for AI-visibility tracking. Brand-mention rate and citation share across models. For clients who do not want to run their own setup.
Supplementary: AI search visibility with a focus on Perplexity and ChatGPT search. Strong keyword-cluster views.
For enterprise clients with complex cross-market setups. Agent-based monitoring of brand mentions across all major LLMs.
For content analysis, QUEST checks and competitor paraphrase tests. Project-scoped chats with persistent context.
For structured content transformation, longer context windows (200k+) and stronger semantic coherence than GPT-4o on specialised topics.
Direct API access to Perplexity answers with citation metadata. For automated citation-rate measurement.
Open-source NLP for named-entity recognition in mention audits. Locally executable, GDPR-compliant, fine-tunable on industry vocabulary.
Rankings, backlinks, content gaps, visibility indexes. What underpins classical SEO reporting — and serves as a baseline for the GEO layer.
Primarily for backlink profiles, authority signal analysis and competitor delta. Index depth remains unmatched in 2026. API integration into dashboards.
Visibility index for DACH markets. Particularly valuable for historical data going back to 2008 and German competitor analysis.
A complement for keyword research, SERP-feature analysis and content gap work in English-language markets. Strong in US/UK, weaker in smaller-language markets.
The only authoritative source for impressions, CTR, rankings and indexing. API export into BigQuery for cross-dimensional analysis beyond the 16-month limit.
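For the API route, the request body for the Search Console `searchanalytics.query` endpoint looks roughly like this. Field names follow the public API; authentication and the HTTP call itself (e.g. via google-api-python-client) are omitted, and the three-day lag is an approximation of GSC's data delay.

```python
from datetime import date, timedelta

def gsc_query_body(days: int = 28, row_limit: int = 25000) -> dict:
    """Request body for the Search Console searchanalytics.query endpoint.

    Auth and the actual call are left out; this only shapes the payload.
    """
    end = date.today() - timedelta(days=3)   # GSC data lags roughly 2-3 days
    start = end - timedelta(days=days)
    return {
        "startDate": start.isoformat(),
        "endDate": end.isoformat(),
        "dimensions": ["date", "query", "page"],
        "rowLimit": row_limit,               # paginate with startRow beyond this
        "startRow": 0,
    }
```

Landing each day's pull in a date-partitioned BigQuery table is what gets you past the 16-month window: the interface forgets, your warehouse does not.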
Do not underestimate this one: Bing feeds ChatGPT Search. Index status, keyword performance and URL inspection alongside GSC.
Daily rank tracking with SERP-feature differentiation (AIO? PAA? Shopping?). Alert-based rather than report-based — that is what makes it valuable.
White-label reporting for clients, a clean API, SERP screenshots as evidence. A complement to Nightwatch.
A content editor with semantic-density scoring and competitor overlay. Good for briefings, not for blind optimisation.
Without clean attribution, there is no valid claim. The standard: keep the raw data; do not rely on aggregated tool reports.
Not the interface — the BigQuery raw export. Indispensable for serious attribution work. Sampling and tool-aggregation errors go away.
A GDPR-compliant alternative for DACH engagements with strict data-protection requirements. Full data sovereignty.
The data warehouse for everything: GSC, GA4, log files, AI-visibility results, crawler data. SQL-driven analysis, ML models on top.
The data-transformation layer between BigQuery and dashboards. Versioned SQL models, tests, documentation.
For client dashboards: GSC, GA4, BigQuery, Ahrefs and Sistrix all connected. Update frequency: daily. Weekly and monthly reports automated.
Ad-hoc analyses, custom attribution models, cohort studies. pandas, NumPy, scikit-learn, OpenAI SDK. Notebook-based reproducibility.
For local analysis of medium-sized log files (1–100 GB) without BigQuery overhead. Extremely fast, SQL-native.
For internal analysis dashboards with embedded JS/SQL. When Looker becomes too rigid.
Structured data is the foundation of all GEO work. Without rigorous validation, the entity chain breaks at the first link.
The primary validator for all JSON-LD implementations. Catches structural errors that the Google Rich Results Test misses.
Eligibility check for rich results. Shows which features (FAQ, HowTo, Article, Breadcrumb) can be activated per URL.
For entity consolidation, Knowledge Graph anchoring and cross-language sameAs mapping. SPARQL queries against Wikidata.
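A typical starting point for cross-language sameAs work: pull an entity's labels per language from the public Wikidata Query Service. The sketch below only builds the SPARQL and the GET URL; the Q-ID and language set are examples, and the actual fetch plus result parsing is left out.

```python
from urllib.parse import urlencode

WDQS_ENDPOINT = "https://query.wikidata.org/sparql"

def sameas_label_query(qid: str, languages=("en", "de", "fr")) -> str:
    """SPARQL pulling an entity's labels across languages -- a starting
    point for cross-language sameAs mapping. `qid` is a Wikidata ID."""
    langs = ", ".join(f'"{lang}"' for lang in languages)
    return f"""
    SELECT ?label (LANG(?label) AS ?lang) WHERE {{
      wd:{qid} rdfs:label ?label .
      FILTER(LANG(?label) IN ({langs}))
    }}
    """

def query_url(sparql: str) -> str:
    """GET URL for the Wikidata Query Service, JSON results."""
    return f"{WDQS_ENDPOINT}?{urlencode({'query': sparql, 'format': 'json'})}"

url = query_url(sameas_label_query("Q95"))
```

The per-language labels feed directly into JSON-LD `sameAs` and `alternateName` arrays per market.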
Direct querying of Google's Knowledge Graph per entity. Shows immediately whether a brand is anchored.
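The anchoring check itself is one GET request. Parameter names below follow the public Knowledge Graph Search API (`entities:search`); the key is a placeholder, and a non-empty `itemListElement` in the JSON response is the quick signal that the brand resolves to an entity.

```python
from urllib.parse import urlencode

KG_ENDPOINT = "https://kgsearch.googleapis.com/v1/entities:search"

def kg_search_url(brand: str, api_key: str, limit: int = 3) -> str:
    """URL for a Knowledge Graph Search API lookup of one brand name.
    A non-empty itemListElement in the response means the entity exists."""
    params = {"query": brand, "key": api_key, "limit": limit, "indent": "true"}
    return f"{KG_ENDPOINT}?{urlencode(params)}"
```

Run it monthly per brand and per market language; an entity that silently drops out of the graph is exactly the early signal this category is about.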
Entity-based content optimisation. Automated semantic analysis and internal linking at the entity level.
Structured extraction from web pages with its own knowledge graph. For competitor entity mapping.
Continuous watching, not periodic checking. Early signals are the most valuable — and most often missed.
Server logs aggregated and segmented by bot user agent (Googlebot, GPTBot, ClaudeBot, PerplexityBot). HTTP status-code analysis for 429 dead-zone detection.
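The segmentation step needs nothing beyond the standard library. The sketch assumes combined log format (status code after the quoted request line, user agent as the last quoted field); the bot list mirrors the one above, and the sample matching is deliberately simple.

```python
import re
from collections import Counter

BOT_PATTERNS = {
    "Googlebot": re.compile(r"Googlebot", re.I),
    "GPTBot": re.compile(r"GPTBot", re.I),
    "ClaudeBot": re.compile(r"ClaudeBot", re.I),
    "PerplexityBot": re.compile(r"PerplexityBot", re.I),
}
# Combined log format: ... "GET /path HTTP/1.1" 200 512 "-" "user agent"
LOG_RE = re.compile(r'" (\d{3}) .*"([^"]*)"$')

def bot_status_counts(lines):
    """Count HTTP status codes per bot user agent. A spike of 429s for
    GPTBot means the model's crawler is being rate-limited off your content."""
    counts = {bot: Counter() for bot in BOT_PATTERNS}
    for line in lines:
        m = LOG_RE.search(line)
        if not m:
            continue
        status, ua = m.groups()
        for bot, pat in BOT_PATTERNS.items():
            if pat.search(ua):
                counts[bot][status] += 1
    return counts
```

Dump the counters into BigQuery per day and the 429 dead zones show up as a trend, not an anecdote.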
For client setups with high-volume logs (10M+ entries per day). Live dashboards, alerts on crawler anomalies.
Brand monitoring across press, social, forums and Reddit. Sentiment classification, RDI input, early reputation-drift detection.
A complement to Talkwalker: stronger in print and TV coverage; suited to enterprise engagements with a classical PR component.
Simple brand-mention alerts. Not precise, but hyper-fast — a useful early-warning layer for unknown brand mentions.
Status monitoring of client domains including robots.txt change detection. Alerts on unexpected disallow additions.
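The disallow-diff logic behind such an alert fits in a few lines. A minimal sketch, assuming you already store yesterday's robots.txt snapshot; fetching and alert delivery belong to the monitoring job, and the sample rules are invented.

```python
def new_disallows(old_robots: str, new_robots: str) -> set[str]:
    """Disallow rules present in the new robots.txt but not the old one --
    the change worth alerting on. Pure string comparison; fetching the two
    snapshots is left to the monitoring job."""
    def disallows(text: str) -> set[str]:
        rules = set()
        for line in text.splitlines():
            line = line.split("#", 1)[0].strip()   # drop comments
            if line.lower().startswith("disallow:"):
                path = line.split(":", 1)[1].strip()
                if path:                           # bare "Disallow:" allows all
                    rules.add(path)
        return rules
    return disallows(new_robots) - disallows(old_robots)

old = "User-agent: *\nDisallow: /tmp/\n"
new = "User-agent: *\nDisallow: /tmp/\nDisallow: /private/\n"
# new_disallows(old, new) == {"/private/"}
```

An unexpected `Disallow: /` pushed live on a Friday deploy is the classic incident this catches before Monday's crawl stats do.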
Page change detection for competitor monitoring. Catches content updates, new landing pages and schema changes.
Not glamorous, but critical. Where insights get translated into client knowledge.
Knowledge base, client projects, playbooks, research notes. The single source of truth for all textual artefacts.
For entity maps, topical maps, sitemap diagrams and report layouts. Collaborative visualisation of strategies.
Project & task management for SEO engagements. Scheduled recurring tasks for monthly audits.
For audit walkthroughs and briefing videos. Faster to explain than written reports; clients appreciate it.
Local personal knowledge management. A Zettelkasten for research insights, frameworks and case learnings.
Without automation, SEO does not scale in 2026. Every recurring analysis belongs in a script.
An AI-native IDE for Python pipelines, SQL models and bespoke analyses. Replaces VS Code for ML/NLP workflows.
Agentic coding for complex refactoring, batch operations and SEO audits. The primary terminal setup on macOS.
Versioning for scripts, dashboards and client configurations. Actions for scheduled crawls and reports.
The edge layer for client projects: WAF rules for AI crawlers, cache headers, A/B tests, JSON API deployment.
For landing-page experiments and micro-sites where fast deploys matter. Next.js preview URLs as a test environment.
Browser automation for complex crawls, login-protected pages and SERP screenshots. Python bindings preferred.
The automation layer between tools. Alerts → Slack, crawl completions → Linear tasks, Peec AI data → BigQuery.
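The "alerts → Slack" leg is a single webhook POST. The sketch below only builds the request; the webhook URL is a per-channel placeholder issued in your Slack app config, and sending it is one `urllib.request.urlopen(req)` away.

```python
import json
import urllib.request

def slack_alert_request(webhook_url: str, text: str) -> urllib.request.Request:
    """POST payload for a Slack incoming webhook (no network call here).
    `webhook_url` is the per-channel URL from the Slack app configuration."""
    return urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Placeholder URL -- substitute the real webhook from your Slack workspace:
req = slack_alert_request(
    "https://hooks.slack.com/services/T000/B000/XXXX",
    "Crawl complete: example.com, 0 new 5xx errors",
)
```

The same pattern covers the other legs: small glue functions per destination, scheduled from the same place as the crawls themselves.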
Credential management for 100+ client tool accounts. API keys strictly per team vault.
Last updated: April 2026 · Next review: July 2026 · The stack evolves with the market — new tools are added, obsolete ones removed.