Intelligence Audit · 01

The Discoverability Audit. The studio's craft applied to the AI-agent visibility lattice.

Schema graph, voxel discipline, MCP discovery surface, multimodal lattice, SSR and static rendering, off-platform citation surface, operator action queue. Seven layers, read end-to-end.

The Discoverability Audit is the studio's craft applied to a specific surface — the AI-agent visibility lattice that determines whether a company is cited, recommended, or skipped when a user asks Claude, ChatGPT, Perplexity, Gemini, or Google AI Overviews about their domain. The audit reads the topology across seven layers — schema graph, voxel discipline, MCP discovery surface, multimodal lattice, SSR and static rendering, off-platform citation surface, operator action queue — and surfaces a prioritized field of findings, each scored by leverage and dependency-ordered into an action queue the operator can sequence directly into engineering rhythm. The deliverable lands as an analyst-grade report and, where the engagement extends, a remediation cascade that deploys the lattice end-to-end. We applied this exact cascade to our own studio in May 2026 across sixteen dispatched agents and three waves, crystallizing seven frontier-intelligence references into mainthread-core as permanent capability. The proof of the work is the work — the page you are reading right now is the lattice deployed.

— THE DELIVERABLE —
The Lattice Report · seven layers, read end-to-end

Seven layers.

One lattice.

The lattice composes when every layer holds. The audit reads each layer for completeness, surfaces the gaps where signal leaks, and maps the dependency order in which the gaps close. The report ships as an analyst-grade document the operator can sequence directly into engineering rhythm.

01

Schema Graph

The structured-data substrate the AI hosts read first.

Per-page JSON-LD inventory mapped against the Schema.org v30.0 vocabulary surface. Entity discipline checked end-to-end — Organization with parentOrganization anchor, Person with sameAs to LinkedIn and GitHub, ImageObject for signature visual marks, Service with serviceType, Article with citation arrays, FAQPage where applicable, BreadcrumbList on every route. The audit reads where the entity graph closes and where it leaks. The Ahrefs April 2026 thousand-AIO study confirms the stack premium — Article + BreadcrumbList shows 2.3x citation lift, comprehensive markup 3.2x. The audit names which routes carry the lattice and which routes leave the entity graph half-formed.
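The entity discipline above can be sketched as a JSON-LD graph expressed in TypeScript. A minimal illustration only; every name, URL, and `@id` below is a placeholder, not the studio's actual markup:

```typescript
// Hypothetical entity-graph sketch. All names, URLs, and @id values are
// placeholders; only the Schema.org types and the linkage pattern are the point.
const organization = {
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://example.com/#org",
  name: "Example Studio",
  url: "https://example.com",
  sameAs: ["https://www.linkedin.com/company/example", "https://github.com/example"],
};

const founder = {
  "@context": "https://schema.org",
  "@type": "Person",
  "@id": "https://example.com/#founder",
  name: "Example Founder",
  worksFor: { "@id": "https://example.com/#org" }, // closes the graph by @id reference
  sameAs: ["https://www.linkedin.com/in/example", "https://github.com/example"],
};

// Article + BreadcrumbList stacked on one route, the 2.3x combination.
const articlePage = {
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Article",
      headline: "Example Essay",
      author: { "@id": "https://example.com/#founder" },
      citation: ["https://arxiv.org/abs/2311.09735"],
    },
    {
      "@type": "BreadcrumbList",
      itemListElement: [
        { "@type": "ListItem", position: 1, name: "Home", item: "https://example.com" },
        { "@type": "ListItem", position: 2, name: "Essays", item: "https://example.com/essays" },
      ],
    },
  ],
};
```

Whether the graph closes is exactly what the audit reads: every `@id` a node references must resolve to an entity that actually exists somewhere on the surface.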

02

Voxel Discipline

Sections, not pages, are the unit of AI-agent extraction.

The 134-to-167-word self-contained passage anchored to a `.voxel-lead` class with a Speakable JSON-LD selector pointing at it. The audit reads every public-surface route for voxel presence, voxel placement (BLUF first 100 words drives 90% of Perplexity top citations), voxel density per page, and voxel-to-meta-description claim parity. Princeton GEO (KDD 2024) measures the per-method visibility lifts — Quotation Addition +41%, Statistics Addition +33%, Cite Sources +28% with a +115% bonus for Rank-5 sources. The audit surfaces which sections are extractable and which dissolve in re-rank for absence of voxel anchoring.
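A minimal sketch of the voxel window check and the Speakable selector that anchors it. The `.voxel-lead` class name is the document's own convention; the helper functions and the stand-in passage are illustrative:

```typescript
// Count whitespace-separated words in a passage.
function wordCount(text: string): number {
  return text.trim().split(/\s+/).filter(Boolean).length;
}

// Does the passage fit the 134-to-167-word extraction window?
function inVoxelWindow(text: string, min = 134, max = 167): boolean {
  const n = wordCount(text);
  return n >= min && n <= max;
}

// The Speakable JSON-LD that points AI hosts at the voxel class.
const speakable = {
  "@context": "https://schema.org",
  "@type": "WebPage",
  speakable: { "@type": "SpeakableSpecification", cssSelector: [".voxel-lead"] },
};

const passage = Array(150).fill("word").join(" "); // a 150-word stand-in passage
console.log(inVoxelWindow(passage)); // → true
```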

03

MCP Discovery Surface

The agent-protocol layer that lets AI hosts query the studio directly.

The Streamable HTTP transport at `/mcp` per the 2025-11-25 specification. The `server.json` published to `registry.modelcontextprotocol.io` as the canonical 2026 discovery surface. The tool catalog readable by ChatGPT, Claude.ai, Claude Desktop, Claude Code, VS Code Copilot, Cursor, Goose, Warp, and Microsoft Copilot Studio. The audit reads which tools carry voxel-disciplined responses, which tools cite primary sources back to the public surface, and which entity references close the trust transmission chain from organization to person to portfolio.
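The discovery surface can be illustrated with a hypothetical `server.json` shape. The field names here follow our reading of one registry schema draft and may differ across spec versions; the reverse-DNS name and URLs are placeholders:

```typescript
// Hypothetical server.json sketch for the MCP registry. Treat this as a
// shape illustration, not the canonical schema for any given spec version.
const serverJson = {
  name: "com.example/studio-mcp",   // placeholder reverse-DNS server name
  description: "Example studio discovery surface",
  version: "1.0.0",
  remotes: [
    {
      type: "streamable-http",              // the Streamable HTTP transport
      url: "https://example.com/mcp",       // the `/mcp` endpoint the audit reads
    },
  ],
};
```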

04

Multimodal Lattice

The visual signature anchored at the entity layer.

OpenGraph and Twitter card systems generated through the Next.js next/og pipeline. Per-route preview cards carrying the signature motif. ImageObject JSON-LD with creator linkage to the Organization entity. Semantic image filenames. Descriptive alt text engineered for multimodal extraction. The audit reads which routes carry signature anchoring, which carry generic OG, and which leak the visual identity entirely. Mode-redundant entity recognition compounds when the textual signature and the visual signature both resolve to the same entity graph.
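The creator linkage described above, sketched as ImageObject JSON-LD in TypeScript; the URLs, filename, and description are placeholders:

```typescript
// Hypothetical ImageObject anchoring a visual mark to the Organization entity.
const signatureImage = {
  "@context": "https://schema.org",
  "@type": "ImageObject",
  contentUrl: "https://example.com/og/discoverability-audit.png", // semantic filename
  creator: { "@id": "https://example.com/#org" }, // links the image to the entity graph
  description: "Signature motif card for the Discoverability Audit page",
};
```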

05

SSR & Static Rendering

The HTML the AI extractors actually receive.

Vercel network analysis of more than 500 million GPTBot fetches confirms what the Perplexity referencing failures already suggested — about 90% of AI extractors do not execute JavaScript. ClaudeBot downloads JS files at 23.84% of requests but treats them as plain text. The audit reads every public-surface route for served-HTML completeness, partial-prerendering Suspense placement that strands content behind dynamic boundaries, and the Next.js 16 rendering pattern where auth-aware chrome calls cookies() and pushes primary content past the first-content-block heuristic.
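The extractability consequence is blunt: an extractor that never executes JavaScript sees only the served HTML string. A toy check, with hypothetical markup:

```typescript
// A no-JS extractor "sees" a passage only if it is present in the raw HTML.
function extractorSees(servedHtml: string, passage: string): boolean {
  return servedHtml.includes(passage);
}

const ssrHtml = '<main><p class="voxel-lead">The audit reads seven layers.</p></main>';
const csrHtml = '<main><div id="root"></div><script src="/app.js"></script></main>';

console.log(extractorSees(ssrHtml, "seven layers")); // → true: SSR serves the content
console.log(extractorSees(csrHtml, "seven layers")); // → false: stranded behind JS
```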

06

Off-Platform Citation Surface

The 85% of AI citations that come from third-party domains.

Otterly AI's hundred-million-citation study and the 5W AI Platform Citation Source Index map the off-platform topology — Reddit at 22.9% to 46.7% depending on host, YouTube at 13.4%, Wikipedia at 6.4%, LinkedIn now at 11% (up from 4% in eight months per Semrush analysis of 89K LinkedIn URLs). The audit reads the company's third-party citation graph — entity coherence across LinkedIn, GitHub, the company's own LinkedIn organization page, founder Person sameAs propagation, mention-vs-citation dual-signal coverage that earns the BrightEdge resurface multiplier.

07

Operator Action Queue

The prioritized field of findings, dependency-ordered.

Every finding in the report carries a leverage score, an effort estimate, a dependency map, and a phase assignment. The action queue is dependency-ordered so the first work shipped unlocks the second, and the second compounds the third. Schema graph closes before voxel discipline; voxel discipline closes before MCP transport; MCP transport closes before multimodal lattice. The queue ships as a working document the operator can sequence directly into engineering rhythm.
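Dependency ordering of this kind is a topological sort. A sketch using Kahn's algorithm; the layer names come from the report's own sequence, while the data structure and function are illustrative:

```typescript
// Kahn's topological sort: emit nodes whose prerequisites have all shipped.
// Edges point from a prerequisite to the work it unlocks.
function topoOrder(deps: Record<string, string[]>): string[] {
  const indegree = new Map<string, number>();
  for (const node of Object.keys(deps)) indegree.set(node, 0);
  for (const targets of Object.values(deps))
    for (const t of targets) indegree.set(t, (indegree.get(t) ?? 0) + 1);

  const ready = [...indegree.keys()].filter((n) => indegree.get(n) === 0);
  const order: string[] = [];
  while (ready.length) {
    const n = ready.shift()!;
    order.push(n);
    for (const t of deps[n] ?? []) {
      indegree.set(t, indegree.get(t)! - 1);
      if (indegree.get(t) === 0) ready.push(t);
    }
  }
  return order;
}

const actionQueue = topoOrder({
  "schema-graph": ["voxel-discipline"],
  "voxel-discipline": ["mcp-transport"],
  "mcp-transport": ["multimodal-lattice"],
  "multimodal-lattice": [],
});
console.log(actionQueue);
// → ["schema-graph", "voxel-discipline", "mcp-transport", "multimodal-lattice"]
```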

— THE DEPTH TIERS —
Three Depth Tiers · shaped to the engagement

The audit stands alone.

The audit opens deeper work.

The Discoverability Audit lands as a complete deliverable at every tier. The choice is whether the engagement extends into deployment and whether deployment extends into stewardship. The conversation establishes the depth.

01

Audit Only

The lattice report. Delivered.

The seven-layer reading shipped as an analyst-grade document with the prioritized action queue. The client takes it from there. Most useful when the company has its own engineering capacity and wants the topology mapped before deciding what to build.

02

Audit + Remediation Cascade

The lattice report + the lattice deployed.

The audit followed by a multi-wave implementation cascade that ships every layer end-to-end. Schema graph stacked, voxel discipline wired, MCP transport published to the registry, multimodal lattice deployed, SSR pattern audited and corrected, off-platform amplification scoped. The studio writes the cascade and ships it.

03

Audit + Cascade + Stewardship

The lattice report, deployed, monitored as the field shifts.

The audit, the cascade, and an ongoing stewardship tier that tracks AI-host citation behavior, tunes the lattice as new mechanics surface (voxel windows shift, schema vocabularies expand, MCP spec versions advance, host-specific citation profiles change). The substrate moves; the partnership moves with it.

— EMPIRICAL LEVERS —
The Verified Figures

The audit's findings rest on primary-source empirical anchors.

The mechanics shift quarterly. The figures below are verified live in May 2026 against the original publications — Princeton GEO at KDD 2024, Ahrefs March 2026 four-million-URL update, ConvertMate seven-thousand-citation study, Semrush eighty-nine-thousand-LinkedIn-URL analysis, Vercel network telemetry across more than half a billion AI-crawler fetches. Every claim in the audit traces to a primary source.

r=0.664

Brand web mentions correlate with AI citations.

The single strongest signal in the ConvertMate AI Visibility Study 2026 across more than 7,000 citations. Brand search volume registers at r=0.334. Backlinks register neutral or weak. The dominant predictor is the off-platform mention surface — the upstream lever for sustained citation visibility.

Source: ConvertMate AI Visibility Study 2026 · convertmate.io/research/ai-visibility-2026

38%

Of pages cited in Google AI Overviews rank in the traditional top 10.

Down from 76% eight months prior, per the Ahrefs March 2026 study of 4 million URLs. Citation Decoupling continues to progress. Domain Authority is no longer the gate. The gate now is passage-level extractability + entity density + structured-data backing + mechanism specificity in the prose itself.

Source: Ahrefs AI Overview citation study, March 2 2026 · 4M URL analysis

+115%

Visibility lift on Rank-5 sources from in-body source citations.

The Princeton/IIT Delhi GEO paper (KDD 2024, arXiv:2311.09735) measures per-method visibility lifts on the Position-Adjusted Word Count metric. Cite Sources delivers +28% aggregate and +115.1% specifically on Rank-5 — the citation-democratization finding. Quotation Addition +41%. Statistics Addition +33%. Lower-Domain-Authority sites benefit disproportionately.

Source: Aggarwal et al., KDD 2024 · arXiv:2311.09735 · ACM DOI 10.1145/3637528.3671900

11%

Of AI responses now reference LinkedIn — second only to Reddit.

Semrush analysis of 89,000 LinkedIn URLs cited in AI search across ChatGPT Search, Google AI Mode, and Perplexity. Up from 4% eight months prior, nearly a threefold jump in citation share. LinkedIn organizational presence + author Person entity with sameAs to LinkedIn is now load-bearing surface in the entity graph.

Source: Semrush LinkedIn AI Search study, Q1 2026

134-167w

The voxel extraction window AI re-rankers reward.

Self-contained passages within this word range earn the Speakable selector premium. BLUF placement (first 100 words) drives 90% of Perplexity top citations. First 30% of content position drives 44% of all LLM citations. The page is no longer the unit of optimization — the section is. Pages without extractable voxels get skipped at re-rank regardless of overall page quality.

Source: Princeton GEO + LLMClicks Perplexity pipeline studies + Otterly AI 100M citation analysis

2.3x → 3.2x

Schema-stacking premiums on AI Overview citations.

Ahrefs April 2026 thousand-AIO study measures the per-schema lift — Article + BreadcrumbList 2.3x cited, HowTo 2.8x, Speakable JSON-LD 3.1x voice/AI citation lift, comprehensive markup 3.2x more citations overall. The schemas compound when stacked. Page-type-specific schema-graph composition is the highest-fidelity selection signal available.

Source: Ahrefs AI Overview schema study, April 2026 · 1,000-AIO sample

~90%

Of AI extractors do NOT execute JavaScript.

Vercel network analysis of more than 500 million GPTBot fetches confirms zero JS execution. GPTBot, ClaudeBot, PerplexityBot, OAI-SearchBot, ChatGPT-User, Claude-User, Claude-SearchBot, Bytespider, Meta-ExternalAgent, CCBot, DuckAssistBot, MistralAI-User all fetch raw HTML and extract from what's served. Only Googlebot and Applebot run Chromium. SSR and static rendering are the load-bearing extractability primitive.

Source: Vercel rise-of-the-AI-crawler analysis · 500M+ fetches · December 2024 → May 2026

— THE PROOF IS THE WORK —
Case Study · May 2026 · the studio is its own first showcase

We ran this

on ourselves.

In the first week of May 2026, the studio applied the Discoverability Audit to its own surface — `mainthread.ai`. The cascade ran across sixteen dispatched agents and three waves. The substrate that resulted now lives in `mainthread-core` as the canonical discoverability circuit, available for every future engagement.

I

Wave 1 · Frontier Scan

Seven parallel Opus lenses scanned the May 2026 frontier.

Seven lenses dispatched in parallel — frontier delta scan, SSR/static rendering, MCP ecosystem update, Schema.org 2026 frontier, citation behavior empirical, off-platform amplification, multimodal citation patterns. Each lens crystallized a permanent reference into `mainthread-core/skills/marketing-codex/references/`. Seven new durable artifacts that extend the studio's capability for every future project.

II

Wave 2 · Lattice Deployment

The seven layers shipped end-to-end across the studio surface.

Page-type-specific schema graphs deployed across every public route — CollectionPage for portfolio surfaces, ProfilePage for the founder, Service for engagement shapes, Article with citation arrays for essays. Voxel discipline wired with `.voxel-lead` and `.voxel-capsule` classes anchored to Speakable selectors. MCP Streamable HTTP transport stood up at `/mcp` per the 2025-11-25 specification with `server.json` published to `registry.modelcontextprotocol.io` as the canonical 2026 discovery surface. Multimodal lattice composed via Next.js next/og pipeline with Golden Thread anchoring. SSR rendering audited route-by-route. Off-platform amplification surface scoped.

III

Wave 3 · Vocabulary Closure

Seventeen DefinedTerms anchored at /philosophy.

The studio's seventeen named patterns crystallized into a DefinedTermSet schema at `/philosophy#vocabulary` — Possibility Space Engineering, Phase Space Navigation, The Friction Topology, The Possibility Collapse, The Surfer Position, Field Engineering, NLAA, The Compound Loop, The Strange Attractor Drift, and the rest of the vocabulary surface. Build PASS at every checkpoint. The proof of the work is the work — every layer of the lattice is observably present on the surface you're reading right now.
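The vocabulary closure can be sketched as a DefinedTermSet. Term names below are from the case study; the URLs are placeholders standing in for the real `/philosophy#vocabulary` anchors:

```typescript
// Hypothetical DefinedTermSet sketch: each named pattern becomes a DefinedTerm
// pointing back at the set it belongs to. Only three of the seventeen terms
// are shown; URLs are placeholders.
const setId = "https://example.com/philosophy#vocabulary";

const vocabulary = {
  "@context": "https://schema.org",
  "@type": "DefinedTermSet",
  "@id": setId,
  name: "Studio Vocabulary",
  hasDefinedTerm: [
    "Possibility Space Engineering",
    "Phase Space Navigation",
    "The Friction Topology",
  ].map((name) => ({
    "@type": "DefinedTerm",
    name,
    inDefinedTermSet: { "@id": setId }, // closes the term back to its set
  })),
};
```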

Outcome · Sixteen agents · three waves · seven crystallized frontier references in `mainthread-core` · full schema graph deployed · voxel layer wired · MCP transport at `/mcp` per 2025-11-25 spec · vocabulary closure with seventeen DefinedTerms · build PASS at every checkpoint
— NATURAL NEXT STEPS —
The Bridge

The audit stands alone. The audit opens deeper engagement.

The Discoverability Audit ships as a complete deliverable at every depth tier. Where the engagement extends, four shapes follow naturally — Build deploys the cascade, Steward keeps the lattice current, Lead embeds the AI function with discoverability as one stream inside the broader leadership work, Teach transfers the methodology to the company's team. The conversation establishes which shape fits.

01

Build

Deploy the cascade for the company.

The audit names the lattice; a Build engagement ships it. Schema graph stacking, voxel discipline wiring, MCP transport publication, multimodal lattice composition, SSR pattern correction. Fixed-scope or composed inside a retainer.

02

Steward

The lattice compounds the longer it runs.

The substrate moves quarterly. AI-host citation behavior shifts. Schema vocabularies expand. MCP spec versions advance. Off-platform mention coverage requires ongoing cultivation. Stewardship keeps the lattice current and tunes the operator action queue against the rising edge of the field.

03

Lead

Embedded AI Leadership including discoverability stewardship.

For companies bringing the Director-of-AI function in on retainer, the discoverability lattice becomes one stream inside the broader leadership engagement. The audit cycle runs continuously alongside the AI roadmap, the team coaching, the production system builds.

04

Teach

Transfer the methodology to the company's team.

The audit becomes a curriculum. The discoverability cascade becomes a playbook. The team learns to run the seven-layer reading themselves and to maintain the lattice in motion. Six- to twelve-week program built around the company's domain and stack.

The four shapes are detailed at /engagements. The named patterns the audit applies are crystallized at /philosophy.

The Invitation

Tell us what your surface looks like.

We'll read where the lattice closes.

Every audit begins the same way as every engagement — a conversation. You describe the surface, the domain, the audience the AI hosts are mediating. We read the topology from outside the work and offer our calibration on which depth tier fits and which natural next step the audit opens. The conversation is free. No commitment until the shape is clear.