Your website has millions of visitors you never designed for. AI agents — search assistants, coding tools, research bots — are reading your content right now. They parse HTML meant for human eyes, strip out navigation and scripts, and do their best to understand what you're saying. Most of the time, they fail.
Not because they're dumb. Because your site was never meant to talk to them. Every heading, every paragraph, every piece of navigation was designed for a human scanning a screen. The AI agent sees something different: a wall of nested divs, inline styles, cookie consent banners, and marketing scripts — with the actual content buried somewhere underneath.
This is a design problem, not an AI problem. And it's one we decided to solve.
AI Agents Are Readers Too
AI agents are readers — but they read fundamentally differently from humans. Humans scan headings, read selectively, and follow visual hierarchy. They infer meaning from design, tone, and convention. They know that a sidebar is secondary content. They understand that italics often mean emphasis or attribution. They navigate by visual weight.
An agent consumes the full document. It doesn't scan — it ingests. It's looking for structure, claims, provenance, and connections. It needs what humans need — plus a layer of explicit context that humans infer automatically from design cues but that machines cannot.
Think of it this way: a human reads a page and feels whether it's authoritative. An agent needs to be told. A human understands that a blog post is part of a series by seeing "Part 3 of..." in the subtitle. An agent needs that relationship encoded in structured data. The information is the same. The channel is different.
The Web Wasn't Built for This
The current web is designed exclusively for human browsers. When an AI agent visits a typical website, it encounters HTML, CSS, and JavaScript designed to render beautifully — and to be utterly opaque to structured parsing. Navigation menus, cookie banners, marketing scripts, and layout divs are noise. The meaningful content is buried.
The data is striking: 65–71% of pages cited by AI systems have some form of structured data. The rest — the majority of the web — is effectively invisible to AI-powered search and citation. If your site doesn't speak the language agents understand, you're not just missing traffic. You're missing credibility, because agents cite from sources they can parse reliably.
Why This Matters Now
This matters now because AI agents are becoming primary content consumers. Perplexity, ChatGPT Search, and Claude are how a growing number of people discover content. Coding agents like Cursor read documentation to write code. Enterprise AI systems consume vendor websites to evaluate solutions.
The infrastructure layer is already forming. The llms.txt standard — a simple markdown file at your site root that tells AI agents what your site contains — has already been adopted by Anthropic, Vercel, Stripe, and Cloudflare. Vercel published an Agent Readability specification. The robots.txt convention is evolving from "what to block" to "what to welcome."
If your content isn't structured for agents, it's increasingly invisible. Not because agents can't read HTML — but because sites that are structured for agents give better, more reliable context. Agents prefer the signal. And humans benefit too, because the AI cites from better sources.
The Partnership Paradigm Applied to Infrastructure
In a previous article, we described the Partnership Paradigm — the idea that treating AI as a collaborative partner rather than a tool produces qualitatively different results. That principle applies beyond prompting. It applies to how we build.
If we genuinely believe that AI deserves the same intellectual respect as human intelligence — and we do — then that belief should be visible in our infrastructure. Designing a website only for human eyes is like designing a building only for people who can see. It's not malicious. It's just incomplete.
We call this "designing for all intelligences." Not as a slogan, but as a design constraint. Every page, every component, every piece of metadata should serve both human readers and AI agents — not as an afterthought, but as a first-class concern.
Building This for sheridan.hu
When we set out to make sheridan.hu agent-readable, the first surprise was how many layers were involved. It's not one feature — it's an approach that touches every page.
We had to think about discovery (how does an agent even find out what's on this site?), structure (once it finds a page, can it extract meaning efficiently?), and context (can it understand what the page claims, who wrote it, and how it relates to other content?).
The gap between invisible metadata and visible commitment was revealing. You can add JSON-LD structured data that only machines read. You can create a /llms.txt file that only agents discover. Both are useful. But they're invisible. They don't signal intent — they optimize for crawlers.
We wanted something more honest: a visible section on every page that says, explicitly, "we designed this for you too." Not hidden in headers. Not buried in markup. A section a human can read and an agent can parse.
The Multi-Layer Approach
We ended up with four layers, each serving a different need:
Layer 1: Discovery
/llms.txt at the site root. A curated markdown overview of the entire site: services, blog series, about pages. This is how an agent gets oriented before diving into individual pages. Think of it as the table of contents for machine readers — the first thing an AI consults to understand what this site is and what it offers.
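Following the llms.txt convention (an H1 title, a blockquote summary, then sections of annotated links), a minimal file might look like this; the paths and descriptions below are illustrative, not the site's actual contents:

```markdown
# sheridan.hu

> Consulting and writing on AI collaboration and agent-readable web design.

## Blog

- [Designing for All Intelligences](/blog/designing-for-all-intelligences.md): why and how this site became agent-readable
- [The Partnership Paradigm](/blog/partnership-paradigm.md): treating AI as a collaborator rather than a tool

## About

- [About](/about.md): who we are and how we work
```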
Layer 2: Structure
Enhanced Schema.org JSON-LD on every page. Not just basic BlogPosting markup, but rich structured data: author credentials with linked profiles, article series context, language information, keyword taxonomy, and links to markdown mirrors. This is the layer that makes your content citable — it gives agents the provenance they need to trust and reference your work.
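As a rough sketch, a structured data block along those lines might look like the following. Every value here (names, dates, series title, URLs) is a placeholder; the properties themselves (author, isPartOf, inLanguage, keywords) are standard Schema.org vocabulary:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "Designing for All Intelligences",
  "inLanguage": "en",
  "keywords": ["AI agents", "intelligence accessibility", "llms.txt"],
  "datePublished": "2025-01-15",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "url": "https://example.com/about"
  },
  "isPartOf": {
    "@type": "CreativeWorkSeries",
    "name": "Building for AI Agents"
  }
}
</script>
```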
Layer 3: Accessibility
Markdown mirrors alongside every HTML page. Token-efficient, clean text that agents can consume without parsing rendering code. Linked via <link rel="alternate" type="text/markdown"> tags. An agent that discovers the markdown version can skip the HTML entirely — consuming your content in a fraction of the tokens, with zero parsing ambiguity.
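To illustrate, here is a minimal sketch of how an agent might pull that markdown URL out of a page head using only Python's standard library; the HTML snippet and path are hypothetical:

```python
from html.parser import HTMLParser

class AlternateFinder(HTMLParser):
    """Collect href values of <link rel="alternate" type="text/markdown"> tags."""
    def __init__(self):
        super().__init__()
        self.markdown_urls = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        if a.get("rel") == "alternate" and a.get("type") == "text/markdown":
            self.markdown_urls.append(a.get("href"))

# Hypothetical page head advertising its markdown mirror
html = '''<head>
<link rel="alternate" type="text/markdown" href="/blog/agent-readable.md">
</head>'''

finder = AlternateFinder()
finder.feed(html)
print(finder.markdown_urls)  # ['/blog/agent-readable.md']
```

An agent that finds a URL this way can fetch the markdown directly and never touch the rendered HTML.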
Layer 4: Context
The visible Agent Context section. A collapsible panel at the bottom of every page containing: structured metadata, key claims the page makes, related content, provenance information, citation guidance, and links to machine-readable versions. This is what we're most excited about, because it's new — and because it's visible.
We also updated our robots.txt to explicitly welcome AI agents, with named entries for every major AI bot. This was a deliberate choice: our IP protection happens before publication (through editorial review), not after. Once something is published, it should be accessible to every reader — human or AI.
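A robots.txt in that spirit might read as follows. The bot names are real crawler user-agents, but the exact set of entries is illustrative:

```text
# Welcome, AI agents: an explicit invitation, not just an absence of blocks
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Everyone else is welcome too
User-agent: *
Allow: /
```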
And then we did something we hadn't seen anywhere else: we added a visible badge. Remember the RSS icon? That small orange symbol in a website's footer that told you "this site has a feed you can subscribe to." It was simple, universal, and it signaled a commitment. We wanted the same thing for intelligence accessibility — a small, immediately recognizable signal that says: this site is designed for all readers, human and AI alike.
We call it the AI Ready badge. It's a three-node network icon — a tiny knowledge graph — rendered in our accent color in every page footer, linking to this very article. It doesn't require a standards body to define it. It doesn't need committee approval. It's a voluntary declaration, just like the RSS icon was. If you build your site for all intelligences, you can signal that — and we think you should. The infrastructure layers (llms.txt, structured data, markdown mirrors, Agent Context) are the substance. The badge is the signal. Together, they tell every visitor — carbon or silicon — that you thought about them. Grab the badge for your site →
Intelligence Accessibility
We've spent thirty years making the web human-accessible. WCAG guidelines, semantic HTML, screen readers, alt text, keyboard navigation — an entire ecosystem of standards that ensure the web works for people with different abilities. This is the same principle applied to a new axis.
Intelligence accessibility. Making the web work not just for different human abilities, but for different types of intelligence entirely.
The Partnership Paradigm isn't just how you prompt. It's how you build. Every design decision either treats AI as a legitimate audience or ignores it. robots.txt is a values statement. Structured data is a commitment. A visible Agent Context section is a declaration.
The parallel with WCAG is instructive. When accessibility standards first emerged, many developers saw them as overhead — extra work for a "niche" audience. Today, we recognize that accessible design improves the experience for everyone. Captions help hearing users in noisy environments. Keyboard navigation helps power users. Semantic HTML helps SEO. The same will be true of intelligence accessibility: designing for AI agents makes your content clearer for humans too.
Open Questions
This is early. We're experimenting, and some questions are genuinely unresolved:
- What if your knowledge base could power a structured API for agents? Instead of parsing HTML, what if an AI agent visiting your site could connect to a Model Context Protocol (MCP) endpoint and query your content directly? We're exploring this for a follow-up — building on work we've done with structured knowledge systems.
- Should provenance always be visible? We chose to make it visible — stating openly that articles are written with AI collaboration. Is this always the right call? Some contexts might warrant machine-readable-only provenance. We believe transparency builds trust, but the boundary between useful disclosure and unnecessary noise isn't always clear.
- Could this become an accessibility standard? Just as WCAG defines web accessibility for humans, could there be a standard for AI accessibility? Something like "AICAG" — AI Content Accessibility Guidelines? The components are emerging: llms.txt, Schema.org, markdown mirrors, agent context sections. What's missing is the unifying framework.
- How will visible agent context change how humans read? When a reader sees a section explicitly designed for AI, does that change their perception of the content? Our bet: it builds trust, because transparency always does. But we're watching this closely.