The short answer. We just shipped a free AEO audit at fifteenthmeridian.com/tools/aeo-audit. It runs 12 real checks against any URL, scores each one on a tiered rubric (not pass or fail), and tells you exactly what's missing for AI engines to cite your site. No email gate, no signup, no demo data. This post explains what each check is, why it matters, and how to fix the common gaps.
Every "free SEO audit" tool I've used in the last decade has the same problem. It's either a 90-second pitch for a paid tier, or it's a binary checklist that gives you 100/100 the second you have a meta description and a sitemap. Neither tells you what AI engines like ChatGPT, Perplexity, Claude, or Google AI Overviews actually look at when they decide whether to quote your site as a source.
So we built one. It runs 12 checks, awards partial credit on every one, and produces a score out of 88 possible points, converted to a percentage that maps to four tiers: Strong, Decent, Behind, and Invisible. Here's how it works.
The audit fetches the URL you submit, plus your domain's llms.txt, robots.txt, and sitemap.xml. Then it runs 12 checks across three buckets: structured data, AI-specific signals, and authority and content.
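As a sketch, the fetch step looks roughly like this minimal Python; the function names and structure here are ours for illustration, not the tool's actual internals:

```python
import urllib.request
from urllib.parse import urlparse

def fetch(url: str):
    """Fetch a URL and return its body text, or None if the request fails."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except OSError:
        return None

def gather_audit_inputs(page_url: str) -> dict:
    """Collect the submitted page plus the three domain-level files."""
    parts = urlparse(page_url)
    root = f"{parts.scheme}://{parts.netloc}"
    return {
        "page": fetch(page_url),
        "llms_txt": fetch(root + "/llms.txt"),
        "robots_txt": fetch(root + "/robots.txt"),
        "sitemap": fetch(root + "/sitemap.xml"),
    }
```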
Why the structured data bucket weighs so much: AI engines extract claims from JSON-LD before they extract them from prose. Schema is the cleanest, most-quoted signal you can ship. Skipping it is the single biggest reason small-to-mid brands aren't being cited.
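For a concrete picture, this is the kind of minimal Organization JSON-LD the schema checks look for. Every value here is a placeholder, not a required field list:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://example.com",
  "logo": "https://example.com/logo.png",
  "description": "What the company does, in one extractable sentence.",
  "sameAs": ["https://www.linkedin.com/company/example-co"]
}
</script>
```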
The FAQ check is the one most tools get wrong. We only credit FAQPage schema when there's matching visible Q&A on the page. Shipping FAQ JSON-LD without visible questions violates Google's guidelines, can trigger a manual action, and AI engines have started discounting it too. The audit will catch that and dock you instead of rewarding the spam.
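The legitimate pattern is visible Q&A in the HTML, mirrored verbatim in the JSON-LD. A placeholder sketch, trimmed to one question:

```html
<!-- Visible Q&A first; the JSON-LD below mirrors it verbatim. -->
<h2>How long does onboarding take?</h2>
<p>Most teams are live within two weeks.</p>

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How long does onboarding take?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Most teams are live within two weeks."
    }
  }]
}
</script>
```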
The llms.txt check is graded on six sub-criteria: file present (3pt), reasonable size between 200 bytes and 100KB (1pt), H1 with site name (2pt), summary block or intro paragraph (2pt), at least 2 H2 sections (2pt), at least 5 markdown-formatted links (2pt). A barebones one-line file scores 3 out of 12. A properly structured guide following the llmstxt.org spec scores 12.
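Here's a sketch of a file that would hit all six sub-criteria under the llmstxt.org spec; the domain and pages are placeholders:

```markdown
# Example Co

> Example Co builds widgets for mid-market teams. Start with the
> product overview and pricing pages below.

## Products

- [Widget overview](https://example.com/widgets): what the product does
- [Pricing](https://example.com/pricing): plans and tiers

## Guides

- [Getting started](https://example.com/docs/start): setup in ten minutes
- [API reference](https://example.com/docs/api): endpoints and auth
- [FAQ](https://example.com/faq): common questions
```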
The robots.txt check looks at six tracked AI agents (GPTBot, ClaudeBot, PerplexityBot, Google-Extended, CCBot, anthropic-ai) and awards proportional credit. Block all six and you're invisible in AI answers; block two and you get 5 out of 8 points plus a clear list of which agents to unblock first.
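Crawlers are allowed by default when no rule matches them, so an explicit stanza is only needed if a broader Disallow would otherwise catch these agents. A sketch to adapt to whatever rules your file already has:

```
# Explicitly allow the six AI crawlers the audit tracks.
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: PerplexityBot
User-agent: Google-Extended
User-agent: CCBot
User-agent: anthropic-ai
Allow: /
```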
Author signal is tiered: 0 for nothing, 4 for a basic <meta name="author"> tag, 8 for full Person schema with a description and bio. Meta description is graded on length, not just presence (120 to 160 chars hits the SERP sweet spot). Sitemap quality is graded on URL count and lastmod presence, not just file existence.
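For the author check specifically, the basic versus full signal looks something like this; names and fields are placeholders:

```html
<!-- Basic signal (the 4-point tier): -->
<meta name="author" content="Jane Doe">

<!-- Full signal (the 8-point tier): Person schema with a description. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Jane Doe",
  "jobTitle": "Head of Content",
  "description": "Jane has written about technical SEO for ten years.",
  "url": "https://example.com/about/jane"
}
</script>
```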
SpeakableSpecification tells voice assistants and AI engines which DOM regions to read aloud or quote first. BreadcrumbList helps AI engines understand site hierarchy and link to the right level when they cite. Both are easy wins most sites skip.
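Both can ship in a single JSON-LD block. A minimal sketch with placeholder selectors and URLs:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "WebPage",
      "name": "Example page",
      "speakable": {
        "@type": "SpeakableSpecification",
        "cssSelector": [".summary", ".key-takeaways"]
      }
    },
    {
      "@type": "BreadcrumbList",
      "itemListElement": [
        { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://example.com/" },
        { "@type": "ListItem", "position": 2, "name": "Blog", "item": "https://example.com/blog/" }
      ]
    }
  ]
}
</script>
```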
Total possible: 88 points. The score is converted to a percentage and bucketed into four tiers.
The tiering matters because a single number can be misleading. Two sites at 70% can have very different gap profiles. The report shows you the per-check breakdown grouped by severity (Critical, Important, Nice to have) so you know what to fix first.
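In code terms, the bucketing is roughly this. The 65% line matches the "fix first" threshold mentioned later in this post; the other cutoffs are our illustrative guesses, not the audit's published thresholds:

```python
def tier(points: float, total: int = 88) -> str:
    """Bucket a raw score into a tier. Only the 65% cutoff is grounded
    in the post; the 85% and 40% lines are illustrative assumptions."""
    pct = 100 * points / total
    if pct >= 85:
        return "Strong"
    if pct >= 65:
        return "Decent"
    if pct >= 40:
        return "Behind"
    return "Invisible"
```

Under these assumed cutoffs, 75 of 88 points works out to 85.2% and lands in Strong.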
Three honest limits up front.
It's a single-URL audit, not a site crawl. Run it on your homepage, your top three service pages, and a recent blog post for a fuller picture. Domain-level signals (llms.txt, sitemap, robots) are checked once per audit; page-level signals (H1, schema, meta description, content depth) are specific to the URL you submit.
It can't see your actual citation rate. The audit grades whether your site is set up to be quotable, not whether AI engines are actually quoting you. For real citation tracking, you need to monitor branded queries across ChatGPT, Perplexity, Google AI Overviews, and Claude over time. That's a separate motion.
It doesn't grade content quality. A perfect score with thin or generic content still won't get cited. Schema and llms.txt make you eligible to be quoted; the content itself decides whether you're worth quoting. The audit can flag thin content (under 500 words), but it can't tell you whether your writing is good.
The audit is at fifteenthmeridian.com/tools/aeo-audit. No email, no signup, no demo data. Drop in a URL, get a report in under 10 seconds. If you score below 65%, the report will tell you exactly which checks to fix first and what the breakdown means.
For full transparency: we use it ourselves before shipping any major page. As of this post, the Meridian15 homepage scores 85% (Strong tier). Two of the docked checks trace back to the homepage not being a Q&A page, so adding fake FAQ schema would be dishonest and against the audit's own rules. The third is content depth, which we'll address in a future pass. We'd rather ship an honest 85 than a fake 100.
If you want help closing the gaps on your own pages, our SEO and AEO retainer covers schema buildout, llms.txt structure, AI mention tracking, and the content production that turns a Decent score into a Strong one. Reach out if you want to talk through it.
What does the audit actually check?
Twelve real signals AI engines look at when deciding whether to cite a source: llms.txt structure, AI crawler permissions in robots.txt, JSON-LD schema presence, FAQPage schema with quality answers, Organization or LocalBusiness depth, Article or BlogPosting completeness on editorial pages, named author and E-E-A-T signal, single H1, meta description length, Open Graph tags, sitemap.xml quality, and substantive content depth. Each check is scored on a tiered rubric, not pass or fail.
How is this different from a typical free SEO audit?
Three differences. First, it grades AEO-specific signals (llms.txt structure, AI crawler permissions, FAQPage schema with quality answers) that traditional SEO audits ignore. Second, every check is tiered with partial credit, so a barebones llms.txt does not score the same as a properly structured one. Third, no email gate, no signup, no demo data: drop in a URL and get a real report in under 10 seconds.
What is llms.txt and why does it matter?
llms.txt is the highest-leverage AEO signal a site can ship in 2026. It is a guide file at the site root that tells AI engines what your site is about and which pages to prioritize when answering queries. The audit grades llms.txt across six criteria (file present, reasonable size, H1, summary block, at least two H2 sections, at least five markdown links) so a real, structured guide scores 12 out of 12 and a barebones one-line file scores 3.
Can you ship FAQPage schema without visible FAQ content?
No. The audit only credits FAQPage schema when there is matching visible Q&A on the page. Shipping FAQ JSON-LD without visible questions and answers violates Google's structured data guidelines and can trigger a manual action. AI engines parse for matched-content signals as well. The legitimate path to FAQ points is adding a real visible FAQ section with three to five questions, then mirroring it in JSON-LD.
How often should you run the audit?
Run it before shipping any major page or template change, and quarterly on existing high-priority pages (homepage, top service pages, pillar blog posts). AEO signals drift: schema can break in a CMS update, robots.txt changes can lock out new AI crawlers, llms.txt can fall out of sync with new content. A quarterly cadence catches drift before it costs citations.
Does the audit track whether AI engines are actually citing you?
No. The audit grades whether your site is set up to be quotable, not whether it is being quoted. For actual citation tracking, you need to monitor branded queries across ChatGPT, Perplexity, Google AI Overviews, and Claude over time. AI mention tracking is part of our SEO and AEO retainer; the free audit covers the structural setup.
Free AEO audit
Twelve real checks. Tiered scoring. No email gate. Drop in a URL and get a report in under 10 seconds.
Run the Audit