AI search visibility audit

Is your site ready for ChatGPT, Claude, Perplexity, and Google AI Overviews? One click, seven checks: Accept-header negotiation, llms.txt, sitemap.md, robots.txt AI rules, and HTML → Markdown token savings. This is the tool behind our answer to which generative engine optimization checks actually move the needle.

Enter a URL to start your first audit.

What gets checked
  • Accept: text/markdown negotiation
  • Vary: Accept header
  • <link rel="alternate" type="text/markdown">
  • /llms.txt spec-compliance
  • /sitemap.md presence
  • robots.txt AI bot coverage
  • HTML → Markdown token savings
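
The first check above, Accept-header negotiation, comes down to parsing the client's Accept header and deciding whether `text/markdown` is acceptable. A minimal sketch, with a hypothetical `accepts_markdown` helper (not the audit's actual code), following the simplified media-range and q-value rules of RFC 9110:

```python
def accepts_markdown(accept: str) -> bool:
    """Return True if an Accept header admits text/markdown.

    Hypothetical helper: parses media ranges and q-values (simplified;
    ignores media-type parameters other than q).
    """
    for part in accept.split(","):
        fields = part.strip().split(";")
        media = fields[0].strip().lower()
        q = 1.0  # default quality when no q parameter is present
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name.strip().lower() == "q":
                try:
                    q = float(value)
                except ValueError:
                    q = 0.0
        if q > 0 and media in ("text/markdown", "text/*", "*/*"):
            return True
    return False
```

A server passing this check would route such requests to a Markdown rendering of the page and emit `Vary: Accept` so caches keep the HTML and Markdown variants separate.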

How the audit works

  1. Fetch with agent fingerprints

     We replay Claude Code and Cursor requests to see what your server returns.

  2. Inspect headers and surfaces

     Accept negotiation, Vary, <link rel="alternate">, llms.txt, sitemap.md, robots.txt AI rules.

  3. Convert + tokenize

     Run mdream HTML → Markdown on your page and count the token delta.

  4. Score + grade

     Weighted 0-100 score with per-check remediation links to our other tools.
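
Step 4 folds the per-check results into a single number. A minimal sketch of that weighting, with hypothetical weights and grade cutoffs (the audit's real values are not published here):

```python
# Hypothetical weights -- illustrative only, not the audit's actual weighting.
CHECK_WEIGHTS = {
    "accept_negotiation": 25,
    "vary_accept": 10,
    "link_alternate": 10,
    "llms_txt": 20,
    "sitemap_md": 10,
    "robots_ai_rules": 15,
    "token_savings": 10,
}

def score_and_grade(passed: set) -> tuple:
    """Sum the weights of passing checks into a 0-100 score, then grade it."""
    score = sum(w for check, w in CHECK_WEIGHTS.items() if check in passed)
    for cutoff, grade in ((90, "A"), (75, "B"), (60, "C"), (40, "D")):
        if score >= cutoff:
            return score, grade
    return score, "F"
```

Weighting the negotiation and llms.txt checks highest reflects the thesis of the page: agents must be able to fetch and discover you before efficiency gains matter.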

Why audit?

Every generative engine optimization tool and answer engine optimization tool you've seen measures some version of these checks. We surface them directly against your live URL, flag the broken ones, and link remediation to our llms.txt generator, accept-header tester, and token counter. No fluff, no dashboard sales motion.

Frequently Asked Questions

What does the audit check?

Seven checks across content negotiation, discovery, and efficiency: Accept: text/markdown negotiation, Vary: Accept, <link rel="alternate" type="text/markdown">, /llms.txt presence and spec validity, /sitemap.md, robots.txt AI policy coverage, and HTML → Markdown token savings.
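
The robots.txt AI policy coverage check can be sketched as a scan for User-agent groups that explicitly name known AI crawlers. This is a simplified, hypothetical version; real parsers also honour the `*` group and evaluate the Allow/Disallow rules themselves:

```python
# Common AI crawlers; the audit's actual bot list may differ.
AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def ai_bot_coverage(robots_txt: str) -> dict:
    """Report which known AI crawlers are explicitly addressed in robots.txt.

    Simplified: a bot counts as 'covered' if any User-agent line names it
    (case-insensitively), regardless of what rules follow.
    """
    agents = set()
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if line.lower().startswith("user-agent:"):
            agents.add(line.split(":", 1)[1].strip().lower())
    return {bot: bot.lower() in agents for bot in AI_BOTS}
```

A site with only a `User-agent: *` group has made no explicit AI policy decision, which is exactly what this check flags.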

Why do these checks matter for generative engine optimization?

Generative engine optimization tools and answer engine optimization tools all converge on the same few signals: can agents fetch you, can they parse you efficiently, and do you have structured discovery surfaces. This audit measures those signals directly against your live site rather than inferring them from keywords.

How accurate are the token counts?

Token counts use OpenAI's official cl100k_base BPE vocabulary: exact for GPT-4 and a very close approximation for Claude. The savings percentage is therefore billing-grade for those models.
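
In production the audit counts cl100k_base tokens (e.g. via tiktoken) on the mdream output. The shape of the savings computation can be sketched with only the standard library, using a whitespace-split stand-in for the real tokenizer and a crude text extractor in place of mdream:

```python
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Crude HTML -> text stand-in for mdream (Markdown output keeps more structure)."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0  # depth inside <script>/<style>
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip:
            self.chunks.append(data)

def token_savings(html: str) -> float:
    """Percent fewer 'tokens' after stripping markup.

    Whitespace splitting is a rough proxy; the real audit counts
    cl100k_base BPE tokens.
    """
    parser = _TextExtractor()
    parser.feed(html)
    before = len(html.split())
    after = len(" ".join(parser.chunks).split())
    return round(100 * (before - after) / max(before, 1), 1)
```

Markup-heavy pages routinely show large deltas here, which is the whole efficiency argument: the same content costs an agent far fewer tokens as Markdown.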

Where do audits run, and what do you store?

Audits run in-flight on Cloudflare Workers. The report is returned to your browser; we log aggregate stats for rate-limiting but not the report body.
