Is your site ready for ChatGPT, Claude, Perplexity, and Google AI Overviews? One click, seven checks: Accept-header negotiation, llms.txt, sitemap.md, robots.txt AI rules, and HTML → Markdown token savings. If you're wondering which generative engine optimization tools actually move the needle, this is the tool behind the answer.
Enter a URL to start your first audit.
We replay Claude Code and Cursor requests to see what your server returns.
Accept negotiation, Vary, <link rel="alternate">, llms.txt, sitemap.md, robots.txt AI rules.
Run mdream HTML → Markdown conversion on your page and count the token delta.
Weighted 0-100 score with per-check remediation links to our other tools.
Every generative engine optimization tool and answer engine optimization tool you've seen measures some version of these checks. We surface them directly against your live URL, flag the broken ones, and link remediation to our llms.txt generator, accept-header tester, and token counter. No fluff, no dashboard sales motion.
Seven checks across content negotiation, discovery, and efficiency: Accept: text/markdown negotiation, Vary: Accept, <link rel="alternate" type="text/markdown">, /llms.txt presence and spec validity, /sitemap.md, robots.txt AI policy coverage, and HTML → Markdown token savings.
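The negotiation checks boil down to inspecting what your server sends back when asked for Markdown. A minimal sketch of that logic (function and field names are illustrative, not the audit's actual implementation): given the response headers returned for a request carrying Accept: text/markdown, decide whether negotiation and cache-key hygiene look right.

```python
def check_markdown_negotiation(headers: dict[str, str]) -> dict[str, bool]:
    """Evaluate Accept: text/markdown negotiation from response headers."""
    content_type = headers.get("Content-Type", "").lower()
    vary = [v.strip().lower() for v in headers.get("Vary", "").split(",")]
    return {
        # Did the server actually serve Markdown when asked for it?
        "serves_markdown": content_type.startswith("text/markdown"),
        # Does Vary: Accept tell caches the body depends on the Accept header?
        "varies_on_accept": "accept" in vary,
    }

# Example: a server that negotiates correctly.
result = check_markdown_negotiation({
    "Content-Type": "text/markdown; charset=utf-8",
    "Vary": "Accept, Accept-Encoding",
})
print(result)  # {'serves_markdown': True, 'varies_on_accept': True}
```

A server that passes both checks can serve HTML to browsers and Markdown to agents from the same URL without poisoning shared caches.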
Generative engine optimization tools and answer engine optimization tools all converge on the same few signals: can agents fetch you, can they parse you efficiently, and do you expose structured discovery surfaces? This audit measures those signals directly against your live site rather than inferring them from keywords.
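The "can agents fetch you" signal is the robots.txt AI-policy check. A minimal sketch using only the standard library: the crawler names are real AI user agents, but the robots.txt body here is an invented example, not fetched from anywhere.

```python
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def ai_policy_coverage(robots_txt: str, url: str = "https://example.com/") -> dict[str, bool]:
    """Map each AI crawler to whether robots.txt lets it fetch `url`."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {bot: parser.can_fetch(bot, url) for bot in AI_CRAWLERS}

sample = """\
User-agent: GPTBot
Disallow: /private/

User-agent: Google-Extended
Disallow: /
"""
print(ai_policy_coverage(sample))
```

Note that crawlers with no matching record and no `User-agent: *` fallback are allowed by default, which is exactly the kind of silent gap the audit flags.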
Token counts use OpenAI's official cl100k_base BPE vocabulary (exact for GPT-4 and a very close approximation for Claude). The savings percentage is real: billing-grade for those models.
Audits run on Cloudflare Workers at request time. The report is returned to your browser; we log aggregate stats for rate limiting, but not the report body.