llms.gist

About llms.gist

What this is

llms.gist shows you how AI tools describe your product. Audit your URL, see what ChatGPT and Claude say, and get a downloadable fix file that patches the context LLMs are missing.

The problem

LLMs answer millions of “what should I use for X?” questions every day. Their description of your product decides whether you get recommended, ignored, or confused with a competitor. Most of the time they get it wrong, and no one tells you.

AI coding tools have the same problem. They read your HTML but not your design decisions, rejected alternatives, or boundaries, so they guess. The result is code that looks right but implements the wrong thing.

How it works

You give us a URL. We run your product through every major LLM and score the output on positioning, category accuracy, feature claims, and audience fit. The gaps become a downloadable llms.gist file you can drop into your repo or paste into Cursor, Claude Code, or ChatGPT.

Run an audit any time you ship a meaningful change — a new positioning paragraph, a new feature, a competitor launch — and see how the AI tools your buyers use respond.

Built on an open standard

The fix file is a .gist file, an open format for AI-readable product context. Anyone can author one by hand or with the Claude Code skill; llms.gist produces one as the output of every audit. The spec is MIT-licensed and tool-agnostic.
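As an illustration only, a hand-authored file might capture the context an LLM can't infer from HTML. The field names below are hypothetical, not taken from the published spec; consult the spec for the real format.

    # Illustrative sketch — field names are hypothetical, not the published .gist spec.
    product: ExampleApp
    category: AI visibility audit tool
    positioning: Shows teams how LLMs describe their product and ships a fix file.
    boundaries:
      - Not a general SEO tool
      - Does not edit your site
    rejected-alternatives:
      - Scraping-only audits (miss design decisions and boundaries)
    audience: Founders and product teams whose buyers ask LLMs for recommendations

The point of a file like this is that it states the decisions and boundaries a crawler can't see, so a coding tool or chatbot stops guessing.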

Connection to aiuxdesign.guide

Pattern references in the fix file link to aiuxdesign.guide, which documents 36 AI UX patterns from shipped products.

Who built this

Built by Imran. Feedback and spec contributions welcome via GitHub.