Static GitHub Pages dashboard that shows an AI/product discoverability scorecard for:
- Supabase
- Neon
- Pinecone
- Databricks
- Redis
- ClickHouse
- MongoDB
- Couchbase
The report is generated by a scheduled or manually triggered GitHub Action using OpenAI. In CI, the report is encrypted with a password stored in GitHub Secrets; the dashboard decrypts it locally in the browser.
Each company is scored from 0 to 10 on each criterion. Higher scores mean the signal is easier for automated agents and LLM-based systems to discover, parse, and use. Weights reflect importance in the overall score.
- `llms_txt` (weight 3): Checks for a published `llms.txt` entrypoint and whether it is discoverable. A strong implementation includes clear product scope, canonical doc roots, and stable URLs.
- `mcp_server` (weight 3): Checks for an official MCP server (or equivalent agent interface) that exposes product capabilities to agents. A strong implementation has installation instructions, stable tools, and useful auth guidance.
- `robots_txt_ai_optimization` (weight 3): Checks whether `robots.txt` and related crawling signals are configured to be LLM- and agent-friendly. A strong implementation avoids blocking docs, includes sitemap references, and is consistent across doc hosts.
- `llms_full_txt` (weight 2): Checks for a published `llms-full.txt` (or equivalent) that is discoverable and aggregates important docs into a single, agent-friendly surface. A strong implementation keeps it current and scoped to developer-relevant content.
- `markdown_native_docs` (weight 2): Checks for documentation availability in Markdown or other easy-to-load formats. A strong implementation has stable, crawlable Markdown sources or export endpoints.
- `structured_faq_jsonld` (weight 2): Checks for structured FAQ content using JSON-LD (or an equivalent schema) on relevant pages. A strong implementation covers common developer questions and stays consistent with the docs.
- `html_parse_efficiency` (weight 1): Checks whether key documentation pages are easy for automated parsers to extract. A strong implementation avoids heavy client-side rendering for core content and keeps the DOM structure predictable.
- `live_agent_environment` (weight 1): Checks for sandbox, playground, or runnable environments that agents can use to validate workflows. A strong implementation offers predictable setup steps and programmatic access.
- `training_data_surface` (weight 1): Checks for broad public surfaces that reliably describe the product. A strong implementation includes clear, stable docs, changelogs, and reference content that stays online.
For most criteria, the report is generated by prompting a model to evaluate each company against the criteria above.
The model returns per-criterion analysis, evidence, and advice.
For llms_txt and llms_full_txt, the pipeline runs two experiments and includes their results in the report:
- Control experiment (link-based discovery): the workflow fetches the company's website landing page and docs landing page, extracts any links that look like `llms.txt`/`llms-full.txt`, and then fetches the discovered candidate URLs to record HTTP status and basic file stats.
- AI experiment (web search discovery): the workflow asks a model (via the OpenAI Responses API) to discover and assess `llms.txt`/`llms-full.txt` using the `web_search_preview` tool.
The scoring prompt is required to use the experiment results as evidence (specific artifact URLs, or an explicit not-found statement).
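The control experiment's link-extraction step could look roughly like the sketch below; the function names and the regex are illustrative assumptions, not the repo's actual code:

```javascript
// Sketch of the control experiment's discovery step: scan a landing page's
// HTML for llms.txt / llms-full.txt links, then probe each candidate URL.
// Function names here are hypothetical, not the repo's actual API.
function extractLlmsTxtCandidates(html, baseUrl) {
  const candidates = new Set();
  // Match href values that end in llms.txt or llms-full.txt.
  const hrefPattern = /href=["']([^"']*llms(?:-full)?\.txt)["']/gi;
  for (const match of html.matchAll(hrefPattern)) {
    candidates.add(new URL(match[1], baseUrl).href); // resolve relative links
  }
  return [...candidates];
}

async function probeCandidate(url) {
  // Record HTTP status and basic file stats for the report payload.
  const res = await fetch(url);
  const body = await res.text();
  return { url, status: res.status, bytes: body.length, lines: body.split("\n").length };
}
```

Recording an explicit not-found result (empty candidate list, or a non-2xx status) is what lets the scoring prompt cite negative evidence rather than guessing.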
Each criterion is scored from 0 to 10.
The overall total is a weighted average across criteria.
Weights are defined in scripts/evaluate.mjs and are included in the report payload.
The UI displays totals out of 10.
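A minimal sketch of that weighted total, using the weights stated above (the canonical definitions live in scripts/evaluate.mjs, which may differ in detail):

```javascript
// Sketch of the overall score: a weight-normalized average of per-criterion
// 0-10 scores, so the total stays on a 0-10 scale. Weights mirror the ones
// stated above; the canonical values live in scripts/evaluate.mjs.
const WEIGHTS = {
  llms_txt: 3, mcp_server: 3, robots_txt_ai_optimization: 3,
  llms_full_txt: 2, markdown_native_docs: 2, structured_faq_jsonld: 2,
  html_parse_efficiency: 1, live_agent_environment: 1, training_data_surface: 1,
};

function weightedTotal(scores) {
  let sum = 0, weightSum = 0;
  for (const [criterion, weight] of Object.entries(WEIGHTS)) {
    sum += (scores[criterion] ?? 0) * weight; // missing criteria score 0
    weightSum += weight;
  }
  return sum / weightSum; // still out of 10, as the UI displays it
}
```

Dividing by the weight sum (rather than the criterion count) is what keeps the displayed total on the same 0-10 scale as the individual scores.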
Create these repo secrets:
- `OPENAI_API_KEY` (required)
- `REPORT_PASSWORD` (required) - used to encrypt the report in CI and decrypt it in the browser
- `OPENAI_MODEL` (optional) - defaults to `gpt-4o-mini`
- `OPENAI_RESPONSES_MODEL` (optional) - model used for the AI web-search experiment; defaults to `gpt-4o`
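These secrets reach the workflow scripts as environment variables; reading them with their defaults might look like this sketch (`loadConfig` is a hypothetical helper, not the repo's actual API):

```javascript
// Sketch of reading the repo secrets, which GitHub Actions exposes to the
// workflow as environment variables. The helper name and error message are
// illustrative assumptions; see scripts/evaluate.mjs for the real logic.
function loadConfig(env = process.env) {
  const config = {
    apiKey: env.OPENAI_API_KEY,                                      // required
    password: env.REPORT_PASSWORD,                                   // required
    model: env.OPENAI_MODEL ?? "gpt-4o-mini",                        // optional
    responsesModel: env.OPENAI_RESPONSES_MODEL ?? "gpt-4o",          // optional
  };
  if (!config.apiKey || !config.password) {
    throw new Error("OPENAI_API_KEY and REPORT_PASSWORD must be set");
  }
  return config;
}
```

Failing fast on the two required secrets keeps a misconfigured run from burning API calls before the encryption step can succeed.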
Configure GitHub Pages to publish from:
- Branch: `main`
- Folder: `/docs`
Then your dashboard is available at your GitHub Pages URL.
- Automatic: runs on a schedule via `.github/workflows/refresh-report.yml`
- Manual: go to Actions -> Refresh AI Discoverability Report -> Run workflow
After the workflow completes, click Reload latest in the UI.
In `docs/app.js`, replace:

`https://github.com/<ORG_OR_USER>/<REPO>/actions/workflows/refresh-report.yml`

with your actual repo workflow URL.