Repo for langfuse.com. Built with Fumadocs and Next.js App Router.
Prerequisites: Node.js 22, pnpm v9.5.0
The repo includes an `.nvmrc`, so with nvm you can run `nvm install 22` once and then `nvm use` to pick up Node 22 automatically.
- Optional: Create an env file based on `.env.template`
- Run `pnpm i` to install the dependencies
- Run `pnpm dev` to start the development server on localhost:3333
All Jupyter notebooks are in the cookbook/ directory. For JS/TS notebooks we use Deno, see Readme in cookbook folder for more details.
To render them on the documentation site, we convert them to markdown using `jupyter nbconvert` and move them to the right path in the content/guides/cookbook/ directory, where they are picked up by Fumadocs.
Steps after updating notebooks:
- Ensure you have uv installed
- Run `bash scripts/update_cookbook_docs.sh` (uv will automatically handle dependencies)
- Commit the changed markdown files
Note: All .md or .mdx files that contain "source: content/guides/cookbook/" are automatically generated from Jupyter notebooks. Do not edit them manually; they will be overwritten. Always edit the Jupyter notebooks and run the conversion script.
We store all images in the public/images/ directory. To use them in the markdown files, use the absolute path /images/your-image.png.
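For instance, an image stored at public/images/your-image.png is referenced like this:

```md
![Description of the image](/images/your-image.png)
```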
We store all videos in a bucket on Cloudflare R2, served from https://static.langfuse.com/docs-videos. Ping one of the maintainers to upload a video to the bucket and get the src.
To embed a video, use the Video component and set a title and fixed aspect ratio. Point src to the mp4 file in the bucket.
To embed a "gif", actually embed a video and use gifMode (<Video src="" gifMode />). This will look like a gif, but at a much smaller file size and higher quality.
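A video embed might look like the sketch below. The src filename and title are placeholders, and the `aspectRatio` prop name is an assumption; check the Video component for the exact props it accepts.

```mdx
<Video
  src="https://static.langfuse.com/docs-videos/example.mp4"
  title="Example walkthrough"
  aspectRatio={16 / 9}
/>

{/* "gif"-style embed: plays like a gif, smaller file, higher quality */}
<Video src="https://static.langfuse.com/docs-videos/example.mp4" gifMode />
```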
- Fumadocs (docs framework)
- Next.js App Router
- shadcn/ui
- Tailwind CSS
Interested in the stack behind the Q&A docs chatbot? Check out the blog post for implementation details (all open source).
The docs site includes four interconnected features designed to make documentation accessible to LLMs and AI tools:
- Markdown URL endpoints (`.md` suffix): Append `.md` to any URL (e.g., `/docs.md`) to get raw markdown. Built at compile time via `scripts/copy_md_sources.js`, which copies all `.mdx`/`.md` files from `content/` to `public/md-src/` as static `.md` files with inlined MDX components.
- Copy as Markdown button: UI button on docs pages that fetches the `.md` endpoint and copies the result to the clipboard for pasting into ChatGPT/Claude/Cursor.
- Export as PDF links: API endpoint `/api/md-to-pdf` that fetches markdown from the `.md` URLs and converts it to PDF using Puppeteer. Used on legal pages (terms, privacy, DPA, etc.).
- MCP server: Model Context Protocol server at `/api/mcp` with three tools:
  - `searchLangfuseDocs`: RAG search via the Inkeep API
  - `getLangfuseDocsPage`: fetches a specific page's markdown from the `.md` URLs
  - `getLangfuseOverview`: returns the `llms.txt` overview
All three user-facing features (Copy, PDF, MCP) depend on the same foundation of pre-built static markdown files, which makes them fast, cacheable, and reliable. See RESEARCH-LLM-FEATURES.md for implementation details.
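To illustrate the shared `.md` foundation, here is a minimal sketch of how a consumer (like the Copy as Markdown button) could derive and fetch a page's raw markdown. The helper names are hypothetical; the real implementation lives in the repo's components and `scripts/copy_md_sources.js`.

```typescript
// Hypothetical helper: derive the raw-markdown URL for a docs page
// by appending the .md suffix, as described above.
function toMarkdownUrl(pageUrl: string): string {
  const url = new URL(pageUrl);
  // Drop a trailing slash before appending the suffix.
  const path = url.pathname.replace(/\/$/, "") || "/index";
  return `${url.origin}${path}.md`;
}

// Hypothetical fetch wrapper; a copy button would write the
// returned text to the clipboard.
async function fetchPageMarkdown(pageUrl: string): Promise<string> {
  const res = await fetch(toMarkdownUrl(pageUrl));
  if (!res.ok) throw new Error(`No markdown source for ${pageUrl}`);
  return res.text();
}

console.log(toMarkdownUrl("https://langfuse.com/docs"));
// https://langfuse.com/docs.md
```

Because the `.md` files are generated at build time as static assets, consumers like this need no server-side rendering at request time.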
Run `pnpm run analyze` to analyze the bundle size of the production build using @next/bundle-analyzer.
