@karpathy
Created April 4, 2026 16:25

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.

The idea here is different. Instead of just retrieving from raw documents at query time, the LLM incrementally builds and maintains a persistent wiki — a structured, interlinked collection of markdown files that sits between you and the raw sources. When you add a new source, the LLM doesn't just index it for later retrieval. It reads it, extracts the key information, and integrates it into the existing wiki — updating entity pages, revising topic summaries, noting where new data contradicts old claims, strengthening or challenging the evolving synthesis. The knowledge is compiled once and then kept current, not re-derived on every query.

This is the key difference: the wiki is a persistent, compounding artifact. The cross-references are already there. The contradictions have already been flagged. The synthesis already reflects everything you've read. The wiki keeps getting richer with every source you add and every question you ask.

You never (or rarely) write the wiki yourself — the LLM writes and maintains all of it. You're in charge of sourcing, exploration, and asking the right questions. The LLM does all the grunt work — the summarizing, cross-referencing, filing, and bookkeeping that makes a knowledge base actually useful over time. In practice, I have the LLM agent open on one side and Obsidian open on the other. The LLM makes edits based on our conversation, and I browse the results in real time — following links, checking the graph view, reading the updated pages. Obsidian is the IDE; the LLM is the programmer; the wiki is the codebase.

This can apply to a lot of different contexts. A few examples:

  • Personal: tracking your own goals, health, psychology, self-improvement — filing journal entries, articles, podcast notes, and building up a structured picture of yourself over time.
  • Research: going deep on a topic over weeks or months — reading papers, articles, reports, and incrementally building a comprehensive wiki with an evolving thesis.
  • Reading a book: filing each chapter as you go, building out pages for characters, themes, plot threads, and how they connect. By the end you have a rich companion wiki. Think of fan wikis like Tolkien Gateway — thousands of interlinked pages covering characters, places, events, languages, built by a community of volunteers over years. You could build something like that personally as you read, with the LLM doing all the cross-referencing and maintenance.
  • Business/team: an internal wiki maintained by LLMs, fed by Slack threads, meeting transcripts, project documents, customer calls. Possibly with humans in the loop reviewing updates. The wiki stays current because the LLM does the maintenance that no one on the team wants to do.
  • Competitive analysis, due diligence, trip planning, course notes, hobby deep-dives — anything where you're accumulating knowledge over time and want it organized rather than scattered.

Architecture

There are three layers:

Raw sources — your curated collection of source documents. Articles, papers, images, data files. These are immutable — the LLM reads from them but never modifies them. This is your source of truth.

The wiki — a directory of LLM-generated markdown files. Summaries, entity pages, concept pages, comparisons, an overview, a synthesis. The LLM owns this layer entirely. It creates pages, updates them when new sources arrive, maintains cross-references, and keeps everything consistent. You read it; the LLM writes it.

The schema — a document (e.g. CLAUDE.md for Claude Code or AGENTS.md for Codex) that tells the LLM how the wiki is structured, what the conventions are, and what workflows to follow when ingesting sources, answering questions, or maintaining the wiki. This is the key configuration file — it's what makes the LLM a disciplined wiki maintainer rather than a generic chatbot. You and the LLM co-evolve this over time as you figure out what works for your domain.
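For concreteness, here is a minimal sketch of what such a schema file might contain (every path, section, and convention below is hypothetical; you and your agent will settle on your own):

```markdown
# Wiki schema (illustrative sketch)

## Layout
- raw/            immutable sources; read, never modify
- wiki/           LLM-maintained pages, linked with [[wikilinks]]
- wiki/index.md   catalog of all pages, grouped by category
- wiki/log.md     append-only record of ingests, queries, lint passes

## Ingest workflow
1. Read the new source in raw/ and discuss key takeaways.
2. Write a summary page under wiki/sources/.
3. Update affected entity and concept pages; flag contradictions.
4. Update index.md and append a log.md entry.

## Conventions
- One page per entity or concept; link on first mention.
- Cite sources inline as [[source-page]] references.
```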

Operations

Ingest. You drop a new source into the raw collection and tell the LLM to process it. An example flow: the LLM reads the source, discusses key takeaways with you, writes a summary page in the wiki, updates the index, updates relevant entity and concept pages across the wiki, and appends an entry to the log. A single source might touch 10-15 wiki pages. Personally I prefer to ingest sources one at a time and stay involved — I read the summaries, check the updates, and guide the LLM on what to emphasize. But you could also batch-ingest many sources at once with less supervision. It's up to you to develop the workflow that fits your style and document it in the schema for future sessions.

Query. You ask questions against the wiki. The LLM searches for relevant pages, reads them, and synthesizes an answer with citations. Answers can take different forms depending on the question — a markdown page, a comparison table, a slide deck (Marp), a chart (matplotlib), a canvas. The important insight: good answers can be filed back into the wiki as new pages. A comparison you asked for, an analysis, a connection you discovered — these are valuable and shouldn't disappear into chat history. This way your explorations compound in the knowledge base just like ingested sources do.
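To make that concrete, a filed answer page might look something like this (the frontmatter fields and page names are made up for illustration):

```markdown
---
type: analysis
date: 2026-04-02
question: "Where do the sources disagree on X?"
---

# Where the sources disagree on X

[[source-a]] claims ..., while [[source-b]] reports the opposite;
see [[concept-x]] for the running synthesis.
```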

Lint. Periodically, ask the LLM to health-check the wiki. Look for: contradictions between pages, stale claims that newer sources have superseded, orphan pages with no inbound links, important concepts mentioned but lacking their own page, missing cross-references, data gaps that could be filled with a web search. The LLM is good at suggesting new questions to investigate and new sources to look for. This keeps the wiki healthy as it grows.

Indexing and logging

Two special files help the LLM (and you) navigate the wiki as it grows. They serve different purposes:

index.md is content-oriented. It's a catalog of everything in the wiki — each page listed with a link, a one-line summary, and optionally metadata like date or source count. Organized by category (entities, concepts, sources, etc.). The LLM updates it on every ingest. When answering a query, the LLM reads the index first to find relevant pages, then drills into them. This works surprisingly well at moderate scale (~100 sources, ~hundreds of pages) and avoids the need for embedding-based RAG infrastructure.
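An index.md sketch (categories and entries are illustrative):

```markdown
# Index

## Entities
- [[ada-lovelace]]: first programmer; appears in 3 sources (updated 2026-04-01)

## Concepts
- [[analytical-engine]]: synthesis of 5 sources; open question on memory design

## Sources
- [[src-2026-04-02-article-title]]: summary of "Article Title" (ingested 2026-04-02)
```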

log.md is chronological. It's an append-only record of what happened and when — ingests, queries, lint passes. A useful tip: if each entry starts with a consistent prefix (e.g. ## [2026-04-02] ingest | Article Title), the log becomes parseable with simple unix tools — grep "^## \[" log.md | tail -5 gives you the last 5 entries. The log gives you a timeline of the wiki's evolution and helps the LLM understand what's been done recently.
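A few example entries in that format (the titles are invented):

```markdown
## [2026-04-01] ingest | Article Title
Summary filed as [[src-2026-04-01-article-title]]; updated 4 concept pages; flagged one contradiction.

## [2026-04-02] query | "Where do the sources disagree on X?"
Answer filed as [[where-sources-disagree-on-x]].

## [2026-04-02] lint
Found 2 orphan pages and 1 stale claim; queued fixes.
```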

Optional: CLI tools

At some point you may want to build small tools that help the LLM operate on the wiki more efficiently. A search engine over the wiki pages is the most obvious one — at small scale the index file is enough, but as the wiki grows you want proper search. qmd is a good option: it's a local search engine for markdown files with hybrid BM25/vector search and LLM re-ranking, all on-device. It has both a CLI (so the LLM can shell out to it) and an MCP server (so the LLM can use it as a native tool). You could also build something simpler yourself — the LLM can help you vibe-code a naive search script as the need arises.
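As a sketch of the vibe-coded route: a naive keyword search over the wiki can be as simple as the script below. The wiki/ path and the scoring scheme are assumptions; this is a stand-in for a real engine like qmd, not a definitive implementation.

```python
#!/usr/bin/env python3
"""Naive keyword search over wiki markdown pages.

Usage: search.py <query terms>  (prints the top-matching pages)
"""
import re
import sys
from pathlib import Path

WIKI_DIR = Path("wiki")  # assumption: wiki pages live under wiki/

def tokenize(text: str) -> list[str]:
    # Lowercase alphanumeric tokens; crude but good enough for a sketch.
    return re.findall(r"[a-z0-9]+", text.lower())

def score(query_terms: set[str], text: str) -> int:
    # Count how many times any query term appears in the page.
    return sum(1 for t in tokenize(text) if t in query_terms)

def main() -> None:
    query = set(tokenize(" ".join(sys.argv[1:])))
    if not query:
        sys.exit("usage: search.py <query terms>")
    results = []
    for page in WIKI_DIR.rglob("*.md"):
        s = score(query, page.read_text(encoding="utf-8"))
        if s > 0:
            results.append((s, page))
    # Print the ten highest-scoring pages, best first.
    for s, page in sorted(results, reverse=True)[:10]:
        print(f"{s:4d}  {page}")

if __name__ == "__main__":
    main()
```

The LLM shells out to this the same way it would to grep; swap it for proper BM25/vector search when the index file stops being enough.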

Tips and tricks

  • Obsidian Web Clipper is a browser extension that converts web articles to markdown. Very useful for quickly getting sources into your raw collection.
  • Download images locally. In Obsidian Settings → Files and links, set "Attachment folder path" to a fixed directory (e.g. raw/assets/). Then in Settings → Hotkeys, search for "Download" to find "Download attachments for current file" and bind it to a hotkey (e.g. Ctrl+Shift+D). After clipping an article, hit the hotkey and all images get downloaded to local disk. This is optional but useful — it lets the LLM view and reference images directly instead of relying on URLs that may break. Note that LLMs can't natively read markdown with inline images in one pass — the workaround is to have the LLM read the text first, then view some or all of the referenced images separately to gain additional context. It's a bit clunky but works well enough.
  • Obsidian's graph view is the best way to see the shape of your wiki — what's connected to what, which pages are hubs, which are orphans.
  • Marp is a markdown-based slide deck format. Obsidian has a plugin for it. Useful for generating presentations directly from wiki content.
  • Dataview is an Obsidian plugin that runs queries over page frontmatter. If your LLM adds YAML frontmatter to wiki pages (tags, dates, source counts), Dataview can generate dynamic tables and lists (a query sketch follows this list).
  • The wiki is just a git repo of markdown files. You get version history, branching, and collaboration for free.
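A Dataview query sketch, assuming pages carry type and date frontmatter fields (the field names and folder are whatever your schema defines, not fixed):

```dataview
TABLE date, length(file.inlinks) AS "inbound links"
FROM "wiki"
WHERE type = "concept"
SORT date DESC
```

Since file.inlinks is a Dataview implicit field, a table like this doubles as a quick orphan check wherever the inbound-link count is zero.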

Why this works

The tedious part of maintaining a knowledge base is not the reading or the thinking — it's the bookkeeping. Updating cross-references, keeping summaries current, noting when new data contradicts old claims, maintaining consistency across dozens of pages. Humans abandon wikis because the maintenance burden grows faster than the value. LLMs don't get bored, don't forget to update a cross-reference, and can touch 15 files in one pass. The wiki stays maintained because the cost of maintenance is near zero.

The human's job is to curate sources, direct the analysis, ask good questions, and think about what it all means. The LLM's job is everything else.

The idea is related in spirit to Vannevar Bush's Memex (1945) — a personal, curated knowledge store with associative trails between documents. Bush's vision was closer to this than to what the web became: private, actively curated, with the connections between documents as valuable as the documents themselves. The part he couldn't solve was who does the maintenance. The LLM handles that.

Note

This document is intentionally abstract. It describes the idea, not a specific implementation. The exact directory structure, the schema conventions, the page formats, the tooling — all of that will depend on your domain, your preferences, and your LLM of choice. Everything mentioned above is optional and modular — pick what's useful, ignore what isn't. For example: your sources might be text-only, so you don't need image handling at all. Your wiki might be small enough that the index file is all you need, no search engine required. You might not care about slide decks and just want markdown pages. You might want a completely different set of output formats. The right way to use this is to share it with your LLM agent and work together to instantiate a version that fits your needs. The document's only job is to communicate the pattern. Your LLM can figure out the rest.

@waydelyle

waydelyle commented Apr 9, 2026

SwarmVault — another update, lots has changed. Karpathy's LLM Wiki gist is now the explicit inspiration in the repo itself. Since my last comment we've gone from v0.1.27 → v0.6.1, and the project has grown well beyond the original code-first framing.

What's new:

  • First-class personal knowledge ingest — transcripts (.srt, .vtt), Slack exports, email (.eml, .mbox), calendar files (.ics), EPUBs, CSV/TSV, XLSX, and PPTX are all now proper sources with parser-/library-backed extraction. Not just code repos anymore.
  • Guided source sessions — swarmvault source add --guide opens a resumable session with durable state under state/source-sessions/. One source at a time, evolving summaries, open questions, thesis tracking. An approval queue stages guided edits before they become canonical.
  • Configurable profiles — swarmvault init --profile personal-research (or compose your own with presets like reader,timeline). Profiles decide dashboard packs, guided-session routing, and canonical-review behavior.
  • Managed sources + docs crawl — swarmvault source add|list|reload|delete with a persistent registry, shallow git checkouts for public repos, and bounded same-domain docs crawls so recurring documentation sources stay fresh.
  • Contradiction detection — deterministic cross-source claim comparison with contradicts edges in the graph and a dedicated section in wiki/graph/report.md. swarmvault lint --conflicts surfaces them directly.
  • Markdown-first dashboards under wiki/dashboards/ for recent sources, timeline, contradictions, open questions — all readable in plain Obsidian, Dataview-enhanced when you want it.
  • Semantic hashing that ignores operational frontmatter churn, so compile/analysis caches stay stable while still invalidating on meaningful changes.
  • Large-graph overview mode in the graph viewer with deterministic sampling, plus --full for the complete canvas.
  • Kotlin, Scala, Lua, Zig, reStructuredText added to the code-aware ingest languages on top of the existing 12+.

Still local-first, still provider-agnostic (OpenAI, Anthropic, Gemini, Ollama, OpenRouter, Groq, Together, xAI, Cerebras, or the built-in heuristic provider for fully offline use). Still MIT licensed.

The core thesis from Karpathy's original gist — that the wiki itself is the durable, compounding artifact — has held up remarkably well in practice. Everything that's been built since is basically downstream of that idea.

Repo: https://github.com/swarmclawai/swarmvault

Contributions, issues, and use-case reports very welcome.

@skyllwt

skyllwt commented Apr 9, 2026

Hey @karpathy — your LLM-Wiki idea really resonated with us.

We're a team from Peking University working on AI/CS research.

We didn't just build a wiki — we plugged it into the entire research pipeline as the central hub that every step revolves around.

The result is ΩmegaWiki: your LLM-Wiki concept extended into a full-lifecycle research platform.

What the wiki drives:
• Ingest papers → structured knowledge base with 8 entity types
• Detect gaps → generate research ideas → design experiments
• Run experiments → verdict → auto-update wiki knowledge
• Write papers → compile LaTeX → respond to reviewers
• 9 relationship types connecting everything (supports, contradicts, tested_by...)

The key idea: the wiki isn't a side product — it's the state machine. Every skill reads from it, writes back to it, and the knowledge compounds over time. Failed experiments stay as anti-repetition memory so you never re-explore dead ends.

20 Claude Code skills, fully open-source. Still early-stage but functional end-to-end. We're actively iterating — more model support and features on the way.

If you find it useful, a ⭐ would mean a lot! PRs, issues, and ideas all welcome — let's build this together.

https://github.com/skyllwt/OmegaWiki


@ControllableGeneration

We've been working on this exact same vision for weeks, well before this post was published.

The result is AI-Context-OS: https://github.com/alexdcd/AI-Context-OS.

To take this idea further, we built a local desktop app (Tauri + Rust + React) that turns any folder into an agnostic memory layer, adding these key improvements:

  • Progressive Memory: Uses YAML frontmatter with explicit depth levels (L0, L1, L2) so the agent only loads the necessary information density.
  • Active Governance: Local telemetry (SQLite) audits memory "health", detecting conflicts, redundancies, and suggesting cleanups to avoid context bloat.
  • Adapters & MCP: Neutral core files act as a router to auto-generate tool-specific rules (claude.md, .cursorrules, .windsurfrules), plus built-in MCP servers.

AI Context OS is in active development. Core features are stable and in daily use:

✅ Workspace setup and file ontology
✅ YAML frontmatter + L0/L1/L2 tiered content
✅ Hybrid 6-signal scoring engine (Rust)
✅ Intent-adaptive weight profiles
✅ Query expansion
✅ MCP server (stdio + HTTP/SSE)
✅ Multi-tool router with adapters (Claude, Cursor, Windsurf, Codex)
✅ Governance (decay, conflicts, consolidation, scratch TTL)
✅ Health score (5-component)
✅ Observability (SQLite, query history)
✅ Simulation view (preview context for any query)
✅ Journal (daily outliner, Logseq-style)
✅ Tasks (YAML-frontmatter tasks with state/priority)
✅ Graph visualization (memory connectivity) with community coloring
✅ Community detection (LPA + tag co-occurrence) feeding graph proximity score
✅ God nodes governance tab (importance mismatch detection)
✅ Backup/restore

On the roadmap:

⬚ Local embedding model for true semantic scoring
⬚ Agents marketplace (installable agent templates)
⬚ Multi-workspace support
⬚ Import from Obsidian/Logseq

If you're looking to implement this model in a structured and auditable way, I invite you to check out the repo and share your feedback!

I glanced at your project and it felt too heavy for an llm-wiki-like application.

@doublesecretlabs

Love this! I built a Chrome extension companion for this — clips web pages to clean markdown with frontmatter and saves directly to Google Drive. Designed to feed the raw/ layer with no local sync needed. https://github.com/doublesecretlabs/llm-wiki-clipper

@ESJavadex

🔥 Built an open-source implementation — Knowledge Forge

A functional, self-contained Node.js repo that implements this entire pattern:

  • Ingest markdown sources → auto-extracts concepts/entities
  • Wiki links ([[link]] syntax) with cross-referencing between pages
  • Index + Log — navigable catalog and append-only operation history
  • Lint pass — detects orphans, dangling links, missing frontmatter
  • Web UI — dark-themed SPA with sidebar, type filters, and search

(Screenshots in the repo: home view, source page with wiki links, concept page accumulating cross-references.)

Quick start:

git clone https://github.com/ESJavadex/knowledge-forge.git
cd knowledge-forge
npm install && npm run demo && npm start

Currently uses heuristic extraction (frequency + bigrams). Roadmap includes LLM-powered semantic extraction for much richer concept/entity discovery.

MIT licensed. Contributions welcome! 🚀

@cthulhu-ma

cthulhu-ma commented Apr 9, 2026 via email

@uziiuzair

This pattern maps closely to what I've been building with Continuity.

The three layers translate directly:

  • raw sources → immutable chat history in SQLite,
  • the wiki → typed memories with version history and a relationship graph,
  • the schema → system prompt composition that injects memories into every conversation.

The main extension: the knowledge base runs as an MCP server, so any MCP-compatible tool (Claude Code, Cursor, etc.) reads and writes to the same store. Cross-tool continuity without cloud sync.

A few additions beyond the pattern here:

  • Narrative synthesis: the LLM builds a holistic mental model with confidence scores, not just individual facts
  • Learning signals: corrections, rejections, and approvals are tracked as typed signals that feed back into narrative updates
  • Chat as write path: no explicit ingest step; structure emerges from conversation
  • Memory versioning: every change tracked with timestamps and reasons

The lint concept is something I want to steal. We have staleness detection via snapshot hashes but no deliberate audit workflow yet.

https://github.com/uziiuzair/continuity

@AutomaticHourglass

I think this could be minified into perpetual thinking wikis. For example, you could fire up an empty wiki and say: "search and think on everything transformer-related, and build out a wiki."
