
Kumar R. krish240574

@krish240574
krish240574 / llm-wiki.md
Created April 24, 2026 08:34 — forked from rohitg00/llm-wiki.md
LLM Wiki v2 — extending Karpathy's LLM Wiki pattern with lessons from building agentmemory

LLM Wiki v2

A pattern for building personal knowledge bases using LLMs. Extended with lessons from building agentmemory, a persistent memory engine for AI coding agents.

This builds on Andrej Karpathy's original LLM Wiki idea file. Everything in the original still applies. This document adds what we learned running the pattern in production: what breaks at scale, what's missing, and what separates a wiki that stays useful from one that rots.

What the original gets right

The core insight is correct: stop re-deriving, start compiling. RAG retrieves and forgets. A wiki accumulates and compounds. The three-layer architecture (raw sources, wiki, schema) works. The operations (ingest, query, lint) cover the basics. If you haven't read the original, start there.
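The three layers and three operations above can be sketched in a few lines. This is a hypothetical toy, not the original's specification: the schema is a set of required fields, the wiki is a dict of compiled pages, and ingest/query/lint are plain functions.

```python
# Toy sketch of the three layers (raw sources, wiki, schema) and the
# three operations (ingest, query, lint). All names are illustrative only.

REQUIRED_FIELDS = {"title", "summary", "sources"}  # the "schema" layer

wiki = {}  # the "wiki" layer: compiled pages keyed by topic

def ingest(topic, raw_text, source):
    """Compile a raw source into the wiki instead of re-retrieving it later."""
    page = wiki.setdefault(topic, {"title": topic, "summary": "", "sources": []})
    page["summary"] = (page["summary"] + " " + raw_text).strip()  # accumulate
    page["sources"].append(source)

def query(topic):
    """Answer from the compiled page; no per-question re-derivation."""
    return wiki.get(topic)

def lint():
    """Check every page against the schema; return topics with violations."""
    return [t for t, p in wiki.items() if not REQUIRED_FIELDS <= p.keys()]

ingest("caching", "Cache invalidation is hard.", "notes/2026-01.md")
ingest("caching", "Prefer explicit TTLs.", "notes/2026-02.md")
print(query("caching")["summary"])  # both facts compounded into one page
print(lint())
```

The point of the sketch is the `setdefault` plus append: the second ingest lands on the same page the first one built, which is the "accumulates and compounds" property that per-query RAG lacks.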

@krish240574
krish240574 / llm-wiki.md
Created April 5, 2026 01:47 — forked from karpathy/llm-wiki.md
llm-wiki

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file; it is designed to be copy-pasted into your own LLM Agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea, and your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
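The "rediscovering from scratch" point can be made concrete with a toy retriever. This is not any product's implementation, just an illustration: each question re-scores every chunk, and nothing persists between calls.

```python
# Toy RAG retrieval: naive word-overlap scoring, recomputed on every question.
# Nothing is built up between calls; a synthesis question means re-finding and
# re-piecing the same fragments each time.

def retrieve(chunks, question, k=2):
    """Score chunks by word overlap with the question; keep the top k."""
    q_words = set(question.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

chunks = [
    "The cache layer uses an LRU eviction policy.",
    "Deploys run every Friday at noon.",
    "LRU eviction was chosen after the 2024 memory incident.",
]

print(retrieve(chunks, "why was LRU eviction chosen"))
```

Every call to `retrieve` repeats the full scan. A wiki, by contrast, would have already compiled the two LRU chunks into one page the first time the connection was made.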

@krish240574
krish240574 / HRIF v1.4 — Hyperagent Recursive Improvement Family
Canonical Codex extraction and formal uplift of Hyperagents into an operator-generalized recursive-improvement family with archive-gated self-modification, residue, diversity preservation, falsification boundaries, and explicit source gratitude.
% ████████████████████████████████████████████████████████████████████████████████
%
% CODEX ΔΦ — HYPERAGENT RECURSIVE IMPROVEMENT FAMILY (HRIF v1.4)
% ────────────────────────────────────────────────────────────────────────────
% CANONICAL EXTRACTION + FORMAL UPLIFT OF METACOGNITIVE SELF-MODIFICATION
%
% VERSION
% ───────
% v1.4 — Operator-Generalized Formal Uplift · Additive Evolution
%
@krish240574
krish240574 / pi_tutorial.md
Created March 22, 2026 13:51 — forked from dabit3/pi_tutorial.md
How to Build a Custom Agent Framework with PI: The Agent Stack Powering OpenClaw

PI is a TypeScript toolkit for building AI agents. It's a monorepo of packages that layer on top of each other:

  • pi-ai handles LLM communication across providers.
  • pi-agent-core adds the agent loop with tool calling.
  • pi-coding-agent gives you a full coding agent with built-in tools, session persistence, and extensibility.
  • pi-tui provides a terminal UI for building CLI interfaces.

These are the same packages that power OpenClaw. This guide walks through each layer, progressively building up to a fully featured coding assistant with a terminal UI, session persistence, and custom tools.

By understanding how to compose these layers, you can build production-grade agentic software on your own terms, without being locked into a specific abstraction.
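The middle layer, an agent loop with tool calling, can be sketched generically. To be clear, this is NOT pi-agent-core's API (pi is TypeScript; the names here are hypothetical stand-ins): it only shows the loop shape that such a layer adds on top of raw LLM communication.

```python
# Generic agent-loop sketch: the model either requests a tool or answers.
# call_llm and the tool registry are hypothetical stubs, not pi's API.

TOOLS = {"add": lambda a, b: a + b}  # the "built-in tools" layer, reduced to one tool

def call_llm(messages):
    """Stub LLM: asks for the tool once, then answers using the tool result."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": (2, 3)}
    return {"answer": f"The result is {messages[-1]['content']}"}

def agent_loop(user_prompt, max_steps=5):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "answer" in reply:  # model is done
            return reply["answer"]
        result = TOOLS[reply["tool"]](*reply["args"])  # run the requested tool
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not finish within max_steps")

print(agent_loop("what is 2 + 3?"))
```

The design point the stack makes is that this loop is a separate package from the LLM client: you can swap providers underneath it, or stack a coding agent and a TUI on top, without touching the loop itself.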

Pi was created by @badlogicgames. This is a great writeup from him that explains some of the design decisions made when creating it.

The stack

@krish240574
krish240574 / VSDD.md
Created March 1, 2026 16:56 — forked from dollspace-gay/VSDD.md
Verified Spec-Driven Development

Verified Spec-Driven Development (VSDD)

The Fusion: VDD × TDD × SDD for AI-Native Engineering

Overview

Verified Spec-Driven Development (VSDD) is a unified software engineering methodology that fuses three proven paradigms into a single AI-orchestrated pipeline:

  • Spec-Driven Development (SDD): Define the contract before writing a single line of implementation. Specs are the source of truth.
  • Test-Driven Development (TDD): Tests are written before code. Red → Green → Refactor. No code exists without a failing test that demanded it.
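The TDD leg of the pipeline can be shown in miniature. This is an illustrative sketch, not part of VSDD itself: the test exists before the code, would fail ("red"), and only then is the implementation written ("green").

```python
# Red -> Green in miniature: the test is written first and demands the code.

def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Red: calling test_slugify() at this point would raise NameError — no code yet.

def slugify(text):
    """Implementation written only because a failing test demanded it."""
    words = "".join(c if c.isalnum() else " " for c in text).split()
    return "-".join(words).lower()

test_slugify()  # Green: the test now passes.
```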
@krish240574
krish240574 / claude-cli-experiments.md
Created February 28, 2026 18:29 — forked from danialhasan/claude-cli-experiments.md
Claude CLI Print Mode Experiments - Raw data from testing claude -p flag combinations (2026-01-08)

Claude CLI Print Mode Experiments

Date: 2026-01-08
CLI Version: 2.1.2
Total Experiments: 29

Raw experiment outputs from testing various claude -p flag combinations.


---
name: plan-exit-review
version: 2.0.0
description: |
  Review a plan thoroughly before implementation. Challenges scope, reviews
  architecture/code quality/tests/performance, and walks through issues
  interactively with opinionated recommendations.
allowed-tools:
- Read
- Grep
@krish240574
krish240574 / SKILL.md
Created February 23, 2026 01:43 — forked from LuD1161/SKILL.md
codex-review - claude skill file
name: codex-review
description: Send the current plan to OpenAI Codex CLI for iterative review. Claude and Codex go back-and-forth until Codex approves the plan.
user_invocable: true

Codex Plan Review (Iterative)

Send the current implementation plan to OpenAI Codex for review. Claude revises the plan based on Codex's feedback and re-submits until Codex approves. Max 5 rounds.
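The submit/revise/re-submit loop with a round cap can be sketched as follows. The `submit_to_codex` and `revise_with_claude` functions are hypothetical stand-ins for the actual CLI calls, not the skill's implementation.

```python
# Iterative plan review: revise and re-submit until approval or the cap.
# submit_to_codex / revise_with_claude are hypothetical stand-ins.

MAX_ROUNDS = 5

def review_loop(plan, submit_to_codex, revise_with_claude):
    """Return (final_plan, rounds_used); stop at approval or MAX_ROUNDS."""
    for round_no in range(1, MAX_ROUNDS + 1):
        verdict = submit_to_codex(plan)
        if verdict["approved"]:
            return plan, round_no
        plan = revise_with_claude(plan, verdict["feedback"])
    return plan, MAX_ROUNDS  # cap reached without approval

# Toy stand-ins: the reviewer approves once the plan mentions tests.
fake_codex = lambda p: {"approved": "tests" in p, "feedback": "add tests"}
fake_claude = lambda p, fb: p + " (with tests)"

final, rounds = review_loop("migrate the DB", fake_codex, fake_claude)
print(final, rounds)
```

The round cap matters: two LLMs reviewing each other can ping-pong indefinitely, so the loop must terminate even without approval.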

@krish240574
krish240574 / openclaw-50-day-prompts.md
Created February 23, 2026 01:43 — forked from velvet-shark/openclaw-50-day-prompts.md
OpenClaw after 50 days: all prompts for 20 real workflows (companion to YouTube video)

OpenClaw after 50 days: all prompts

Companion prompts for the video: OpenClaw after 50 days: 20 real workflows (honest review)

These are the actual prompts I use for each use case shown in the video. Copy-paste them into your agent and adjust for your setup. Most will work as-is or the agent will ask you clarifying questions.

Each prompt describes the intent clearly enough that the agent can figure out the implementation details. You don't need to hand-hold it through every step.

My setup: OpenClaw running on a VPS, Discord as primary interface (separate channels per workflow), Obsidian for notes (markdown-first), Coolify for self-hosted services.

@krish240574
krish240574 / nano_harness.py
Created February 21, 2026 17:01 — forked from burtenshaw/nano_harness.py
Nano Harness: Agent Harness in 223 lines
import json
import os
import re
import shlex
import subprocess
import sys
import time
import traceback
import urllib.error
import urllib.request