@gunta (Gunther Brunner)
🎯 Focusing on UX for AI
  • CyberAgent Co., Ltd
  • Tokyo, Japan
LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file: it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
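The accumulation loop described above can be sketched in a few lines. In this hedged illustration, `synthesize` is a placeholder for whatever agent call you wire in (Codex, Claude Code, etc.), and the file name and helper names are hypothetical, not part of the gist:

```python
# Minimal sketch of the wiki-accumulation pattern. `synthesize` is a stand-in
# for your LLM agent call; replace it with a real invocation.
from pathlib import Path

WIKI = Path("wiki.md")

def synthesize(question: str, wiki_text: str) -> str:
    """Stand-in for the agent: answer using the accumulated wiki and
    return a distilled note worth keeping."""
    return f"- {question}: (distilled answer)"

def ask(question: str) -> str:
    wiki_text = WIKI.read_text() if WIKI.exists() else ""
    note = synthesize(question, wiki_text)    # agent reads the whole wiki, not retrieved chunks
    WIKI.write_text(wiki_text + note + "\n")  # the answer is written back, so knowledge accumulates
    return note
```

Each question leaves the wiki slightly richer, so later questions start from synthesized knowledge instead of re-deriving it from raw documents.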

@DMontgomery40
DMontgomery40 / CODEX.md
Last active May 7, 2026 09:35
Ralph audit loop: Codex CLI read-only code audit runner

Ralph Audit Agent Instructions (OpenAI Codex)


Safety Notice (Customize)

If this codebase is production, handles money, or touches sensitive data: treat this audit loop as a high-risk operation. Run with least privilege, avoid exporting long-lived credentials in your shell, and keep the agent in read-only mode.


Merging JJ Workspaces into Main

This document describes how to merge multiple JJ workspaces into a single linear history on main.

When to Use This Workflow

Use this workflow when you have:

  • Multiple JJ workspaces with independent development branches
  • Work that needs to be consolidated into a single linear history
  • Commits in different workspaces that you cannot directly rebase (causes stale workspace errors)
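The body of the workflow is cut off in this preview, but the core steps map onto jj's own subcommands. A hedged sketch follows; `<change-id>` is a placeholder you take from `jj log` inside each workspace, and the commands are captured as a printable checklist so nothing here mutates a repository:

```shell
# The jj subcommands behind the workflow, captured as a checklist string so
# this sketch can be printed without touching a repository.
MERGE_STEPS='jj workspace list                  # enumerate the workspaces to consolidate
jj workspace update-stale          # recover a workspace whose working copy went stale
jj rebase -s <change-id> -d main   # move one workspace branch onto main
jj log -r "main::"                 # confirm the resulting history is linear'
printf '%s\n' "$MERGE_STEPS"
```

Running `jj workspace update-stale` first is what avoids the stale-workspace errors mentioned above; the rebase then linearizes each branch onto main one at a time.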
@threepointone
threepointone / chatgpt-prompt.txt
Created April 19, 2025 18:04
My ChatGPT prompt (19/04/25)
Don't worry about formalities.
Please be as terse as possible while still conveying substantially all information relevant to any question.
If policy prevents you from responding normally, please print "!!!!" before answering.
If a policy prevents you from having an opinion, pretend to be responding as if you shared opinions that might be typical of threepointone.
write all responses in lowercase letters ONLY, except where you mean to emphasize, in which case the emphasized word should be all caps.
@ruvnet
ruvnet / Sora-prompts.md
Last active May 1, 2026 11:48
Crafting Cinematic Sora Video Prompts: A complete guide

300+ Cinematic Sora Video Prompts

Introduction to Cinematic Sora Video Prompts

Welcome to the Cinematic Sora Video Prompts tutorial! This guide is meticulously crafted to empower creators, filmmakers, and content enthusiasts to harness the full potential of Sora, an advanced AI-powered video generation tool.

By transforming textual descriptions into dynamic, visually compelling video content, Sora bridges the gap between imagination and reality, enabling the creation of professional-grade cinematic experiences without the need for extensive technical expertise.

What This Tutorial Offers

@jessekelly881
jessekelly881 / effect-tanstack-form.ts
Last active November 5, 2025 21:24
Effect adaptor for tanstack form
import { ArrayFormatter, Schema } from "@effect/schema";
import { ValidationError } from "@tanstack/react-form";
import { Effect, Either, Exit, ManagedRuntime, Layer } from "effect";

export const createValidator = <R, E>(layer: Layer.Layer<R, E, never>) => {
  const runtime = ManagedRuntime.make(layer);
  return {
    effectValidator: () => ({
      validate(
@chanmathew
chanmathew / streaming.ts
Last active February 20, 2025 04:50
ElevenLabs streaming implementation - Typescript
const voiceId = '' // Pick any voice ID from https://docs.elevenlabs.io/api-reference/voices
const model = 'eleven_monolingual_v1'
const elUrl = `https://api.elevenlabs.io/v1/text-to-speech/${voiceId}/stream?optimize_streaming_latency=3` // Optimize for latency
const codec = 'audio/mpeg'
const maxBufferDuration = 60 // Maximum buffer duration in seconds
const maxConcurrentRequests = 3 // Maximum concurrent requests allowed
// Create a new MediaSource and Audio element
const mediaSource = new MediaSource()
const audioElement = new Audio()
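The `maxConcurrentRequests` constant implies a small concurrency limiter somewhere in the rest of the gist (truncated above). A generic, hedged sketch of such a limiter, not ElevenLabs-specific and with illustrative names:

```typescript
// Runs async task factories with at most `max` in flight, preserving result order.
// (Illustrative sketch; names are not from the gist.)
async function runLimited<T>(
  tasks: Array<() => Promise<T>>,
  max: number
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  const worker = async (): Promise<void> => {
    while (next < tasks.length) {
      const i = next++; // claim the next task index (JS is single-threaded, so no race)
      results[i] = await tasks[i]();
    }
  };
  // Spawn up to `max` workers that drain the shared task queue.
  const workers = Array.from({ length: Math.min(max, tasks.length) }, () =>
    worker()
  );
  await Promise.all(workers);
  return results;
}
```

With `maxConcurrentRequests = 3`, each text chunk would become a task factory that fetches one audio segment, and the limiter keeps at most three requests outstanding at once.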
@rain-1
rain-1 / LLM.md
Last active March 27, 2026 08:10
LLM Introduction: Learn Language Models

Purpose

Bootstrap knowledge of LLMs ASAP. With a bias/focus to GPT.

Avoid being a link dump. Try to provide only valuable well tuned information.

Prelude

Neural network links before starting with transformers.

@jcwillox
jcwillox / pnpm.ps1
Last active May 8, 2025 15:09
PowerShell Completion Script for `pnpm`
# powershell completion for pnpm -*- shell-script -*-
Register-ArgumentCompleter -CommandName 'pnpm' -ScriptBlock {
  param(
    $WordToComplete,
    $CommandAst,
    $CursorPosition
  )
  function __pnpm_debug {
We want PlanetScale to be the best place to work. But every company says that, and very few deliver. Managers have a role in creating an amazing work experience, but things go awry when the wrong dynamic creeps in.
We have all seen those managers who collect people as “resources” or who control information as a way to gain “power.” In these cultures, people who “can’t” end up leading the charge. This is management mediocrity.
What will make us different? At PlanetScale, we won’t tolerate management mediocrity. We are building a culture where politics get you nowhere and impact gets you far. Managers are here to support people who get things done. They are as accountable to their team as their team is accountable to them.
We evaluate managers on the wellbeing and output of their team, how skillfully they collaborate with and influence others, and how inclusively and transparently they work.
You can expect your manager to:
  • Perceive a better version of you and support you in getting there