Yoshio Sugiyama IMOKURI


LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file: it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
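
A rough sketch of that loop (toy helper functions only, not any particular RAG library or LLM API) makes the point concrete: every question re-chunks, re-retrieves, and re-synthesizes, and nothing is ever written back.

# Toy sketch of the plain RAG loop described above; chunk(), score(), and
# call_llm() are stand-ins, not a real vector store or model API.

def chunk(documents, size=500):
    """Split each document into fixed-size character chunks."""
    return [doc[i:i + size] for doc in documents for i in range(0, len(doc), size)]

def score(chunk_text, question):
    """Toy relevance score: how many question words appear in the chunk."""
    return len(set(question.lower().split()) & set(chunk_text.lower().split()))

def call_llm(prompt):
    """Stand-in for whatever model you actually call."""
    return f"[answer synthesized from a prompt of {len(prompt)} characters]"

def answer(question, documents, top_k=5):
    chunks = chunk(documents)  # rediscovered from scratch on every question
    top = sorted(chunks, key=lambda c: score(c, question), reverse=True)[:top_k]
    prompt = "Context:\n" + "\n---\n".join(top) + f"\n\nQuestion: {question}"
    return call_llm(prompt)    # the synthesis is never saved anywhere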

@kyo-takano
kyo-takano / making-the-most-of-local-llms.ipynb
Last active March 31, 2026 00:30
This is how you use local LLMs 💢
@veekaybee
veekaybee / normcore-llm.md
Last active April 29, 2026 09:24
Normcore LLM Reads

Anti-hype LLM reading list

Goals: Add links that are reasonable and good explanations of how stuff works. No hype and no vendor content if possible. Practical first-hand accounts of models in prod eagerly sought.

Foundational Concepts


Pre-Transformer Models

@SunMarc
SunMarc / finetune_llama_gptq.py
Last active October 12, 2025 03:18
Finetune GPTQ model with peft and trl
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
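
The preview above cuts off inside the license header. As a rough, hypothetical sketch (not the gist's actual code), attaching LoRA adapters to a GPTQ-quantized checkpoint with peft typically looks something like this; the model name is an assumed example, and loading it requires the GPTQ integration in transformers (optimum / auto-gptq):

# Hypothetical sketch, not the gist's script: wrap a GPTQ-quantized model
# with LoRA adapters via peft so only the adapter weights are trained.
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Llama-2-7B-GPTQ"  # assumed example checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

model = prepare_model_for_kbit_training(model)  # cast norms, enable grad checkpointing
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # the quantized base stays frozen

Training would then typically go through trl's SFTTrainer, along the lines of the sketch under the next gist.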
@younesbelkada
younesbelkada / finetune_llama_v2.py
Last active March 31, 2026 08:26
Fine tune Llama v2 models on Guanaco Dataset
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
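
Again the preview stops at the license header. A minimal, hypothetical sketch of the kind of run the title describes (4-bit QLoRA fine-tuning of a Llama 2 model on the Guanaco dataset with trl's SFTTrainer), assuming a 2023-era trl API and access to the gated meta-llama checkpoint:

# Hypothetical sketch, not the gist's script: QLoRA fine-tuning on Guanaco.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

model_id = "meta-llama/Llama-2-7b-hf"  # gated model; requires access approval
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb,
                                             device_map="auto")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1,
                           target_modules=["q_proj", "v_proj"],
                           task_type="CAUSAL_LM"),
    dataset_text_field="text",   # Guanaco rows carry the full prompt in "text"
    max_seq_length=512,
    tokenizer=tokenizer,
    args=TrainingArguments(output_dir="./results", per_device_train_batch_size=4,
                           gradient_accumulation_steps=4, learning_rate=2e-4,
                           max_steps=500),
)
trainer.train()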
@YusukeHosonuma
YusukeHosonuma / jojo.md
Created May 8, 2022 05:35
A collection of JOJO quotes you can use in development

Do you think this ○○ writes a tech blog for money or to be fawned over?!!

For when someone says your tech blog is just for affiliate income or page views.

I see, a truly flawless development process... if you overlook the fact that it's impossible~

For when you're given an explanation of a development process that sounds perfect at first but clearly won't work no matter how you think about it.

Incomprehensible, incomprehensible... ah, comprehensible.

For when you finally understand something.

You finish one fix, commit it properly, and only then start on the next fix, right? Anyone would do that. So would I.

@gfx
gfx / 001-wifi-routers-v2.md
Last active September 27, 2024 16:23
Wi-Fi router recommendations, by my wife

Overview

  • These are Wi-Fi router recommendations I heard from my wife, who does marketing for network equipment (she says she "had a colleague in charge of each manufacturer pick their own 'best' model").
  • I changed the title because she told me, "There are lots of routers besides Wi-Fi routers! This list is about Wi-Fi routers."
  • "AX-something" in a product name is the standard (AX = Wi-Fi 6) plus a reference speed figure.
  • Original context: I tweeted some household small talk ( https://twitter.com/__gfx__/status/1464084908091920387 ), an acquaintance responded, so I asked my wife "an acquaintance wants router recommendations, tell me something" and this is what she told me.
  • v1.md below is the original (2021/11/26); this v2.md is the revision (2021/11/27) made after it hit the Hatena hot entries.

Addendum (2022/05/24):

  • TP-Link has had a few slip-ups here and there.
@numToStr
numToStr / au.lua
Last active August 20, 2023 05:15
Neovim autocmd in lua
--
-- Move this file to your Neovim lua runtime path, i.e. ~/.config/nvim/lua/au.lua
--
local cmd = vim.api.nvim_command

local function autocmd(this, event, spec)
    local is_table = type(spec) == 'table'
    local pattern = is_table and spec[1] or '*'
    local action = is_table and spec[2] or spec
    if type(action) == 'function' then
@jorritfolmer
jorritfolmer / pelican-tailwind-css.md
Last active August 4, 2025 07:46
Pelican with Tailwind CSS

How to use Tailwind CSS with Pelican

These steps show how to install Tailwind CSS in a Pelican project, then purge and minify it so you reference only a few kB of CSS instead of a 3+ MB file.

  1. virtualenv venv
  2. . venv/bin/activate
  3. pip install nodeenv
  4. nodeenv env
  5. . env/bin/activate
  6. npm install postcss postcss-cli autoprefixer tailwindcss purgecss cssnano
@IMOKURI
IMOKURI / Competition History.md
Created March 4, 2021 23:56
My Machine Learning Competition History