shimomura kei (shimomurakei)

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file, designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, or OpenCode / Pi). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM rediscovers knowledge from scratch on every question; there is no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
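The retrieve-then-generate loop described above can be sketched in a few lines. This is a toy illustration, not a real RAG stack: plain bag-of-words cosine similarity stands in for an embedding model and vector store, and the "generate" step is omitted.

```python
# Toy retrieval: score each chunk against the query by bag-of-words
# cosine similarity, return the top-k chunks to feed the LLM.
import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    qv = vectorize(query)
    return sorted(chunks, key=lambda c: cosine(qv, vectorize(c)), reverse=True)[:k]

chunks = [
    "Oracle's free tier allows up to four compute instances.",
    "X-macros let C code generate enums and tables from one list.",
    "RAG retrieves document chunks at query time for the LLM.",
]
top = retrieve("how many instances does the free tier allow", chunks)
```

The point of the sketch is the shape of the loop: every question re-runs `retrieve` from scratch, which is exactly the "no accumulation" problem the wiki pattern is meant to fix.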

how to leverage Oracle's tempting offers

free tier limits

The free tier lets you create up to 4 instances:

  • 2x x86 instances (2 core / 1 GB each)
  • 2x Ampere instances (4 cores / 24 GB total, spread between them)
  • 200 GB total boot volume space across all instances (minimum of 50 GB per instance)
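The limits above interact (per-instance boot minimum vs. the shared 200 GB cap, shared Ampere cores/RAM), so it is easy to plan a layout that silently breaks one of them. A small sanity check — a hypothetical helper, not an Oracle tool, using the numbers exactly as listed above — makes the constraints explicit:

```python
# Check a planned set of instances against the free-tier limits above.
# Each instance is a dict: arch ('x86' or 'ampere'), ocpus, ram_gb, boot_gb.
def fits_free_tier(instances):
    x86 = [i for i in instances if i["arch"] == "x86"]
    arm = [i for i in instances if i["arch"] == "ampere"]
    return (
        len(instances) <= 4                              # 4 instances total
        and len(x86) <= 2                                # 2 x86 instances
        and sum(i["ocpus"] for i in arm) <= 4            # 4 Ampere cores shared
        and sum(i["ram_gb"] for i in arm) <= 24          # 24 GB Ampere RAM shared
        and sum(i["boot_gb"] for i in instances) <= 200  # 200 GB total boot volume
        and all(i["boot_gb"] >= 50 for i in instances)   # 50 GB minimum each
    )

plan = [
    {"arch": "ampere", "ocpus": 4, "ram_gb": 24, "boot_gb": 100},
    {"arch": "x86", "ocpus": 2, "ram_gb": 1, "boot_gb": 50},
]
```

For example, three instances with 100 GB boot volumes each would pass the per-instance minimum but blow the 200 GB total cap.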

create your account

shimomurakei / macro_explanation.c
Created June 13, 2025 21:26 — forked from jdah/macro_explanation.c
explaining some C macro magic
// so a cool trick with macros in C (and C++) is that since macros inside of
// macros are still evaluated by the preprocessor, you can use macro names as
// parameters to other macros (and even construct macro names out of
// parameters!) - so using this trick if we have some macro like
// this:
#include <stddef.h>
#define MY_TYPES_ITER(_F, ...) \
_F(FOO, foo, 0, __VA_ARGS__) \
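The gist preview cuts off here; the `MY_TYPES_ITER(_F, ...)` pattern is the classic C "X-macro" trick, where one central table of rows is expanded several different ways by passing different macros as `_F`. As a rough analogue in another language (the real gist is C), the same table-driven idea looks like this, with the table rows matching the `(FOO, foo, 0)` entry shown above:

```python
# One central table of (NAME, name, id) rows; every "expansion" iterates
# it, mirroring how MY_TYPES_ITER applies a macro _F to each row in C.
MY_TYPES = [
    ("FOO", "foo", 0),
    ("BAR", "bar", 1),  # hypothetical second row for illustration
]

# Expansion 1: enum-like constants (in C, _F would emit enum members).
enum = {upper: value for upper, _, value in MY_TYPES}

# Expansion 2: value -> name lookup (in C, _F would emit a string table).
names = {value: lower for _, lower, value in MY_TYPES}
```

The payoff in both languages is the same: adding one row to the table updates every expansion at once, so the enum and the lookup table can never drift out of sync.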
shimomurakei / network_demo.c
Created June 13, 2025 21:26 — forked from jdah/network_demo.c
the world's most basic client/server
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netdb.h>
#include <unistd.h>
static void server() {
// create socket
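The C preview is truncated right after `// create socket`; presumably the gist continues with the usual `socket()` / `bind()` / `listen()` / `accept()` sequence (an assumption, since the rest isn't captured here). The same "world's most basic client/server" shape, sketched in Python so it is runnable end to end:

```python
# Minimal TCP echo pair on localhost: the server accepts one connection
# and echoes one message back; the client sends "hello" and reads the reply.
import socket
import threading

def server(sock):
    conn, _ = sock.accept()     # block until a client connects
    with conn:
        data = conn.recv(1024)  # read one message
        conn.sendall(data)      # echo it back

def demo():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    t = threading.Thread(target=server, args=(srv,))
    t.start()
    with socket.create_connection(("127.0.0.1", port)) as cli:
        cli.sendall(b"hello")
        reply = cli.recv(1024)
    t.join()
    srv.close()
    return reply
```

The C version follows the same call sequence one syscall at a time; Python's `socket` module is a thin wrapper over those same calls.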
shimomurakei / .vimrc
Created June 13, 2025 21:23 — forked from jdah/.vimrc
jdh's NeoVim .vimrc
call plug#begin()
Plug 'drewtempelmeyer/palenight.vim'
Plug 'vim-airline/vim-airline'
Plug 'wlangstroth/vim-racket'
Plug 'sheerun/vim-polyglot'
Plug 'rust-lang/rust.vim'
Plug 'preservim/tagbar'
Plug 'universal-ctags/ctags'
Plug 'luochen1990/rainbow'
Plug 'vim-syntastic/syntastic'
shimomurakei / Local-Remote-Repository.txt
Created April 17, 2024 17:17 — forked from raylech1986it/Local-Remote-Repository.txt
Create a "local, remote repository" to study Github and distributed workflows
#
# Code to my YouTube video: https://youtu.be/72o8ByBKhwA
#
mkdir ~/remoteDemo/eren/ ~/remoteDemo/zeke/
sudo mkdir /Github
sudo chown -R ray:ray /Github
cd /Github
git init homeLab.git --bare
# this tutorial assumes conda and git are both installed on your computer
conda create -n tg python=3.10.9
conda activate tg
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
git clone https://github.com/oobabooga/text-generation-webui.git
cd text-generation-webui
pip install -r requirements.txt
# GPU only:
1. # create new .py file with code found below
2. # install ollama
3. # install model you want: "ollama run mistral"
4. conda create -n autogen python=3.11
5. conda activate autogen
6. which python
7. python -m pip install pyautogen
8. ollama run mistral
9. ollama run codellama
10. # open new terminal
git clone https://github.com/haotian-liu/LLaVA.git
cd LLaVA
pip install --upgrade pip
pip install torch
pip install -e .
# for runpod: edit pod to have port 3000 and 100GB of temporary storage
python3 -m llava.serve.controller --host 0.0.0.0 --port 10000
# open new terminal, keep previous one open
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b
# open new terminal, keep previous one open
# must have conda installed
git clone https://github.com/joonspk-research/generative_agents.git
cd generative_agents
# open visual studio code, open gen agents folder
# within vscode, go to reverie/backend_server
# create new file utils.py
# copy/paste contents from github (below)
###
# Copy and paste your OpenAI API Key