Ais otakbeku
🚀 Journey to the top
@veekaybee
veekaybee / normcore-llm.md
Last active March 17, 2026 18:16
Normcore LLM Reads

Anti-hype LLM reading list

Goals: Add links that are reasonable and good explanations of how stuff works. No hype and no vendor content if possible. Practical first-hand accounts of models in prod eagerly sought.

Foundational Concepts

Pre-Transformer Models

@vedantroy
vedantroy / rev_vit.py
Created September 12, 2022 10:24
Reversible VIT
from types import SimpleNamespace
import yahp as hp
import torch
from torch import nn
from torch.autograd import Function as Function
from einops import rearrange
from einops.layers.torch import Rearrange
# A one-file implementation of the reversible VIT architecture
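For context on what "reversible" buys you here: a reversible block can recompute its inputs from its outputs, so intermediate activations need not be stored for backprop. A minimal sketch of that coupling pattern, with illustrative names (not the gist's actual classes):

import torch
from torch import nn

# Minimal reversible coupling: forward y1 = x1 + F(x2), y2 = x2 + G(y1).
# In a reversible ViT, F and G would be the attention and MLP sub-layers.
class ReversibleBlock(nn.Module):
    def __init__(self, f: nn.Module, g: nn.Module):
        super().__init__()
        self.f = f
        self.g = g

    def forward(self, x1, x2):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):
        # Inputs are recovered exactly from outputs, so activations
        # can be recomputed instead of cached during the forward pass.
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2

block = ReversibleBlock(nn.Linear(8, 8), nn.Linear(8, 8))
x1, x2 = torch.randn(1, 8), torch.randn(1, 8)
y1, y2 = block(x1, x2)
rx1, rx2 = block.inverse(y1, y2)
print(torch.allclose(x1, rx1, atol=1e-6), torch.allclose(x2, rx2, atol=1e-6))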
@avinayak
avinayak / japanese_vocab_study_order.json
Created September 4, 2022 19:40
japanese_vocab_study_order.json
This file has been truncated.
{
  "あっち": {
    "index": 3,
    "word": "あっち",
    "kana": "",
    "romaji": "",
    "type": "Pronoun",
    "jlpt": 5,
    "meaning": "over there",
    "card": "word",
@joeycastillo
joeycastillo / Focus.cpp
Created June 20, 2022 22:37
A simple Focus-based app for tracking progress on projects
#include "Focus.h"
#include "Arduino.h"
#include <algorithm>
Task::Task() {
}
View::View(Rect rect) {
this->frame = rect;
this->window.reset();
import torch
from torch import nn
from einops import rearrange

r = 4
channels = 8
x = torch.randn((1, channels, 64, 64))
_, _, h, w = x.shape
# we want a tensor of shape [1, 8, 32, 32]
x = rearrange(x, "b c h w -> b (h w) c")  # shape = [1, 4096, 8]
x = rearrange(x, "b (hw r) c -> b hw (c r)", r=r)  # shape = [1, 1024, 32]
reducer = nn.Linear(channels * r, channels)
x = reducer(x)  # shape = [1, 1024, 8]
half_r = r // 2
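The preview cuts off at half_r. To actually reach the [1, 8, 32, 32] tensor the comment aims for, one more rearrange restoring the spatial layout would do it; this is an assumption about where the snippet is headed, not the gist's actual continuation:

# Assumption: restore spatial layout; 1024 positions fold back into 32 x 32.
x = rearrange(x, "b (h w) c -> b c h w", h=h // 2, w=w // 2)  # shape = [1, 8, 32, 32]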
@Chris-hughes10
Chris-hughes10 / train.py
Created January 27, 2022 16:58
timm blog - Training script using timm and PyTorch-accelerated
import argparse
from pathlib import Path
import timm
import timm.data
import timm.loss
import timm.optim
import timm.utils
import torch
import torchmetrics
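The imports above hint at the moving parts; a minimal sketch of how the timm pieces fit together (illustrative model and hyperparameters, not the script's actual configuration; the full training loop is in the gist and the accompanying blog post):

import timm
import timm.optim
import torch

# Build a model and a matching optimizer via timm's factory functions.
model = timm.create_model("resnet50", pretrained=False, num_classes=10)
optimizer = timm.optim.create_optimizer_v2(model, opt="adamw", lr=1e-3)

x = torch.randn(2, 3, 224, 224)  # dummy batch
logits = model(x)
print(logits.shape)  # torch.Size([2, 10])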
@newcarrotgames
newcarrotgames / install.sh
Created June 25, 2021 14:24
Python code for running VQGAN+CLIP on your own machine. Shamelessly taken from this colab: https://colab.research.google.com/drive/1go6YwMFe5MX6XM9tv-cnQiSTU50N9EeT#scrollTo=mFo5vz0UYBrF. USE AT YOUR OWN RISK!
# I _hope_ this is everything but there may be some other libs I forgot or you don't have,
# so just use pip to install them if vqgan_clip.py complains about missing modules.
# Make some folder, cd into it, and run this.
git clone https://github.com/openai/CLIP
git clone https://github.com/CompVis/taming-transformers
pip install ftfy regex tqdm omegaconf pytorch-lightning
pip install kornia
pip install stegano
apt install exempi
pip install python-xmp-toolkit
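A quick way to check that the install took, using the public API of the cloned CLIP repo (assumes it is importable, e.g. after pip install ./CLIP; the prompt string is arbitrary):

import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

text = clip.tokenize(["a landscape painting"]).to(device)
with torch.no_grad():
    features = model.encode_text(text)
print(features.shape)  # torch.Size([1, 512]) for ViT-B/32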

NFT Statement

Feb 26, 2021

I have decided to release a small set of single-edition NFTs on OpenSea.

I plan to place 100% of direct proceeds from the first sale toward the Bounty for More Eco-Friendly NFTs being developed by Artnome and GitCoin, and 25% of direct proceeds of each of the subsequent 9 artworks in this series toward the same fund.

You can bid/purchase/view the first minted artwork here, and the full series here. At the time of writing, I have only placed 1 artwork on auction, but will place the other 9 in the coming days/weeks.

//MIT License
//Copyright (c) 2021 Felix Westin
//Source: https://github.com/Fewes/MinimalAtmosphere
//Ported to GLSL by Marcin Ignac
#ifndef ATMOSPHERE_INCLUDED
#define ATMOSPHERE_INCLUDED
// -------------------------------------
@mingfeima
mingfeima / pytorch_performance_profiling.md
Last active April 11, 2025 15:38
How to do performance profiling on PyTorch

(Internal Training Material)

Usually the first step in performance optimization is profiling, e.g. identifying the performance hotspots of a workload. This gist covers the basics of performance profiling on PyTorch; you will learn:

  • How to find the bottleneck operator?
  • How to trace the source file of a particular operator?
  • How to identify threading issues (oversubscription)?
  • How to tell whether a specific operator is running efficiently?

This tutorial takes one of my recent projects, pssp-transformer, as an example to guide you through the process of PyTorch CPU performance optimization. The focus will be on Part 1 & Part 2.
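For a concrete starting point on the first bullet, torch.profiler produces an operator-level table directly; a minimal sketch (the gist itself may rely on older autograd profiler APIs, and the model here is a stand-in, not pssp-transformer):

import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.TransformerEncoderLayer(d_model=256, nhead=8)
x = torch.randn(32, 64, 256)  # (seq, batch, d_model) stand-in input

with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    model(x)

# Sort by self CPU time to surface the hottest operators first.
print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=10))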