Goals: add links that are clear, accurate explanations of how stuff works. No hype and, where possible, no vendor content. Practical first-hand accounts of running models in prod are eagerly sought.
```python
# A one-file implementation of the reversible ViT architecture
from types import SimpleNamespace

import yahp as hp
import torch
from torch import nn
from torch.autograd import Function as Function
from einops import rearrange
from einops.layers.torch import Rearrange
```
```json
{
  "あっち": {
    "index": 3,
    "word": "あっち",
    "kana": "",
    "romaji": "",
    "type": "Pronoun",
    "jlpt": 5,
    "meaning": "over there",
    "card": "word",
```
```cpp
#include "Focus.h"
#include "Arduino.h"
#include <algorithm>

Task::Task() {
}

View::View(Rect rect) {
    this->frame = rect;
    this->window.reset();
```
```python
import torch
from torch import nn
from einops import rearrange

r = 4
channels = 8
x = torch.randn((1, channels, 64, 64))
_, _, h, w = x.shape
# we want the equivalent of a (1, 8, 32, 32) tensor,
# i.e. shape (1, 1024, 8) once flattened into sequence form
x = rearrange(x, "b c h w -> b (h w) c")           # shape = [1, 4096, 8]
x = rearrange(x, "b (hw r) c -> b hw (c r)", r=r)  # shape = [1, 1024, 32]
reducer = nn.Linear(channels * r, channels)
x = reducer(x)                                     # shape = [1, 1024, 8]
half_r = r // 2
```
```python
import argparse
from pathlib import Path

import timm
import timm.data
import timm.loss
import timm.optim
import timm.utils
import torch
import torchmetrics
```
```shell
# I _hope_ this is everything, but there may be some other libs I forgot or you don't have,
# so just use pip to install them if vqgan_clip.py complains about missing modules.
# Make some folder, cd into it, and run this.
git clone https://github.com/openai/CLIP
git clone https://github.com/CompVis/taming-transformers
pip install ftfy regex tqdm omegaconf pytorch-lightning
pip install kornia
pip install stegano
apt install exempi
pip install python-xmp-toolkit
```
Feb 26, 2021
I have decided to release a small set of single-edition NFTs on OpenSea.
I plan to place 100% of the direct proceeds from the first sale toward the Bounty for More Eco-Friendly NFTs being developed by Artnome and GitCoin, and 25% of the direct proceeds from each of the subsequent 9 artworks in this series toward the same fund.
You can bid on/purchase/view the first minted artwork here, and the full series here. At the time of writing, I have only placed one artwork on auction, but will place the other 9 in the coming days/weeks.
```glsl
// MIT License
// Copyright (c) 2021 Felix Westin
// Source: https://github.com/Fewes/MinimalAtmosphere
// Ported to GLSL by Marcin Ignac

#ifndef ATMOSPHERE_INCLUDED
#define ATMOSPHERE_INCLUDED

// -------------------------------------
```
(Internal Training Material)
Usually the first step in performance optimization is profiling, e.g. to identify the performance hotspots of a workload. This gist covers the basics of performance profiling on PyTorch; you will get:
This tutorial takes one of my recent projects - pssp-transformer - as an example to guide you through the path of PyTorch CPU performance optimization. The focus will be on Part 1 & Part 2.
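As a concrete starting point, here is a minimal CPU profiling run using `torch.profiler`. The model and shapes below are illustrative stand-ins, not taken from pssp-transformer:

```python
import torch
from torch.profiler import profile, ProfilerActivity

# A tiny stand-in model (illustrative only, not pssp-transformer).
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 128),
)
x = torch.randn(32, 128)

with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    for _ in range(10):
        model(x)

# Print the ops that dominate CPU time, i.e. the hotspots.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```

Sorting by `cpu_time_total` surfaces the most expensive operators first, which is usually the shortest route to finding what's worth optimizing.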