I hereby claim:
- I am bilal2vec on github.
- I am bilal2vec (https://keybase.io/bilal2vec) on keybase.
- I have a public key ASB4YtLvI0oyvIfyPO-e0v_rrRdJCJYnirvbnMw_YIU9PAo
To claim this, I am signing this object:
import torch

bmnks = [(2, 8192, 6144, 4096), (2, 8192, 4096, 4096), (2, 8192, 14336, 4096), (2, 8192, 4096, 14336)]

for (b, m, n, k) in bmnks:
    print(b, m, n, k)
    for dtype in [torch.float32, torch.bfloat16]:
        print(f'Dtype: {dtype}')
        x = torch.randn(b*m*k, dtype=dtype, device='cuda').view((b, m, k)).contiguous()
        w = torch.randn(k*n, dtype=dtype, device='cuda').view((k, n)).contiguous()
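The fragment above only builds the operands. One way to complete the benchmark loop is sketched below; this is a hedged stand-in using numpy on CPU with much smaller shapes so it runs anywhere, whereas the original presumably times `torch.bmm` on CUDA (with `torch.cuda.synchronize()` around the timed region):

```python
import time
import numpy as np

# Scaled-down stand-ins for the (b, m, n, k) problems above, so this
# sketch finishes quickly on CPU; the original shapes are far larger.
bmnks = [(2, 64, 48, 32), (2, 64, 32, 32)]

for (b, m, n, k) in bmnks:
    x = np.random.randn(b, m, k).astype(np.float32)
    w = np.random.randn(k, n).astype(np.float32)

    start = time.perf_counter()
    out = np.matmul(x, w)      # batched matmul: (b, m, k) @ (k, n) -> (b, m, n)
    elapsed = time.perf_counter() - start

    flops = 2 * b * m * n * k  # multiply-adds in a batched GEMM
    print(f'{(b, m, n, k)}: {elapsed * 1e3:.3f} ms, {flops / elapsed / 1e9:.2f} GFLOP/s')
```

The 2*b*m*n*k FLOP count is the standard convention for a batched matrix multiply (one multiply and one add per inner-product term).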
import time

import numpy as np
import torch

bmnks = [(2, 8192, 6144, 4096), (2, 8192, 4096, 4096), (2, 8192, 14336, 4096), (2, 8192, 4096, 14336)]

problems = {True: {}, False: {}}
reference = {True: {}, False: {}}
// *******************
// This setup will allow you to synchronize personal events from one calendar (the "secondary calendar")
// to another calendar, e.g. work (the "primary calendar"), while obfuscating the details. Your coworkers
// can then see when you're busy without seeing the personal details.
//
// Follow these steps:
// 1. Go to https://script.google.com/home and click [+ New project]
// 2. Make sure the two calendars you want to sync can be edited by the Google account you're currently
//    signed in with (or switch accounts)
bilal@tf-lm-finetuning:~/lm-finetuning$ python3 train_tfrecords.py --tpu algpt2pod --seq_len 1024 --batch_size 256 --train_len 1000000 --warmup_steps 10000 --model_type gpt2 --config_path gpt2 --epochs 10 --train_path gs://algpt2/train/0.tfrecord --val_path gs://algpt2/train/1.tfrecord
wandb: Tracking run with wandb version 0.8.35
wandb: Run data is saved locally in wandb/run-20200512_151802-2j4oycre
wandb: Syncing run noble-sunset-1222
wandb: ⭐️ View project at https://app.wandb.ai/bkkaggle/lm-finetuning
wandb: 🚀 View run at https://app.wandb.ai/bkkaggle/lm-finetuning/runs/2j4oycre
wandb: Run `wandb off` to turn off syncing.
INFO:absl:Entering into master device scope: /job:worker/replica:0/task:0/device:CPU:0
2020-05-12 15:18:03.832972: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
{
	"jsxSingleQuote": true,
	"semi": false,
	"singleQuote": true,
	"tabWidth": 4,
	"printWidth": 100,
	"useTabs": true
}
# Print out the number of parameters in a PyTorch model
print(n_params(model))
# 150909673

# Save a model for a particular cross-validation fold to disk
save_model(model, fold=0)
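The helper itself is not shown; a minimal sketch of what `n_params` might look like is below. It is duck-typed on a `.parameters()` iterator so the sketch runs without PyTorch installed; with a real `nn.Module` the same `sum(p.numel() for p in model.parameters())` applies. The `FakeParam`/`FakeModel` classes are hypothetical stand-ins used only to exercise the sketch:

```python
def n_params(model) -> int:
    # Total element count across all parameter tensors; works with any
    # object exposing .parameters() that yields items with .numel()
    return sum(p.numel() for p in model.parameters())

# Hypothetical stand-ins for a model, used only to exercise the sketch
class FakeParam:
    def __init__(self, n):
        self._n = n
    def numel(self):
        return self._n

class FakeModel:
    def parameters(self):
        return [FakeParam(10), FakeParam(32)]

print(n_params(FakeModel()))  # 42
```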
# Send a notification to your phone directly with IFTTT (https://ifttt.com/) notifying
# you when a training run ends or at the end of an epoch.
notify({'value1': 'Notification title', 'value2': 'Notification body'}, key=[IFTTT_KEY])

# Automatically set random seeds for Python, numpy, and PyTorch to make sure your results can be reproduced
seed_environment(42)

# Print how much GPU memory is currently allocated
gpu_usage(device, digits=4)
# GPU Usage: 6.5 GB
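The seeding helper above is not shown either; a hedged sketch of what such a helper typically does follows. The torch calls are guarded with a try/except so the sketch also runs where PyTorch is not installed:

```python
import os
import random

import numpy as np

def seed_environment(seed: int) -> None:
    # Seed every RNG the training code might touch
    random.seed(seed)
    np.random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    try:
        import torch  # only seeded if PyTorch is installed
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
    except ImportError:
        pass

# Re-seeding reproduces the same random draws
seed_environment(42)
a = np.random.rand(3)
seed_environment(42)
b = np.random.rand(3)
print(np.array_equal(a, b))  # True
```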
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_ch, out_ch, r):
        super(Encoder, self).__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        # SqueezeAndExcitation is assumed to be defined elsewhere in the gist
        self.se = SqueezeAndExcitation(out_ch, r)

    def forward(self, x):
        x = F.relu(self.conv(x), inplace=True)
        x = self.se(x)
        return x
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = torch.optim.CyclicMomentum(optimizer)
data_loader = torch.utils.data.DataLoader(...)

for epoch in range(10):
    for batch in data_loader:
        scheduler.batch_step()
        train_batch(...)
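Note that `torch.optim.CyclicMomentum` is not part of a released PyTorch; released PyTorch exposes cyclical momentum through `torch.optim.lr_scheduler.CyclicLR` (its `base_momentum`/`max_momentum` arguments). As a hedged illustration of the policy such a scheduler would apply per batch, here is the triangular schedule in plain Python (names and defaults are my own assumptions, not a PyTorch API):

```python
def triangular_momentum(step, base_momentum=0.85, max_momentum=0.95, step_size=2000):
    # Triangular wave: momentum falls from max_momentum to base_momentum
    # over step_size batches, then rises back over the next step_size.
    # Momentum is conventionally cycled inversely to the learning rate.
    cycle_pos = step % (2 * step_size)
    frac = cycle_pos / step_size if cycle_pos < step_size else 2 - cycle_pos / step_size
    return max_momentum - frac * (max_momentum - base_momentum)

print(triangular_momentum(0))  # 0.95
```

A per-batch scheduler would compute this value and write it into `optimizer.param_groups[i]['momentum']` on every `batch_step()` call.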