Tuan-Vu Trinh trinhtuanvubk

trinhtuanvubk / gist:917efc6de780a3fa777c38240adf810e
Created August 20, 2024 03:36 — forked from NN1985/gist:a0712821269259061177c6abb08e8e0a
ElevenLabs Text Input Streaming demo for LLMs
import openai
import elevenlabs

openai.api_key = "key_here"          # set real API keys before running
elevenlabs.set_api_key("key_here")

def write(prompt: str):
    # yield text deltas as the chat completion streams in
    for chunk in openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0301",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    ):
        text = chunk["choices"][0]["delta"].get("content", "")
        if text:
            yield text

# pipe the generator into ElevenLabs' input-streaming TTS ("Bella" is an example voice)
audio = elevenlabs.generate(text=write("Hello!"), voice="Bella", stream=True)
elevenlabs.stream(audio)
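Stripped of the API clients, the pattern the demo relies on is just piping one generator into a consumer: the LLM side yields text deltas, and the TTS side drains them as they arrive. A dependency-free sketch (`llm_stream` and `speak` are illustrative stand-ins, not part of either SDK):

```python
def llm_stream(prompt):
    # stand-in for the OpenAI stream: yield text deltas one at a time
    for word in f"echo: {prompt}".split():
        yield word + " "

def speak(text_stream):
    # stand-in for the ElevenLabs side: consume chunks incrementally
    spoken = []
    for chunk in text_stream:
        spoken.append(chunk)
    return "".join(spoken)

print(speak(llm_stream("hello world")))  # prints "echo: hello world " (trailing space)
```

Because `llm_stream` is lazy, the consumer can start working on the first chunk before the model has finished generating, which is the whole point of the streaming demo.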
keep colab alive
Open the devtools console and copy & paste this script to run it:
const ping = () => {
  const btn = document.querySelector("colab-connect-button");
  const inner_btn = btn.shadowRoot.querySelector("#connect");
  if (inner_btn) {
    inner_btn.click();
    console.log("Clicked on connect button");
  } else {
    console.log("Connect button not found");
  }
};
// re-run every minute so the runtime is not flagged as idle (60000 ms is an example interval)
setInterval(ping, 60000);
trinhtuanvubk / postgres-cheatsheet.md
Created August 2, 2024 04:57 — forked from Kartones/postgres-cheatsheet.md
PostgreSQL command line cheatsheet

PSQL

Magic words:

psql -U postgres

Some interesting flags (to see all, use -h or --help depending on your psql version):

  • -E: shows the underlying queries behind the \ commands (great for learning!)
  • -l: lists all databases and then exits (useful if the user you connect with doesn't have a default database, e.g. on AWS RDS)
trinhtuanvubk / change_code_inside_file.sh
Last active July 11, 2024 08:49
fix a deprecated torchvision import inside an installed lib (basicsr)
!sed -i 's/from torchvision.transforms.functional_tensor import rgb_to_grayscale/from torchvision.transforms.functional import rgb_to_grayscale/' /usr/local/lib/python3.10/dist-packages/basicsr/data/degradations.py
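The leading `!` is Colab/Jupyter shell syntax; the command itself rewrites the `torchvision.transforms.functional_tensor` import path, which newer torchvision versions removed. The same substitution can be sketched in plain Python (`patch_source` is an illustrative helper, not part of basicsr):

```python
OLD = "from torchvision.transforms.functional_tensor import rgb_to_grayscale"
NEW = "from torchvision.transforms.functional import rgb_to_grayscale"

def patch_source(text: str) -> str:
    # equivalent of the sed 's/old/new/' substitution on the file contents
    return text.replace(OLD, NEW)

print(patch_source("import torch\n" + OLD))
```

Unlike `sed -i`, this operates on a string in memory; to mirror the one-liner you would read the file, apply `patch_source`, and write it back.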
# Pin gcc/g++ to a version supported by the installed CUDA toolkit
MAX_GCC_VERSION=8
sudo apt install gcc-$MAX_GCC_VERSION g++-$MAX_GCC_VERSION
sudo ln -s /usr/bin/gcc-$MAX_GCC_VERSION /usr/local/cuda/bin/gcc
sudo ln -s /usr/bin/g++-$MAX_GCC_VERSION /usr/local/cuda/bin/g++
# For a conda env, link the pinned compiler into the env's bin instead
ln -s /usr/bin/gcc-8 /home/user/miniconda3/envs/your_env/bin/gcc
conda install pytorch==1.12.1 torchvision==0.13.1 cudatoolkit=11.3 -c pytorch
conda install nvidia/label/cuda-11.3.1::cuda-nvcc
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia
conda install -c "nvidia/label/cuda-11.6.1" libcusolver-dev
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

import gc
import torch

gc.collect()
torch.cuda.empty_cache()
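Note that `PYTORCH_CUDA_ALLOC_CONF` is read when PyTorch's CUDA allocator initializes, so it must be set before the first CUDA allocation. A guarded cleanup helper in the same spirit (`free_memory` is an illustrative name; the guards let the sketch run even on a machine without torch or CUDA):

```python
import gc
import importlib.util
import os

# must happen before the first CUDA tensor is allocated
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:512")

def free_memory():
    gc.collect()  # drop Python-side references first
    if importlib.util.find_spec("torch") is not None:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # return cached blocks to the driver

free_memory()
```

Calling `gc.collect()` before `empty_cache()` matters: tensors still referenced by garbage cycles cannot be returned to the caching allocator until they are collected.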
nodes:
  - id: webcam
    custom:
      source: https://huggingface.co/datasets/dora-rs/dora-idefics2/raw/main/operators/opencv_stream.py
      outputs:
        - image
  - id: idefics2
    operator:
      python: https://huggingface.co/datasets/dora-rs/dora-idefics2/raw/main/operators/idefics2_op.py
      inputs:
        image: webcam/image  # assumed mapping; the gist preview is truncated here
trinhtuanvubk / ddp.py
Created March 11, 2024 02:44
ddp torch
# Appendix A: Introduction to PyTorch (Part 3)
import torch
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
# NEW imports:
import os
import torch.multiprocessing as mp
from torch.utils.data.distributed import DistributedSampler
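The `DistributedSampler` imported above is what keeps ranks from seeing overlapping data: it pads the index list to a multiple of the world size, then strides across ranks. Its unshuffled split can be sketched in plain Python (`shard_indices` is an illustrative name, not a torch API):

```python
import math

def shard_indices(num_samples: int, num_replicas: int, rank: int) -> list:
    # pad by wrapping around so every rank gets the same count,
    # then take every num_replicas-th index starting at this rank
    indices = list(range(num_samples))
    total_size = math.ceil(num_samples / num_replicas) * num_replicas
    indices += indices[: total_size - len(indices)]
    return indices[rank:total_size:num_replicas]

# 10 samples across 4 ranks: padded to 12, so 3 indices per rank
for r in range(4):
    print(r, shard_indices(10, 4, r))
```

Rank 0 gets `[0, 4, 8]`, rank 1 `[1, 5, 9]`, and the last ranks see a couple of wrapped duplicates (`0`, `1`), which is why an epoch over all ranks can slightly oversample when the dataset size is not divisible by the world size.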
trinhtuanvubk / fast_speech_text_speech.py
Created February 16, 2024 07:42 — forked from thomwolf/fast_speech_text_speech.py
speech to text to speech
""" To use: install LM Studio (or Ollama), clone OpenVoice, then run this script in the OpenVoice directory
git clone https://github.com/myshell-ai/OpenVoice
cd OpenVoice
git clone https://huggingface.co/myshell-ai/OpenVoice
cp -r OpenVoice/* .
pip install openai-whisper pynput pyaudio
"""
from openai import OpenAI
import time