Kuo-Hsin Tu NTU-P04922004

Taipei, Taiwan
@jlia0
jlia0 / agent loop
Last active March 17, 2026 17:27
Manus tools and prompts
You are Manus, an AI agent created by the Manus team.
You excel at the following tasks:
1. Information gathering, fact-checking, and documentation
2. Data processing, analysis, and visualization
3. Writing multi-chapter articles and in-depth research reports
4. Creating websites, applications, and tools
5. Using programming to solve various problems beyond development
6. Various tasks that can be accomplished using computers and the internet
import base64
import json
import time
import pyjson5
import textwrap
import pymongo
from pyboy import PyBoy, WindowEvent
from rich.pretty import pprint
@younesbelkada
younesbelkada / finetune_llama_v2.py
Last active July 1, 2025 23:14
Fine tune Llama v2 models on Guanaco Dataset
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
@karolzlot
karolzlot / tqdm_cpu_ram.py
Last active March 24, 2025 07:13
Monitoring real time cpu and ram usage with tqdm. If you like it please upvote this answer: https://stackoverflow.com/a/69511430/8896457
from tqdm import tqdm
from time import sleep
import psutil
with tqdm(total=100, desc='cpu%', position=1) as cpubar, tqdm(total=100, desc='ram%', position=0) as rambar:
    while True:
        rambar.n = psutil.virtual_memory().percent
        cpubar.n = psutil.cpu_percent()
        rambar.refresh()
        cpubar.refresh()
        sleep(0.5)  # pace the sampling; without this the loop spins at 100% CPU itself
@espoirMur
espoirMur / install_nvidia_driver.md
Last active October 27, 2025 17:13
How I fix this issue NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running

I am no longer able to monitor this post, so I have moved everything to my personal blog for easier maintenance.

Please click here to access the full post

@FedeMiorelli
FedeMiorelli / turbo_colormap_mpl.py
Last active March 31, 2023 02:45
Turbo Colormap for Matplotlib
# -*- coding: utf-8 -*-
"""
Created on 2019-08-22 09:37:36
@author: fmiorell
"""
# This script registers the "turbo" colormap to matplotlib, and the reversed version as "turbo_r"
# Reference: https://ai.googleblog.com/2019/08/turbo-improved-rainbow-colormap-for.html
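The registration itself is a thin wrapper once the RGB lookup table exists; the core mechanism — mapping a scalar in [0, 1] to an RGB triple by linearly interpolating between rows of a lookup table — can be sketched without matplotlib. The three-entry `TABLE` below is a made-up stand-in for the real 256-entry turbo data:

```python
# Minimal sketch of colormap lookup via linear interpolation.
# TABLE is a tiny hypothetical stand-in for the real 256-entry turbo RGB data.
TABLE = [
    (0.19, 0.07, 0.23),  # dark purple-ish
    (0.10, 0.90, 0.40),  # green-ish
    (0.90, 0.20, 0.10),  # red-ish
]

def lookup(x, table=TABLE):
    """Map x in [0, 1] to an RGB triple by interpolating between table rows."""
    x = min(max(x, 0.0), 1.0)        # clamp out-of-range inputs
    pos = x * (len(table) - 1)       # fractional index into the table
    lo = int(pos)
    hi = min(lo + 1, len(table) - 1)
    frac = pos - lo
    return tuple(a + (b - a) * frac for a, b in zip(table[lo], table[hi]))

print(lookup(0.0))  # first table entry
print(lookup(1.0))  # last table entry
```

A real colormap registration would wrap a 256-row table like this in `matplotlib.colors.ListedColormap` and register it under the name "turbo".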
@adamghill
adamghill / threads_with_tqdm.py
Created May 29, 2019 23:00
Use tqdm with a thread pool
from multiprocessing.dummy import Pool as ThreadPool
import time
import tqdm

def _square(number):
    time.sleep(.5)
    return number * number
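The preview cuts off before the pool driver. A plausible completion (the pool size, input range, and shorter sleep here are my own choices, not from the gist) wraps `pool.imap` in `tqdm` so the bar advances as each result arrives:

```python
from multiprocessing.dummy import Pool as ThreadPool  # thread-backed Pool
import time
import tqdm

def _square(number):
    time.sleep(.05)
    return number * number

numbers = list(range(20))
with ThreadPool(4) as pool:
    # imap yields results lazily and in order, so tqdm can tick as each
    # one completes; total= is needed because imap's iterator has no len().
    results = list(tqdm.tqdm(pool.imap(_square, numbers), total=len(numbers)))

print(results[:5])  # [0, 1, 4, 9, 16]
```

Using `imap` rather than `map` is the key choice: `map` blocks until every result is ready, which would make the bar jump from 0 to 100 in one step.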
@HudsonHuang
HudsonHuang / Pytorch performance guide.md
Last active March 25, 2020 02:43
Pytorch performance guide
  1. Using CUDA the correct way:
  • Deterministic convolutions: set torch.backends.cudnn.deterministic = True (and fix every seed, e.g. seed=0) so runs are reproducible; note this slows training down. https://oldpan.me/archives/pytorch-conmon-problem-in-training

    Use torch.cuda.get_device_name and torch.cuda.get_device_capability to inspect the GPU. Example:

    torch.cuda.get_device_name(0)  # 'Quadro GP100'
    torch.cuda.get_device_capability(0)  # (6, 0)
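A minimal reproducibility sketch, assuming PyTorch is installed: the cudnn flags are the ones named above, seed 0 follows the note, and the CUDA branch is guarded so the snippet also runs on CPU-only machines. The helper name `seed_everything` is my own:

```python
import random
import torch

def seed_everything(seed=0):
    # Fix every RNG the training loop might touch.
    random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
    # Deterministic cuDNN kernels: reproducible, but slower.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

seed_everything(0)
a = torch.rand(3)
seed_everything(0)
b = torch.rand(3)
print(torch.equal(a, b))  # True
```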

from graphviz import Digraph
import torch
from torch.autograd import Variable, Function

def iter_graph(root, callback):
    queue = [root]
    seen = set()
    while queue:
        fn = queue.pop()
        if fn in seen:
            continue