Some remarks on Large Language Models

Yoav Goldberg, January 2023

Audience: I assume you have heard of chatGPT, maybe played with it a little, and were impressed by it (or tried very hard not to be). And that you also heard that it is "a large language model". And maybe that it "solved natural language understanding". Here is a short personal perspective of my thoughts on this (and similar) models, and where we stand with respect to language understanding.

Intro

Around 2014-2017, right in the midst of the rise of neural-network-based methods for NLP, I was giving a semi-academic, semi-popsci lecture, revolving around the story that achieving perfect language modeling is equivalent to being as intelligent as a human. Somewhere around the same time I was also asked on an academic panel "what would you do if you were given infinite compute and no need to worry about labour costs?", to which I cockily responded "I would train a really huge language model, just to show that it doesn't solve everything!". We