Forked from iam-veeramalla/custom-gpt-llama-hyperstack
Created August 20, 2024 10:39
Set up your own custom GPT using Open WebUI on Hyperstack
Step 1: Create a VM

- Create an instance on the Hyperstack platform (it offers a wide range of GPU flavors)
- OS image: Ubuntu Server 22.04 LTS R535 CUDA 12.2
- Flavor: A100-80G-PCIe
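Once you can SSH into the instance, it is worth confirming the GPU is visible before installing anything. A minimal sketch, assuming the CUDA image ships with the NVIDIA driver preinstalled (as the R535 image name suggests):

```shell
# Sanity check after first login: confirm the A100 and driver are visible.
if command -v nvidia-smi >/dev/null 2>&1; then
  # Prints the GPU name and driver version, e.g. "NVIDIA A100 80GB PCIe, 535.x"
  nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
  DRIVER_OK=yes
else
  echo "nvidia-smi not found; the driver may not be installed on this image"
  DRIVER_OK=no
fi
```

If `nvidia-smi` is missing, check the image you selected before continuing, since the later Docker GPU steps depend on a working driver.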
Step 2: Run the model on the VM

- Install Ollama:

```
curl -fsSL https://ollama.com/install.sh | sh
```

- Run the llama3 model:

```
ollama run llama3
```
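Besides the interactive prompt, Ollama also serves an HTTP API, which is what Open WebUI talks to later. A quick sketch for verifying the model responds over that API, assuming Ollama's default port 11434:

```shell
# Sketch of an API check against Ollama's /api/generate endpoint.
# Assumes the default Ollama port (11434) on the same host.
PAYLOAD='{"model": "llama3", "prompt": "Say hello in one word.", "stream": false}'
# "|| true" keeps a provisioning script going if Ollama is still starting up
curl -s http://localhost:11434/api/generate -d "$PAYLOAD" || true
```

A JSON response containing a `response` field indicates the model is loaded and serving.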
Step 3: Install Docker and run Open WebUI

- Install Docker:

```
sudo apt update
sudo apt install -y docker.io
sudo systemctl start docker
sudo usermod -aG docker ubuntu
```

- Install the NVIDIA container runtime (the GPU driver itself already ships with the CUDA image; nvidia-docker2 lets Docker containers access the GPU):

```
sudo apt install -y nvidia-docker2
sudo systemctl daemon-reload
sudo systemctl restart docker
```

- Run the bundled Open WebUI + Ollama Docker container:

```
docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
```
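The `-p 3000:8080` flag maps the container's internal port 8080 to port 3000 on the VM, so the UI is reached on port 3000. A hedged verification sketch, reusing the container name and port from the command above:

```shell
# Sketch: confirm the open-webui container is running after the docker run above.
URL="http://localhost:3000"
if command -v docker >/dev/null 2>&1; then
  # Shows the container name and its status (e.g. "open-webui: Up 2 minutes")
  docker ps --filter "name=open-webui" --format '{{.Names}}: {{.Status}}'
else
  echo "docker is not available in this shell"
fi
echo "Open WebUI should now be reachable at $URL"
```

Open the VM's public IP on port 3000 in a browser (and make sure that port is allowed in the instance's firewall rules) to reach the Open WebUI login page.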