This document summarizes a working setup for running a local ChatGPT-like experience using open source software. It uses Docker to run containerized versions of Ollama and Open WebUI, giving you the back end and front end needed to replicate the ChatGPT experience. A containerized Caddy instance, also managed by Docker, provides encrypted access. Last but not least, we integrate a Tailscale virtual LAN and Tailscale-issued certificates so you can reach your front end from anywhere, thanks to Tailscale's MagicDNS bindings.
What are the individual pieces involved:
- Ollama: An open source local language model runner / manager. Lets you run and manage AI models (like LLaMA, Mistral, etc.) directly on your own machine. It handles downloading models, running them efficiently, and exposing them for apps or UIs to use. You can always directly interact with a model by running it through the ollama CLI.
- Caddy: An open source lightweight web server and reverse proxy. Serves web apps and handles HTTPS automatically (using Let’s Encrypt or custom certs). In this setup, it’s the piece that routes traffic securely (e.g., between your browser and OpenWebUI/Ollama).
- OpenWebUI: An open source interface that allows you to interact with model APIs (Such as OpenAI's, Ollama's, HuggingFace, etc.). Think of it as the chat app on top of the models you run locally, with extra features like multi-model routing, conversation history, and plugin integration.
- Tailscale: A mostly open source mesh VPN built on WireGuard. Creates a secure, private network between your devices using simple authentication. This lets you access your local servers (like OpenWebUI + Caddy) from anywhere, without exposing them publicly to the internet.
Although it adds a layer of complexity, this setup is much easier to manage than creating Windows services or wiring up systemd/launchd daemons on Unix systems. The Docker instructions below are universal, and once Docker is installed it runs on startup by default, so the services resume after a power loss.
The following prerequisites ensure you have the engine (Docker) that runs the builds of the Ollama system, Open WebUI, and Caddy:
- Create an account on Docker Hub
- Install Docker Desktop from the official website
- Run the installer and ensure to enable WSL2 when prompted if on Windows.
- Reboot when prompted.
- Upon reboot, agree to the Docker Desktop terms and conditions when prompted
- Allow it to install the Windows Subsystem for Linux (WSL2)
- Reboot
- Start Docker Desktop and navigate to the settings
- Enable "Start Docker Desktop when you sign in to your computer"
- Disable "Open Docker Dashboard when Docker Desktop starts" unless that is desired.
- Enable the Docker terminal
- Reboot once again to propagate all the changes
- Open a terminal window and run docker -v to confirm it is hooked up.
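If you want a slightly more thorough check than docker -v, the sketch below also distinguishes "CLI missing" from "daemon not running": docker -v only needs the CLI, while docker info actually contacts the daemon, so it fails whenever Docker Desktop is stopped.

```shell
# Sketch: verify the Docker install in two stages.
# Stage 1: is the CLI on PATH? Stage 2: does the daemon answer?
if ! command -v docker >/dev/null 2>&1; then
  echo "docker CLI not found; revisit the install steps above"
elif ! docker info >/dev/null 2>&1; then
  echo "docker CLI present, but the daemon is not running; start Docker Desktop"
else
  echo "docker is ready: $(docker -v)"
fi
```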
- Create a personal account on Tailscale
- Install Tailscale from the Tailscale official downloads website and set up a device in it
- Log in to the Tailscale Admin Portal
- Navigate to the DNS tab
- Make note of the Tailnet name (if you wish to change it, now is the time to do so)
- Enable MagicDNS
- Enable HTTPS
- Acknowledge the prompt.
- Install Tailscale on all the other devices you wish to access from
Tailnet Lock lets you verify that no node joins your Tailscale network (known as a tailnet) unless trusted nodes in your tailnet sign the new node. With Tailnet Lock enabled, even if Tailscale were malicious or Tailscale infrastructure hacked, attackers can't send or receive traffic in your tailnet.
You need at least 2 devices on which you can install a command line version of Tailscale for the next part.
In order to enable this:
- Head over to Device Management page in the Tailscale Admin Portal
- Select "Enable Tailnet lock"
- In the "Add signinig nodes" section select "Add signing node"
- Select the nodes from which you'll sign new nodes in
- In the "Run command from signing node" section, copy the
tailscale lock initcommand from it. - Open a terminal in one of the signing nodes you selected and run the command.
- Your system is now secured further.
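You can confirm the lock took effect from any node with the tailscale lock status subcommand. A small sketch, guarded so it degrades gracefully on machines without the CLI (exact output format varies by client version):

```shell
# Sketch: confirm Tailnet Lock is active on this node.
# Falls back to a message if the tailscale CLI is not installed here.
if command -v tailscale >/dev/null 2>&1; then
  tailscale lock status
else
  echo "tailscale CLI not found on this machine"
fi
```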
This allows the Docker containers to connect to each other more easily.
- Open a terminal window
- Type and run:
docker network create web
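To confirm the network was created (a quick sketch; requires the daemon to be running), inspect it — user-defined networks created this way use the bridge driver by default:

```shell
# Sketch: confirm the "web" network exists and uses the bridge driver.
docker network inspect web --format '{{.Name}} {{.Driver}}'
# If the create step succeeded, this should print: web bridge
```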
Set up a Docker volume first for a persistent model cache:
docker volume create ollama
Then run the container:
docker run -d \
  --name ollama \
  --network web \
  -p 127.0.0.1:11434:11434 \
  -v ollama:/root/.ollama \
  --restart unless-stopped \
  ollama/ollama:latest
Alternatively (and generally more desirable), you can use the following command to enable GPU acceleration in Windows:
docker run -d \
--name ollama \
--gpus=all \
--network web \
-p 127.0.0.1:11434:11434 \
-v ollama:/root/.ollama \
--restart unless-stopped \
ollama/ollama:latest
Explanation:
- -d: runs the container detached from the terminal
- --name ollama: simply names the container "ollama"
- --gpus=all: enables NVIDIA GPU paravirtualization
- --network web: attaches it to the bridged network we created so the containers can see each other
- -p 127.0.0.1:11434:11434: publishes the container's port 11434 (Ollama's default) on the host loopback
- -v ollama:/root/.ollama: sets up a volume mapping; all the Ollama model data is stored in this folder within the ollama volume created earlier
- --restart unless-stopped: in case something takes down the container, this flag makes it auto-restart unless stopped by the user
- ollama/ollama:latest: the image to use, pulled directly from Docker Hub
First, check whether Ollama is serving successfully via the command line:
curl http://localhost:11434
Should return
Ollama is running
Additionally, you should check whether it's responding to model queries via its API by running the following command: curl http://localhost:11434/api/tags, which should return something like:
{"models":[]}
Set up a Docker volume first for a persistent open-webui cache:
docker volume create open-webui
Then run the container:
docker run -d \
--name open-webui \
--network web \
-p 127.0.0.1:3000:8080 \
--add-host=host.docker.internal:host-gateway \
-e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
-v open-webui:/app/backend/data \
--restart unless-stopped \
openwebui/open-webui:latest
Explanation:
- -d: runs the container detached from the terminal
- --name open-webui: simply names the container "open-webui"
- --network web: attaches it to the bridged network we created so the containers can see each other
- -p 127.0.0.1:3000:8080: routes host port 3000 to Open WebUI's default port 8080. Note that if you are using Keycloak, you need to change this port to something else, like 9090.
- --add-host=host.docker.internal:host-gateway: adds the name of the machine running Docker to the container's hosts file; critical to ensure it can talk to the services running on the host machine
- -e OLLAMA_BASE_URL=http://host.docker.internal:11434: ensures Open WebUI binds its Ollama connection endpoint here instead of the default http://localhost:11434
- -v open-webui:/app/backend/data: sets up a volume mapping; all the Open WebUI data is stored in this folder within the open-webui volume created earlier
- --restart unless-stopped: in case something takes down the container, this flag makes it auto-restart unless stopped by the user
- openwebui/open-webui:latest: the image to use, pulled directly from Docker Hub
Navigate to http://localhost:3000
We need to immediately set up the main user and then block access for others for security purposes. To do this:
- Click on the arrow to get started
- Enter a name, email and password (email + password will be what you use to connect)
- Once logged in click on the initial on the bottom left side of the UI to bring up the context menu and select "Admin Panel" to load the admin settings.
- Navigate to Settings along the Tab bar
- Ensure "Default User Role" says "pending"
- Ensure "Enable New Sign Ups" is disabled
At this stage we have a locally accessible Open WebUI interface and can interact with it. You could technically pull a model and stop here, but you won't be able to access the UI outside of the host machine. So let's set up an HTTPS reverse proxy using Caddy and some signed HTTPS certificates obtained via Tailscale.
- Go into the Tailscale Admin Portal and identify the machine that is running your docker containers.
- Note the machine's name (henceforth <MACHINE_HOSTNAME>). If you are OK with it, leave it and skip the nested steps below. Else:
  - Click the 3 dots in the last column of the view and select "Edit machine name..."
  - Untick the "Auto-generate from OS hostname" option so you can edit the text box below.
  - Give it the name you want. From now on, use this new name whenever you see <MACHINE_HOSTNAME> in any of the instructions that follow.
  - Click on "Update name" to dismiss the modal window.
- Click on the DNS Tab in the Tailscale Admin Portal
- Make note of the Tailnet name being displayed (henceforth <TAILNET_NAME>).
  - Note: if you want to change it, you can re-roll some names, but you can't simply set it to something custom. Additionally, you'd best do it now; once certs are generated you cannot change this.
- Open a command line window and navigate to your main user home
  - Bash: cd ~
  - PowerShell: cd $env:USERPROFILE
  - Windows cmd: cd /d %HOMEDRIVE%%HOMEPATH%
- At the user home, create the .caddy/certs folder structure
  - Bash: mkdir -p ~/.caddy/certs
  - PowerShell: mkdir $env:USERPROFILE\.caddy\certs
  - Windows cmd: mkdir %USERPROFILE%\.caddy\certs
- Navigate into this new certs folder via the command cd .caddy/certs (from the user home)
- Generate the certificates via the command: tailscale cert <MACHINE_HOSTNAME>.<TAILNET_NAME>
- This should generate two files: <MACHINE_HOSTNAME>.<TAILNET_NAME>.crt and <MACHINE_HOSTNAME>.<TAILNET_NAME>.key
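If you want to double-check what was issued, openssl can print the certificate's subject and validity window. A sketch — the placeholder filename must be replaced with your actual cert:

```shell
# Sketch: print the subject and validity window of the generated certificate.
# Replace the placeholder with your real <MACHINE_HOSTNAME>.<TAILNET_NAME>.crt.
openssl x509 -in "<MACHINE_HOSTNAME>.<TAILNET_NAME>.crt" -noout -subject -dates
```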
The Caddyfile is the configuration file Caddy uses to start a server.
- In the same terminal window, navigate to the .caddy folder.
  - Bash: cd ~/.caddy
  - PowerShell: cd $env:USERPROFILE\.caddy
  - Windows cmd: cd /d %HOMEDRIVE%%HOMEPATH%\.caddy
- Create a Caddyfile via the command touch Caddyfile (in PowerShell: ni Caddyfile)
- Populate it with the following:
# OpenWebUI via 443
https://<MACHINE_HOSTNAME>.<TAILNET_NAME>:443 {
    tls /certs/<MACHINE_HOSTNAME>.<TAILNET_NAME>.crt /certs/<MACHINE_HOSTNAME>.<TAILNET_NAME>.key
    encode zstd gzip
    log {
        output stdout
        format console
    }
    reverse_proxy http://open-webui:8080
}

# ComfyUI via 8443
https://<MACHINE_HOSTNAME>.<TAILNET_NAME>:8443 {
    tls /certs/<MACHINE_HOSTNAME>.<TAILNET_NAME>.crt /certs/<MACHINE_HOSTNAME>.<TAILNET_NAME>.key
    encode zstd gzip
    log {
        output stdout
        format console
    }
    reverse_proxy http://comfyui:8188
}

# Local-only HTTP access (short names)
http://comfyui {
    encode zstd gzip
    log {
        output stdout
        format console
    }
    reverse_proxy http://comfyui:8188
}

http://openwebui {
    encode zstd gzip
    log {
        output stdout
        format console
    }
    reverse_proxy http://open-webui:8080
}

http://ollama {
    encode zstd gzip
    log {
        output stdout
        format console
    }
    reverse_proxy http://ollama:11434
}
Ensure you replace <MACHINE_HOSTNAME> and <TAILNET_NAME> accordingly.
- Save the file.
Your structure should look as follows so far:
<USERHOME>\.caddy\
├─ Caddyfile
└─ certs\
├─ <MACHINE_HOSTNAME>.<TAILNET_NAME>.crt
     └─ <MACHINE_HOSTNAME>.<TAILNET_NAME>.key

Alright, almost done with the setup. Now we run Caddy in a Docker container that restarts whenever there is an issue. Note that the setup bind-mounts the Caddyfile and certs from the current directory into the container (read-only), rather than copying them.
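Before committing to the long-running container, you can ask Caddy to validate the file using a throwaway container. A sketch, run from inside the .caddy folder (the certs mount matters because the tls directives reference /certs paths):

```shell
# Sketch: validate the Caddyfile with a one-off container.
# Run this from the .caddy folder; --rm discards the container afterwards.
docker run --rm \
  -v ${PWD}/Caddyfile:/etc/caddy/Caddyfile:ro \
  -v ${PWD}/certs:/certs:ro \
  caddy:latest \
  caddy validate --config /etc/caddy/Caddyfile
```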
- If you have been following the instructions above, you should be in the .caddy folder
- Run the following command in a PowerShell window:
docker run -d --name caddy `
  -v ${PWD}/Caddyfile:/etc/caddy/Caddyfile:ro `
  -v ${PWD}/certs:/certs:ro `
  -v caddy_data:/data `
  -v caddy_config:/config `
  --add-host=host.docker.internal:host-gateway `
  --restart unless-stopped `
  -p 443:443 `
  -p 8443:8443 `
  -p 127.0.0.1:80:80 `
  --network web `
  caddy:latest
Explanation:
- -d: runs the container detached from the terminal
- --name caddy: simply names the container "caddy"
- -v ${PWD}/Caddyfile:/etc/caddy/Caddyfile:ro: bind-mounts the Caddyfile from the current directory into the container, read-only
- -v ${PWD}/certs:/certs:ro: bind-mounts the certs folder from the current directory into the container, read-only
- -v caddy_data:/data: sets up a volume mapping; all the Caddy data is stored here
- -v caddy_config:/config: sets up a volume mapping; all the Caddy config is stored here
- --add-host=host.docker.internal:host-gateway: adds the host machine to the container's hosts file
- --restart unless-stopped: in case something takes down the container, this flag makes it auto-restart unless stopped by the user
- -p 443:443: routes host port 443 to Caddy's default HTTPS port 443
- -p 8443:8443: also routes port 8443 through. We will use this for ComfyUI later; ignore it if you don't plan to use it.
- -p 127.0.0.1:80:80: opens the HTTP port for local connections; this is used to cleanly connect through to the systems in the backend
- --network web: attaches it to the bridged network we created so the containers can see each other
- caddy:latest: the image to use, pulled directly from Docker Hub
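As an aside: if you'd rather manage the three containers as one unit, the three docker run commands in this guide translate fairly directly to a Compose file. The sketch below is an assumption-laden equivalent (same names, ports, and volumes as above; not an official file from any of these projects), covering the CPU variant — GPU access for ollama would additionally need a device reservation. With it, docker compose up -d replaces the three manual runs.

```yaml
# docker-compose.yml — sketch equivalent of the three `docker run` commands.
# Assumes the "web" network and the ollama/open-webui volumes created earlier,
# and that this file sits in the .caddy folder next to Caddyfile and certs/.
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "127.0.0.1:11434:11434"
    volumes:
      - ollama:/root/.ollama
    restart: unless-stopped
  open-webui:
    image: openwebui/open-webui:latest
    ports:
      - "127.0.0.1:3000:8080"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      - OLLAMA_BASE_URL=http://host.docker.internal:11434
    volumes:
      - open-webui:/app/backend/data
    restart: unless-stopped
  caddy:
    image: caddy:latest
    ports:
      - "443:443"
      - "8443:8443"
      - "127.0.0.1:80:80"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./certs:/certs:ro
      - caddy_data:/data
      - caddy_config:/config
    restart: unless-stopped

volumes:
  ollama:
    external: true
  open-webui:
    external: true
  caddy_data:
  caddy_config:

networks:
  default:
    name: web
    external: true
```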
Alright, now let's do some sanity checks to make sure everything is working. First, verify from the main host machine.
- Open a terminal window and type: curl -Ik https://<MACHINE_HOSTNAME>.<TAILNET_NAME>/
- Verify that the response gives you an HTTP/1.1 200 OK
- Repeat the command, but this time with http instead of https: curl -Ik http://<MACHINE_HOSTNAME>.<TAILNET_NAME>/
- It should give you an error, because our Caddyfile does not allow insecure connections to happen at all.
Then, from any of the other tailnet devices in your network, let's repeat it:
- Open a terminal window and type: curl -Ik https://<MACHINE_HOSTNAME>.<TAILNET_NAME>/
- Verify that the response gives you an HTTP/1.1 200 OK
- Repeat the command, but this time with http instead of https: curl -Ik http://<MACHINE_HOSTNAME>.<TAILNET_NAME>/
- It should also give you an error.
Congrats you secured everything and are almost ready! Just need to set up some more minor stuff and add some LLMs for you to play with!
This step is needed because Open WebUI uses http://localhost:3000 as its default WebUI URL.
- Log onto OpenWebUI
- Head to the Admin Panel
- Under General, populate your WebUI URL with https://<MACHINE_HOSTNAME>.<TAILNET_NAME>/
That's it!
First, let's get you some models. We'll showcase the two usual ways to get them:
- The easiest way is via Admin Panel's Connection Manager on OpenWebUI
- The direct interaction with ollama way
- Navigate to your OpenWebUI instance
- Open the Admin Panel
- Navigate to Connections
- Click on the down-arrow icon next to the Ollama connection for your Docker instance (it should read http://host.docker.internal:11434)
- You can enter any model from ollama.com directly here by entering its name, e.g. deepseek-r1:8b
- Then switch to the Models tab to enable/disable/manage the model
Additionally, you can run non-Ollama-provided models if you have the right Ollama-compatible GGUF version, e.g. hf.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF:BF16
- Open a terminal window
- Run the following:
docker exec -it ollama ollama pull <target model>
I recommend you install 2 models:
- A 'fast model': a small, quick model, ideally fine-tuned to your work, such as llama3.1:8b
- A 'reasoning model': something like deepseek-r1
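The CLI route for both suggestions can be scripted in one go. A sketch — the model tags are examples, check ollama.com for current ones:

```shell
# Sketch: pull a fast model and a reasoning model into the ollama container,
# then list what is installed. Model tags are examples; adjust to taste.
for model in "llama3.1:8b" "deepseek-r1:8b"; do
  docker exec ollama ollama pull "$model"
done
docker exec ollama ollama list
```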
Show me how!
Although you have installed a pretty powerful environment, you still need to enable a system for image generation if that is also part of what you are trying to build. To do that, follow these instructions.
Create a folder with the following structure:
Root/
├── custom_nodes
├── docker
├── input
├── models
├── output
├── temp
└── user

Make note of the path, as you'll need it during the docker run that comes next.
- Open a Terminal window
- Navigate to the path where you created the ComfyUI Bindings
- Navigate into the docker folder
- Create a new file called Dockerfile
- Populate it with the following:
# Defines build arguments for the versions of PyTorch, CUDA, and cuDNN to use
ARG PYTORCH_VERSION=2.11.0
ARG CUDA_VERSION=13.0
ARG CUDNN_VERSION=9
# This image is based on the latest official PyTorch image, because it already contains CUDA, CuDNN, and PyTorch
FROM pytorch/pytorch:${PYTORCH_VERSION}-cuda${CUDA_VERSION}-cudnn${CUDNN_VERSION}-runtime
# Defines build arguments for the versions of ComfyUI and ComfyUI Manager to use
ARG COMFYUI_VERSION=0.18.2
# Installs Git, because ComfyUI and the ComfyUI Manager are installed by cloning their respective Git repositories
RUN apt-get update --assume-yes && \
apt-get install --assume-yes \
git \
sudo \
libgl1 \
python3-venv \
ffmpeg \
libglib2.0-0 && \
rm -rf /var/cache/apt/archives /var/lib/apt/lists/*
# Create a virtual environment for the container
ENV VIRTUAL_ENV=/opt/venv
RUN python -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
# Clones the ComfyUI repository and checks out the latest release
RUN git clone https://github.com/Comfy-Org/ComfyUI.git /opt/comfyui && \
cd /opt/comfyui && \
git checkout "v${COMFYUI_VERSION}"
# Installs the required Python packages for both ComfyUI and the ComfyUI Manager
RUN pip install \
--requirement /opt/comfyui/requirements.txt \
--requirement /opt/comfyui/manager_requirements.txt && \
pip cache purge
# Sets the working directory to the ComfyUI directory
WORKDIR /opt/comfyui
# Exposes the default port of ComfyUI (this is not actually exposing the port to the host machine, but it is good practice to include it as metadata,
# so that the user knows which port to publish)
EXPOSE 8188
# Adds the startup script to the container; the startup script will create all necessary directories in the models and custom nodes volumes that were
# mounted to the container and symlink the ComfyUI Manager to the correct directory; it will also create a user with the same UID and GID as the user
# that started the container, so that the files created by the container are owned by the user that started the container and not the root user
ADD entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/bin/bash", "/entrypoint.sh"]
- Open a Terminal window
- Navigate to the path where you created the ComfyUI Bindings
- Navigate into the docker folder
- Create a new file called entrypoint.sh
- Populate it with the following:
#!/bin/bash
# Check if we have to use custom ports or addresses
if [ -z "$PORT" ];
then
echo "No value for PORT specified, using default port 8188"
TARGET_PORT=8188
else
TARGET_PORT=$PORT
echo "CUSTOM PORT specified, using $TARGET_PORT..."
fi
# Creates the directories for the models inside of the volume that is mounted from the host
echo "Creating directories for models..."
MODEL_DIRECTORIES=(
"checkpoints"
"clip"
"clip_vision"
"configs"
"controlnet"
"diffusers"
"diffusion_models"
"embeddings"
"gligen"
"hypernetworks"
"loras"
"photomaker"
"style_models"
"text_encoders"
"unet"
"upscale_models"
"vae"
"vae_approx"
)
for MODEL_DIRECTORY in "${MODEL_DIRECTORIES[@]}"; do
mkdir -p /opt/comfyui/models/$MODEL_DIRECTORY
done
# The custom nodes that were installed using the ComfyUI Manager may have requirements of their own, which are not installed when the container is
# started for the first time; this loops over all custom nodes and installs the requirements of each custom node
echo "Installing requirements for custom nodes..."
for CUSTOM_NODE_DIRECTORY in /opt/comfyui/custom_nodes/*;
do
if [ -f "$CUSTOM_NODE_DIRECTORY/requirements.txt" ];
then
CUSTOM_NODE_NAME=${CUSTOM_NODE_DIRECTORY##*/}
CUSTOM_NODE_NAME=${CUSTOM_NODE_NAME//[-_]/ }
echo "Installing requirements for $CUSTOM_NODE_NAME..."
pip install --requirement "$CUSTOM_NODE_DIRECTORY/requirements.txt"
fi
done
# Under normal circumstances, the container would be run as the root user, which is not ideal, because the files that are created by the container in
# the volumes mounted from the host, i.e., custom nodes and models downloaded by the ComfyUI Manager, are owned by the root user; the user can specify
# the user ID and group ID of the host user as environment variables when starting the container; if these environment variables are set, a non-root
# user with the specified user ID and group ID is created, and ComfyUI is run as this user; ComfyUI is started at its default port (--port 8188); the
# IP address is changed from localhost to 0.0.0.0 (--listen 0.0.0.0), because Docker is only forwarding traffic to the IP address it assigns to the
# container, which is unknown at build time; listening to 0.0.0.0 means that ComfyUI listens to all incoming traffic; the auto-launch feature is
# disabled (--disable-auto-launch), because we do not want (nor is it possible) to open a browser window in a Docker container; to allow users to pass
# in additional command line arguments ("$@"), for example, --enable-cors-header to enable CORS and allow external web apps to interact with ComfyUI
# in this container
if [ -z "$USER_ID" ] || [ -z "$GROUP_ID" ];
then
echo "Running container as $USER..."
exec python main.py \
--port $TARGET_PORT \
--listen 0.0.0.0 \
--disable-auto-launch \
--enable-manager \
"$@"
else
echo "Creating non-root user..."
getent group $GROUP_ID > /dev/null 2>&1 || groupadd --gid $GROUP_ID comfyui-user
id -u $USER_ID > /dev/null 2>&1 || useradd --uid $USER_ID --gid $GROUP_ID --create-home comfyui-user
chown --recursive $USER_ID:$GROUP_ID /opt/comfyui
export PATH=$PATH:/home/comfyui-user/.local/bin
echo "Running container as comfyui-user ($USER_ID:$GROUP_ID)..."
sudo --set-home --preserve-env=PATH --user \#$USER_ID \
python main.py \
--port $TARGET_PORT \
--listen 0.0.0.0 \
--disable-auto-launch \
--enable-manager \
"$@"
fi
- Open a Terminal window
- Navigate to the path where you created the ComfyUI Bindings
- Navigate into the docker folder
- Run the following command: docker build -t comfyui-docker .
- Wait for the operation to finish.
Now, if you were in the docker folder, go back up one folder and run the command below.
Note: the GPU flag is shown; drop --gpus all for CPU-only.
Additionally, make sure you run this in a PowerShell terminal; Git Bash tends to mess up the volume bindings below.
docker run `
--name comfyui `
--detach `
--restart unless-stopped `
--env USER_ID=0 `
--env GROUP_ID=0 `
-v "${PWD}/models:/opt/comfyui/models:rw" `
-v "${PWD}/custom_nodes:/opt/comfyui/custom_nodes:rw" `
-v "${PWD}/input:/opt/comfyui/input:rw" `
-v "${PWD}/output:/opt/comfyui/output:rw" `
-p 127.0.0.1:8188:8188 `
--network web `
--gpus all `
comfyui-docker:latest

Wait for it to complete. This may take a bit, as it's a lot of setup. (It takes around 10 minutes on my end with no cache.)
Run these commands:
docker ps --filter "name=comfyui"
docker inspect comfyui --format "{{json .Mounts}}" | Out-String
docker logs -f comfyui
You should see the following in the mount results:
- /opt/comfyui/user → comfy_user
- /data/user → comfy_user
- No random hash-named volumes
And on that last command, you should also see logs ending with:
Starting server
To see the GUI go to: http://0.0.0.0:8188
To see the GUI go to: http://[::]:8188

Connect to http://127.0.0.1:8188
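A scripted version of that last check (a sketch; ComfyUI serves plain HTTP on the locally published port):

```shell
# Sketch: confirm ComfyUI answers on the locally published port.
if curl -sf -o /dev/null http://127.0.0.1:8188; then
  echo "comfyui is up"
else
  echo "comfyui not reachable yet; check docker logs comfyui"
fi
```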
If you set yourself up with Tailscale, you can now access ComfyUI from other machines by hitting the following endpoint:
https://<MACHINE_HOSTNAME>.<TAILNET_NAME>:8443
If everything went successfully, you are DONE!
Instead of using long calls, you can shorten them via your Caddyfile.
If you are like me, you will likely forget these ports and get frustrated about what connects where. Luckily, we already did the work so that the system knows how to route calls through. All that's missing is adding the following 3 lines to the hosts file on the machine running it all:
127.0.0.1 comfyui
127.0.0.1 openwebui
127.0.0.1 ollama
Now you can hit the containers as follows:
- http://ollama -> http://ollama:11434 via caddy -> http://localhost:11434
- http://comfyui -> http://comfyui:8188 via caddy -> http://localhost:8188
- http://openwebui -> http://open-webui:8080 via caddy -> http://localhost:3000
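With the hosts entries in place, each short name should answer over plain HTTP through Caddy's local listener. A sketch looping over all three:

```shell
# Sketch: check each short-name route through Caddy's local HTTP listener.
# Prints the HTTP status code per name; "000" means unreachable.
for name in ollama openwebui comfyui; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://$name") || code="000"
  echo "$name -> HTTP $code"
done
```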
Check that you have your Tailscale VPN enabled. It is suggested to have it autoconnect by default and to enable both "VPN on Demand" and "Use Tailscale DNS Settings".
This section contains information on how to update all of the pieces involved. They each have a slightly unique update path so see below.
Important
You should always stop all the running containers before updating, via docker stop <container name>, and then start/update them in this order:
- caddy
- ollama
- openweb-ui
- comfyUI
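The stop-everything step can be done in one line; a sketch using the container names from this guide (docker stop accepts multiple names, and a missing container is reported without blocking the others):

```shell
# Sketch: stop all four containers before updating.
docker stop caddy ollama open-webui comfyui
```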
Show me how!
Thankfully, Caddy has an official Docker image. So all you have to do is the following to update it once it's set up:
- Open a terminal window
- Run the following command: docker pull caddy
- Stop the current caddy container via docker stop caddy if you haven't done so.
- Remove the caddy container via the command docker rm caddy
- Navigate to your <USERHOME>\.caddy\ folder
- Run the image the way you did before with the following PowerShell command:
docker run -d --name caddy `
  -v ${PWD}/Caddyfile:/etc/caddy/Caddyfile:ro `
  -v ${PWD}/certs:/certs:ro `
  -v caddy_data:/data `
  -v caddy_config:/config `
  --add-host=host.docker.internal:host-gateway `
  --restart unless-stopped `
  -p 443:443 `
  -p 8443:8443 `
  -p 127.0.0.1:80:80 `
  --network web `
  caddy:latest
- That's it! You should be back up and running. Start any other containers or proceed to the next update step.
Show me how!
Thankfully, Ollama has an official Docker image. So all you have to do is the following to update it once it's set up:
- Open a terminal window
- Run the following command: docker pull ollama/ollama
- Stop the current ollama container via docker stop ollama if you haven't done so.
- Remove the ollama container via the command docker rm ollama
- Run the image the way you did before with the following PowerShell command:
docker run -d `
  --name ollama `
  --gpus=all `
  --network web `
  -p 127.0.0.1:11434:11434 `
  -v ollama:/root/.ollama `
  --restart unless-stopped `
  ollama/ollama:latest
- That's it! You should be back up and running. Start any other containers or proceed to the next update step.
Show me how!
Open WebUI finally has an official Docker image on Docker Hub. So all you have to do is the following to update it once it's set up:
- Open a terminal window
- Run the following command: docker pull openwebui/open-webui
- Stop the current Open WebUI container via docker stop open-webui if you haven't done so.
- Remove the container via the command docker rm open-webui
- Run the image the way you did before with the following PowerShell command:
docker run -d `
--name open-webui `
--network web `
-p 127.0.0.1:3000:8080 `
--add-host=host.docker.internal:host-gateway `
-e OLLAMA_BASE_URL=http://host.docker.internal:11434 `
-v open-webui:/app/backend/data `
--restart unless-stopped `
openwebui/open-webui:latest

- That's it! You should be back up and running. Start any other containers or proceed to the next update step.
Show me how!
Alright, this one is the most annoying, as it takes a bit of figuring out what is possible.
- First, check on ComfyUI's website whether they changed the PyTorch and CUDA versions they are targeting.
- Write down the CUDA and PyTorch versions, and check whether that PyTorch + CUDA combination has a tag available.
  - This is because our source pulls the image found at pytorch/pytorch:<PYTORCH_VERSION>-cuda<CUDA_VERSION>-cudnn<CUDNN_VERSION>-runtime
- Confirm that the new expected PyTorch + CUDA + cuDNN image exists.
- Open a terminal window
- Navigate to the folder structure where you have your ComfyUI data
- Open the Dockerfile that is under the docker folder
- Change the following variables to the latest versions you verified:
  - ARG PYTORCH_VERSION=
  - ARG CUDA_VERSION=
  - ARG CUDNN_VERSION=
  - ARG COMFYUI_VERSION=
- Build the image using the command docker build -t comfyui-docker . (Note: usually takes 10+ minutes)
- Stop the current comfyui container via docker stop comfyui if you haven't done so.
- Remove the comfyui container via the command docker rm comfyui
- Run the image the way you did before with the following PowerShell command:
docker run `
--name comfyui `
--detach `
--restart unless-stopped `
--env USER_ID=0 `
--env GROUP_ID=0 `
-v "${PWD}/models:/opt/comfyui/models:rw" `
-v "${PWD}/custom_nodes:/opt/comfyui/custom_nodes:rw" `
-v "${PWD}/input:/opt/comfyui/input:rw" `
-v "${PWD}/output:/opt/comfyui/output:rw" `
-p 127.0.0.1:8188:8188 `
--network web `
--gpus all `
comfyui-docker:latest

- That's it! You should be back up and running. Start any other containers or proceed to the next update step.