Local LLM Stack Setup (Ollama + Open WebUI + Caddy on Docker + Tailscale)

This document summarizes a working setup for running a local ChatGPT-esque experience using open source solutions. It leverages Docker to run containerized versions of Ollama and Open WebUI, giving you the back end and front end necessary to replicate the ChatGPT interaction. Additionally, we use a containerized version of Caddy, also managed by Docker, to secure encrypted access. Last but not least, we integrate a Tailscale virtual LAN, with Tailscale-issued certificates, to ensure you can connect to your front end from anywhere thanks to Tailscale's MagicDNS bindings.

A bit of background

What are the individual pieces involved:

  • Ollama: An open source local language model runner / manager. Lets you run and manage AI models (like LLaMA, Mistral, etc.) directly on your own machine. It handles downloading models, running them efficiently, and exposing them for apps or UIs to use. You can always directly interact with a model by running it through the ollama CLI.
  • Caddy: An open source lightweight web server and reverse proxy. Serves web apps and handles HTTPS automatically (using Let’s Encrypt or custom certs). In this setup, it’s the piece that routes traffic securely (e.g., between your browser and OpenWebUI/Ollama).
  • OpenWebUI: An open source interface for interacting with model APIs (such as OpenAI's, Ollama's, Hugging Face's, etc.). Think of it as the chat app on top of the models you run locally, with extra features like multi-model routing, conversation history, and plugin integration.
  • Tailscale: A mostly open source mesh VPN built on WireGuard. Creates a secure, private network between your devices using simple authentication. This lets you access your local servers (like OpenWebUI + Caddy) from anywhere, without exposing them publicly to the internet.

Why do we run it all via Docker?

Although it adds a layer of complexity, this setup is much easier to manage than creating Windows services or writing systemd/launchd units on Unix-like systems. The Docker instructions below are universal, and once Docker is installed it runs on startup by default, so the services resume after a power loss.

0) Setting up prerequisites

The following prerequisites ensure you have the engine (Docker) running the builds of the Ollama system, Open WebUI, and Caddy.

0.1) Docker Desktop

  • Create an account on Docker Hub
  • Install Docker Desktop from the official website
  • Run the installer and, if on Windows, make sure to enable WSL2 when prompted.
  • Reboot when prompted.
  • Upon reboot, agree to the Docker Desktop terms and conditions when prompted
  • Allow it to install the Windows Subsystem for Linux (WSL2)
  • Reboot
  • Start Docker Desktop and navigate to the settings
  • Enable "Start Docker Desktop when you sign in to your computer"
  • Disable "Open Docker Dashboard when Docker Desktop starts" unless that is desired.
  • Enable the Docker terminal
  • Reboot once again to propagate all the changes
  • Open up a terminal window and run docker -v to confirm it is hooked up.
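
For a slightly stronger smoke test than the version check, you can run Docker's official hello-world test image; a minimal sketch:

docker -v                      # prints something like "Docker version <x.y.z>, build <hash>"
docker run --rm hello-world    # pulls and runs the tiny test image end-to-end, then removes the container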

0.2) Tailscale

  • Create a personal account on Tailscale
  • Install Tailscale from the Tailscale official downloads website and set up a device in it
  • Log in to the Tailscale Admin Portal
  • Navigate to the DNS tab
  • Make note of the Tailnet name (if you wish to change it, now is the time to do so)
  • Enable MagicDNS
  • Enable HTTPS
  • Acknowledge the prompt.
  • Install Tailscale on all the other devices you wish to access from
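
To confirm the tailnet is working from any device, the Tailscale CLI offers a couple of quick checks; a minimal sketch (the machine name is a placeholder for one of your own devices):

tailscale status                   # lists the devices in your tailnet and their Tailscale IPs
tailscale ping <MACHINE_HOSTNAME>  # verifies MagicDNS resolution and connectivity to that device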

0.2.1) Optional: Lock it down further by enabling tailnet lock

Tailnet Lock lets you verify that no node joins your Tailscale network (known as a tailnet) unless trusted nodes in your tailnet sign the new node. With Tailnet Lock enabled, even if Tailscale itself were malicious or Tailscale's infrastructure were hacked, attackers can't send or receive traffic in your tailnet.

You need at least two devices on which you can install a command-line version of Tailscale for the next part.

In order to enable this:

  • Head over to the Device Management page in the Tailscale Admin Portal
  • Select "Enable Tailnet lock"
  • In the "Add signing nodes" section, select "Add signing node"
  • Select the nodes you will use to sign new nodes in
  • In the "Run command from signing node" section, copy the tailscale lock init command.
  • Open a terminal on one of the signing nodes you selected and run the command.
  • Your system is now secured further.
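
You can confirm the lock took effect from any node; a quick check via the CLI:

tailscale lock status    # shows whether tailnet lock is enabled and which keys are trusted
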
0.3) Create a Docker network

This lets the Docker containers reach each other by name.

  • Open a terminal window
  • Type and run docker network create web
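
To confirm the network exists (and, later, to see which containers are attached to it):

docker network inspect web    # prints the bridge network's config and its connected containers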

1) Installing the LLM environment

1.1) Installing Ollama via Docker

1.1.1) Create volume

Set up a Docker volume first for a persistent model cache

docker volume create ollama

1.1.2) Run the latest ollama

docker run -d \
  --name ollama \
  --network web \
  -p 127.0.0.1:11434:11434 \
  -v ollama:/root/.ollama \
  --restart unless-stopped \
  ollama/ollama:latest

Alternatively (and generally more desirable), you can use the following command to enable GPU acceleration in Windows:

docker run -d \
  --name ollama \
  --gpus=all \
  --network web \
  -p 127.0.0.1:11434:11434 \
  -v ollama:/root/.ollama \
  --restart unless-stopped \
  ollama/ollama:latest
Explanation
  • -d flags the run command to be detached from the terminal
  • --name ollama simply names the container "ollama"
  • --gpus=all exposes all host GPUs to the container (on Windows this uses WSL2's GPU paravirtualization and requires up-to-date NVIDIA drivers).
  • --network web attaches it to the bridged network we created so the containers can see each other.
  • -p 127.0.0.1:11434:11434 binds host port 11434 (loopback only) to container port 11434 (Ollama's default).
  • -v ollama:/root/.ollama sets up a volume mapping; all the Ollama model data is stored under this path inside the ollama volume created earlier.
  • --restart unless-stopped makes the container auto-restart if something takes it down, unless it was stopped by the user.
  • ollama/ollama:latest is the image to use, pulled directly from Docker Hub.

1.1.3) Perform sanity checks

First, check that Ollama is serving successfully via the command line: curl http://localhost:11434

Should return

Ollama is running

Additionally, you should check that it is responding to model queries via its API by running the following command: curl http://localhost:11434/api/tags, which should return something like:

{"models":[]}

1.2) Set up OpenWebUI

1.2.1) Create a volume for it

Set up a Docker volume first for a persistent open-webui cache

docker volume create open-webui

1.2.2) Run the latest open-webui

docker run -d \
  --name open-webui \
  --network web \
  -p 127.0.0.1:3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --restart unless-stopped \
  openwebui/open-webui:latest
Explanation
  • -d flags the run command to be detached from the terminal
  • --name open-webui simply names the container "open-webui"
  • --network web attaches it to the bridged network we created so the containers can see each other.
  • -p 127.0.0.1:3000:8080 routes host port 3000 (loopback only) to Open WebUI's default container port 8080. Note that if something else (such as Keycloak) already uses port 3000, you need to change the host port to something else like 9090.
  • --add-host=host.docker.internal:host-gateway adds the Docker host's gateway to the container's hosts file, critical to ensure it can talk to the services running on the host machine.
  • -e OLLAMA_BASE_URL=http://host.docker.internal:11434 ensures Open WebUI binds its Ollama connection endpoint to the host instead of the default http://localhost:11434.
  • -v open-webui:/app/backend/data sets up a volume mapping; all the Open WebUI data is stored under this path inside the open-webui volume created earlier.
  • --restart unless-stopped makes the container auto-restart if something takes it down, unless it was stopped by the user.
  • openwebui/open-webui:latest is the image to use, pulled directly from Docker Hub.

1.2.3) Test in the browser that Open WebUI is running

Navigate to http://localhost:3000
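
If you prefer the command line, the same curl check we used for Ollama works here too, assuming the port mapping above (the first start can take a minute while the container initializes):

curl -I http://localhost:3000    # should return HTTP/1.1 200 OK once the UI is up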

1.2.4) Set up OpenWebUI Admin & secure access

We need to immediately set up the main user and then lock down access for everyone else for security purposes. To do this:

  • Click on the arrow to get started
  • Enter a name, email and password (email + password will be what you use to connect)
  • Once logged in click on the initial on the bottom left side of the UI to bring up the context menu and select "Admin Panel" to load the admin settings.
  • Navigate to Settings along the Tab bar
  • Ensure "Default User Role" says "pending"
  • Ensure "Enable New Sign Ups" is disabled

1.3) Set up Caddy using Tailscale Certificates

At this stage we have a locally accessible Open WebUI interface and can interact with it. You could technically pull a model and stop here, but you won't be able to access the UI outside of the host machine. So let's set up an HTTPS reverse proxy using Caddy and some signed HTTPS certificates obtained via Tailscale.

1.3.1) Generate your certificates on the host

  • Go into the Tailscale Admin Portal and identify the machine that is running your docker containers.
  • Note the machine's name (henceforth <MACHINE_HOSTNAME>). If you are OK with it, leave it and skip the nested steps below. Otherwise:
    • Click the 3 dots on the last column of the view and select "Edit machine name..."
    • Untick the "Auto-generate from OS hostname" option so you can edit the text box below.
    • Give it the name you want. From now on, use this new name wherever you see <MACHINE_HOSTNAME> in the instructions that follow.
    • Click on "Update name" to dismiss the modal window.
  • Click on the DNS tab in the Tailscale Admin Portal
  • Make note of the Tailnet name being displayed. (Henceforth <TAILNET_NAME>)
    • Note that if you want to change it, you can re-roll among some generated names, but you can't simply set it to something custom. Do it now if at all: once certs are generated, you cannot change it.
  • Open a command line window and navigate to your main user home
    • Bash command: cd ~
    • Powershell command: cd $env:USERPROFILE
    • Windows cmd: cd /d %HOMEDRIVE%%HOMEPATH%
  • At the user home, create the .caddy/certs folder structure
    • Bash command: mkdir -p ~/.caddy/certs
    • Powershell command: mkdir $env:USERPROFILE\.caddy\certs
    • Windows cmd: mkdir %USERPROFILE%\.caddy\certs
  • Navigate into this new certs folder via the command cd .caddy\certs
  • Generate the certificates via the command: tailscale cert <MACHINE_HOSTNAME>.<TAILNET_NAME>
  • This should generate two files: <MACHINE_HOSTNAME>.<TAILNET_NAME>.crt and <MACHINE_HOSTNAME>.<TAILNET_NAME>.key
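
If you want to double-check what was generated and have openssl available, you can print the certificate's subject and validity window; a minimal sketch:

openssl x509 -in <MACHINE_HOSTNAME>.<TAILNET_NAME>.crt -noout -subject -dates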

1.3.2) Create the Caddyfile

The Caddyfile is the configuration file Caddy uses to define its servers.

  • In the same terminal window, navigate to the .caddy folder.

    • HINT:
      • Bash command: cd ~/.caddy
      • Powershell command: cd $env:USERPROFILE\.caddy
      • Windows cmd: cd /d %HOMEDRIVE%%HOMEPATH%\.caddy
  • Create a Caddyfile via the command touch Caddyfile (in PowerShell: New-Item Caddyfile)

  • Populate it with the following:

    # OpenWebUI via 443
    https://<MACHINE_HOSTNAME>.<TAILNET_NAME>:443 {
      tls /certs/<MACHINE_HOSTNAME>.<TAILNET_NAME>.crt /certs/<MACHINE_HOSTNAME>.<TAILNET_NAME>.key
      encode zstd gzip
      log {
        output stdout format console
      }
      reverse_proxy http://open-webui:8080
    }
    
    # ComfyUI via 8443
    https://<MACHINE_HOSTNAME>.<TAILNET_NAME>:8443 {
      tls /certs/<MACHINE_HOSTNAME>.<TAILNET_NAME>.crt /certs/<MACHINE_HOSTNAME>.<TAILNET_NAME>.key
      encode zstd gzip
      log {
        output stdout format console
      }
      reverse_proxy http://comfyui:8188
    }
    
    # Local-only HTTP access (short names)
    
    http://comfyui {
      encode zstd gzip
      log {
        output stdout format console
      }
      reverse_proxy http://comfyui:8188
    }
    
    http://openwebui {
      encode zstd gzip
      log {
        output stdout format console
      }
      reverse_proxy http://open-webui:8080
    }
    
    http://ollama {
      encode zstd gzip
      log {
        output stdout format console
      }
      reverse_proxy http://ollama:11434
    }

    Make sure to replace <MACHINE_HOSTNAME> & <TAILNET_NAME> accordingly.

  • Save the file.

Your structure should look as follows so far:

<USERHOME>\.caddy\
  ├─ Caddyfile
  └─ certs\
      ├─ <MACHINE_HOSTNAME>.<TAILNET_NAME>.crt
      └─ <MACHINE_HOSTNAME>.<TAILNET_NAME>.key
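
Before starting the container, you can sanity-check the Caddyfile with Caddy's built-in validator using the same image; a sketch run from the .caddy folder (PowerShell shown, to match the run command in the next step):

docker run --rm `
  -v ${PWD}/Caddyfile:/etc/caddy/Caddyfile:ro `
  -v ${PWD}/certs:/certs:ro `
  caddy:latest caddy validate --config /etc/caddy/Caddyfile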

1.3.3) Run Caddy via Docker

Alright, almost done with the setup. Now we run Caddy in a Docker container that will restart whenever there is an issue. Note that the setup bind-mounts the Caddyfile and certs from the current directory into the container (read-only), rather than copying them.

  • If you have been following the instructions above, you should be in the .caddy folder

  • Run the following command in a PowerShell window

    docker run -d --name caddy `
      -v ${PWD}/Caddyfile:/etc/caddy/Caddyfile:ro `
      -v ${PWD}/certs:/certs:ro `
      -v caddy_data:/data `
      -v caddy_config:/config `
      --add-host=host.docker.internal:host-gateway `
      --restart unless-stopped `
      -p 443:443 `
      -p 8443:8443 `
      -p 127.0.0.1:80:80 `
      --network web `
      caddy:latest
Explanation
  • -d flags the run command to be detached from the terminal
  • --name caddy simply names the container "caddy"
  • -p 443:443 publishes port 443, Caddy's default HTTPS port.
  • -p 8443:8443 also publishes port 8443. We will use this for ComfyUI later; ignore it if you don't plan to use that.
  • -p 127.0.0.1:80:80 opens up the HTTP port for local connections only; this is used to cleanly connect through to the backend systems.
  • -v ${PWD}/Caddyfile:/etc/caddy/Caddyfile:ro bind-mounts the Caddyfile from the current directory into the container, read-only.
  • -v ${PWD}/certs:/certs:ro bind-mounts the certs folder from the current directory into the container, read-only.
  • -v caddy_data:/data sets up a volume mapping; all the Caddy data is stored here.
  • -v caddy_config:/config sets up a volume mapping; all the Caddy config is stored here.
  • --add-host=host.docker.internal:host-gateway adds the Docker host's gateway to the container's hosts file so Caddy can reach services running on the host.
  • --restart unless-stopped makes the container auto-restart if something takes it down, unless it was stopped by the user.
  • --network web attaches it to the bridged network we created so the containers can see each other.
  • caddy:latest is the image to use, pulled directly from Docker Hub.

1.3.4) Verifying Caddy setup

Alright, now let's do some sanity checks to ensure that everything is working. First, let's verify from the main host machine.

  • Open a terminal window and type in curl -Ik https://<MACHINE_HOSTNAME>.<TAILNET_NAME>/
  • Verify that the response gives you an HTTP/1.1 200 OK
  • Repeat the command but this time with http and not https like so curl -Ik http://<MACHINE_HOSTNAME>.<TAILNET_NAME>/
  • It should give you an error, because our Caddyfile does not allow insecure connections at all.

Then, from any of the other tailnet devices in your network, let's repeat it:

  • Open a terminal window and type in curl -Ik https://<MACHINE_HOSTNAME>.<TAILNET_NAME>/
  • Verify that the response gives you an HTTP/1.1 200 OK
  • Repeat the command but this time with http and not https like so curl -Ik http://<MACHINE_HOSTNAME>.<TAILNET_NAME>/
  • It should also give you an error.

Congrats, you secured everything and are almost ready! We just need to set up a few more minor things and add some LLMs for you to play with!
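
A side note while we're here: if you later tweak the Caddyfile, you don't need to recreate the container. The file is bind-mounted read-only, so edits on the host are visible inside, and Caddy can reload its config in place:

docker exec caddy caddy reload --config /etc/caddy/Caddyfile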

1.4) Last minute setup

1.4.1) Fix broken links to Open-WebUI internals

This is due to Open WebUI using http://localhost:3000 as its default base URL.

  • Log onto OpenWebUI
  • Head to the Admin Panel
  • Under General, populate your WebUI URL with https://<MACHINE_HOSTNAME>.<TAILNET_NAME>/

That's it!

2) Interacting with some models

First, let's get you some models.

2.1) How to obtain LLM models

We'll showcase the two usual ways to get models:

  • The easiest way: via the Admin Panel's Connection Manager in OpenWebUI
  • The direct way: interacting with Ollama itself

2.1.1) Adding models via OpenWebUI

  • Navigate to your OpenWebUI instance
  • Open the Admin Panel
  • Navigate to Connections
  • Click on the down-arrow icon next to the Ollama connection for your Docker setup (it should read http://host.docker.internal:11434)
  • You can enter any of the models from ollama.com directly here by typing its name, e.g. deepseek-r1:8b
  • Then switch to the Models tab to enable / disable / manage the model

Additionally, you can run models not provided by Ollama if you have an Ollama-compatible GGUF version, e.g. hf.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF:BF16

2.1.2) Adding models via the Ollama Docker interface

  • Open a terminal window
  • Run the following: docker exec -it ollama ollama pull <target model>
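
Once a model is pulled, you can also talk to it without the UI via Ollama's HTTP API; a minimal sketch from a bash-like shell (on Windows, use curl.exe), assuming you pulled llama3.1:8b:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1:8b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'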

2.2) Models Quickstart

I recommend you install two models:

  • 'Fast Model': a smaller, low-latency model, ideally fine-tuned to your work, such as llama3.1:8b
  • 'Reasoning Model': something like deepseek-r1
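
A quickstart sketch using the CLI route from 2.1.2 (model tags may change on ollama.com; deepseek-r1:8b is one of the available sizes):

docker exec -it ollama ollama pull llama3.1:8b
docker exec -it ollama ollama pull deepseek-r1:8b
docker exec -it ollama ollama list    # confirm both models are present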

Extra Credit

Generating Images via a docker-contained ComfyUI

Show me how!

Although you have installed a pretty powerful environment, you still need to enable a system for image generation if that is also part of what you are trying to build. To do that, follow these instructions.

1) Create a local folder with the ComfyUI bindings & build files.

Create a folder with the following structure:

Root/
├── custom_nodes
├── docker
├── input
├── models
├── output
├── temp
└── user

Make note of the path, as you'll need it during the docker run that comes next.
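
If you'd rather script the folder creation, a one-liner sketch using bash brace expansion (PowerShell users can pass the folder names to mkdir instead):

mkdir -p Root/{custom_nodes,docker,input,models,output,temp,user}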

2) Create the Dockerfile to build your instance

  • Open a Terminal window
  • Navigate to the path where you created the ComfyUI Bindings
  • Navigate into the docker folder
  • Create a new file called Dockerfile
  • Populate it with the following
# Defines build arguments for the versions of PyTorch, CUDA, and cuDNN to use
ARG PYTORCH_VERSION=2.11.0
ARG CUDA_VERSION=13.0
ARG CUDNN_VERSION=9

# This image is based on the latest official PyTorch image, because it already contains CUDA, CuDNN, and PyTorch
FROM pytorch/pytorch:${PYTORCH_VERSION}-cuda${CUDA_VERSION}-cudnn${CUDNN_VERSION}-runtime

# Defines build arguments for the versions of ComfyUI and ComfyUI Manager to use
ARG COMFYUI_VERSION=0.18.2

# Installs Git (ComfyUI is installed by cloning its Git repository) along with other runtime dependencies
RUN apt-get update --assume-yes && \
    apt-get install --assume-yes \
        git \
        sudo \
        libgl1 \
        python3-venv \
        ffmpeg \
        libglib2.0-0 && \
    rm -rf /var/cache/apt/archives /var/lib/apt/lists/*

# Create a virtual environment for the container
ENV VIRTUAL_ENV=/opt/venv
RUN python -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

# Clones the ComfyUI repository and checks out the latest release
RUN git clone https://github.com/Comfy-Org/ComfyUI.git /opt/comfyui && \
    cd /opt/comfyui && \
    git checkout "v${COMFYUI_VERSION}"

# Installs the required Python packages for both ComfyUI and the ComfyUI Manager
RUN pip install \
    --requirement /opt/comfyui/requirements.txt \
    --requirement /opt/comfyui/manager_requirements.txt && \
    pip cache purge

# Sets the working directory to the ComfyUI directory
WORKDIR /opt/comfyui

# Exposes the default port of ComfyUI (this is not actually exposing the port to the host machine, but it is good practice to include it as metadata,
# so that the user knows which port to publish)
EXPOSE 8188

# Adds the startup script to the container; the startup script will create all necessary directories in the models and custom nodes volumes that were
# mounted to the container and symlink the ComfyUI Manager to the correct directory; it will also create a user with the same UID and GID as the user
# that started the container, so that the files created by the container are owned by the user that started the container and not the root user
ADD entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/bin/bash", "/entrypoint.sh"]

3) Populate the entrypoint script for ComfyUI

  • Open a Terminal window
  • Navigate to the path where you created the ComfyUI Bindings
  • Navigate into the docker folder
  • Create a new file called entrypoint.sh
  • Populate it with the following:
#!/bin/bash

# Check if we have to use custom ports or addresses
if [ -z "$PORT" ];
then
    echo "No value for PORT specified, using default port 8188"
    TARGET_PORT=8188
else
    TARGET_PORT=$PORT
    echo "CUSTOM PORT specified, using $TARGET_PORT..."
fi

# Creates the directories for the models inside of the volume that is mounted from the host
echo "Creating directories for models..."
MODEL_DIRECTORIES=(
    "checkpoints"
    "clip"
    "clip_vision"
    "configs"
    "controlnet"
    "diffusers"
    "diffusion_models"
    "embeddings"
    "gligen"
    "hypernetworks"
    "loras"
    "photomaker"
    "style_models"
    "text_encoders"
    "unet"
    "upscale_models"
    "vae"
    "vae_approx"
)
for MODEL_DIRECTORY in ${MODEL_DIRECTORIES[@]}; do
    mkdir -p /opt/comfyui/models/$MODEL_DIRECTORY
done

# The custom nodes that were installed using the ComfyUI Manager may have requirements of their own, which are not installed when the container is
# started for the first time; this loops over all custom nodes and installs the requirements of each custom node
echo "Installing requirements for custom nodes..."
for CUSTOM_NODE_DIRECTORY in /opt/comfyui/custom_nodes/*;
do
    if [ -f "$CUSTOM_NODE_DIRECTORY/requirements.txt" ];
    then
        CUSTOM_NODE_NAME=${CUSTOM_NODE_DIRECTORY##*/}
        CUSTOM_NODE_NAME=${CUSTOM_NODE_NAME//[-_]/ }
        echo "Installing requirements for $CUSTOM_NODE_NAME..."
        pip install --requirement "$CUSTOM_NODE_DIRECTORY/requirements.txt"
    fi
done

# Under normal circumstances, the container would be run as the root user, which is not ideal, because the files that are created by the container in
# the volumes mounted from the host, i.e., custom nodes and models downloaded by the ComfyUI Manager, are owned by the root user; the user can specify
# the user ID and group ID of the host user as environment variables when starting the container; if these environment variables are set, a non-root
# user with the specified user ID and group ID is created, and ComfyUI is run as this user; ComfyUI is started at its default port (--port 8188); the
# IP address is changed from localhost to 0.0.0.0 (--listen 0.0.0.0), because Docker is only forwarding traffic to the IP address it assigns to the
# container, which is unknown at build time; listening to 0.0.0.0 means that ComfyUI listens to all incoming traffic; the auto-launch feature is
# disabled (--disable-auto-launch), because we do not want (nor is it possible) to open a browser window in a Docker container; to allow users to pass
# in additional command line arguments ("$@"), for example, --enable-cors-header to enable CORS and allow external web apps to interact with ComfyUI
# in this container
if [ -z "$USER_ID" ] || [ -z "$GROUP_ID" ];
then
    echo "Running container as $USER..."
    exec python main.py \
        --port $TARGET_PORT \
        --listen 0.0.0.0 \
        --disable-auto-launch \
        --enable-manager \
        "$@"
else
    echo "Creating non-root user..."
    getent group $GROUP_ID > /dev/null 2>&1 || groupadd --gid $GROUP_ID comfyui-user
    id -u $USER_ID > /dev/null 2>&1 || useradd --uid $USER_ID --gid $GROUP_ID --create-home comfyui-user
    chown --recursive $USER_ID:$GROUP_ID /opt/comfyui
    export PATH=$PATH:/home/comfyui-user/.local/bin

    echo "Running container as comfyui-user ($USER_ID:$GROUP_ID)..."
    sudo --set-home --preserve-env=PATH --user \#$USER_ID \
        python main.py \
            --port $TARGET_PORT \
            --listen 0.0.0.0 \
            --disable-auto-launch \
            --enable-manager \
            "$@"
fi

4) Build the Docker Image

  • Open a Terminal window
  • Navigate to the path where you created the ComfyUI Bindings
  • Navigate into the docker folder
  • Run the following command: docker build -t comfyui-docker .
  • Wait for the operation to finish.

5) Run ComfyUI with correct volume bindings

Now, if you were in the docker folder, go back up one folder and run the command below. Note: GPU shown; drop --gpus all for CPU-only. Additionally, make sure you run this in a PowerShell terminal; Git Bash tends to mess up the volume bindings below.

docker run `
  --name comfyui `
  --detach `
  --restart unless-stopped `
  --env USER_ID=0 `
  --env GROUP_ID=0 `
  -v "${PWD}/models:/opt/comfyui/models:rw" `
  -v "${PWD}/custom_nodes:/opt/comfyui/custom_nodes:rw" `
  -v "${PWD}/input:/opt/comfyui/input:rw" `
  -v "${PWD}/output:/opt/comfyui/output:rw" `
  -p 127.0.0.1:8188:8188 `
  --network web `
  --gpus all `
  comfyui-docker:latest

Wait for it to complete. This may take a while, as it's a lot of setup. (Takes around 10 minutes on my end with no cache.)

6) Verify everything ran correctly

Run these commands:

  • docker ps --filter "name=comfyui"
  • docker inspect comfyui --format "{{json .Mounts}}" | Out-String
  • docker logs -f comfyui

You should see the following in the mount results:

  • The four bind mounts (models, custom_nodes, input, output) pointing at your host folders
  • No random hash-named volumes

And on that last command you should see logs ending with:

Starting server
To see the GUI go to: http://0.0.0.0:8188
To see the GUI go to: http://[::]:8188

7) Access the UI

Connect to http://127.0.0.1:8188

8) Access it from another machine

If you set yourself up with Tailscale, you can now access ComfyUI from other machines by hitting the following endpoint:

https://<MACHINE_HOSTNAME>.<TAILNET_NAME>:8443

If everything went well, you are DONE!

Using clean URLs to connect it all

Instead of using long calls, you can shorten them via your Caddyfile. If you are like me, you will likely forget these ports and get frustrated with what connects where. Luckily, we already did the work so that the system knows how to route calls through. All that is missing is adding the following three lines to the hosts file on the machine running it all:

127.0.0.1   comfyui
127.0.0.1   openwebui
127.0.0.1   ollama

Now you can hit the containers as follows:

  • http://openwebui for Open WebUI
  • http://comfyui for ComfyUI
  • http://ollama for the Ollama API

Troubleshooting

Can't hit your endpoint on another device?

Check that your Tailscale VPN is enabled. It's suggested to have it auto-connect by default and to enable both "VPN on Demand" and "Use Tailscale DNS Settings".

Updating

This section contains information on how to update all of the pieces involved. They each have a slightly unique update path so see below.

Important

You should always stop all the running containers before updating via docker stop <container name> (see the one-liner after this list) and then start/update them in this order:

  • caddy
  • ollama
  • openweb-ui
  • comfyUI
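
Since docker stop accepts multiple names, you can stop everything in one go before updating (container names as created above; drop comfyui if you skipped the extra credit):

docker stop caddy ollama open-webui comfyui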

Updating Caddy

Show me how!

Thankfully, Caddy has an official Docker image, so once it's set up all you have to do to update it is the following:

  • Open a terminal window

  • Run the following command docker pull caddy

  • Stop the current container running caddy via docker stop caddy if you haven't done so.

  • Remove the caddy container via the command docker rm caddy

  • Navigate to your <USERHOME>\.caddy\ folder

  • Run the image the way you did so before with the following POWERSHELL command:

    docker run -d --name caddy `
      -v ${PWD}/Caddyfile:/etc/caddy/Caddyfile:ro `
      -v ${PWD}/certs:/certs:ro `
      -v caddy_data:/data `
      -v caddy_config:/config `
      --add-host=host.docker.internal:host-gateway `
      --restart unless-stopped `
      -p 443:443 `
      -p 8443:8443 `
      -p 127.0.0.1:80:80 `
      --network web `
      caddy:latest
  • That's it! You should be back up and running. Start any other containers or proceed to the next update step.

Updating ollama

Show me how!

Thankfully, Ollama has an official Docker image, so once it's set up all you have to do to update it is the following:

  • Open a terminal window

  • Run the following command docker pull ollama/ollama

  • Stop the current container running ollama via docker stop ollama if you haven't done so.

  • Remove the ollama container via the command docker rm ollama

  • Run the image the way you did so before with the following POWERSHELL command:

    docker run -d `
      --name ollama `
      --gpus=all `
      --network web `
      -p 127.0.0.1:11434:11434 `
      -v ollama:/root/.ollama `
      --restart unless-stopped `
      ollama/ollama:latest
  • That's it! You should be back up and running. Start any other containers or proceed to the next update step.

Updating openweb-ui

Show me how!

Open WebUI finally has an official Docker image on Docker Hub, so once it's set up all you have to do to update it is the following:

  • Open a terminal window
  • Run the following command docker pull openwebui/open-webui
  • Stop the current container running Open WebUI via docker stop open-webui if you haven't done so.
  • Remove the open-webui container via the command docker rm open-webui
  • Run the image the way you did so before with the following POWERSHELL command:
docker run -d `
  --name open-webui `
  --network web `
  -p 127.0.0.1:3000:8080 `
  --add-host=host.docker.internal:host-gateway `
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 `
  -v open-webui:/app/backend/data `
  --restart unless-stopped `
  openwebui/open-webui:latest
  • That's it! You should be back up and running. Start any other containers or proceed to the next update step.

Updating comfyui

Show me how!

Alright, this one is the most annoying, as it takes a bit of figuring out what is possible.

  • First, check on ComfyUI's website whether they changed the PyTorch & CUDA versions they are targeting.
  • Write down the CUDA and PyTorch versions, and check whether that combination has a tag available.
    • This is because our source pulls the image found at pytorch/pytorch:<PYTORCH_VERSION>-cuda<CUDA_VERSION>-cudnn<CUDNN_VERSION>-runtime
    • Confirm that the expected new pytorch + cuda + cudnn tag exists.
  • Open a terminal window
  • Navigate yourself to the structure where you have your comfyui data
  • Crack open the dockerfile that is under the docker folder
  • Change the following variables to the latest version you verified
    • ARG PYTORCH_VERSION=
    • ARG CUDA_VERSION=
    • ARG CUDNN_VERSION=
    • ARG COMFYUI_VERSION=
  • Build this image using the command docker build -t comfyui-docker . (Note: Usually takes 10+ minutes)
  • Stop the current container running comfyui via docker stop comfyui if you haven't done so.
  • Remove the comfyui container via the command docker rm comfyui
  • Run the image the way you did so before with the following POWERSHELL command:
docker run `
  --name comfyui `
  --detach `
  --restart unless-stopped `
  --env USER_ID=0 `
  --env GROUP_ID=0 `
  -v "${PWD}/models:/opt/comfyui/models:rw" `
  -v "${PWD}/custom_nodes:/opt/comfyui/custom_nodes:rw" `
  -v "${PWD}/input:/opt/comfyui/input:rw" `
  -v "${PWD}/output:/opt/comfyui/output:rw" `
  -p 127.0.0.1:8188:8188 `
  --network web `
  --gpus all `
  comfyui-docker:latest
  • That's it! You should be back up and running. Start any other containers.