
Deployment Overview of Open WebUI with Ollama on Server

Prerequisites and Basic Requirements

The deployment requires a Linux server running Ubuntu. The following conditions must be met before proceeding:

  • Root privileges or sudo access are required to install system packages and manage services.
  • Docker Engine must be installed and running on the server.
  • The server must have access to the internet to download container images and model files.
  • Port 8080 must be available for the Open WebUI application.
  • Port 11434 is used internally by the Ollama service.
  • A domain name under hostkey.in (rendered from the {{ prefix }}{{ server_id }} template) is configured for SSL certificate management via Certbot.

File and Directory Structure

The application utilizes specific directories for configuration, data storage, and certificates:

  • /root/nginx: Contains the Docker Compose configuration for the Nginx proxy and Certbot.
  • /root/nginx/compose.yml: The Docker Compose file defining the Nginx and Certbot services.
  • /data/nginx/user_conf.d: Stores custom Nginx configuration files, including the host-specific configuration {{ prefix }}{{ server_id }}.hostkey.in.conf.
  • /data/nginx/nginx-certbot.env: Environment file containing settings for the Nginx-Certbot container.
  • /etc/letsencrypt: Mount point for SSL certificates managed by Certbot.
  • /app/backend/data: Persistent volume location for Open WebUI backend data.
  • /etc/systemd/system/ollama.service: Systemd unit file for the Ollama service.

Application Installation Process

The deployment involves installing the Ollama inference engine and the Open WebUI interface.

Ollama Installation

The Ollama service is installed using the official installation script. The following steps are performed:

  1. The installation script is executed via curl -fsSL https://ollama.com/install.sh | sh.
  2. The ollama system user is created if it does not already exist.
  3. The original ollama.service file is backed up to /etc/systemd/system/ollama.service.bak.
  4. The ollama.service file is updated to include the following environment variables:
       • OLLAMA_HOST=0.0.0.0
       • OLLAMA_ORIGINS=*
       • OLLAMA_FLASH_ATTENTION=1
  5. The systemd daemon is reloaded, and the ollama service is restarted and enabled to start on boot.
  6. The phi4 model is pulled using the command ollama pull phi4.
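
After step 4, the [Service] section of the unit file can be sketched as follows. This is an illustration only: the ExecStart path and the remaining directives come from the stock unit file installed by the script and may differ per installation.

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
Environment="OLLAMA_FLASH_ATTENTION=1"

Setting OLLAMA_HOST=0.0.0.0 makes the service listen on all interfaces so the Open WebUI container can reach it through host.docker.internal.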

Open WebUI Installation

The Open WebUI application is deployed as a Docker container with CUDA support:

  • The container image ghcr.io/open-webui/open-webui:cuda is pulled.
  • The container is named open-webui.
  • The container is configured to restart automatically (--restart always).
  • The container exposes port 8080 on the host.
  • The container utilizes all available GPUs (--gpus all).
  • The environment variable ENV is set to dev.
  • The environment variable OLLAMA_BASE_URLS is set to http://host.docker.internal:11434.
  • A named volume open-webui is mounted to /app/backend/data for data persistence.
  • The host is added to the container's DNS resolution via --add-host=host.docker.internal:host-gateway.

Docker Containers and Their Deployment

Two primary Docker deployments are managed on the server: the Open WebUI container and the Nginx-Certbot stack.

Open WebUI Container

The Open WebUI container is launched using the following command structure:

docker run -d -p 8080:8080 --gpus all \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  -e ENV='dev' \
  -e OLLAMA_BASE_URLS='http://host.docker.internal:11434' \
  --restart always ghcr.io/open-webui/open-webui:cuda

Nginx and Certbot Stack

The Nginx proxy and Certbot are managed via Docker Compose located in /root/nginx/compose.yml. The configuration includes:

  • Service Name: nginx
  • Image: jonasal/nginx-certbot:latest
  • Restart Policy: unless-stopped
  • Network Mode: host
  • Environment Variables:
      • [email protected]
      • Variables loaded from /data/nginx/nginx-certbot.env
  • Volumes:
      • nginx_secrets (external) mounted to /etc/letsencrypt
      • /data/nginx/user_conf.d mounted to /etc/nginx/user_conf.d
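
Based on the settings listed above, the compose.yml file can be sketched as follows. Service and volume names are taken from this document; any additional options in the actual file are not shown.

services:
  nginx:
    image: jonasal/nginx-certbot:latest
    restart: unless-stopped
    network_mode: host
    environment:
      - [email protected]
    env_file:
      - /data/nginx/nginx-certbot.env
    volumes:
      - nginx_secrets:/etc/letsencrypt
      - /data/nginx/user_conf.d:/etc/nginx/user_conf.d

volumes:
  nginx_secrets:
    external: true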

The stack is started using the command:

docker compose up -d

executed from the /root/nginx directory.

Proxy Servers

The Nginx proxy is configured to handle traffic for the application and manage SSL certificates.

  • The Nginx container uses the jonasal/nginx-certbot:latest image.
  • Custom configuration files are stored in /data/nginx/user_conf.d.
  • The specific host configuration file {{ prefix }}{{ server_id }}.hostkey.in.conf is modified to route traffic.
  • The proxy_pass directive within the location / block is set to http://127.0.0.1:8080 to forward requests to the Open WebUI container.
  • SSL certificates are managed automatically by Certbot using the email [email protected].
  • The nginx_secrets volume is used to store Let's Encrypt certificates.
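
A minimal version of the host configuration file might look like the following sketch. The certificate paths follow the usual Let's Encrypt live directory layout, and the WebSocket proxy headers are an assumption based on Open WebUI's chat interface; the actual file may contain additional directives.

server {
    listen 443 ssl;
    server_name {{ prefix }}{{ server_id }}.hostkey.in;

    ssl_certificate     /etc/letsencrypt/live/{{ prefix }}{{ server_id }}.hostkey.in/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/{{ prefix }}{{ server_id }}.hostkey.in/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}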

Starting, Stopping, and Updating

Service management commands are used to control the Ollama service and Docker containers.

Ollama Service

  • To restart the service: systemctl restart ollama
  • To enable the service on boot: systemctl enable ollama
  • To reload the systemd daemon after configuration changes: systemctl daemon-reload
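
After restarting, the service can be checked with standard systemd and Ollama commands, for example:

systemctl status ollama
journalctl -u ollama --no-pager -n 50
ollama list

The last command lists the locally available models and confirms that the API is responding.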

Open WebUI Container

  • To check the container status: docker inspect -f '{{.State.Status}}' open-webui
  • To stop the container: docker stop open-webui
  • To start the container: docker start open-webui
  • To update the container, the existing container must be removed and a new one created using the docker run command with the latest image tag.

Nginx and Certbot Stack

  • To start the stack: docker compose up -d (executed from /root/nginx)
  • To stop the stack: docker compose down (executed from /root/nginx)
  • To update the stack, modify the compose.yml file or environment variables and run docker compose up -d again.