# Deployment Overview of Open WebUI with DeepSeek-R1:14B on Server
## Prerequisites and Basic Requirements
The deployment requires a server running Ubuntu with the following specifications:

- Operating System: Ubuntu (compatible with the apt package manager).
- Privileges: Root access or sudo privileges are required for system configuration and service management.
- Network: The server must have internet access to download Docker images, Ollama binaries, and the DeepSeek-R1:14B model.
- Ports:
    - Port 80: Required for HTTP traffic and ACME challenge validation.
    - Port 443: Required for HTTPS traffic.
    - Port 8080: Used internally by Open WebUI.
    - Port 11434: Used internally by the Ollama service.
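If the server uses `ufw` as its firewall, the external ports can be opened as follows (a sketch only; this document does not specify the firewall tooling, so adapt to whatever is in use):

```shell
# Allow external web traffic; ports 8080 and 11434 stay on localhost
# and must NOT be exposed externally
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status verbose
```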
## File and Directory Structure
The application utilizes the following directory structure for configuration and data storage:

- `/root/nginx`: Contains the Docker Compose configuration for the proxy server.
- `/root/nginx/compose.yml`: The Docker Compose file defining the Nginx and Certbot services.
- `/data/nginx/user_conf.d`: Directory containing custom Nginx virtual host configurations.
- `/data/nginx/nginx-certbot.env`: Environment file for Nginx Certbot settings.
- `/etc/systemd/system/ollama.service`: The main systemd unit file for Ollama.
- `/etc/systemd/system/ollama.service.d/override.conf`: Systemd drop-in configuration for Ollama environment variables.
- `/var/www/certbot`: Webroot directory inside the Nginx container used for ACME challenge validation.
## Application Installation Process
The installation involves setting up the Ollama backend, pulling the specific AI model, and deploying the Open WebUI frontend via Docker.
1. Install System Prerequisites: The system ensures the presence of the `curl` and `ca-certificates` packages.
2. Install Ollama: The Ollama service is installed using the official installation script. This creates the `ollama` system user and installs the binary to `/usr/local/bin/ollama`.
3. Configure Ollama Service: A systemd drop-in is created at `/etc/systemd/system/ollama.service.d/override.conf` to set the following environment variables:
    - `OLLAMA_HOST=0.0.0.0`
    - `OLLAMA_ORIGINS=*`
    - `OLLAMA_FLASH_ATTENTION=1`
4. Download the AI Model: The DeepSeek-R1:14B model is pulled into the local Ollama registry.
5. Deploy Open WebUI: The Open WebUI application is deployed as a Docker container using the `ghcr.io/open-webui/open-webui:cuda` image. The container is configured with:
    - Network mode: `host`.
    - Environment variable `ENV` set to `dev`.
    - Environment variable `OLLAMA_BASE_URLS` set to `http://127.0.0.1:11434`.
    - A named volume `open-webui` mounted to `/app/backend/data` for data persistence.
    - GPU device requests enabled for hardware acceleration.
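The installation steps above can be condensed into the following shell sketch. The Ollama install URL is the official one; the model tag and container flags follow this document, but verify them against your environment before running:

```shell
# 1. Prerequisites
sudo apt-get update && sudo apt-get install -y curl ca-certificates

# 2. Install Ollama via the official script (creates the `ollama` user)
curl -fsSL https://ollama.com/install.sh | sh

# 3. Systemd drop-in with the environment variables described above
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf >/dev/null <<'EOF'
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
Environment="OLLAMA_FLASH_ATTENTION=1"
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama

# 4. Pull the model into the local Ollama registry
ollama pull deepseek-r1:14b

# 5. Deploy Open WebUI with GPU access and persistent data volume
docker run -d --name open-webui --restart always --network host \
  -e ENV=dev -e OLLAMA_BASE_URLS=http://127.0.0.1:11434 \
  -v open-webui:/app/backend/data --gpus all \
  ghcr.io/open-webui/open-webui:cuda
```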
## Docker Containers and Their Deployment
Two primary Docker containers are managed in this deployment:
1. Open WebUI Container:
    - Name: `open-webui`
    - Image: `ghcr.io/open-webui/open-webui:cuda`
    - Restart Policy: `always`
    - Network Mode: `host`
    - Volumes: `open-webui:/app/backend/data`
    - Device Requests: GPU capabilities are requested for inference acceleration.
2. Nginx and Certbot Container:
    - Image: `jonasal/nginx-certbot:latest`
    - Restart Policy: `unless-stopped`
    - Network Mode: `host`
    - Volumes:
        - `nginx_secrets` (external) mounted to `/etc/letsencrypt`.
        - `/data/nginx/user_conf.d` mounted to `/etc/nginx/user_conf.d`.
    - Environment:
        - `CERTBOT_EMAIL` is set to `[email protected]`.
        - Additional settings are loaded from `/data/nginx/nginx-certbot.env`.
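Based on the container settings listed above, `/root/nginx/compose.yml` might look roughly like the following sketch. The service name is an assumption; only the image, restart policy, network mode, volumes, and environment settings come from this document:

```yaml
# /root/nginx/compose.yml -- reconstructed sketch, not the verbatim file
services:
  nginx-certbot:                        # service name is an assumption
    image: jonasal/nginx-certbot:latest
    restart: unless-stopped
    network_mode: host
    environment:
      - CERTBOT_EMAIL=[email protected]  # value is redacted in the source
    env_file:
      - /data/nginx/nginx-certbot.env
    volumes:
      - nginx_secrets:/etc/letsencrypt
      - /data/nginx/user_conf.d:/etc/nginx/user_conf.d

volumes:
  nginx_secrets:
    external: true
```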
## Proxy Servers
The deployment utilizes an Nginx container with Certbot integration to handle SSL termination and reverse proxying.
- Configuration Location: The Nginx configuration is managed via Docker Compose, located at `/root/nginx/compose.yml`.
- Virtual Host Configuration:
    - HTTPS Configuration: Located in `/data/nginx/user_conf.d/<prefix><server_id>.hostkey.in.conf`.
        - The `proxy_pass` directive is set to `http://127.0.0.1:8080` to forward traffic to the Open WebUI container.
        - The `Host` header is preserved using `proxy_set_header Host $host`.
    - HTTP Configuration: Located in `/data/nginx/user_conf.d/<prefix><server_id>.hostkey.in.http.conf`.
        - Listens on port 80.
        - Serves the ACME challenge files from `/var/www/certbot/.well-known/acme-challenge/`.
        - Redirects all other traffic to HTTPS using a 301 redirect.
- SSL Management: Certbot is integrated within the Nginx container to automatically obtain and renew SSL certificates for the domain.
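Assuming the behavior described above, the two virtual-host files might look roughly like this sketch. The `server_name` and certificate paths are illustrative placeholders; only the ports, webroot, redirect, `proxy_pass` target, and `Host` header handling come from this document:

```nginx
# <prefix><server_id>.hostkey.in.http.conf -- port 80: ACME challenges + redirect
server {
    listen 80;
    server_name example.hostkey.in;          # placeholder domain

    # Serve ACME challenge files from the Certbot webroot
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    # Everything else goes to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}

# <prefix><server_id>.hostkey.in.conf -- port 443: TLS termination + reverse proxy
server {
    listen 443 ssl;
    server_name example.hostkey.in;          # placeholder domain

    ssl_certificate     /etc/letsencrypt/live/example.hostkey.in/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.hostkey.in/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;    # Open WebUI
        proxy_set_header Host $host;         # preserve the original Host header
    }
}
```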
## Starting, Stopping, and Updating
The services are managed using systemd for Ollama and Docker Compose for the proxy and application containers.
- Ollama Service:
    - The service is enabled to start on boot.
    - After configuration changes, reload systemd and restart the service.
- Nginx and Certbot:
    - The container stack is started using Docker Compose from the `/root/nginx` directory.
    - The Nginx configuration can be tested and then reloaded inside the running container.
- Open WebUI:
    - The container is managed by Docker and set to restart automatically.
    - To update the image, pull the latest version and recreate the container.
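The management operations above might look like the following commands. Container and directory names follow this document, the Compose service name is an assumption, and `docker compose` assumes Compose v2 syntax:

```shell
# Ollama: enable at boot, restart after editing the systemd drop-in
sudo systemctl enable ollama
sudo systemctl daemon-reload
sudo systemctl restart ollama

# Nginx/Certbot stack: start from the Compose directory
cd /root/nginx
docker compose up -d

# Test, then reload, the Nginx configuration inside the container
# (service name "nginx-certbot" is an assumption; check `docker compose ps`)
docker compose exec nginx-certbot nginx -t
docker compose exec nginx-certbot nginx -s reload

# Open WebUI: update to the latest image and recreate the container
docker pull ghcr.io/open-webui/open-webui:cuda
docker rm -f open-webui
docker run -d --name open-webui --restart always --network host \
  -e ENV=dev -e OLLAMA_BASE_URLS=http://127.0.0.1:11434 \
  -v open-webui:/app/backend/data --gpus all \
  ghcr.io/open-webui/open-webui:cuda
```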
## Access Rights and Security
- System User: The `ollama` system user is created to run the Ollama service.
- Firewall: The deployment assumes the server firewall allows traffic on ports 80 and 443 for web access. Internal communication occurs over `localhost` (127.0.0.1) on ports 8080 and 11434.
- SSL: All external traffic is encrypted via HTTPS using Let's Encrypt certificates managed by Certbot.
- Origins: The Ollama service is configured with `OLLAMA_ORIGINS=*` to allow cross-origin requests from the web interface.
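A quick sanity check that the internal services are reachable only on localhost and that TLS termination works externally (the Ollama `/api/version` endpoint is part of its standard API; the external domain is a placeholder):

```shell
# Ollama should answer on its internal port with a JSON version string
curl -s http://127.0.0.1:11434/api/version

# Open WebUI should answer on its internal port (expect an HTTP 200)
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8080/

# External access must go through HTTPS (replace with the real domain)
curl -sI https://example.hostkey.in/ | head -n 1
```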
## Permission Settings
- Nginx Directory: The `/root/nginx` directory is owned by `root` with permissions `0755`.
- Compose File: The `/root/nginx/compose.yml` file is owned by `root` with permissions `0644`.
- Nginx Config Directory: The `/data/nginx/user_conf.d` directory is mounted into the container and must be accessible by the Nginx process.
- ACME Webroot: The directory `/var/www/certbot/.well-known/acme-challenge` is created inside the Nginx container to store challenge files for certificate validation.
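The ownership and mode settings above can be applied, or verified, with standard commands:

```shell
# Apply the documented ownership and permissions
sudo chown root:root /root/nginx /root/nginx/compose.yml
sudo chmod 0755 /root/nginx
sudo chmod 0644 /root/nginx/compose.yml

# Verify: prints mode, owner, and path for each entry
stat -c '%a %U %n' /root/nginx /root/nginx/compose.yml
```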