# Deployment Overview of Open WebUI with Gemma3-27B on a Server
## Prerequisites and Basic Requirements

The deployment requires a server running Ubuntu with the following components installed and configured:

- Root privileges or sudo access.
- Docker Engine installed and running.
- NVIDIA GPU drivers and the NVIDIA Container Toolkit for GPU acceleration.
- Network access to the internet for downloading models and certificates.
- Port 8080 available for the Open WebUI interface.
- Port 11434 available for the Ollama API.
- Ports 80 and 443 available for the Nginx reverse proxy and SSL certificate issuance.
## File and Directory Structure

The application utilizes the following directory structure for configuration and data storage:

- `/root/nginx`: Contains the Docker Compose configuration for the Nginx proxy.
- `/root/nginx/compose.yml`: The Docker Compose file defining the Nginx and Certbot services.
- `/data/nginx/nginx-certbot.env`: Environment file containing Nginx configuration variables.
- `/data/nginx/user_conf.d`: Directory for custom Nginx configuration files.
- `/etc/letsencrypt`: Volume mount for SSL certificates managed by Certbot.
- `/etc/systemd/system/ollama.service`: Systemd service file for the Ollama backend.
- `/etc/docker/daemon.json`: Docker daemon configuration file for NVIDIA runtime settings.
## Application Installation Process

The deployment involves installing the Ollama backend, configuring the Docker environment for GPU support, and deploying the Open WebUI frontend.

- Install Ollama: The Ollama package is installed using the official installation script.
- Configure Ollama Service: The `ollama.service` file is modified to expose the service on all network interfaces and enable flash attention. The following environment variables are added: `OLLAMA_HOST=0.0.0.0`, `OLLAMA_ORIGINS=*`, `OLLAMA_FLASH_ATTENTION=1`.
- Pull the Model: The `gemma3:27b` model is downloaded via the Ollama CLI (`ollama pull gemma3:27b`).
- Install NVIDIA Container Toolkit: The `nvidia-container-toolkit` and `nvidia-container-runtime` packages are installed to enable GPU passthrough for Docker containers.
- Configure Docker Runtime: The Docker daemon is configured to use the NVIDIA runtime by default via `/etc/docker/daemon.json`.
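Concretely, the service and runtime changes above might look like the following sketch. The `[Service]` environment lines come from this document; the `daemon.json` keys beyond `default-runtime` are typical values for the NVIDIA runtime, not taken from the source:

```
# Environment lines added to the [Service] section of
# /etc/systemd/system/ollama.service
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
Environment="OLLAMA_FLASH_ATTENTION=1"

# /etc/docker/daemon.json -- sets the NVIDIA runtime as the default
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```

After editing the unit file, run `systemctl daemon-reload && systemctl restart ollama`; the Docker daemon must also be restarted (`systemctl restart docker`) for the runtime change to take effect.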
## Docker Containers and Their Deployment
The system deploys two primary Docker containers: Open WebUI and Nginx with Certbot.
### Open WebUI Container
The Open WebUI container is launched with the following specifications:

- Image: `ghcr.io/open-webui/open-webui:cuda`
- Container Name: `open-webui`
- Port Mapping: Host port 8080 maps to container port 8080.
- GPU Access: The `--gpus all` flag is used to enable GPU acceleration.
- Host Resolution: The `--add-host=host.docker.internal:host-gateway` flag allows the container to reach the host's Ollama service.
- Volume: A named volume `open-webui` is mounted to `/app/backend/data` for persistent storage.
- Environment Variables:
    - `ENV=dev`
    - `OLLAMA_BASE_URLS=http://host.docker.internal:11434`
- Restart Policy: Set to `always`.
The command used to run the container is:

```shell
docker run -d -p 8080:8080 --gpus all \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  -e ENV='dev' \
  -e OLLAMA_BASE_URLS='http://host.docker.internal:11434' \
  --restart always ghcr.io/open-webui/open-webui:cuda
```
### Nginx and Certbot Container
The Nginx proxy and Certbot are deployed using Docker Compose located in `/root/nginx/compose.yml`.

- Image: `jonasal/nginx-certbot:latest`
- Restart Policy: `unless-stopped`
- Network Mode: `host`
- Environment:
    - `[email protected]`
    - Additional variables are loaded from `/data/nginx/nginx-certbot.env`.
- Volumes:
    - `nginx_secrets` (external) mounted to `/etc/letsencrypt` for SSL certificates.
    - `/data/nginx/user_conf.d` mounted to `/etc/nginx/user_conf.d` for custom configurations.
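Assembled from the specifications above, the compose file might look like the following sketch; the service name is an assumption, while the image, restart policy, network mode, environment, and volumes come from this document:

```yaml
# /root/nginx/compose.yml (sketch)
services:
  nginx-certbot:
    image: jonasal/nginx-certbot:latest
    restart: unless-stopped
    network_mode: host
    environment:
      - [email protected]
    env_file:
      - /data/nginx/nginx-certbot.env
    volumes:
      - nginx_secrets:/etc/letsencrypt
      - /data/nginx/user_conf.d:/etc/nginx/user_conf.d

volumes:
  nginx_secrets:
    external: true
```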
The deployment is executed via `docker compose up -d` from the `/root/nginx` directory.

## Proxy Servers
The Nginx container acts as a reverse proxy and handles SSL certificate management using Certbot.

- Image: `jonasal/nginx-certbot:latest`
- Configuration: Custom Nginx configurations are placed in the `/data/nginx/user_conf.d` directory on the host, which is mounted into the container.
- SSL Certificates: Managed automatically by Certbot within the container, stored in the `nginx_secrets` volume at `/etc/letsencrypt`.
- Email: Certificate notifications are sent to `[email protected]`.
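A custom configuration in `/data/nginx/user_conf.d` could look like the following sketch. The domain `example.com`, the certificate name `open-webui`, and the WebSocket headers are assumptions; the `jonasal/nginx-certbot` image issues certificates for server names whose configs reference paths under `/etc/letsencrypt/live/`:

```
# /data/nginx/user_conf.d/open-webui.conf (sketch; example.com is a placeholder)
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/open-webui/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/open-webui/privkey.pem;

    location / {
        # Forward to the Open WebUI container published on host port 8080
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;

        # WebSocket upgrade headers, used by the Open WebUI interface
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```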
## Access Rights and Security
- Ollama User: A system user named `ollama` is created to manage the Ollama service.
- Firewall: Ensure that ports `80`, `443`, `8080`, and `11434` are open on the server firewall to allow external access to the proxy, web interface, and Ollama API.
- CORS: The Ollama service is configured with `OLLAMA_ORIGINS=*` to allow cross-origin requests from the Open WebUI frontend.
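Assuming `ufw` as the firewall (an assumption; the source does not name one), opening the listed ports could look like:

```
# Sketch, assuming ufw; adapt to your firewall of choice
ufw allow 80/tcp     # HTTP (certificate issuance, redirects)
ufw allow 443/tcp    # HTTPS (Nginx reverse proxy)
ufw allow 8080/tcp   # Open WebUI interface
ufw allow 11434/tcp  # Ollama API
```

Note that exposing 8080 and 11434 makes the web interface and the Ollama API reachable directly, bypassing the proxy; restrict them to trusted networks if that is not intended.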
## Permission Settings
- Nginx Directory: The `/root/nginx` directory is owned by `root` with permissions `0755`.
- Compose File: The `/root/nginx/compose.yml` file is owned by `root` with permissions `0644`.
- Docker Daemon Config: The `/etc/docker/daemon.json` file is owned by `root` with permissions `0644`.
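The listed ownership and modes can be applied with standard tools:

```
# Apply the ownership and permissions listed above
chown root:root /root/nginx /root/nginx/compose.yml /etc/docker/daemon.json
chmod 0755 /root/nginx
chmod 0644 /root/nginx/compose.yml /etc/docker/daemon.json
```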
## Starting, Stopping, and Updating

### Ollama Service

The Ollama service is managed via systemd.

- Start/Enable: `systemctl enable --now ollama`

### Open WebUI Container

- Start: The container starts automatically due to the `--restart always` policy.
- Stop: `docker stop open-webui`
- Remove: `docker rm open-webui`

### Nginx and Certbot Stack

All commands are run from the `/root/nginx` directory.

- Start: `docker compose up -d`
- Stop: `docker compose down`
- Update: Pull the latest image and restart the stack with `docker compose pull && docker compose up -d`.
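The update procedure above covers only the Compose stack. For the Open WebUI container, which was started with plain `docker run`, the usual approach (not specified in the source) is to pull the new image and recreate the container with the same flags:

```
docker pull ghcr.io/open-webui/open-webui:cuda
docker stop open-webui && docker rm open-webui
docker run -d -p 8080:8080 --gpus all \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  -e ENV='dev' \
  -e OLLAMA_BASE_URLS='http://host.docker.internal:11434' \
  --restart always ghcr.io/open-webui/open-webui:cuda
```

User data survives the recreation because it lives in the named volume `open-webui`, not in the container filesystem.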