
Deployment Overview of Open WebUI with Ollama on the Server

Prerequisites and Basic Requirements

The following requirements must be met on the server before deployment:

  • Operating System: Ubuntu

  • Privileges: Root access is required for service configuration and container management

  • Domain: The application is configured for the hostkey.in zone

  • Ports:

    • Port 11434 for the Ollama backend

    • Port 8080 for the Open WebUI internal service

    • Port 443 for external HTTPS access

FQDN of the Final Panel

The application is accessible via the Fully Qualified Domain Name (FQDN) format:

llama{Server ID}.hostkey.in:443

Replace {Server ID} with the specific server identifier to access the panel.
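As a quick reachability check, the panel can be probed with curl once DNS is in place. This is an illustrative sketch only; "123456" is a hypothetical stand-in for the real {Server ID}:

```shell
# Hypothetical example: replace 123456 with the actual server identifier.
curl -I https://llama123456.hostkey.in
# A healthy deployment responds over HTTPS with the Open WebUI front end.
```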

File and Directory Structure

The deployment utilizes the following directory structure for configuration, data, and certificates:

  • /root/nginx: Contains the Nginx and Certbot Docker Compose configuration.

  • /root/nginx/compose.yml: The main Docker Compose file for the reverse proxy.

  • /data/nginx/user_conf.d/: Stores custom Nginx configuration files for specific domains.

  • /data/nginx/nginx-certbot.env: Environment variables for the Nginx-Certbot service.

  • /data/nginx/user_conf.d/llama{Server ID}.hostkey.in.conf: Specific Nginx configuration for the application.

  • /etc/systemd/system/ollama.service: Systemd service file for Ollama.

  • /etc/systemd/system/ollama.service.bak: Backup of the original Ollama service file.

Application Installation Process

The application stack consists of Ollama running as a native system service and Open WebUI running as a Docker container.

  • Ollama Installation:

    • Installed via the official shell script: curl -fsSL https://ollama.com/install.sh | sh

    • The ollama system user is created to manage the service.

    • The llama3.3 model is downloaded and ready for use.

  • Open WebUI Deployment:

    • Deployed as a Docker container from the ghcr.io/open-webui/open-webui:cuda image.

    • The container is named open-webui and is configured to restart automatically.

    • GPU acceleration is enabled via the --gpus all flag.
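The steps above can be condensed into a command sequence. This is a sketch, assuming root privileges, a CUDA-capable host, and Docker already installed; the container flags mirror those listed in the Docker containers section:

```shell
# 1. Install Ollama via the official script (requires network access and root).
curl -fsSL https://ollama.com/install.sh | sh

# 2. Pull the model used by this deployment.
ollama pull llama3.3

# 3. Run Open WebUI as a container with GPU acceleration.
docker run -d \
  --name open-webui \
  --restart always \
  --gpus all \
  -p 8080:8080 \
  -e ENV=dev \
  -e OLLAMA_BASE_URLS=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --add-host=host.docker.internal:host-gateway \
  ghcr.io/open-webui/open-webui:cuda
```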

Access Rights and Security

  • Firewall: The system allows incoming traffic on port 443 (HTTPS) for external users. Internal services communicate over ports 8080 and 11434.

  • Users: A dedicated system user named ollama is created to run the Ollama service.

  • Service Restrictions:

    • Ollama is configured to listen on all network interfaces (OLLAMA_HOST=0.0.0.0).

    • Origin restrictions are disabled (OLLAMA_ORIGINS=*) so that requests from the Open WebUI interface are accepted.

    • Flash Attention is enabled (OLLAMA_FLASH_ATTENTION=1) as a performance optimization.
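These settings are applied as environment directives in the Ollama systemd unit. The sketch below writes a drop-in to a scratch path for illustration only; on the server the directives belong in /etc/systemd/system/ollama.service (or a drop-in created with systemctl edit ollama). OLLAMA_HOST and OLLAMA_FLASH_ATTENTION are the variable names documented by Ollama:

```shell
# Illustration only: written to /tmp here; the real file lives under
# /etc/systemd/system/ (e.g. ollama.service.d/override.conf).
mkdir -p /tmp/ollama-demo
cat > /tmp/ollama-demo/override.conf <<'EOF'
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
Environment="OLLAMA_FLASH_ATTENTION=1"
EOF

# On the server, apply the change with:
#   systemctl daemon-reload && systemctl restart ollama
```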

Docker Containers and Their Deployment

The deployment utilizes the following Docker containers:

  • Open WebUI:

    • Image: ghcr.io/open-webui/open-webui:cuda

    • Port Mapping: Host port 8080 maps to container port 8080.

    • Volumes: A named volume open-webui is mounted at /app/backend/data for data persistence.

    • Environment Variables:

      • ENV: Set to dev

      • OLLAMA_BASE_URLS: Set to http://host.docker.internal:11434

    • Additional Flags: --add-host=host.docker.internal:host-gateway is used to resolve the host machine from within the container.

  • Nginx Certbot Proxy:

    • Image: jonasal/nginx-certbot:latest

    • Network Mode: host

    • Volumes:

      • nginx_secrets (external) mounted at /etc/letsencrypt

      • /data/nginx/user_conf.d mounted at /etc/nginx/user_conf.d

    • Environment: A contact e-mail address (redacted in this document as "[email protected]") is configured for certificate notifications.
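A compose.yml consistent with these settings might look like the sketch below. The env_file line is an assumption based on the nginx-certbot.env file listed earlier, and the file is written to a scratch path here rather than its real location, /root/nginx:

```shell
# Illustrative sketch; the deployment keeps this file at /root/nginx/compose.yml.
mkdir -p /tmp/nginx-demo
cat > /tmp/nginx-demo/compose.yml <<'EOF'
services:
  nginx-certbot:
    image: jonasal/nginx-certbot:latest
    network_mode: host
    env_file:
      - /data/nginx/nginx-certbot.env
    volumes:
      - nginx_secrets:/etc/letsencrypt
      - /data/nginx/user_conf.d:/etc/nginx/user_conf.d
volumes:
  nginx_secrets:
    external: true
EOF
```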

Proxy Servers

A reverse proxy is implemented using Nginx with automatic SSL certificate management via Certbot.

  • Software: Nginx-Certbot Docker container.

  • SSL/TLS: Enabled via Let's Encrypt certificates stored in the nginx_secrets volume.

  • Configuration:

    • The proxy configuration is located at /data/nginx/user_conf.d/llama{Server ID}.hostkey.in.conf.

    • The proxy_pass directive forwards traffic to http://127.0.0.1:8080.

    • External requests on port 443 are routed to the internal Open WebUI service.
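A minimal version of that domain file might look as follows. This is a sketch: "123456" stands in for the real {Server ID}, the certificate paths assume Let's Encrypt's usual live/ layout, and the WebSocket headers are an assumption (Open WebUI's chat interface relies on WebSockets). It is written to a scratch path here for illustration:

```shell
# Illustrative sketch; the real file lives in /data/nginx/user_conf.d/.
mkdir -p /tmp/nginx-demo-conf
cat > /tmp/nginx-demo-conf/llama123456.hostkey.in.conf <<'EOF'
server {
    listen 443 ssl;
    server_name llama123456.hostkey.in;

    ssl_certificate     /etc/letsencrypt/live/llama123456.hostkey.in/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/llama123456.hostkey.in/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;

        # WebSocket support for the chat interface (assumed headers).
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
EOF
```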

Permission Settings

  • Nginx Configuration Directory:

    • Path: /root/nginx

    • Owner: root

    • Group: root

    • Mode: 0755

  • Docker Compose File:

    • Path: /root/nginx/compose.yml

    • Owner: root

    • Group: root

    • Mode: 0644

Location of Configuration Files and Data

Component        Configuration/Path                                        Description
Nginx Proxy      /root/nginx/compose.yml                                   Main Docker Compose definition
Nginx Conf       /data/nginx/user_conf.d/llama{Server ID}.hostkey.in.conf  Domain-specific proxy rules
SSL Secrets      nginx_secrets (Docker volume)                             Let's Encrypt certificates
Ollama Service   /etc/systemd/system/ollama.service                        Systemd unit file
Open WebUI Data  open-webui (Docker volume)                                Backend application data
Certbot Env      /data/nginx/nginx-certbot.env                             Environment variables for the proxy

Available Ports for Connection

  • Port 443: HTTPS access to the Open WebUI interface via the custom domain.

  • Port 8080: Internal port of the Open WebUI container; external users reach it only through the reverse proxy, which forwards to 127.0.0.1:8080.

  • Port 11434: Internal access to the Ollama API service.
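On the server itself, each service can be probed locally. These are illustrative checks run on the host, not part of the deployment ("123456" again stands in for the real {Server ID}):

```shell
# Ollama answers a plain-text banner on its root endpoint.
curl -s http://127.0.0.1:11434/

# Open WebUI should answer on its internal port.
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8080/

# From outside the host, only port 443 is reachable:
curl -I https://llama123456.hostkey.in
```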

Starting, Stopping, and Updating

  • Ollama Service Management:

    • Restart the service: systemctl restart ollama

    • Reload systemd after editing the unit file: systemctl daemon-reload

    • The service is enabled to start on boot.

  • Docker Containers:

    • Open WebUI runs with --restart always, so it restarts automatically after a failure or a system reboot.

    • The Nginx-Certbot proxy is managed with docker compose from the /root/nginx directory.

    • To (re)start the proxy stack, run docker compose up -d from /root/nginx.
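Updating follows the usual pull-and-recreate pattern. A sketch, assuming the container flags listed in the Docker containers section (the named volume preserves application data across the replacement):

```shell
# Update Open WebUI: pull the new image, then replace the container.
docker pull ghcr.io/open-webui/open-webui:cuda
docker rm -f open-webui
docker run -d --name open-webui --restart always --gpus all \
  -p 8080:8080 \
  -e ENV=dev \
  -e OLLAMA_BASE_URLS=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --add-host=host.docker.internal:host-gateway \
  ghcr.io/open-webui/open-webui:cuda

# Update the proxy stack from /root/nginx:
cd /root/nginx && docker compose pull && docker compose up -d
```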
