# Deployment Overview of Qwen3-Coder on a Server

## Prerequisites and Basic Requirements

The following system requirements must be met before deploying the Qwen3-Coder application:
- Operating System: Ubuntu (as indicated by the `apt` package manager usage in the configuration).
- Privileges: Root or `sudo` privileges are required to install system packages, manage services, and run Docker containers.
- Domain: The server must be associated with the `hostkey.in` zone.
- Ports:
    - Port `8080`: used internally by the Open WebUI application.
    - Port `443`: used externally for HTTPS traffic via the Nginx proxy.
    - Port `11434`: used internally by the Ollama service.
## FQDN of the Final Panel

The application is accessible via a Fully Qualified Domain Name (FQDN) following the pattern:

`qwen3-coder<Server ID>.hostkey.in:443`

where `<Server ID>` is replaced by the specific identifier assigned to the server instance. Traffic on this domain is secured via SSL and routed to the internal application port.
## File and Directory Structure

The deployment uses the following directory structure for configuration, data, and certificates:

- `/root/nginx/`: contains the Docker Compose configuration for the Nginx proxy and Certbot.
- `/root/nginx/compose.yml`: the Docker Compose file defining the Nginx service.
- `/data/nginx/nginx-certbot.env`: environment file for Nginx and Certbot configuration.
- `/data/nginx/user_conf.d/`: directory containing Nginx site-specific configuration files.
- `/data/nginx/user_conf.d/qwen3-coder<Server ID>.hostkey.in.conf`: Nginx configuration file specific to the Qwen3-Coder instance.
- `/etc/systemd/system/ollama.service`: systemd unit file for the Ollama service.
- `/usr/share/ollama/.ollama/models/`: storage location for the Ollama models.
- `/var/lib/docker/volumes/open-webui/_data`: persistent volume location for Open WebUI data.
## Application Installation Process

The application stack consists of the Ollama AI inference engine and the Open WebUI frontend.

- Ollama Installation:
    - The Ollama package is installed using the official installation script: `curl -fsSL https://ollama.com/install.sh | sh`.
    - A system user named `ollama` is created.
    - The Ollama service is configured to listen on all network interfaces (`0.0.0.0`) and to allow origins from any source.
    - The `qwen3-coder` model is pulled and stored locally.
- Open WebUI Deployment:
    - The Open WebUI application is deployed as a Docker container using the image `ghcr.io/open-webui/open-webui:cuda`.
    - The container is named `open-webui`.
    - It is configured to connect to the local Ollama instance at `http://host.docker.internal:11434`.
    - The container runs with GPU support enabled (`--gpus all`).
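The Ollama network settings above (listening on all interfaces, permissive origins) are typically applied through a systemd drop-in using the `OLLAMA_HOST` and `OLLAMA_ORIGINS` environment variables. A minimal sketch, written to the current directory for illustration; on the server it would be installed at `/etc/systemd/system/ollama.service.d/override.conf`:

```shell
# Sketch of a systemd drop-in making Ollama listen on all interfaces and
# accept requests from any origin. Written locally here; install it under
# /etc/systemd/system/ollama.service.d/ and apply with
# `systemctl daemon-reload && systemctl restart ollama`.
mkdir -p ollama.service.d
cat > ollama.service.d/override.conf <<'EOF'
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_ORIGINS=*"
EOF
```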
## Access Rights and Security

Security and access control are managed through the following mechanisms:

- Nginx proxy: handles external requests and enforces SSL/TLS encryption.
- Firewall: exposure is limited by the Nginx proxy configuration, which listens only on the standard HTTPS port; the internal application port (`8080`) is not exposed directly to the public internet.
- User restrictions:
    - The `ollama` service runs under the dedicated `ollama` system user.
    - Docker volumes isolate application data from the host filesystem.
- SSL certificates: managed automatically by Certbot within the Nginx container, ensuring valid HTTPS connections.
## Databases

The Open WebUI application uses a local persistent volume for its data storage.

- Storage location: the data is stored in a Docker volume named `open-webui`, physically located at `/var/lib/docker/volumes/open-webui/_data`.
- Connection method: the application accesses this data via a volume mounted into the Docker container: `-v open-webui:/app/backend/data`.
- Settings: no external database connection string is required; the application manages its internal SQLite database within the mounted volume.
## Docker Containers and Their Deployment

The deployment uses Docker to run both the frontend and the reverse proxy.

### Open WebUI Container

The Open WebUI container is launched with the following parameters:

- Image: `ghcr.io/open-webui/open-webui:cuda`
- Name: `open-webui`
- Port mapping: host port `8080` maps to container port `8080`.
- Environment variables:
    - `ENV`: set to `dev`.
    - `OLLAMA_BASE_URLS`: set to `http://host.docker.internal:11434`.
- Volumes: mounts the `open-webui` named volume to `/app/backend/data`.
- Hardware: uses all available GPUs (`--gpus all`).
- Restart policy: configured to always restart (`--restart always`).
- Host resolution: adds a custom DNS entry `host.docker.internal` pointing to the host gateway.
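Taken together, the parameters above correspond to a single `docker run` invocation. A minimal sketch, assembled into a reviewable helper script rather than executed directly (the script name is hypothetical, and the `--add-host=...:host-gateway` form is an assumption based on standard Docker usage for the host-gateway entry):

```shell
# Assemble the launch command described above into a reviewable script.
# Run it on the target host where Docker and the NVIDIA container
# toolkit are installed.
cat > run-open-webui.sh <<'EOF'
#!/bin/sh
docker run -d \
  --name open-webui \
  -p 8080:8080 \
  -e ENV=dev \
  -e OLLAMA_BASE_URLS=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --gpus all \
  --restart always \
  --add-host=host.docker.internal:host-gateway \
  ghcr.io/open-webui/open-webui:cuda
EOF
chmod +x run-open-webui.sh
```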
### Nginx and Certbot Container

The Nginx proxy is managed via Docker Compose, located at `/root/nginx/compose.yml`.

- Image: `jonasal/nginx-certbot:latest`
- Service name: `nginx`
- Network mode: host (`network_mode: host`).
- Volumes:
    - `nginx_secrets` (external) mapped to `/etc/letsencrypt`.
    - Host directory `/data/nginx/user_conf.d` mapped to `/etc/nginx/user_conf.d`.
- Environment: uses the file `/data/nginx/nginx-certbot.env` for configuration.
- Email: configured with `[email protected]`.
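A `compose.yml` matching the parameters above could look like the following sketch, written to the current directory for illustration (the exact file on the server may differ; the service layout is reconstructed from the settings listed here):

```shell
# Sketch of /root/nginx/compose.yml reconstructed from the parameters above.
cat > compose.yml <<'EOF'
services:
  nginx:
    image: jonasal/nginx-certbot:latest
    network_mode: host
    env_file:
      - /data/nginx/nginx-certbot.env
    volumes:
      - nginx_secrets:/etc/letsencrypt
      - /data/nginx/user_conf.d:/etc/nginx/user_conf.d

volumes:
  nginx_secrets:
    external: true
EOF
```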
## Proxy Servers

The Nginx reverse proxy is configured to handle SSL termination and route traffic to the internal Open WebUI service.

- Proxy configuration file: `/data/nginx/user_conf.d/qwen3-coder<Server ID>.hostkey.in.conf`
- Proxy pass: the Nginx configuration includes a rule to forward requests to the internal application: `proxy_pass http://127.0.0.1:8080;`
- SSL/Certbot: SSL certificates are obtained and renewed automatically by the `jonasal/nginx-certbot` container for the domain `qwen3-coder<Server ID>.hostkey.in`.
- Domain: the proxy listens for the specific subdomain and routes traffic securely over port `443`.
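A site configuration consistent with the rules above could look like the following sketch, written to the current directory for illustration. The certificate paths follow the standard Let's Encrypt layout used by the nginx-certbot image, and the WebSocket headers are an assumption based on Open WebUI's use of a live chat interface; replace `<Server ID>` with the real identifier:

```shell
# Sketch of the per-site Nginx configuration described above.
cat > 'qwen3-coder<Server ID>.hostkey.in.conf' <<'EOF'
server {
    listen 443 ssl;
    server_name qwen3-coder<Server ID>.hostkey.in;

    ssl_certificate     /etc/letsencrypt/live/qwen3-coder<Server ID>.hostkey.in/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/qwen3-coder<Server ID>.hostkey.in/privkey.pem;

    location / {
        # Route all traffic to the internal Open WebUI port
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        # WebSocket support for the chat interface
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
EOF
```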
## Permission Settings

File and directory permissions are set as follows to ensure proper operation:

- Nginx directory: `/root/nginx` has mode `0755` and is owned by `root:root`.
- Docker Compose file: `/root/nginx/compose.yml` has mode `0644` and is owned by `root:root`.
- Nginx config directory: `/data/nginx/user_conf.d` is managed by the Nginx container process, which requires appropriate read/write access.
- Ollama service: the `ollama` service runs with system privileges and owns the model files in `/usr/share/ollama/.ollama/models/`.
- Docker volumes: Docker manages permissions for the `open-webui` volume automatically, based on the container's internal user requirements.
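The stated modes can be applied and verified with `chmod` and `stat`. A small sketch, using a scratch directory rather than the real paths (`/root/nginx` and `/root/nginx/compose.yml` on the server):

```shell
# Reproduce and verify the stated modes on a scratch copy
# (the directory name is hypothetical).
mkdir -p nginx-scratch
touch nginx-scratch/compose.yml
chmod 0755 nginx-scratch
chmod 0644 nginx-scratch/compose.yml
stat -c '%a %n' nginx-scratch nginx-scratch/compose.yml
```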
## Available Ports for Connection

The following ports are relevant for the deployment:

- Port `443`: the external entry point for HTTPS traffic to the `qwen3-coder<Server ID>.hostkey.in` domain.
- Port `8080`: the internal port where Open WebUI listens; it is proxied by Nginx and not exposed directly to the public internet.
- Port `11434`: the internal port where the Ollama service listens, accessible only within the host network context.
## Starting, Stopping, and Updating

Service management for the deployed components is handled via Docker and systemd.

- Nginx proxy:
    - To start or update the proxy stack, run `docker compose up -d` from the `/root/nginx` directory.
    - To stop the proxy stack, run `docker compose down` from the `/root/nginx` directory.
- Open WebUI container:
    - To stop the container: `docker stop open-webui`
    - To start the container: `docker start open-webui`
    - To update the image: pull the new image version and restart the container.
- Ollama service:
    - To restart the service: `systemctl restart ollama`
    - To enable the service on boot: `systemctl enable ollama`
    - To reload the systemd daemon after configuration changes: `systemctl daemon-reload`
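Because the Open WebUI container was created with a fixed image tag, "pull and restart" in practice means pulling the tag again and recreating the container; the named volume preserves the application data across recreation. A sketch of that procedure, written to a reviewable helper script (the script name is hypothetical):

```shell
# Sketch of the Open WebUI update procedure described above.
cat > update-open-webui.sh <<'EOF'
#!/bin/sh
set -eu
docker pull ghcr.io/open-webui/open-webui:cuda
docker stop open-webui
docker rm open-webui
# Recreate the container here with the same `docker run` parameters
# listed in the "Open WebUI Container" section; the open-webui named
# volume keeps the data intact.
EOF
chmod +x update-open-webui.sh
```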