
Deployment Overview of Temporal on the Server

Prerequisites and Basic Requirements

The Temporal application deployment requires the following environment specifications and privileges:

  • Operating System: Linux-based server with Docker and Docker Compose installed.

  • Privileges: Root access or a user with equivalent sudo privileges is required to manage containers, configure Nginx, and manage file permissions.

  • Network Requirements: The server must allow inbound traffic on specific ports for the application, databases, and monitoring tools.

  • Domain Configuration: The server must be configured to resolve the hostkey.in domain for the application's external interface.

FQDN of the Final Panel

The fully qualified domain name (FQDN) for accessing the Temporal web interface follows the standard format defined by the hosting environment:

  • Format: <prefix><Server ID>.hostkey.in:<port>

  • Implementation: temporal<Server ID>.hostkey.in:443

Note: The prefix is set to temporal, and external traffic is routed through port 443 via the Nginx reverse proxy.
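As a concrete illustration, the panel URL can be assembled from the format above (the server ID used here is a hypothetical placeholder, not a real value):

```shell
# Hypothetical server ID, used purely for illustration.
server_id="12345"
fqdn="temporal${server_id}.hostkey.in"
echo "https://${fqdn}:443"   # → https://temporal12345.hostkey.in:443
```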

File and Directory Structure

The application uses a specific directory structure on the host server to store configurations, data, and certificates. The following paths are used:

  • Nginx Configuration: /root/nginx

    • Compose file: /root/nginx/compose.yml

    • User-specific configuration: /data/nginx/user_conf.d/temporal<Server ID>.hostkey.in.conf

  • Docker Compose: /root/docker-compose

    • Main deployment file: /root/docker-compose/docker-compose-multirole_edited.yaml

  • Grafana Data: /data/grafana

    • Configuration file: /data/grafana/grafana.ini

  • Application Data:

    • PostgreSQL data resides at: /var/lib/postgresql/data

    • Dynamic configuration files are mounted from: ./dynamicconfig
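The layout above can be pre-created before the first deployment. A minimal sketch, using a PREFIX variable so it can be dry-run outside the live host (set PREFIX="" on the real server):

```shell
# PREFIX lets the sketch run in a sandbox; use PREFIX="" on the live host.
PREFIX="${PREFIX:-$(mktemp -d)}"
mkdir -p "$PREFIX/root/nginx" \
         "$PREFIX/root/docker-compose" \
         "$PREFIX/data/grafana" \
         "$PREFIX/data/nginx/user_conf.d"
ls "$PREFIX/root" "$PREFIX/data"
```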

Application Installation Process

The Temporal application is deployed using Docker Compose. The installation process involves orchestrating multiple containers defined in a single YAML manifest.

  • Deployment Method: Docker Compose (v2).

  • Versioning: The deployment utilizes specific versions for core components:

    • Temporal Server: Defined by the ${TEMPORAL_VERSION} variable.

    • Temporal UI: Defined by the ${TEMPORAL_UI_VERSION} variable.

    • Temporal Admin Tools: Defined by the ${TEMPORAL_ADMINTOOLS_VERSION} variable.

  • Execution: The docker-compose-multirole_edited.yaml file is executed from the /root/docker-compose directory to start the multi-role Temporal server stack.
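The version variables are typically supplied via an .env file alongside the compose manifest. A hypothetical sketch (the version numbers below are placeholders, not the values pinned on the server):

```ini
# /root/docker-compose/.env — placeholder values for illustration only
TEMPORAL_VERSION=1.22.4
TEMPORAL_UI_VERSION=2.21.3
TEMPORAL_ADMINTOOLS_VERSION=1.22.4
```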

Docker Containers and Their Deployment

The deployment consists of several interconnected services orchestrated within the temporal-network.

  • Core Temporal Services:

    • temporal-history: Handles history service logic.

    • temporal-matching: Handles matching service logic.

    • temporal-frontend: Provides the frontend service (Port 7237).

    • temporal-frontend2: Provides a secondary frontend instance (Port 7236).

    • temporal-worker: Handles worker service logic.

    • temporal-admin-tools: CLI tools for administration.

    • temporal-ui: Web interface for monitoring and management.

  • Database and Storage Services:

    • postgresql: PostgreSQL database instance.

    • elasticsearch: Elasticsearch instance for visibility.

  • Monitoring and Logging:

    • loki: Log aggregation.

    • prometheus: Metrics collection (Image: prom/prometheus:v2.37.0).

    • grafana: Visualization dashboard (Image: grafana/grafana:7.5.16).

    • jaeger-all-in-one: Distributed tracing.

    • otel-collector: OpenTelemetry collector.

  • Network Proxy:

    • temporal-nginx: Internal Nginx proxy balancing traffic between frontend instances.

    • nginx (External): The external Nginx container running the jonasal/nginx-certbot:latest image, which manages SSL termination and reverse proxying.
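A hedged sketch of how two of these services might appear in docker-compose-multirole_edited.yaml (field values and the temporalio/ui image name are assumptions for illustration; the real manifest defines many more options):

```yaml
# Illustrative excerpt only — not the full manifest.
services:
  temporal-ui:
    image: temporalio/ui:${TEMPORAL_UI_VERSION}
    networks:
      - temporal-network
    ports:
      - "8080:8080"
  temporal-nginx:
    image: nginx:1.22.1
    networks:
      - temporal-network
    ports:
      - "7233:7233"
networks:
  temporal-network:
    driver: bridge
```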

Proxy Servers

The system uses a layered proxy architecture to handle traffic and SSL termination.

  • External Proxy:

    • Service: Nginx with Certbot.

    • Image: jonasal/nginx-certbot:latest.

    • Function: Handles SSL certificate management via Let's Encrypt and routes traffic from the public interface.

    • Environment: Configured with [email protected].

    • Volumes:

      • nginx_secrets (External volume for Let's Encrypt).

      • /data/nginx/user_conf.d mounted to /etc/nginx/user_conf.d.

  • Internal Proxy:

    • Service: Nginx (temporal-nginx).

    • Image: nginx:1.22.1.

    • Function: Balances requests between temporal-frontend and temporal-frontend2.

    • Configuration: Uses a custom configuration file mounted from ./deployment/nginx/nginx.conf.
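The internal balancing described above could look roughly like the following sketch. This is an assumption about the mounted ./deployment/nginx/nginx.conf, not its verbatim contents; gRPC proxying in Nginx requires an HTTP/2 listener and grpc_pass:

```nginx
# Hypothetical sketch of ./deployment/nginx/nginx.conf
upstream temporal_frontends {
    server temporal-frontend:7237;
    server temporal-frontend2:7236;
}

server {
    listen 7233 http2;          # gRPC requires HTTP/2
    location / {
        grpc_pass grpc://temporal_frontends;
    }
}
```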

Databases

The application relies on two primary database technologies, both running as Docker containers within the temporal-network.

  • PostgreSQL:

    • Usage: Primary data store for Temporal history and state.

    • Container Name: temporal-postgresql.

    • Storage: Data is persisted via a bind mount to /var/lib/postgresql/data on the host.

    • Configuration:

      • User and Password are set via environment variables (${POSTGRES_USER}, ${POSTGRES_PASSWORD}).

      • Exposed internally on ${POSTGRES_DEFAULT_PORT}.

  • Elasticsearch:

    • Usage: Visibility and search capabilities.

    • Container Name: temporal-elasticsearch.

    • Version: Defined by ${ELASTICSEARCH_VERSION}.

    • Configuration:

      • Running in single-node mode (discovery.type=single-node).

      • Java Heap Size set to 512MB (ES_JAVA_OPTS).

      • Security disabled (xpack.security.enabled=false).
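The Elasticsearch settings listed above map to environment entries in the compose manifest. A hedged excerpt (the image reference and exact heap flags are inferred from the 512MB setting, not copied from the real file):

```yaml
# Illustrative excerpt; heap flags inferred from the 512MB setting above.
  elasticsearch:
    container_name: temporal-elasticsearch
    image: elasticsearch:${ELASTICSEARCH_VERSION}
    environment:
      - discovery.type=single-node
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - xpack.security.enabled=false
    networks:
      - temporal-network
```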

Permission Settings

File and directory permissions are configured to ensure the correct ownership for root and the application services.

  • Root Directories:

    • /root/nginx: Owned by root:root, mode 0644.

    • /root/docker-compose: Owned by root:root, mode 0644.

  • Data Directories:

    • /data/grafana: Owned by root:root, mode 0644.

    • /data/nginx: Owned by root:root, mode 0644.

  • Configuration Files:

    • All generated configuration files (e.g., compose.yml, docker-compose-multirole_edited.yaml, grafana.ini) are owned by root with permissions set to 0644.
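The 0644 mode on generated files can be applied and verified as below. The sketch uses a sandbox PREFIX so it is safe to dry-run unprivileged; on the live host, set PREFIX="" and additionally run chown root:root on the paths, which requires root:

```shell
# Sandbox prefix so the sketch can run unprivileged; PREFIX="" on the live host.
PREFIX="${PREFIX:-$(mktemp -d)}"
mkdir -p "$PREFIX/data/grafana"
touch "$PREFIX/data/grafana/grafana.ini"
chmod 0644 "$PREFIX/data/grafana/grafana.ini"
stat -c '%a' "$PREFIX/data/grafana/grafana.ini"   # prints 644 on Linux
```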

Available Ports for Connection

The following ports are exposed on the host or mapped for internal communication.

Service               Internal Port               Exposed Port (Host)        Protocol/Usage
Temporal UI           8080                        8080                       HTTP (Direct access, internal)
Temporal Frontend 1   7237                        7237                       gRPC
Temporal Frontend 2   7236                        7236                       gRPC
Temporal History      7234                        7234                       gRPC
Temporal Matching     7235                        7235                       gRPC
Temporal Worker       7232                        7232                       gRPC
Temporal Nginx        7233                        7233                       gRPC (Internal LB)
Prometheus            9090                        9090                       HTTP (Metrics)
Grafana               3000                        8085                       HTTP (Dashboard)
Elasticsearch         9200                        9200                       HTTP
Loki                  3100                        3100                       HTTP
Jaeger                16686, 14268, 14250         16686, 14268, 14250        HTTP, gRPC
OTel Collector        1888, 13133, 4317, 55670    1888, 13133, 4317, 55670   Multiple
Nginx (External)      80/443                      443                        HTTPS (Public)

Starting, Stopping, and Updating

Service management is handled via Docker Compose commands executed in the respective project directories.

  • Start/Restart the Main Application: Execute the command in the /root/docker-compose directory:

    cd /root/docker-compose
    docker compose -f docker-compose-multirole_edited.yaml up -d
    

  • Start/Restart the Nginx Proxy: Execute the command in the /root/nginx directory:

    cd /root/nginx
    docker compose up -d
    

  • Stop the Services: To stop the main application stack:

    cd /root/docker-compose
    docker compose -f docker-compose-multirole_edited.yaml down

    To stop the Nginx proxy:

    cd /root/nginx
    docker compose down

  • Update the Application: Pull new images and restart the containers:

    cd /root/docker-compose
    docker compose -f docker-compose-multirole_edited.yaml pull
    docker compose -f docker-compose-multirole_edited.yaml up -d
    
