
Proxmox 9


Deployment Features

| ID | Compatible OS | VM | BM | VGPU | GPU | Min CPU (Cores) | Min RAM (GB) | Min HDD/SSD (GB) | Active |
|----|---------------|----|----|------|-----|-----------------|--------------|------------------|--------|
| 32 | Debian 12     | +  | +  | +    | +   | 2               | 2            | -                | No     |

Proxmox VE 9.0

Proxmox VE 9.0 was released on August 5, 2025, and differs significantly from version 8.x.

Main new features of version 9.0:

  • Transition to Debian 13 "Trixie";
  • Snapshots for virtual machines on thick-provisioned LVM storage (technology preview);
  • High availability (HA) affinity rules for nodes and resources;
  • Fabrics for the software-defined network (SDN) stack;
  • Modernized mobile web interface;
  • ZFS now supports adding new devices to existing RAIDZ pools with minimal downtime.

Critical changes in version 9.0:

  • The testing repository has been renamed to pve-test;
  • Network interface names may change;
  • The default value of the MTU field for VirtIO vNICs has changed;
  • Update to AppArmor 4;
  • The VM.Monitor privilege has been removed;
  • New VM.Replicate privilege for storage replication;
  • Creating privileged containers now requires the Sys.Modify privilege;
  • The maxfiles backup option is no longer supported;
  • GlusterFS support has been discontinued;
  • systemd logs a "System is tainted: unmerged-bin" warning after boot.

If you ordered a server with version 9.0, be sure to familiarize yourself with the detailed developer documentation

Note

Unless otherwise specified, by default we install the latest release version of the software from official repositories.

Proxmox 9. Installation

After the server is installed, installation of the Proxmox VE service takes another 15-20 minutes. An email will be sent to the mailbox linked to your account, notifying you that the server has been installed and providing a link in the format https://proxmox<ID_server>.hostkey.in, which you should open to reach the Proxmox VE web management interface:

  • Login - root;
  • Password - system password.

Attention

If you are installing Proxmox as an operating system, access the web interface at https://server_IP:8006.

First Login and Basic Check

  1. Open your browser → https://<server_IP>:8006 and enter the credentials:

  2. Navigate to: Datacenter → Node → Summary - check CPU, RAM, disks, uptime.

  3. Disable the enterprise repository if there is no subscription:

Node → Repositories → pve-enterprise → Disable. Keep pve-no-subscription:

Terminal Commands:

sed -i 's/^deb/#deb/g' /etc/apt/sources.list.d/pve-enterprise.list || true
apt update
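
Note that Proxmox VE 9 ships its repository definitions in deb822 format, so the enterprise repository may live in a .sources file instead. A minimal sketch for disabling it in that case (apt only reads files ending in .list or .sources; the file name below is the one used by a default Proxmox VE 9 install):

# disable the deb822-style enterprise repository, if present
if [ -f /etc/apt/sources.list.d/pve-enterprise.sources ]; then
    mv /etc/apt/sources.list.d/pve-enterprise.sources /etc/apt/sources.list.d/pve-enterprise.sources.disabled
fi
apt update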

Network: Bridge vmbr0

The bridge vmbr0 is a virtual "switch" to which VMs are connected. It is bound to a physical interface (e.g., ens18/eno1).

Through the Web Interface

  1. Node → System → Network.

  2. Verify that vmbr0 exists. If it does not exist or is not configured - Create → Linux Bridge:

    • Name: vmbr0
    • IPv4/CIDR: specify your static IP in the format X.X.X.X/YY (leave blank if using DHCP);
    • Gateway (IPv4): default gateway (usually X.X.X.1) (do not enter if using DHCP);
    • Bridge ports: your physical interface, for example ens1;
    • Save → Apply configuration:

Through CLI (if web access is lost)

Example /etc/network/interfaces (ifupdown2):

auto lo
iface lo inet loopback
auto ens18
iface ens18 inet manual
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports ens18
    bridge-stp off
    bridge-fd 0
Apply changes:

ifreload -a

Note

If DHCP addressing is needed for the node: replace the block iface vmbr0 inet static with iface vmbr0 inet dhcp and remove the gateway line.
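
For reference, the same bridge block with DHCP (still assuming the physical interface ens18) would look like this:

auto vmbr0
iface vmbr0 inet dhcp
    bridge-ports ens18
    bridge-stp off
    bridge-fd 0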

Common Errors:

  • Incorrectly specified bridge-ports (wrong physical interface) → network "disappears". Correct the interface and execute ifreload -a.
  • Wrong gateway or subnet entered → local connection exists but no internet access.

Disks and Storage

Add a Second Disk for VM Storage

  1. Node → Disks: ensure that the new disk is visible (e.g., sdb).

  2. Option A - LVM-Thin (convenient for snapshots):

    • Disks → LVM-Thin → Create: select the disk → specify the VG name (e.g., pve2) and thin-pool name (e.g., data2). A CLI sketch is given after the note below.

    • The storage will appear in Datacenter → Storage.

  3. Option B - Directory:

    • Create a file system (Disks → ZFS or manually with mkfs.ext4) and mount it to /mnt/...

    • Datacenter → Storage → Add → Directory → path /mnt/... → enable Disk image, ISO image (as needed).

Note

For ZFS, choose a profile with RAM in mind (≥ 8 GB recommended). On a weak VDS, LVM-Thin or Directory is a better choice.
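
If you prefer the command line, here is a minimal sketch of Option A, assuming the new disk is /dev/sdb and reusing the pve2/data2 names from above (the disk will be wiped; a small reserve is left for thin-pool metadata):

# initialize the disk and create a volume group
pvcreate /dev/sdb
vgcreate pve2 /dev/sdb
# create the thin pool and register it as Proxmox storage
lvcreate --type thin-pool -l 95%FREE -n data2 pve2
pvesm add lvmthin data2 --vgname pve2 --thinpool data2 --content images,rootdir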

Loading ISO Images

ISO images can be loaded in two ways.

A. Through the Web Interface

  1. Datacenter → Storage → (select a storage of type ISO, e.g., local) → Content.
  2. Upload → select local ubuntu-25.10-live-server-amd64.iso → wait for upload completion.

B. Through the Node (CLI)

Example of downloading Ubuntu 25.10 ISO to local storage:

cd /var/lib/vz/template/iso
wget https://releases.ubuntu.com/25.10/ubuntu-25.10-live-server-amd64.iso
If the ISO does not appear in the list - ensure it is located in the .../template/iso folder of the desired storage and that the storage type includes ISO Image.

Create First VM (Ubuntu 25.10)

Example: Ubuntu Server 25.10 (VPS with 2 vCPU)

Click Create VM (top right):

General: Leave ID as default, Name - ubuntu2510 (or your own):

OS: select ISO ubuntu-25.10-live-server-amd64.iso, Type: Linux:

System:

  • Graphics card: Default;
  • BIOS: OVMF (UEFI);
  • Machine: q35;
  • SCSI Controller: VirtIO SCSI single;
  • (Optionally) enable Qemu Agent in Options after VM creation (see below):

Disks:

  • Bus/Device: SCSI;
  • SCSI Controller: VirtIO SCSI single;
  • Storage: your LVM-Thin/Directory;
  • Size: 20–40 GB (minimum 10-15 GB for testing);
  • Discard (TRIM): enable on a thin-pool:

CPU:

  • Sockets: 1;
  • Cores: 2 (according to your VPS);
  • Type: host (best performance):

Memory:

  • 2048–4096 MB. You can enable Ballooning (e.g., Min 1024, Max 4096):

Network:

  • Model: VirtIO (paravirtualized);
  • Bridge: vmbr0;
  • If VLAN is needed: VLAN Tag:

Confirm: check settings, mark Start after created and click Finish:

OS Installation:

  1. Start the VM → Console (noVNC) → Try or Install Ubuntu:

  2. Installer:

    • DHCP/static IP as needed;
    • Disk: Use entire disk;
    • Profile: user/password;
    • OpenSSH server: enable.
  3. Reboot and log in via console/SSH.

Post-install:

sudo apt update && sudo apt -y full-upgrade
sudo apt -y install qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent
Then in Proxmox: VM → Options → Qemu Agent = Enabled:

Boot Order: if the VM keeps booting from the ISO - Options → Boot Order → move scsi0 above the CD-ROM.
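
The same VM can also be created from the node shell. A minimal sketch, assuming VMID 100, the local-lvm storage and the ISO downloaded earlier into the local storage (adjust the names to your setup):

# create an Ubuntu VM with UEFI, VirtIO SCSI disk and VirtIO network
qm create 100 --name ubuntu2510 --ostype l26 \
  --machine q35 --bios ovmf --efidisk0 local-lvm:1 \
  --scsihw virtio-scsi-single --scsi0 local-lvm:32,discard=on \
  --cores 2 --cpu host --memory 4096 \
  --net0 virtio,bridge=vmbr0 \
  --cdrom local:iso/ubuntu-25.10-live-server-amd64.iso \
  --agent enabled=1
qm start 100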

Windows Installation (for More Powerful Nodes)

Suitable for nodes with ≥4 vCPU/8 GB RAM. On weak VPS, Windows may work unstably.

  1. ISO: Download the Windows Server ISO (2019/2022/2025) and virtio-win.iso (the drivers) via Storage → Content:

  2. Create VM → OS: Microsoft Windows, select the installation ISO image. The Add additional drive for VirtIO drivers option lets you attach a second CD with the drivers (a CLI sketch of the whole VM creation is given after this list):

  3. System: BIOS OVMF (UEFI);

    • Machine: q35;
    • If necessary, enable Add EFI Disk and Add TPM (for new Windows versions). If it doesn't start - try SeaBIOS and disable EFI/TPM:

  4. Disks:

    • Bus: SCSI;
    • Controller: VirtIO SCSI;
    • Size: 40–80 GB;
    • Enable IO Threads:

  5. CPU: 2–4 vCPUs;

    • Type: host:

  6. Memory: 4–8 GB:

  7. Network: Model VirtIO (paravirtualized), Bridge vmbr0:

  8. Confirm: Complete VM creation by clicking Create, then in Hardware → CD/DVD Drive attach a second ISO - virtio-win.iso:

  9. Windows Installer: At the disk selection step, click Load Driver → point to the CD with the VirtIO drivers (vioscsi/viostor). After installation, install the network driver (NetKVM) in Device Manager:

  10. Guest Agent (optional): Install Qemu Guest Agent for Windows using virtio-win ISO:
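
For reference, the same Windows VM can be pre-created from the node shell before running the installer. A minimal sketch, assuming VMID 101, the local-lvm storage and hypothetical ISO file names (use the images you actually uploaded):

# UEFI + TPM 2.0, VirtIO SCSI disk, installer ISO and VirtIO driver ISO attached
qm create 101 --name win2025 --ostype win11 \
  --machine q35 --bios ovmf --efidisk0 local-lvm:1,pre-enrolled-keys=1 \
  --tpmstate0 local-lvm:1,version=v2.0 \
  --scsihw virtio-scsi-single --scsi0 local-lvm:60,iothread=1 \
  --cores 4 --cpu host --memory 8192 \
  --net0 virtio,bridge=vmbr0 \
  --ide2 local:iso/windows-server-2025.iso,media=cdrom \
  --ide0 local:iso/virtio-win.iso,media=cdrom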

Troubleshooting Windows:

  • Black screen/doesn't boot: Change OVMF → SeaBIOS, disable EFI/TPM.
  • No network: Ensure NIC = VirtIO and NetKVM driver is installed.
  • Disk slowdowns: Make sure the disk = SCSI + virtio driver.

LXC Containers: Quick Start

Pre-made templates with minimal software are available in the template storage.

  1. Datacenter → Storage → (select a storage of type Templates) → Content → Templates. Download, for example, ubuntu-25.04-standard_*.tar.zst or another needed template:

  2. Click Create CT:

    • General: Specify the ID/Name, Unprivileged container = Enabled (safer by default). Set the root password or an SSH key.

    • Template: Select the downloaded template.

    • Disks: Storage/Size (for example, 8–20 GB).

    • CPU/RAM: According to the task (for example, 1 vCPU, 1–2 GB RAM).

    • Network: Bridge vmbr0, IPv4 = DHCP (or Static if needed). VLAN Tag as necessary.

    • DNS: Default from the host or your own.

    • Features: Optionally enable nesting, fuse, keyctl (depending on the applications in the container).

    • Start at boot/Start after created: as desired.

Network tip: If you are using NAT on vmbr1, select it as the bridge and specify a static IP.

After starting, log in via SSH and install the software you need on top of the template:

    apt update && apt -y upgrade

(A CLI equivalent of the container creation is sketched at the end of this section.)

In LXC, Qemu Guest Agent is not needed. Mounting host directories is done through MP (Mount points).
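
The same container can be created from the node shell. A minimal sketch, assuming CTID 200, the local template storage and a hypothetical exact template file name (check the real name with pveam available --section system and pveam list local):

# refresh the template index, download the template and create the container
pveam update
pveam download local ubuntu-25.04-standard_25.04-1_amd64.tar.zst
pct create 200 local:vztmpl/ubuntu-25.04-standard_25.04-1_amd64.tar.zst \
  --hostname ct-test --unprivileged 1 \
  --cores 1 --memory 2048 --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --password 'ChangeMe123' --start 1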

Typical VM Profiles

  • Ubuntu/Debian (Web/DB/Utility): SCSI + VirtIO, UEFI (OVMF), 1–2 vCPUs, 2–4 GB RAM, disk 20–60 GB; enable Qemu Guest Agent.
  • Lightweight Services (DNS/DHCP/Proxy): 1 vCPU, 1–2 GB RAM, disk 8–20 GB.
  • Container Hosts (Docker/Podman): 2–4 vCPUs, 4–8 GB RAM; separate disk/pool for data.

Alternative to ISO: You can use Ubuntu 25.10 Cloud-Init images for quick cloning with auto-configured network/SSH. Suitable if you plan to have many similar VMs.
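
A minimal sketch of the Cloud-Init route, assuming the local-lvm storage and a hypothetical image file name ubuntu-25.10-server-cloudimg-amd64.img downloaded from cloud-images.ubuntu.com:

# create an empty VM and import the cloud image as its system disk
qm create 9000 --name ubuntu2510-ci --ostype l26 --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-single
qm importdisk 9000 ubuntu-25.10-server-cloudimg-amd64.img local-lvm
qm set 9000 --scsi0 local-lvm:vm-9000-disk-0 --boot order=scsi0
# attach the Cloud-Init drive, serial console and basic user/network settings
qm set 9000 --ide2 local-lvm:cloudinit --serial0 socket --vga serial0
qm set 9000 --ciuser ubuntu --sshkeys ~/.ssh/id_rsa.pub --ipconfig0 ip=dhcp
qm template 9000

New machines are then created with qm clone 9000 <new VMID> --name <name>.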

Connecting VMs and LXC in One Network

Basic Variant (One Subnet):

  1. Ensure all VMs/containers have Bridge = vmbr0 (or vmbr1).
  2. If the network uses DHCP, addresses are assigned automatically; if static, specify IPs in one subnet (for example, 10.10.0.2/24 and 10.10.0.3/24) and a common gateway 10.10.0.1.
  3. Optionally, VLAN: specify the VLAN Tag in the network card settings of the VMs/CTs and ensure that the switch uplink allows this VLAN.
  4. Inside the OS, check that the local firewall does not block ICMP/SSH/HTTP.
  5. Test: from the Ubuntu VM, ping <IP-LXC> and vice versa; ip route and traceroute will help diagnose issues.

When Different Subnets:

  • Proxmox itself does not route between bridges. A router (separate VM with Linux/pfSense) or NAT on the host is needed.
  • Simple NAT on the host (example):

Enable forwarding:

sysctl -w net.ipv4.ip_forward=1
NAT from vmbr1 to the internet via vmbr0:

iptables -t nat -A POSTROUTING -o vmbr0 -j MASQUERADE

For persistence, add the rules in /etc/network/if-up.d/ or use nftables (a persistent variant is sketched after the note below).

Note

Using NAT is suitable for combining LXC containers and VMs installed from ISO into one subnet.
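
A minimal sketch of such a NAT bridge in /etc/network/interfaces, assuming the internal subnet 10.10.0.0/24 used in the examples above (apply with ifreload -a; VMs/containers on vmbr1 use 10.10.0.1 as their gateway):

auto vmbr1
iface vmbr1 inet static
    address 10.10.0.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s 10.10.0.0/24 -o vmbr0 -j MASQUERADE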

Backups and Templates

  • Backup: Datacenter → Backup or Node → Backup - set up a vzdump schedule (storage, time, snapshot/stop mode); a CLI example follows this list:

  • VM Template: After basic VM setup → Convert to Template. Creating new VMs through Clone saves time and eliminates errors:
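
A backup can also be started manually from the CLI; a minimal sketch, assuming VMID 100 and a backup-capable storage named local:

vzdump 100 --storage local --mode snapshot --compress zstd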

Common Problems and Solutions

"Web Interface Disappeared" (GUI Not Opening)

Check if the node is accessible via SSH. On the node, execute:

systemctl status pveproxy pvedaemon pve-cluster
journalctl -u pveproxy --no-pager -n 100
Soft restart of the services:

systemctl restart pveproxy pvedaemon

If packages were being updated, let apt finish, carefully resolve any stuck processes and check free disk space with df -h.

Lost Network After Bridge Editing

Connect via console (through provider/VNC/IPMI). Check /etc/network/interfaces and apply:

ifreload -a
ip a
ip r
Ensure that the gateway and netmask are correct and that bridge-ports points to the right physical interface.

VM Does Not Connect to Internet

  • Ensure that the correct IP/mask/gateway/DNS are configured inside the VM.
  • Check that the VM's network adapter is bridged to vmbr0 (or vmbr1 for NAT).
  • If VLAN is used, specify the VLAN Tag in the VM's NIC settings (Hardware → Network Device → VLAN Tag) and allow this VLAN on the switch uplink.
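
The VLAN tag can also be set from the CLI; a minimal sketch assuming VMID 100 and VLAN 10 (note that rewriting net0 this way generates a new MAC address unless you also pass macaddr= explicitly):

qm set 100 --net0 virtio,bridge=vmbr0,tag=10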

ISO Does Not Boot / Installer Not Visible

  • Check the Boot Order (Options → Boot Order) and that the correct ISO is attached.
  • For UEFI, make sure Secure Boot is disabled in the guest firmware if the ISO does not support it.

High Load/Disk "Clog"

  • Use VirtIO SCSI and enable IO Threads for intensive disk use.
  • Do not store backups on the same thin-pool that holds operational disks - better have a separate storage.

"Disconnected" Webcam/USB Device in VM

  • For USB passthrough, use Hardware → USB Device. If the device stops responding - Stop/Start the VM or reconnect the USB device on the host. Sometimes disabling Use USB3 helps with compatibility.

Updates and Reboot

apt update && apt full-upgrade -y
reboot
Update during maintenance windows and make a backup before major upgrades.

Diagnostics: Cheat Sheet

Node network:

ip a; ip r; ping -c3 1.1.1.1; ping -c3 google.com
Proxmox services:

systemctl status pveproxy pvedaemon pvestatd pve-cluster
journalctl -u pveproxy -n 200 --no-pager
Disk space:

df -h | sort -k5 -h
lvs; vgs; pvs
Storages:

cat /etc/pve/storage.cfg
VM device:

qm list; qm config <VMID>; qm status <VMID>
Quick VM restart:

qm stop <VMID> --skiplock; sleep 2; qm start <VMID>

Mini-FAQ

Q: Can vmbr0 be renamed? A: Yes, but it's not recommended on a production node - it's easier to leave vmbr0 and add additional bridges (vmbr1) as needed.

Q: Where are ISOs stored by default? A: In the local storage: /var/lib/vz/template/iso.

Q: What is the difference between local and local-lvm? A: local is a regular directory for ISOs, container templates, etc.; local-lvm is LVM-Thin storage for VM/container disks with snapshot support.

Q: How to quickly clone a VM? A: Convert a reference VM into a Template, then Clone → Full/Linked.

Q: How to safely scale the CPU/RAM of a VM? A: Shut down the VM and change the resources; on Linux some parameters can be changed on the fly, but it is better to plan the change.

Readiness Checklist for System

  • Access to https://<server_IP>:8006 exists;
  • vmbr0 is configured and the node has internet access;
  • ISOs are loaded into storage;
  • First VM is created and installed;
  • Qemu Guest Agent is enabled;
  • Backup is configured (vzdump schedule);
  • Updates are checked.

Ordering a Server with Proxmox 9 using API

To install this software using the API, follow these instructions.

