
06.02.2023

Migrating virtual servers from oVirt to VMware

Rent dedicated and virtual servers with instant deployment in reliable TIER III class data centers in the Netherlands and the USA. Free protection against DDoS attacks included, and your server will be ready for work in as little as 15 minutes. 24/7 Customer Support.

Author: Sultan Usmanov, Hostkey DevOps

While optimizing our fleet of physical servers and consolidating virtualization, we were faced with the task of migrating virtual servers from oVirt to VMware. A further requirement was the ability to roll back to the oVirt infrastructure in the event of any complications during migration, since for a hosting company the stability of the equipment is the top priority.

The following infrastructure was deployed for the server migration:

  1. An NFS server, mounted on the oVirt hosts, the ESXi hosts, and the intermediary server;
  2. An intermediary server on which RAW and qcow2 disks were converted to VMDK format.

Below are the scripts, commands and steps that we used during the migration.

To partially automate and streamline connecting to the oVirt servers, copying a virtual server's disk, and placing it on NFS for further conversion, we wrote a bash script that ran on the proxy server and performed the following actions:

  1. Connect to Engine-server;
  2. Find the needed virtual server;
  3. Switch off the server;
  4. Rename it (server names in our infrastructure must not be duplicated);
  5. Copy the server disk to the NFS partition mounted on the oVirt and ESXi servers.

Since we were working with tight deadlines, we wrote a script which only works with servers that have one disk.

Bash-script

#!/usr/bin/env bash

## Source
engine_fqdn_src="<FQDN of the Engine server>"
engine_api_src="https://${engine_fqdn_src}/ovirt-engine/api"
guest_id=$1

## Common vars
engine_user="<user with rights to manage virtual servers>"
engine_pass="<password>"
export_path=/mnt/autofs/nfs

OVIRT_SEARCH() {
    local engine_api=$1
    local api_target=$2

    local search=""
    if [[ -n $3 ]] && [[ -n $4 ]]; then
        search="?search=$3=$4"
    fi

    curl -ks --user "$engine_user:$engine_pass" \
        -X GET -H 'Version: 4' -H 'Content-Type: application/json' \
        -H 'Accept: application/json' "${engine_api}/${api_target}${search}" |
        jq -Mc
}

## Source
vm_data=$(OVIRT_SEARCH "$engine_api_src" vms name "$guest_id")
disk_data=$(OVIRT_SEARCH "$engine_api_src" disks vm_names "$guest_id")

vm_id=$(echo "$vm_data" | jq -r '.vm[].id')
host_ip=$(echo "$vm_data" | jq -r '.vm[].display.address')
host_id=$(echo "$vm_data" | jq -r '.vm[].host.id')
disk_id=$(echo "$disk_data" | jq -r '.disk[].id')
stor_d_id=$(echo "$disk_data" | jq -r '.disk[].storage_domains.storage_domain[].id')

## host_ip is only known after parsing vm_data
host_data=$(OVIRT_SEARCH "$engine_api_src" hosts address "$host_ip")

## Shutdown and rename vm
post_data_shutdown="<action/>"
post_data_vmname="<vm><name>${guest_id}-</name></vm>"

## Graceful shutdown of the vm
curl -ks --user "$engine_user:$engine_pass" \
    -X POST -H 'Version: 4' \
    -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
    --data "$post_data_shutdown" \
    "${engine_api_src}/vms/${vm_id}/shutdown"

sleep 60

## Force power-off in case the guest did not shut down in time
curl -ks --user "$engine_user:$engine_pass" \
    -X POST -H 'Version: 4' \
    -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
    --data "$post_data_shutdown" \
    "${engine_api_src}/vms/${vm_id}/stop"

## Changing vm name
curl -ks --user "$engine_user:$engine_pass" \
    -X PUT -H 'Version: 4' \
    -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
    --data "$post_data_vmname" \
    "${engine_api_src}/vms/${vm_id}"

## Copying disk to NFS mount point
scp -r "root@${host_ip}:/data/${stor_d_id}/images/${disk_id}" "$export_path"
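The script copies a single disk, but the same jq query returns every disk ID attached to a VM, so the copy step could be looped. A sketch, assuming host_ip and stor_d_id are set as in the script above; the disk IDs are passed as arguments (in the script they would come from `echo "$disk_data" | jq -r '.disk[].id'`), and a DRY_RUN flag only prints the commands:

```shell
#!/usr/bin/env bash
# Sketch: copy every disk of a multi-disk VM to the NFS mount point.
# host_ip and stor_d_id are assumed to be set as in the migration script;
# DRY_RUN=1 prints the scp commands instead of executing them.
copy_all_disks() {
    local disk_id
    for disk_id in "$@"; do
        local cmd="scp -r root@${host_ip}:/data/${stor_d_id}/images/${disk_id} /mnt/autofs/nfs"
        if [[ ${DRY_RUN:-0} -eq 1 ]]; then
            echo "$cmd"
        else
            $cmd
        fi
    done
}
```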

For servers with two disks, we carried out the procedure described below.

Connecting to the engine and finding the right server

Under “Compute” >> “Virtual Machines”, enter the name of the server destined for migration in the search box. Find the server and power it off:

Go to the “Storage” >> “Disks” section and find the server you want to migrate:

In this window, note the IDs of the drives attached to the server being transferred. Then connect to the proxy server via SSH, and from there to the oVirt host where the virtual server resides. In our case, we used the Midnight Commander application to connect to the physical server and copy the necessary disks:

After copying the disks, you need to confirm their format (raw or qcow2). To check, run the qemu-img info command with the disk name. After determining the format, perform the conversion with the following command:

qemu-img convert -f qcow2 -O vmdk -o compat6 (disk name) (disk name).vmdk

In our case, we are converting from qcow2 format to vmdk.
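The check and the conversion can be combined into a small helper that reads the format from qemu-img info and builds the matching convert invocation. A sketch (convert_to_vmdk is a hypothetical name, not from our tooling); DRY_RUN=1 skips qemu-img and takes the format as a second argument, only printing the command:

```shell
#!/usr/bin/env bash
# Sketch: detect the source disk format and convert it to VMDK.
# Hypothetical helper for illustration. DRY_RUN=1 prints the command
# instead of running qemu-img, taking the format as a second argument.
convert_to_vmdk() {
    local src=$1
    local dst="${src%.*}.vmdk"
    local fmt
    if [[ ${DRY_RUN:-0} -eq 1 ]]; then
        fmt=${2:-qcow2}
    else
        # qemu-img info reports the detected format as "format" in JSON output
        fmt=$(qemu-img info --output=json "$src" | jq -r '.format')
    fi
    local cmd="qemu-img convert -f $fmt -O vmdk -o compat6 $src $dst"
    echo "$cmd"
    [[ ${DRY_RUN:-0} -eq 1 ]] || $cmd
}
```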

Once the conversion is finished, go to vCenter or, if only a standalone ESXi server is installed, open its web interface, and create a virtual server without a disk. In our case, vCenter was installed.

Creating a virtual server

Since we have a cluster configured, you just need to right-click on it and select the “New Virtual Machine” option:

Then select “Create new virtual machine”:

Set the server name and click “Next”:

Select the physical server where you plan to host the virtual server and click “Next”:

Select the storage where the server will be placed and click “Next”:

Specify the ESXi compatibility version. If the infrastructure contains hosts running version 6, you must select the matching version. In our case, we chose version 7.

Select the operating system and version installed on the server being transferred. Here we are migrating a Linux server running CentOS 6.

In the “Customize Hardware” window, set all parameters identical to those of the source system. You also need to remove the disk, because the converted disk will be attached instead.

If you want to keep the old MAC address on the network card, you have to set it manually:

After creating the virtual server, connect via SSH to the ESXi host and convert the disk from thick to thin provisioning, specifying its destination, i.e. the folder of the server you created above:

vmkfstools -i /vmfs/volumes/NFS/(disk name)/(disk name).vmdk -d thin /vmfs/volumes/(datastore name on ESXi)/(server name)/(disk name).vmdk

The source file is the VMDK we converted in the previous step.
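The same invocation, parameterized; a sketch that only assembles the command string, with all disk, datastore, and server names as placeholders:

```shell
#!/usr/bin/env bash
# Sketch: assemble the vmkfstools thin-clone command for a converted disk.
# disk, datastore and server names are placeholders for illustration.
thin_clone_cmd() {
    local disk=$1 datastore=$2 server=$3
    echo "vmkfstools -i /vmfs/volumes/NFS/${disk}/${disk}.vmdk -d thin /vmfs/volumes/${datastore}/${server}/${disk}.vmdk"
}
```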

After the successful conversion, you should connect the disk to the server. Again, go to the web interface for ESXi or vCenter, find the desired server, right-click on its name and select “Edit Settings”.

In the window that opens, on the right side, click on “ADD NEW DEVICE” and select “Existing Hard Drive” from within the drop-down list.

In our ESXi storage, find the server and the disk we converted earlier, and click “OK”.

As a result of these actions, a virtual server will be created:

To start the server, select the IDE controller in the disk settings under “Virtual Device Node”. Otherwise, a “Disk not found” message will appear when the system boots.

The steps described above for creating a virtual server and connecting a disk are enough to start the system correctly if the disk interface on the source server was “VirtIO-SCSI”. You can check the interface type in oVirt under “Compute” >> “Virtual Machines”: find the server and go to “Disks”:

During the migration process, we ran into a problem with servers using the older VirtIO controller: their disks in /etc/fstab are not named sda as on newer systems, but vda. To transfer these kinds of servers, we employed the following solution: before starting the system, connect a LiveCD and perform the following steps:

  1. Boot from the LiveCD;
  2. Create a mount point and mount the disk, for example: mkdir /mnt/sda1 && mount /dev/sda1 /mnt/sda1;
  3. Bind the pseudo-filesystems and enter the system via chroot by running these commands:
mount -t proc proc /mnt/sda1/proc
mount -t sysfs sys /mnt/sda1/sys
mount -o bind /dev /mnt/sda1/dev
mount -t devpts pts /mnt/sda1/dev/pts

chroot /mnt/sda1

After entering the chroot, you need to change the disk names in fstab (vda → sda) and rebuild the GRUB configuration file:

  1. vim /etc/fstab
  2. grub2-mkconfig -o /boot/grub2/grub.cfg

After completing these steps, restart the server.
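The fstab edit in step 1 can also be done non-interactively. A sketch that rewrites /dev/vdX references to /dev/sdX with sed, run inside the chroot (UUID- and label-based entries are left untouched):

```shell
#!/usr/bin/env bash
# Sketch: rewrite vda-style device names to sda in an fstab file before
# regenerating the GRUB configuration (intended to run inside the chroot).
fix_fstab() {
    local fstab=$1
    # /dev/vda1 -> /dev/sda1, /dev/vdb2 -> /dev/sdb2, etc.
    sed -i 's|/dev/vd|/dev/sd|g' "$fstab"
}
```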

This approach allowed us to migrate a large fleet of servers. On average, transferring the disk with its settings and starting a server of 15–20 GB took about 20 to 30 minutes, while larger ones of around 100 GB took about one and a half to two hours.

