In our article “Automatic installation of the TrueNAS platform” we announced that the TrueNAS SCALE operating system is now available for automatic installation on both virtual and physical servers. Let's take a closer look at how that automation works.
Why did we go with SCALE and not CORE? Let's compare and choose. The developer of these operating systems has collected the main differences in a table, and it shows that in terms of functionality there are almost none. The main difference between the versions is in the base OS and, consequently, in the type of built-in virtualization; the SCALE version also offers scaling ZFS across multiple nodes using the Gluster distributed file system. And yes, ZFS support in the SCALE version is implemented via DKMS.
Then we needed to figure out which servers we could offer each TrueNAS version on. We maintain a single list of operating systems available on both dedicated and virtual servers. In theory (according to Install and Installing SCALE), the lists of supported hardware and virtualization systems do not differ much either, but we double-checked. Physical servers are supported by both systems without problems, but the TrueNAS CORE installer does not like our standard client VM configuration with a Virtio-SCSI disk, or, to be exact, it simply does not see this type of disk. We had a similar situation with automated Windows installation and solved it by adding the necessary drivers to the installer. The TrueNAS SCALE installer, however, sees the disk out of the box. The CORE version also supports a smaller variety of 10G adapters. All in all, if you don't see any difference, why pay more to adapt CORE to your realities when you can just take SCALE? Let's do it.
The next step is to automate the installation. The official documentation offers only one option: installation from an ISO image using a keyboard-driven installer. So let's try the approach we already used for automating the installation of Cloud Hosted Router (there it is the default installation method), namely writing an image of a partially installed operating system to the disk and tweaking it in place. In Cloud Hosted Router, the final configuration is done through a special file in the file system: you simply write a set of commands in its standard scripting language, and they are executed on first boot. TrueNAS does not give us such an easy path, at least not officially, so we will figure it out on our own.
First, we install TrueNAS SCALE 22 from the ISO onto the smallest disk the installer will accept (8 GB) and, before the first boot of the operating system, immediately capture the resulting full disk image into an archive. We will work with this image later. Judging by the console messages, the native installer itself simply unpacks ready-made images to disk, but first it checks that the hardware meets the minimum requirements; it can also assemble two disks into a mirror and put the OS on it. Thanks to the developers, the disk partitioning and the boot loader are universal and work with both UEFI and Legacy boot, which is one less issue for us.
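For the record, capturing such an image does not require anything exotic: we read the entire installation disk from a live environment and compress it on the fly. A minimal sketch, assuming the installation disk is /dev/sda and leaving aside where the archive is uploaded afterwards:
## read the freshly installed 8 GB disk and compress it into an archive
dd if=/dev/sda bs=4M status=progress | gzip -c > /tmp/truenas-scale-minimal.img.gz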
As we said in the article “From DVD and flash drives to modern solutions: how we automated OS installation on servers”, our main installer is a Live image based on Rocky Linux 8, which is loaded on the server being deployed via PXE. It cleans up the disks, selects the disks for installation, downloads and extracts the required system image onto the prepared disks, and performs the final configuration. The first steps for TrueNAS are the same; then we write the stripped-down “minimal” image to the disk without pre-creating partitions, because they are already in the image, but everything after that is custom.
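Schematically, the image-writing step for TrueNAS boils down to streaming the archive straight onto the target disk. The repository URL below is made up for illustration; $INST_DRIVE is the same installation-disk variable used further on:
## clean leftover metadata and write the minimal image onto the target disk
wipefs -a /dev/$INST_DRIVE
curl -s https://repo.example.com/images/truenas-scale-minimal.img.gz | gunzip -c | dd of=/dev/$INST_DRIVE bs=4M
## make the kernel re-read the partition table that came with the image
partprobe /dev/$INST_DRIVE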
TrueNAS uses ZFS as the root file system; this is where the OS settings are stored, and we need to access it from within our installer. Since this is so far the only case where our Live image needs the ZFS driver, we did not build it in permanently and thus did not inflate the size of the bootable image. Nothing prevents us from adding ZFS support on the fly while the installation script is running:
## enable EPEL and the OpenZFS repository
dnf install -y epel-release
dnf install https://zfsonlinux.org/epel/zfs-release-2-2$(rpm --eval "%{dist}").noarch.rpm -y
dnf install -y dkms
## use the prebuilt kmod packages instead of building the module via DKMS
dnf config-manager --disable zfs
dnf config-manager --enable zfs-kmod
dnf install zfs -y
modprobe zfs
Then we import the boot pool and mount it in a convenient location:
zpool import boot-pool
## the root dataset name matches the installed SCALE version
mount -t zfs boot-pool/ROOT/22.12.1 /mnt
As it turns out, TrueNAS SCALE stores all of its basic settings in an SQLite database in the file /data/freenas-v1.db. Accordingly, we install the tools required to work with this database:
dnf install sqlite -y
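If you want to retrace our steps, the schema is easy to explore straight from the sqlite3 shell, for example:
## list all tables in the configuration database
sqlite3 /mnt/data/freenas-v1.db ".tables"
## look at the structure of the tables we will be editing
sqlite3 /mnt/data/freenas-v1.db ".schema network_interfaces"
sqlite3 /mnt/data/freenas-v1.db ".schema network_globalconfiguration"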
After digging through the database of a configured TrueNAS instance and comparing it with an untouched, freshly installed one, we found the entries containing the settings we needed and wrote the commands that create the necessary minimal configuration:
sqlite3 /mnt/data/freenas-v1.db "INSERT INTO network_interfaces VALUES(1,'$inetif','',0,0,'',NULL,0,NULL,NULL,'<%= @host.mac %>',
'<%= host_param('ip')%>','',<%= host_param('cidr')%>,4);"
## inetif is the name of the active network interface
## <%= @host.mac %>, <%= host_param('ip')%> and <%= host_param('cidr')%> are, respectively, the
## MAC address, IP address and subnet mask from the host parameters set in Foreman when configuring the build
sqlite3 /mnt/data/freenas-v1.db "DELETE FROM network_globalconfiguration WHERE id=1"
## Remove the default entry; we insert our own below
sqlite3 /mnt/data/freenas-v1.db "INSERT INTO network_globalconfiguration VALUES(1,'TrueNAS','TrueNAS-b','local','<%= host_param('gateway')%>','','8.8.8.8',
'','','',0,'','','',NULL,'{\"mdns\": false, \"wsd\": false, \"netbios\": false}',
'{\"type\": \"DENY\", \"activities\": []}');"
## Here we set the gateway, the DNS server, and the allowed outbound activity through the interface
pass=$(openssl passwd -6 '<%= host_param("password") %>')
sqlite3 /mnt/data/freenas-v1.db "UPDATE account_bsdusers SET bsdusr_unixhash='$pass'
WHERE bsdusr_username='admin'"
## Setting the administrator user password from the Foreman configuration of the server
sqlite3 /mnt/data/freenas-v1.db "UPDATE system_settings SET stg_guihttpsredirect=1 WHERE id=1"
## Make the web interface of the installed OS work only over HTTPS
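Before moving on, it does not hurt to read the rows back and make sure the values landed where we expect, for example:
## sanity-check the network configuration we have just written
sqlite3 /mnt/data/freenas-v1.db "SELECT * FROM network_interfaces;"
sqlite3 /mnt/data/freenas-v1.db "SELECT * FROM network_globalconfiguration;"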
In principle, this would be the end of the setup, but there are a couple of nuances. Firstly, the GPT table was created for an 8 GB disk, and it should be corrected to the size of the real disk. Secondly, we want to work around the operating system's requirement to use a separate disk for the OS and separate disks for data. When installing on a physical server with several disks, the client can create ZFS pools on them independently through the TrueNAS web interface, but in the case of a single-disk configuration, which is standard for our virtual servers, the web interface will not allow this, even though free space remains on the disk (our image takes up only 8 GB). Without at least one more pool we can neither host user data nor activate the built-in applications. So we create a pool from within our installer and add it to the configuration database:
echo w | fdisk /dev/$INST_DRIVE
## fix the GPT table to match the real disk size; INST_DRIVE is the disk on which the OS is installed
parted -s /dev/$INST_DRIVE mkpart primary 8590 100%
## Create an additional partition for data
if [[ $(echo $INST_DRIVE | grep -c nvme) -eq 0 ]]; then
## for non-NVMe disks:
wipefs -a /dev/${INST_DRIVE}4
dd if=/dev/zero of=/dev/${INST_DRIVE}4 bs=512k count=20
## These two commands will erase possible artifacts from a previous installation
zpool create init-pool /dev/${INST_DRIVE}4
## Create a ZFS pool for data/applications
else
## the same for NVMe disks; the only difference is the partition device name
wipefs -a /dev/${INST_DRIVE}p4
dd if=/dev/zero of=/dev/${INST_DRIVE}p4 bs=512k count=20
zpool create init-pool /dev/${INST_DRIVE}p4
fi
## Now add the resulting pool to the OS configuration as an already imported one
sqlite3 /mnt/data/freenas-v1.db "INSERT INTO system_systemdataset
VALUES(1,'init-pool',1,'8f179c8648fc4419af075a5cf26c19f8',NULL);"
sqlite3 /mnt/data/freenas-v1.db "INSERT INTO storage_volume
VALUES(1,'init-pool','14024449139687287443',0,'');"
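Depending on how your live environment shuts down, it may also be worth unmounting the root dataset and exporting both pools explicitly, so that TrueNAS imports them without complaints on the first boot; a cautious variant looks like this:
## cleanly detach everything we touched in the live environment
umount /mnt
zpool export init-pool
zpool export boot-pool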
At this point the configuration is done. All that remains is to reboot the server from the system disk. The first TrueNAS startup will take a bit longer than usual because the native initialization scripts have to run, but in a couple of minutes we get an operating system with a configured network, a web interface accessible via HTTPS, a preset administrator password, and an active ZFS pool for data and applications. If there are several disks in the server, you can create a pool on them through the OS web interface and then use it, as intended by the TrueNAS developers.
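A simple external check that everything came up is to poll the web interface over HTTPS from any machine that can reach the new server (the address below is a placeholder):
## expect HTTP 200 from the TrueNAS web interface; -k because the certificate is self-signed
curl -ks -o /dev/null -w "%{http_code}\n" https://192.0.2.10/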