LUKS Encryption with ZFS Root on Void Linux
The list of non-systemd operating systems that can run ZFS on the root partition is short, but valued. Today, we install Void Linux. The documentation for this OS is a little lacking. Parts of it are decent, especially the advanced chroot-based installation page. There are also separate pages for installing Void Linux with LUKS and for installing Void Linux with a ZFS root, but not both at the same time. Let's fix that.
This is going to be very similar to how we installed Devuan jessie, except that instead of using the debootstrap tool, we will extract and configure the rootfs tarball that Void Linux provides. There are a couple of gotchas that can render your zpool unbootable if you follow the wiki, but by now these steps should seem really familiar: make a LUKS container, make a zpool in the LUKS container, extract a base OS into the zpool, chroot into the base OS, install ZFS, install a bootloader, cross your fingers, reboot. That's really all we're doing.
Start with an Ubuntu live CD. Ubuntu 16.04+ includes the ZFS kernel modules but not the userland utilities. We start with Ubuntu because it's faster to "apt-get install" these packages than to download and build ZFS DKMS modules from scratch (twice), but if that's what you feel like doing, hey man, go for it.
Wipe the MBR, create a new partition table, and create one partition for the LUKS container. Assuming your disk is /dev/sda:
sudo su
DEVICE=/dev/sda   # set this accordingly
LUKSNAME=cryptroot

wipefs --force --all ${DEVICE}
# or do this, or do both:
dd if=/dev/zero of=${DEVICE} bs=1M count=2

# Set MBR
/sbin/parted --script --align opt ${DEVICE} mklabel msdos
/sbin/parted --script --align opt ${DEVICE} mkpart pri 1MiB 100%
/sbin/parted --script --align opt ${DEVICE} set 1 boot on
/sbin/parted --script --align opt ${DEVICE} p   # print

# Create LUKS container and open/mount it
cryptsetup luksFormat --hash=sha512 --key-size=512 --cipher=aes-xts-plain64 --use-urandom ${DEVICE}1
cryptsetup luksOpen ${DEVICE}1 ${LUKSNAME}

# We put this UUID into an env var to reuse later
CRYPTUUID=`blkid -o export ${DEVICE}1 | grep -E '^UUID='`
This step is necessary on Ubuntu: install the ZFS userland utilities in the live CD session:
apt-get install -y zfsutils-linux
/sbin/modprobe zfs   # May not be necessary
Create your new ZFS zpool and datasets. This example will create multiple datasets for the system root directory, /boot, /home, /var, and /var/log.
TARGET=/mnt
ZPOOLNAME=zroot
ZFSROOTBASENAME=${ZPOOLNAME}/ROOT
ZFSROOTDATASET=${ZFSROOTBASENAME}/default

/sbin/zpool create -f \
  -R ${TARGET} \
  -O mountpoint=none \
  -O atime=off \
  -O compression=lz4 \
  -O normalization=formD \
  -o ashift=12 \
  ${ZPOOLNAME} /dev/mapper/${LUKSNAME}

/sbin/zfs create -o canmount=off ${ZFSROOTBASENAME}
/sbin/zfs create -o mountpoint=/ ${ZFSROOTDATASET}
/sbin/zfs create -o mountpoint=/boot ${ZPOOLNAME}/boot
/sbin/zfs create -o mountpoint=/home ${ZPOOLNAME}/home
/sbin/zfs create -o mountpoint=/var ${ZPOOLNAME}/var
/sbin/zfs create -o mountpoint=/var/log ${ZPOOLNAME}/var/log

/sbin/zpool set bootfs=${ZFSROOTDATASET} ${ZPOOLNAME}   # Do not skip this step
/sbin/zpool status -v   # print zpool info
Fetch the Void Linux rootfs. Get it from any of the project's mirrors. Assuming your architecture is x86_64, fetching the latest Void rootfs at the time of writing looks like this:
VOIDMIRROR=https://repo.voidlinux.eu/live/current
wget -N ${VOIDMIRROR}/void-x86_64-ROOTFS-20171007.tar.xz
wget -N ${VOIDMIRROR}/sha256sums.txt
wget -N ${VOIDMIRROR}/sha256sums.txt.sig
Validate the rootfs checksum. You should also fetch and verify its GPG signature, but you probably won't.
sha256sum ./void-x86_64-ROOTFS-20171007.tar.xz
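If you do decide to verify the signature, a minimal sketch looks like this. It assumes you have already imported the Void release signing key into your GPG keyring (fetch it from the project; the key itself isn't shown here):

# Verify the checksum list against its detached signature (requires the Void
# release key to already be in your keyring):
gpg --verify sha256sums.txt.sig sha256sums.txt

# Then check the tarball against the now-trusted checksum list:
grep void-x86_64-ROOTFS-20171007.tar.xz sha256sums.txt | sha256sum -c -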
Compare this checksum with the value from sha256sums.txt. If it matches, untar its contents into ${TARGET}:
tar xf ./void-x86_64-ROOTFS-20171007.tar.xz -C ${TARGET}
Create a new ${TARGET}/etc/fstab that matches your ZFS datasets. For example:
cat ~/fstab.new
/dev/mapper/cryptroot  /         zfs  defaults,noatime  0 0
zroot/boot             /boot     zfs  defaults,noatime  0 0
zroot/home             /home     zfs  defaults,noatime  0 0
zroot/var              /var      zfs  defaults,noatime  0 0
zroot/var/log          /var/log  zfs  defaults,noatime  0 0

chmod 0644 ~/fstab.new
mv ~/fstab.new ${TARGET}/etc/fstab
Create a LUKS key file, add it to the LUKS container, and put its info into a crypttab:
KEYDIR=${TARGET}/boot
KEYFILE=rootkey.bin

# Create key file:
dd if=/dev/urandom of=${KEYDIR}/${KEYFILE} bs=512 count=4
# or, faster:
# openssl rand -out ${KEYDIR}/${KEYFILE} 2048
chmod 0 ${KEYDIR}/${KEYFILE}

cryptsetup luksAddKey ${DEVICE}1 ${KEYDIR}/${KEYFILE}   # This prompts for the LUKS container password

ln -sf /dev/mapper/${LUKSNAME} /dev

# Set crypttab:
echo "${LUKSNAME} ${CRYPTUUID} /${KEYFILE} luks" >> ${TARGET}/etc/crypttab
Mount some special mountpoints into the new FS:
for i in /dev /dev/pts /proc /sys
do
    echo -n "mount $i..."
    mount -B $i ${TARGET}$i
    echo 'done!'
done
Copy /etc/resolv.conf into the new system. You need this so name resolution works inside the chroot.
cp -p /etc/resolv.conf ${TARGET}/etc/
chroot into ${TARGET}:
chroot /mnt
Configure the system. The Void documentation describes some post-installation steps you can perform now: setting the hostname, adding users, installing software, et cetera. At a minimum, set the root password and a locale:
passwd
echo "LANG=en_US.UTF-8" > /etc/locale.conf
echo "en_US.UTF-8 UTF-8" >> /etc/default/libc-locales
xbps-reconfigure -f glibc-locales
Update the Void Linux software package repository. You may want to pick a faster mirror first, as I've done here:
# optional:
echo 'repository=http://lug.utdallas.edu/mirror/void/current' > /etc/xbps.d/00-repository-main.conf
Then:
xbps-install -Su
The Void Linux rootfs is tiny, only about 35 MB, and very minimal. Install some packages that are not in the base install: a kernel, "cryptsetup" so you can unlock your LUKS container, and the "grub" and "zfs" packages so you can boot your system and access your zpool:
xbps-install linux cryptsetup grub zfs
A note about kernels: Void Linux has a number of kernels available. Check your mirrors for all of your options (see the query example after this paragraph). The default kernel package is "linux", which will give you a modern kernel, but you can also select "linux-lts", which will install an older, presumably more stable, kernel. If neither of these suits you, review the versioned kernel packages and install the one that best fits your needs. Void tracks upstream closely: Linux 4.17.1 was released on 2018-06-11 and had a corresponding Void package within three days, so "xbps-install linux4.xx" for any available value of "xx" is a plausible choice here. Caveat: not all kernels are created equal, and for your ZFS-root Linux machine to work your kernel needs to understand ZFS, which means a new kernel will need newly compiled kernel modules. This can fail, and often does. Be careful about mixing and matching kernels with DKMS modules, or you may lose the ability to import your zpools. If you install a specific kernel, make sure to also install the matching kernel headers or you will be unable to build your ZFS kernel modules.
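For example, here is a rough way to see which kernel and header packages a mirror offers; the exact names and versions returned depend on your mirror and the date:

# Search the remote repositories for kernel and kernel-header packages:
xbps-query -Rs linux
xbps-query -Rs linux-headers

# Install a specific kernel together with its matching headers (names here
# are illustrative; confirm them with the queries above):
xbps-install linux4.16 linux4.16-headers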
Ensure that GRUB can read your ZFS root dataset:
grub-probe /
The output of this command must be "zfs". If it isn't, stop and correct your install.
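If you want the shell to catch this for you, a small guard like the following sketch does the job:

# Stop here if GRUB cannot identify / as ZFS.
if [ "$(grub-probe /)" != "zfs" ]; then
    echo "grub-probe did not report zfs; fix your install before continuing" >&2
fi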
Edit the dracut settings so your initrd will contain the LUKS key file.
vi /etc/kernel.d/post-install/20-dracut
Make the following change:
- dracut -q --force boot/initramfs-${VERSION}.img ${VERSION}
+ dracut -q --force --hostonly --include /boot/rootkey.bin /rootkey.bin boot/initramfs-${VERSION}.img ${VERSION}
Adjust the "/boot/rootkey.bin" and "/rootkey.bin" values as needed. These should match ${KEYDIR}/${KEYFILE} and the /${KEYFILE} value you put into ${TARGET}/etc/crypttab, respectively.
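A quick way to double-check that the paths agree, sketched here (inside the chroot, ${TARGET}/etc/crypttab is now /etc/crypttab):

# The keyfile path in crypttab and the --include arguments should line up:
cat /etc/crypttab
grep -- '--include' /etc/kernel.d/post-install/20-dracut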
Build your new initrd. This requires you to know your exact kernel version:
xbps-reconfigure -f linux4.16
Adjust the name of your linux4.xx package accordingly. Your DKMS modules will have been built when you installed the "zfs" XBPS package, but the reconfiguration step here will attempt to re-compile them if they aren't already present. Be aware: if your "spl" and "zfs" DKMS builds fail, you will not be able to boot your machine. Stop now and fix your kernel before proceeding.
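A quick sanity check at this point (a sketch; your version strings will differ) is to ask DKMS what it built and confirm the key file actually made it into the new initramfs:

# The spl and zfs modules should show as installed for your kernel version:
dkms status

# lsinitrd ships with dracut; confirm the LUKS key file is inside the initramfs:
lsinitrd /boot/initramfs-*.img | grep rootkey.bin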
Edit /etc/default/grub. You will want to edit or add the following three lines:
- GRUB_CMDLINE_LINUX_DEFAULT # add the "boot=zfs" option
- GRUB_CMDLINE_LINUX="cryptdevice=${CRYPTUUID}:${LUKSNAME}"
- GRUB_ENABLE_CRYPTODISK=y
As an example, changes to your /etc/default/grub might look like this:
- GRUB_CMDLINE_LINUX_DEFAULT="loglevel=4 slub_debug=P page_poison=1"
+ GRUB_CMDLINE_LINUX_DEFAULT="loglevel=4 slub_debug=P page_poison=1 boot=zfs"
+ GRUB_CMDLINE_LINUX="cryptdevice=UUID=93a7dbeb-2ae0-48b2-bd00-c806ae9066df:cryptroot"
+ GRUB_ENABLE_CRYPTODISK=y
Install the bootloader.
mkdir -p /boot/grub
grub-mkconfig -o /boot/grub/grub.cfg
grub-install /dev/sda
Exit the chroot.
exit # leave the chroot
Unmount your mountpoints.
for i in sys proc dev/pts dev
do
    umount ${TARGET}/$i
done
Unmount your ZFS datasets and change their mountpoint property to "legacy". This is a boot-time mounting reliability thing that may or may not be necessary for you, but I've found that some systems using ZoL have trouble letting ZFS automatically manage the mounting of their datasets.
/sbin/zfs unmount -a

for dataset in boot home var/log var
do
    /sbin/zfs set mountpoint=legacy ${ZPOOLNAME}/${dataset}
done

/sbin/zpool export -a -f
Reboot.
reboot
There are some scary-looking error messages in the init sequence that I haven't figured out how to fix, but they seem to be benign. The method given here boots a Void Linux system (seemingly) without trouble.
Final thoughts on Void Linux: I've been playing around with getting a LUKS+ZFS-on-root configuration in Void for at least a couple of months without success until recently. The OS itself is a nice example of a Linux distro that isn't a typical Debian/Ubuntu/Red Hat fork. It appears to have been created in 2008 by a (former?) NetBSD developer to showcase the XBPS package management system, which itself appears to be an ideological re-design of pkgsrc. The project lead went missing in January 2018, so the Void community has had to scramble to take control of its own project in his absence. They are, for lack of a better term, forking themselves. Since Void uses a rolling release model and there are no regularly scheduled release milestones to be blessed by the guy in charge, this doesn't really affect you as an end user, but I thought it was worth mentioning that enough people care about Void not to let one person's disappearance kill it.
UPDATE: "Would /var vol being out of sync with / cause any conflicts when rolling back / e.g., due to bad update?" Yes, maybe. This tutorial splits your system into different ZFS datasets, which is, generally, a good thing. You don't explicitly need to do this; you can put everything into a single dataset if you want. HOWEVER: if you don't snapshot all of your mounted ZFS datasets before a big update, you could have problems if you ever need to roll one or more of them back. For example, if you upgrade foobard from v2 to v3, and foobard uses a different on-disk format for /var/db/foobar, it may auto-upgrade your v2 files to v3 files. If you ever want to go back to v2, ZFS gives you the option to roll the whole dataset back via snapshots, but a wholesale rollback probably isn't what you want. When rolling back any software, regardless of your underlying file system, you want to be aware of which files are being changed and ensure that you can revert your change without causing damage. To its credit, ZFS allows you to make a snapshot before a change and again after it, and you can sift and sort the differences between them through the hidden .zfs subdirectory at the top of each dataset. In other words, your /var and / mountpoints aren't "out of sync" any more under ZFS than if you had them on different ext2/3/4/FAT32/XFS/whatever partitions. Since Void Linux manages its services under /var/service with Gerrit Pape's runit, consider putting /var on a different dataset with ZFS as carefully as you'd consider putting it on a separate partition using any other file system.
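As a concrete (and hedged) example of that advice, a recursive snapshot before a big update covers every dataset in the pool at once; the snapshot name here is just an example:

# Snapshot every dataset in the pool before updating:
zfs snapshot -r zroot@pre-update

# ...run the update...
xbps-install -Su

# If something breaks, roll an individual dataset back to its snapshot
# (this discards any changes made to that dataset since the snapshot):
zfs rollback zroot/var@pre-update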
1 comment:
Nice writeup. Would /var vol being out of sync with / cause any conflicts when rolling back /, e.g., due to a bad update? Only thing I can think of is broken symlinks in /var/service to services that are no longer installed.