Installing Linux Mint 18/Ubuntu 16.04: Encrypted ZFS Root and /boot Partitions

A while ago I cobbled together a spate of howtos into a singular method for installing a root-on-ZFS Ubuntu/Mint system with LUKS. Since then, I've kept reading on the subject and I found a solution to a long-standing problem with Linux/GRUB OSes. I lamented that there wasn't a good way to boot your machine using full-disk encryption: even if all of your /home and system data is secure when the system is powered off, you still have this pesky /boot partition hanging around out in the open. There's a big Achilles' heel in your nice, secure disk encryption setup if your bootloader and kernel are just sitting ducks.

Turns out, there's a way you can really get full disk encryption with LUKS and GRUB. With a few changes to the previous instructions, we can get a system that runs ZFS on root in a LUKS-encrypted container and a LUKS-encrypted /boot partition as well.

For stability purposes, the /boot partition will remain ext2/3-formatted. There was a recent mailing list thread crowing about ext2's "worse is better" design philosophy that I found quite entertaining.

The following is a terse set of instructions that skip a lot of explanation. Refer to the previous instructions for further details where desired. All the usual caveats still apply: do not perform these steps on a disk that contains data you care about. Be comfortable with the concepts of using disk partitioning tools and ZFS. Your actual mileage may vary. Not applicable where void by law. Safety not guaranteed. Use at your own risk. This means you.

Boot the Linux Mint 18+ (or Ubuntu 16.04+) ISO. Open a terminal and become root.

sudo su
killall xscreensaver

Identify your storage device. This is usually /dev/sda, and we use that as an example throughout this howto via the ROOTDISK variable. Wipe the partition table off of this device with wipefs or, if you prefer, dd.

ROOTDISK=/dev/sda
wipefs --force --all ${ROOTDISK}
# or:
# dd if=/dev/zero of=${ROOTDISK} bs=1M count=2
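That wipe is unrecoverable, so a small guard before running it doesn't hurt. This is a defensive sketch of my own, not part of the original howto; it simply refuses to proceed when the target isn't a block device:

```shell
# Refuse to operate on anything that is not a block device node.
is_block_device() {
  [ -b "$1" ]
}

# Example: /dev/null is a character device, so the guard rejects it.
if is_block_device /dev/null; then
  echo "safe to wipe"
else
  echo "refusing: not a block device"
fi
```

Run the same check against ${ROOTDISK} before wipefs; a mistyped device name then fails loudly instead of silently zeroing the wrong target.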

Set up a new partition table on the disk. Create one partition for your encrypted /boot container and one for your encrypted ZFS pool.

/sbin/parted --script ${ROOTDISK} mklabel msdos
/sbin/parted --script --align optimal ${ROOTDISK} mkpart primary   1MiB 513MiB  # encrypted /boot
/sbin/parted --script --align optimal ${ROOTDISK} mkpart primary 513MiB   100%  # cryptroot
/sbin/parted --script ${ROOTDISK} set 1 boot on

Our steps now begin to differ from the original howto. Instead of one LUKS container, we create and mount two. I first started experimenting with containers named "cryptroot" and "cryptboot" and you can imagine how quickly that got confusing. Among other things, /boot holds the kernel so I call the LUKS container that will hold /boot "cryptkern" here to reduce confusion.

cryptsetup luksFormat -h sha512 ${ROOTDISK}1
cryptsetup luksFormat -h sha512 ${ROOTDISK}2
cryptsetup luksOpen  ${ROOTDISK}1 cryptkern
cryptsetup luksOpen  ${ROOTDISK}2 cryptroot

Install the ZFS utilities on the live CD session. You can also fetch these .deb files and store them locally, but that is not covered here.

apt update
apt install -y zfsutils-linux
zpool create -O mountpoint=none -O compression=lz4 -O atime=off -o ashift=12 zmint /dev/mapper/cryptroot
zfs   create                       zmint/root
zfs   create -o mountpoint=/       zmint/root/mint18
zpool set bootfs=zmint/root/mint18 zmint

Additional ZFS datasets can be created at this point, but this howto skips them for simplicity.

Export the zpool and reimport it under /mnt.

zpool export -a
zpool import -R /mnt zmint

Format and mount the /boot partition. Note that the mkfs.ext3 command formats the unlocked container device /dev/mapper/cryptkern, not ${ROOTDISK}1, which is the raw LUKS partition.

mkfs.ext3 /dev/mapper/cryptkern
mkdir /mnt/boot
mount -o noatime /dev/mapper/cryptkern /mnt/boot

Install the OS to /mnt. I use unsquashfs.

apt install -y squashfs-tools
time unsquashfs -f -d /mnt/ /media/cdrom/casper/filesystem.squashfs
cp -v -p /run/resolvconf/resolv.conf /mnt/run/resolvconf/resolv.conf
cp -v -p /media/cdrom/casper/vmlinuz /mnt/boot/vmlinuz-`uname -r`

A totally random aside:

There's a lot of banter online about the merits and deficiencies of using /dev/random versus /dev/urandom and which OSes have greater or fewer differences between the two. Even professionals like to bicker about it. Ultimately, I suggest we all start using /dev/arandom. He who fears PRNG-deciphering nation states and loves wearing a tinfoil hat can make his own decisions. Where you value speed, I find haveged to be great at dramatically improving PRNG performance if you trust your machine's rand()-making device. It's less useful here than when creating enormously-sized PGP keys, but it's handy to have around.

# Optional
apt install -y haveged

Once you trust your randomness, create a LUKS decryption key file for each container. This prevents us from needing to type decryption passwords for each container every time we boot.

time dd bs=512 count=4 if=/dev/random iflag=fullblock of=/mnt/boot/rootkey.bin
time dd bs=512 count=4 if=/dev/random iflag=fullblock of=/mnt/root/kernelkey.bin

cryptsetup luksAddKey ${ROOTDISK}1 /mnt/root/kernelkey.bin
cryptsetup luksAddKey ${ROOTDISK}2 /mnt/boot/rootkey.bin

chmod 0 /mnt/root/kernelkey.bin
chmod 0 /mnt/boot/rootkey.bin
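Each keyfile should come out to exactly 4 x 512 = 2048 bytes. Here's a quick sanity check, demonstrated against a throwaway file in /tmp (using /dev/urandom for speed; the real keys above live under /mnt):

```shell
# Create a demo keyfile the same way and confirm its size is 2048 bytes.
dd bs=512 count=4 if=/dev/urandom iflag=fullblock of=/tmp/demo-key.bin 2>/dev/null
stat -c %s /tmp/demo-key.bin   # prints 2048
```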

Mount important system directories into /mnt and chroot to the new system:

cd /
for i in /dev /dev/pts /proc /sys; do mount -B $i /mnt$i; done
chroot /mnt /bin/bash --login

Write your fstab:

/dev/mapper/cryptkern /boot ext3 defaults,noatime 0 2
/dev/mapper/cryptroot /      zfs defaults         0 0

Configure the system. This section is abbreviated for simplicity. See the original instructions for details.

passwd -u root
ln -s /proc/self/mounts /etc/mtab
echo myhostname > /etc/hostname
echo "127.0.1.1 myhostname" >> /etc/hosts
dpkg-reconfigure tzdata
# add a user account, etc...

Get the list of UUIDs for your system's partitions. You will use these UUIDs for decrypting the LUKS container partitions and in the bootloader.

ls -l /dev/disk/by-uuid

Write your crypttab. If your cryptkern partition is /dev/sda1, use the sda1 UUID for that line in crypttab and so on. crypttab is evaluated from top to bottom, so ordering matters here: I put the cryptroot line at the top and the cryptkern line beneath it, since the key for cryptkern is kept in the cryptroot container.

vi /etc/crypttab
# Add these lines:
cryptroot UUID=UUIDHERE /rootkey.bin        luks,keyscript=/bin/cat
cryptkern UUID=UUIDHERE /root/kernelkey.bin luks,discard
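Rather than pasting UUIDs in by hand, you can also generate the lines from blkid output. A sketch; the helper function and the placeholder UUID below are mine, not part of the original howto:

```shell
# Build a single crypttab line from its four fields:
# container name, UUID, key path, and options.
make_crypttab_line() {
  printf '%s UUID=%s %s %s\n' "$1" "$2" "$3" "$4"
}

# In the chroot, with real devices, this might look like:
#   make_crypttab_line cryptroot "$(blkid -s UUID -o value ${ROOTDISK}2)" \
#     /rootkey.bin luks,keyscript=/bin/cat >> /etc/crypttab
# Demonstrated here with a placeholder UUID:
make_crypttab_line cryptroot 1234-abcd /rootkey.bin luks,keyscript=/bin/cat
# prints: cryptroot UUID=1234-abcd /rootkey.bin luks,keyscript=/bin/cat
```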

Write a hook script for initramfs to get a copy of the key to decrypt cryptroot.

vi /etc/initramfs-tools/hooks/crypto_keyfile
# Add these lines:
#!/bin/sh
cp -p /boot/rootkey.bin "${DESTDIR}"

Make the hook script executable.

chmod +x /etc/initramfs-tools/hooks/crypto_keyfile
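For what it's worth, initramfs-tools calls every hook with the argument "prereqs" before running it for real. A more defensive version of the hook (my sketch, following initramfs-tools conventions) answers that probe explicitly. It's written to /tmp here for demonstration; the real path is /etc/initramfs-tools/hooks/crypto_keyfile:

```shell
# Write a more defensive hook script to a demo path.
cat > /tmp/crypto_keyfile <<'EOF'
#!/bin/sh
# initramfs-tools invokes hooks with "prereqs" first; answer and exit.
PREREQ=""
case "$1" in
  prereqs)
    echo "$PREREQ"
    exit 0
    ;;
esac
# Copy the cryptroot key into the initramfs image being built.
cp -p /boot/rootkey.bin "${DESTDIR}/"
EOF
chmod +x /tmp/crypto_keyfile

# Exercise the prereqs path: prints an empty line and exits 0.
/tmp/crypto_keyfile prereqs
```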

Install ZFS and a ZFS-aware initramfs on the installed system.

apt update # again
apt install -y zfsutils-linux
apt install -y zfs-initramfs
cp -v -p /lib/udev/rules.d/60-zpool.rules /etc/udev/rules.d/

Edit /etc/default/grub. This is where you'll point to the encrypted root partition.

vi /etc/default/grub
# Make the following changes
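The exact edits depend on your layout, but for the pool and dataset names used in this howto, the two relevant changes look like this (a sketch based on GRUB's LUKS support; verify the values against your own system):

```shell
# /etc/default/grub -- assumed typical changes for this layout:
# let GRUB unlock the LUKS container holding /boot at boot time...
GRUB_ENABLE_CRYPTODISK=y
# ...and point the kernel at the ZFS root dataset created earlier.
GRUB_CMDLINE_LINUX="boot=zfs root=ZFS=zmint/root/mint18"
```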

Note that you're NOT putting the cryptkern UUID and LUKS container here. Your boot sequence will be:

  1. Power on
  2. Get password for cryptkern
  3. Load initramfs
  4. initramfs loads key for cryptroot from /boot
  5. Mount /
  6. System loads key to mount cryptkern from /root

You could use passwords for all of your LUKS containers if you really wanted to do so, but keys are quicker and relatively safe if you can wrap your head around this ping-ponging of device decryption.

Symlink your cryptroot device under /dev so updating the bootloader won't throw an error.

ln -sf /dev/mapper/cryptroot /dev
echo 'ENV{DM_NAME}=="cryptroot", SYMLINK+="cryptroot"' > /etc/udev/rules.d/99-cryptroot.rules

Update your initramfs and bootloader.

update-initramfs -c -k all
grub-install /dev/sda

Quit the chroot, unmount everything, and reboot.

umount /mnt/dev/pts
umount /mnt/dev
umount /mnt/proc
umount /mnt/sys
umount /mnt/boot
zfs umount -a
zpool export -a

If all goes according to plan, after POST you'll get a screen like this:

Attempting to decrypt master key...
Enter passphrase for hd0.msdos1 (a9e29f6295bc49919d5ed7820f941974):

That's hd0 (your disk), msdos1 (your MBR partition table), and as an example, a9e29f6295bc49919d5ed7820f941974, which corresponds to the UUID of your cryptkern LUKS container as per /dev/disk/by-uuid. If you type your password correctly, you'll see this:

Slot 0 opened

And your boot sequence will proceed as intended. Whenever a LUKS container needs to be decrypted, the boot sequence will use the keys you specified in /etc/crypttab instead of prompting for manual input.

A "review" of Michael Warren Lucas - Immortal Clay

Michael W. Lucas writes very good technical books. They hit that rare sweet spot between dry, abstract classroom theory and rote copy-and-paste, giving you practical examples as well as enough background on the subject to inform you as to why you would want to do it his way. His guides are invaluable, especially with complex subjects like ZFS, which is no mere file system but rather an amorphous stack of cooperating storage technologies and principles. It's a godsend to have a concise explanation handed to you that forgoes much of the academics and gives you usable, real-world guidance.

What I'm trying to say is that Michael W. Lucas can write tech books. But what about fiction?

Lucas writes his fiction under the name Michael Warren Lucas, presumably to separate it from his fact. He ran a sale on some of his titles earlier this year and encouraged me to try his ostensibly horror-ish title Immortal Clay. So I did.

Admittedly, I approached this story with a fair amount of trepidation. A good writer cannot necessarily write everything equally well and I've suffered a number of authors who felt brave enough to try to genre hop before they were ready. Nonetheless, a low e-book price was encouraging and if I didn't like the book, I wouldn't be out too much. Immortal Clay describes itself as what would happen if The Thing from John Carpenter's The Thing had won. This was a good sign.

You know The Thing. Researchers in Antarctica find a dog that is really a shapeshifting alien that eats living organisms and creates perfect clones of them. Kurt Russell fights it with a flamethrower. But that's where The Thing's story ends, and that is more or less where Immortal Clay picks up.

I was ambivalent about this because it's very easy to conjure up a "what if" concept and very difficult to deliver a complex story based on it. There is a wonderful short story about the events of The Thing told from The Thing's perspective. It would be hard to top that.

Fortunately, any concerns I may have had about Lucas's ability to deliver on the promise of his premise were allayed by the end of the first chapter. This is a book that starts with the end of the world and just keeps going from there. When a space alien that perfectly clones life forms finally conquers the planet, you wouldn't think there would be much more of a story to tell. You'd be wrong.

Our protagonist dies in the prologue. Or rather, the alien duplicate that is our protagonist is copied from a person who dies in the prologue. It possesses all of his physical characteristics and behaviors, and retains all of his memories. He was a police detective before he was eaten and the alien copy keeps his inherent desire for justice, even if he now lives in a bizarre realm where there really is no longer any specific system of law anymore. How do you solve crimes, even murders, when you aren't completely sure if alien-copied things can be killed?

Immortal Clay is not pure horror, per se. It is a post-apocalyptic suburban mystery novel. It certainly draws inspiration from The Thing, and James Gunn's Slither, but it has as much in common with Jean-Paul Sartre and Tim Burton as it does with Raymond Chandler. Our hero is just as confused as Alec Baldwin and Geena Davis are at the start of Beetlejuice, stuck in their home and unsure of their fates until they find their guidebook for the afterlife. You remember it: it reads like stereo instructions. Our hero doesn't get the benefit of a book or a grizzled social worker to spell it out for him, so he asks himself some compelling philosophical questions without always getting answers. Turns out if an alien eats your homeworld and spits out perfect copies of everything, those copies will have a lot of psychological problems and, oh by the way, all the old rules of what constitutes "alive" and "dead" are right out the window.

Immortal Clay is a fun romp through a community of "survivors" with quirky personalities and real survival problems, all trying to make the best of things after the most absurd of unnatural disasters has ruined their planet. They are left to pick up the pieces and try to put them back together, even if there's no instruction manual. It has some genuinely horrific imagery and some genuinely emotionally harrowing moments, especially around our not-really "survivors" having very real survivors' guilt.

The mystery portions of the story are well-paced and Lucas avoids the contrivance of an oh-I'm-so-clever Agatha Christie denouement. It's a good old-fashioned whodunnit, with the added complications of "mass extinction" and "civilization was eaten by a space monster" thrown in to keep things interesting. I found myself unable to put this story down once I'd picked it up. Lucas's fiction style leans towards short, brutally plot-propelling chapters that break the action up into even, fitting scenes, and he clearly pays very strict deference to his outline. Even at 51 chapters, it's a quick read. The plot never lags and there is no unnecessarily flowery "let me prove to you I have a thesaurus" prose. Our protagonist never spends, for no discernible reason, scores of pages describing in agonizing detail how he eats a bowl of cereal. He's got a crime to solve, dammit, and he's going to solve it or die trying.

If he can die, that is. He isn't certain he can be killed.

But someone is trying to figure it out for him for sure. Or rather, some Thing.


How to Create an OpenBSD VM in Azure

[Thinking] Oh glory of glories. Oh heavenly testament to the eternal majesty of God's creation.

[Out loud] Holy macaroni!

— Homer takes his chances in the mystery wall

Yesterday I discovered that Microsoft Azure has officially announced that OpenBSD can run on their platform. And there was much rejoicing.

Unfortunately, the provided "guidance" document for setting it up is almost entirely correct. "Almost" here meaning that the directions look right. They seem right. But if you follow the directions, you are not going to have a working product and will have neither OpenBSD nor joy. OpenBSD is joy, joy is OpenBSD. We can do better. We must do better.

The instructions are clear enough, but the tutorial is a Dürer's Rhinoceros of steps that seem right upon first glance but were not tested, not verified for correctness, and not even reviewed by someone intimately familiar with the process.

If you attempt to run the steps in the guidance document as is, you'll get this error when you try to create the VM:

az vm create \
  --resource-group myResourceGroup \
  --name myOpenBSD61 \
  --image "https://mystorageaccount.blob.core.windows.net/vhds/OpenBSD61.vhd" \
  --os-type linux \
  --admin-username azureuser \
  --ssh-key-value ~/.ssh/id_rsa.pub

invalid usage for storage profile: create unmanaged OS disk created from generalized VHD:
  missing: --use-unmanaged-disk

We are not content to curse this darkness. Instead we shall light a candle and shine the luminous glory of truth, like a beacon, upon this ominous dusky horizon of bad documentation.

How to Actually Create an OpenBSD VM in Azure

N.B. You should be comfortable with Azure resources and with the OpenBSD operating system before continuing. This tutorial is easy, but it is not for the timid.

First, create your OpenBSD VM locally. There are a bunch of ways to do this, but the end result must be a fixed-size .VHD file. Converting from one virtualization platform to another is outside the scope of this tutorial. Assuming you have a Windows Hyper-V instance, you can create a 2 GiB .VHD file easily.

From an elevated PowerShell prompt:

$size_bytes = 2 * [Math]::Pow(2,30)
$file_path  = 'C:\Users\Public\Documents\Hyper-V\Virtual hard disks\openbsd.vhd'

New-VHD -Fixed -SizeBytes $size_bytes -Path $file_path

Set your $size_bytes and $file_path accordingly.
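For the record, that size computation works out as follows, and keeping the VHD an exact number of MiB avoids alignment complaints when Azure later consumes it as a page blob (a quick check in plain shell):

```shell
# 2 GiB expressed in bytes: 2 * 2^30.
size_bytes=$((2 * 1024 * 1024 * 1024))
echo "$size_bytes"                       # prints 2147483648
# Sanity check: the size is a whole number of MiB.
echo $((size_bytes % (1024 * 1024)))     # prints 0
```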

Second, attach this .VHD file to a new VM and install OpenBSD on it. Installing OpenBSD is more difficult to do than falling over, but not by much. I like to mount installXX.iso to the virtual DVD drive of the VM and do the install manually, but you can configure an autoinstall process if you choose.

Customize your OpenBSD VM as you wish. You can partition the virtual disk to your liking, add software, and so on. Be aware:

You MUST enable sshd.

You can answer "yes" to this question in the installer, or answer "no" and set up sshd differently, but you must set up SSH on this machine. If you don't do this, you won't be able to remote into your VM.

The instructions for prepping the VM in the guidance document are correct. As root:

echo dhcp > /etc/hostname.hvn0
echo stty com0 115200 >> /etc/boot.conf
echo set tty com0 >> /etc/boot.conf

# Choose a mirror from the official list: https://www.openbsd.org/ftp.html
echo https://my.favorite.mirror.here/pub/OpenBSD > /etc/installurl

pkg_add py-setuptools openssl git
ln -sf /usr/local/bin/python2.7        /usr/local/bin/python
ln -sf /usr/local/bin/python2.7-2to3   /usr/local/bin/2to3
ln -sf /usr/local/bin/python2.7-config /usr/local/bin/python-config
ln -sf /usr/local/bin/pydoc2.7         /usr/local/bin/pydoc

git clone https://github.com/Azure/WALinuxAgent
cd WALinuxAgent
python setup.py install
waagent -register-service

Take a moment here to check that waagent is running and the log exists:

ps -auxww | grep waagent
tail -f /var/log/waagent.log

There will be numerous direct-to-screen errors about failures to mount discs. Don't panic.

When the VM is configured to your wishes, prep it for running in Azure and shut it down:

waagent -deprovision+user -force
halt -p

At this point, waagent has disabled your root user account, so good luck getting back into your VM without a good sudo or doas config.

Note that you do not need to prepare a user account for this VM before you deploy it in the cloud. You do not need to:

  1. Create a local super user
  2. Make the be-all, end-all of kickass root passwords
  3. Add anything to /etc/doas.conf
  4. Set up SSH keys

You can do these things if you like, but it is not strictly required.

The guidance document pushes the Azure CLI 2.0 tool, so we'll use it in this tutorial, but you have alternatives. You can:

  1. Do everything through the Azure portal website.
  2. Use the Azure PowerShell cmdlets. There's a direct link to the installer.
  3. Use Azure CLI 2.0. You can install it locally. We're not going to do that.
  4. Use Azure CLI 2.0 through Azure Cloud Shell. This is a built-in, roaming shell account you access through the portal. It will create a dedicated storage account in your subscription, so it's not exactly free-free, but it leaves your local machine alone while still being handy.

Azure Cloud Shell will eventually allow you to use bash or PowerShell as your working shell, but at the time of writing it only offers bash. This is sufficient for our purposes.

Caveats about Azure Cloud Shell: It does not work equally well in all browsers. I've had luck with it in Firefox and Edge, but not in a Chromium-based browser. Your actual mileage may vary. Also, muscle memory may compel you to Ctrl-W a botched command you've typed. In bash, this deletes the word before the cursor. In Firefox, this closes the window without a warning. Sigh. Azure Cloud Shell does not maintain session state. While your Azure Cloud Shell files may be persistent, it does not provide command history between sessions and it doesn't offer tmux or GNU screen. And I have had limited luck with copying and pasting. Welcome to the future, kids.

In Azure Cloud Shell, you will check your Azure subscriptions and determine which one will keep your OpenBSD VMs. You may have more than one Azure subscription, so find and set the name or the GUID of your desired subscription:

az account list --output table
az account set --subscription mysubscription

Create a dedicated Azure resource group for your VM disk images. Think of it as a library that you can maintain and reference throughout the other groups in your subscription.

az group create \
  --name mylibrary \
  --location westus2

This tutorial uses "westus2" as the desired geographical location. Adjust this accordingly.

Create a storage account, and a blob container within that storage account to hold your .VHD files.

az storage account create \
  --name myopenbsdimagelibrary \
  --resource-group mylibrary \
  --location westus2 \
  --sku Standard_LRS
az storage container create \
  --name vhds \
  --account-name myopenbsdimagelibrary \
  --public-access off

When the storage account and container are created, upload your prepped OpenBSD .VHD file to it. You can do this in several different ways, but the way I recommend is to use a free utility called AzCopy. Uploading things to the cloud always sucks, but this tool sucks less.

Download and install AzCopy.exe, probably to "C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe". There are a bunch of different options that AzCopy supports, but the real reason I recommend it, especially after having used it to upload terabytes of information, is that AzCopy supports restartability. When you have hundreds of gigs to upload and the connection craps out in the middle of the night, you have to restart the upload. AzCopy can keep you from having to restart all over from byte 0.

An example of how to upload your OpenBSD .VHD file to your new storage account container would be, all on one line (substitute your own storage account key):

AzCopy /Source:"C:\Users\Public\Documents\Hyper-V\Virtual hard disks" /Dest:https://myopenbsdimagelibrary.blob.core.windows.net/vhds /DestKey:MYBIGLONGACCESSKEYGOESHERE== /Pattern:openbsd.vhd /BlobType:page

The critical thing here is to set /BlobType:page. Not only does this potentially make the upload much faster (nearly 400% faster in my preliminary tests), it will allow you to create a custom Azure image from the .VHD file. Block blobs cannot be used for images. The guidance document skips this crucial imaging step entirely.

Sometimes AzCopy fails in mid-copy so I run it in an overly fancy loop until it exits 0 or I hit my max loop count. You can use my PowerShell script azcopy_upload.ps1 like so:

$azcopy_properties = @{
  'Path'        = 'C:\Users\Public\Documents\Hyper-V\Virtual hard disks';
  'KeyFile'     = 'C:\Users\myalias\Desktop\keyfile.xml';
  'Destination' = 'https://myopenbsdimagelibrary.blob.core.windows.net/vhds';
  'BlobType'    = 'page';
  'Pattern'     = 'openbsd.vhd';
}
.\azcopy_upload.ps1 @azcopy_properties

Be aware that the KeyFile value this script requires is an XML-encoded file containing the SecureString object that contains your Azure storage account key. To create this file you can run these steps:

$sec_str = ConvertTo-SecureString -AsPlainText -Force -String 'MYBIGLONGACCESSKEYGOESHERE=='
Export-CliXml -InputObject $sec_str -Force -Encoding 'UTF8' -Path 'C:\Users\myalias\Desktop\keyfile.xml'

When the .VHD file is uploaded to your storage account, create a custom Azure image from it:

az image create \
  --name myopenbsdimage \
  --resource-group mylibrary \
  --location westus2 \
  --os-type Linux \
  --source https://myopenbsdimagelibrary.blob.core.windows.net/vhds/openbsd.vhd

When the image is created, you're in business. Keep your image library pristine by making your new OpenBSD VMs in a separate resource group in the same subscription. If you screw something up in the VM, it's faster and easier to delete the entire group than to individually delete the resources within the group: virtual NICs, vnets, disks, et cetera.

az group create \
  --name openbsdvms \
  --location westus2
az vm create \
  --resource-group openbsdvms \
  --name openbsdvm1 \
  --public-ip-address-allocation dynamic \
  --size Basic_A1 \
  --storage-sku Standard_LRS \
  --admin-username azureuser \
  --generate-ssh-keys \
  --image $(az image show \
    --resource-group mylibrary \
    --name myopenbsdimage \
    --output tsv \
    --query id)

You can adjust your --size and --storage-sku settings depending on your performance preferences. Some VM sizes and storage SKUs are incompatible, so you may need to adjust these to make everyone happy. If you need a static IP, set --public-ip-address-allocation to static.

It should provision a new VM in a few minutes. Get the IP address of the newly-created VM and ssh into it directly from Azure Cloud Shell:

ssh -l azureuser $(az vm list-ip-addresses \
  --resource-group openbsdvms \
  --name openbsdvm1 \
  --output tsv \
  --query '[].virtualMachine.network.publicIpAddresses[].ipAddress')

And that, dear reader, is how to really get OpenBSD running in Azure. Crackin' the whip, secure by default.


"Information for Non-Endorsed Distributions", aka, "Important Stuff for Fringe OSes": https://docs.microsoft.com/en-us/azure/virtual-machines/linux/create-upload-generic


A "review" of Deadpool

A Juice Newton opening credits sequence. 10/10



A "review" of SPACEPLAN

SPACEPLAN has just been released for Steam. Billing itself as an "experimental piece of interaction based partly on a total misunderstanding of Stephen Hawking's A Brief History of Time," SPACEPLAN is a delightful little game that began as a browser-based story at http://jhollands.co.uk/spaceplan/ and is well worth the investment of a few million clicks.

I stumbled across SPACEPLAN some time last year and immediately found its story entertaining, clever, and compelling. It's impossible to go into detail about the events of SPACEPLAN without ruining some aspect of the plot, so suffice it to say that it is a discovery-driven game that uses mouse clicks as its primary means of game currency. You end up clicking in SPACEPLAN. You click a lot.

Or, conversely, you don't click very much. I counted and you can, technically, complete the game using (32 + 6 + 10) mouse clicks (that's energy-generating clicks, not energy-spending clicks, of which there are at minimum 11). So it's possible to complete the game with only a few dozen clicks if you're patient. Veeeeeeeeery patient. The appeal of SPACEPLAN is that you need to go and accomplish things, and to do that you need energy, and you accrue energy by either (a) clicking the mouse, or (b) waiting for your energy accumulator to accumulate. It's up to you how much you want to click beyond the minimum 15 initial clicks, compelled only by your own sense of curiosity for what's going to happen next. You can do a purely click-fueled run, or you can do a 25-click run, or anything in between. It's up to you.

From there it's left up to the player to balance patience against progress: whether to spend your energy enhancing your exploration, thereby advancing the story, or to spend it improving your energy collection technology, delaying immediate story progression in order to accelerate the telling of that same story later. The entire game is a representation of the procrastinator's dilemma: do you begin walking to your destination along a dirt road today or do you wait for a six-lane express motorway to get built tomorrow?

SPACEPLAN is not the kind of game that would be fun to stream online. There will never be SPACEPLAN tournaments. You set up your energy collection process, whether that be manual or automatic, and then you wait. You wait until you have enough energy to buy the next thing you need to unlock the next story element. SPACEPLAN is more interactive than Progress Quest but, depending on how you play it, it doesn't have to be much more interactive.

If you intend to replay the game for analytical purposes, you'll want to invest in an autoclick utility. These are widely available for free online because simulating a mouse click is a common Windows UI programming exercise, and clicking something over and over again is just monotonous enough to warrant creating a simple automation utility.

So, thusly armed with an autoclick program that looks like My First Visual Basic GUI and works like a champ, I set out measuring how expensive the inventory of SPACEPLAN is, and only after completing the tally did it occur to me that I could not share it without spoiling the game for a new player.

In short: SPACEPLAN leans heavily on the utility of potatoes. Like, a lot. Potatoes are so useful in the universe of SPACEPLAN that it would make Mark Watney jealous. And so you click ever onward, unto a tuber-based journey that reaches into the very deepest of our cosmological questions about the origin and nature of time and space. And it misrepresents the principles and concepts in A Brief History of Time, phenomenally. Click. Explore. Repeat.