Install FreeBSD 14.1 on a Hetzner server

Hetzner no longer offers direct install of FreeBSD, but we can do it ourselves. Here is how :)

Boot the server into rescue mode

Boot the server into Hetzner's Debian-based rescue mode, then ssh into it.

The Hetzner rescue image will tell you hardware details about the server in the login banner. For example, with one of my servers I see:

Hardware data:

   CPU1: AMD Ryzen 7 3700X 8-Core Processor (Cores 16)
   Memory:  64248 MB
   Disk /dev/nvme0n1: 1024 GB (=> 953 GiB) doesn't contain a valid partition table
   Disk /dev/nvme1n1: 1024 GB (=> 953 GiB) doesn't contain a valid partition table
   Disk /dev/nvme2n1: 1024 GB (=> 953 GiB) doesn't contain a valid partition table
   Disk /dev/nvme3n1: 1024 GB (=> 953 GiB) doesn't contain a valid partition table
   Total capacity 3815 GiB with 4 Disks

Network data:
   eth0  LINK: yes
         MAC:  xx:xx:xx:xx:xx:xx
         IP:   xxx.xxx.xxx.xxx
         IPv6: xxxx:xxx:xxx:xxxx::x/64
         RealTek RTL-8169 Gigabit Ethernet driver

(MAC, IPv4, and IPv6 addresses redacted by me in the example output above. You'll see actual values.)

You can also run lsblk to show the drives:

lsblk

Output:

NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0     7:0    0   3.2G  1 loop
nvme2n1 259:0    0 953.9G  0 disk
nvme3n1 259:1    0 953.9G  0 disk
nvme1n1 259:2    0 953.9G  0 disk
nvme0n1 259:3    0 953.9G  0 disk

Install tmux.

apt install -y tmux

Open a tmux session, so that if we lose the connection to the server during setup, we can reattach to the session and pick up right where we left off.

tmux

Caution

The disadvantage of running in tmux is that it somewhat garbles the text drawn by the FreeBSD installer.

Tip

In a future update of this guide, I will check whether screen works better, or whether there are steps we can take to keep bsdinstall from garbling the text when running in tmux.

Retrieve mfsBSD and run it in QEMU with raw drives attached

"basically have a mini VPS with mfsbsd running with real disk passthrough and console access, just like a KVM, so I can install as usual - and then I can even test my installation directly by booting from it in the same way! Then when it works I just boot the server normally (i.e. directly into FreeBSD) and if I ever b0rk something up I boot the Linux rescue image and run mfsbsd again!"

Source: https://www.reddit.com/r/freebsd/comments/wf7h34/hetzner_has_silently_dropped_support_for_freebsd/ijcxgvb/

Retrieve mfsBSD.

wget https://mfsbsd.vx.sk/files/iso/14/amd64/mfsbsd-14.1-RELEASE-amd64.iso
sha256sum mfsbsd-14.1-RELEASE-amd64.iso

SHA-256 hashsum:

c3bf0eb314bfcc372eccc30917a32d156416f6ad23b63ff37fe4034d533fc09a  mfsbsd-14.1-RELEASE-amd64.iso
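Rather than eyeballing the hex by hand, you can compare the hash in a script. This is a small sketch; the helper name verify_sha256 is mine, not a standard tool, and the hash below is the one listed above.

```shell
# verify_sha256 <file> <expected-hash>: succeeds only if the SHA-256 matches.
# Illustrative helper, not a standard command.
verify_sha256() {
    expected="$2"
    actual=$(sha256sum "$1" | awk '{print $1}')
    [ "$actual" = "$expected" ]
}

# On the rescue system, after the wget above:
# verify_sha256 mfsbsd-14.1-RELEASE-amd64.iso \
#     c3bf0eb314bfcc372eccc30917a32d156416f6ad23b63ff37fe4034d533fc09a \
#     && echo OK || echo "CHECKSUM MISMATCH - do not boot this image"
```

If the hashes don't match, re-download before booting anything from the image.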

Start mfsBSD in QEMU with the raw drives from the machine attached:

qemu-system-x86_64 \
    -cdrom mfsbsd-14.1-RELEASE-amd64.iso \
    \
    -drive format=raw,file=/dev/nvme0n1,if=virtio \
    -drive format=raw,file=/dev/nvme1n1,if=virtio \
    -drive format=raw,file=/dev/nvme2n1,if=virtio \
    -drive format=raw,file=/dev/nvme3n1,if=virtio \
    \
    -display curses \
    -boot d \
    -m 8G

Start install

Log in from the console

  • login: root
  • password: mfsroot

Proceed to either of the following:

  • Perform a standard install of FreeBSD as described in 01_standard_install.md below, or
  • make a custom install of FreeBSD as described in 02_custom_install.md below

Standard install of FreeBSD

Start the FreeBSD installer

bsdinstall

Proceed with the installation. When done, power off the QEMU VM:

poweroff

Check that it works

Now boot from the physical drives in QEMU without the CD ISO attached.

qemu-system-x86_64 \
    -drive format=raw,file=/dev/nvme0n1,if=virtio \
    -drive format=raw,file=/dev/nvme1n1,if=virtio \
    -drive format=raw,file=/dev/nvme2n1,if=virtio \
    -drive format=raw,file=/dev/nvme3n1,if=virtio \
    \
    -nic user,hostfwd=tcp::2222-:22 \
    -display curses \
    -boot d \
    -m 8G

Before you reboot the host machine

QEMU provides an emulated NIC to the VM. So if the physical NIC in the host needs a different driver, the NIC name in the VM will differ from the name it gets when running FreeBSD on the hardware.

The QEMU NIC will appear as em0.

However, in my case the physical NIC in the machine uses a different driver and appears as re0 when running FreeBSD on the hardware.

The Hetzner Debian-based rescue system gives you a minimal description of the machine's NIC when you ssh into it. Make note of that. If it's Intel, you can put entries for both igb0 and em0 in your /etc/rc.conf; when you boot and ssh into the machine, you will see which one was used, and you can then update /etc/rc.conf accordingly.

If the NIC has a RealTek chipset, it'll probably be re0 that you should put an entry for in your /etc/rc.conf.

If the NIC is neither Intel nor RealTek, you will have to dig up more details about it using the tools in the Hetzner Debian-based rescue system, figure out which FreeBSD NIC driver matches, and edit your /etc/rc.conf accordingly.
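The vendor-to-driver guesswork above can be sketched as a small shell helper. The function name and the driver list are illustrative only, covering just the cases discussed here; they are not an exhaustive mapping.

```shell
# guess_freebsd_nic <banner-line>: map the NIC description from the Hetzner
# rescue login banner to the FreeBSD interface name(s) to try in rc.conf.
# Illustrative sketch; only covers the vendors mentioned in this guide.
guess_freebsd_nic() {
    case "$1" in
        *[Ii]ntel*)      echo "em0 igb0" ;;  # add entries for both, see which attaches
        *[Rr]eal[Tt]ek*) echo "re0" ;;
        *) echo "unknown - check the FreeBSD hardware notes for your chipset" ;;
    esac
}

guess_freebsd_nic "RealTek RTL-8169 Gigabit Ethernet driver"   # prints "re0"
```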

For reference, here is what the complete /etc/rc.conf from one of my Hetzner servers looks like currently:

clear_tmp_enable="YES"
syslogd_flags="-ss"
hostname="de4"

# Used when booting in Qemu
ifconfig_em0="DHCP"
ifconfig_em0_ipv6="inet6 accept_rtadv"

# Used when booting on hardware
ifconfig_re0_name="extif"
ifconfig_extif="DHCP"
#ifconfig_extif_ipv6="inet6 2a01:4f9:5a:16cb:876a:bce7:b3c8:118a prefixlen 80"
ipv6_defaultrouter="fe80::1%extif"

local_unbound_enable="YES"

sshd_enable="YES"

ntpd_enable="YES"
ntpd_sync_on_start="YES"

moused_nondefault_enable="NO"

# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"

zfs_enable="YES"

wireguard_enable="YES"
wireguard_interfaces="wg0"

jail_enable="YES"

Moment of truth

Reboot the host machine. If all goes well, you'll be able to ssh into it and find a running FreeBSD system :D

That's it, you're done!

Custom install of FreeBSD

For many (most?) purposes, the standard install described above is sufficient. It's straightforward, and easy to fix when something breaks.

The standard install described above however does not encrypt any parts of the system, not even the home directories of users. And while you can add additional individual encrypted datasets to your ZFS pool even with a standard install, you will not be able to turn on encryption for any of the ZFS datasets that have been created by the installer. Wouldn't it be nice if we could reduce the amount of data that is kept unencrypted at rest at least a bit? One of the motivations of the custom install described here is to do exactly that.

Defining our goals

For my server there are some specific things I am interested in achieving:

  • Keep as much of the system as possible encrypted at rest. With data encrypted at rest, and the keys to decrypt that data kept separate, we can recycle the harddrives in the future without needing to do overwrites of the drives first. This is desirable for multiple reasons:
    • Big drives take a long time to fully overwrite. Especially so when you do one pass of writing zeros followed by one or more passes of writing random data to completely cover the drives.
    • Hardware failures can leave us unable to overwrite the data, fully or even partially, meaning that safe disposal will hinge on being able to sufficiently destroy the drives physically.
  • The base system should be possible to throw away and set up again quickly and easily.
    • Corollary: None of the system directory trees should be included in backups. Not even /usr/home as a whole. We'll get back to this.
  • Anything that is important should live in jails, with their own ZFS datasets.
    • This way, we can back up as well as restore or rollback to past versions of those "things" mostly independently of the host system itself.

Initial install

We will start off with a standard install.

This will form the basis for our "outer" base system. We will use it to boot the server into a state where we can ssh in and unlock our remaining datasets, from which we can then reboot into our "inner" base system.

It'll work similarly to how it's done in https://github.com/emtiu/freebsd-outerbase

Deciding on the configuration of your ZFS pool(s)

On the server I am currently setting up while updating this guide, we have 4 drives total. I go back and forth from time to time between using separate pools for system and data, and setting up servers with one big pool for everything. The last couple of servers I set up used one big pool for both. This time around, I will use a single drive for the system and a pool of 3 drives for data.

There are some tradeoffs, in both directions.

Disadvantages of having a separate pool for the system include:

  • If the system pool consists of a single drive, we lose out on some of the ZFS healing properties for the system install itself.
  • If the total number of drives is low, we lose out on drives for our data pool that could otherwise provide additional redundancy or capacity for our data.

The main advantage of having a separate pool for the system, as I see it, is this:

  • As long as you remember which drive or drives the system was installed to, you can completely reinstall the system, overwriting everything previously on those drives, while the data you want to keep sits safely in its separate pool on separate drives.

Note

When I say "remember which", I really mean "write it down somewhere obvious, where you can find it".

Which configuration to use, both in the number of pools and in the layout of the ZFS pool(s) themselves, will depend on how many drives you have and on what your routines for managing backups and restores will be like.

A word on backups

Regardless of whether you choose to keep separate pools for system and data, or everything on one pool, there is one thing that is more important than all else:

Important

Always backup your data! This means:

  • Having backups in other physical locations. For example:
    • One encrypted copy of your backups on a separate server, in a different data center, and
    • One encrypted copy of your backups at home (if the data is yours) or office (if the data belongs to a business with an office), and
    • One encrypted copy of your backups in the cloud.
  • Regularly verifying that backups are kept up to date, and that the backups are complete and correct.
  • Regularly verifying that you can actually restore from the backups.
  • Occasionally verifying that you can set up a new server with the services that you need in order to replace the current server, so that whatever serving or processing you are doing with your data on your current server can continue there. Ideally with as little interruption to service as possible.

If you can't afford to keep as many as three separate backup locations now, start with just one of them. One is much better than none, even though more is better.

Configuring backups is beyond the scope of this guide. I will probably write a separate guide on that topic in the future. When that happens I will add a link to that guide from here.

Performing the install

Run

bsdinstall
  • For hostname I choose stage4, because the normal boot itself has 3 stages and this will be, in a sense, our fourth stage of booting.
  • At the partitioning step we do guided root on ZFS, and we select:
    • Pool type/drives to consist of a single vdev with one drive
    • Encrypt disks: NO.
      • Remember, this is the "outer" base system. It is unencrypted, but will hold none of our service configurations or data beyond a default install running an SSH server.
    • Partition scheme "GPT (UEFI)"
  • At the user creation step, after you've created a password for root, create a user that has "boot" as part of its name, to distinguish it from the kinds of users you normally make on your servers. For example, I usually make my user named "erikn" but here I name it erikboot. When asked if you want to add the user to any additional groups, make sure to add the user to the wheel group.
  • Keep ssh selected as a service to run.
  • For all other steps make whatever choices you'd normally make according to your preference.

Finish initial steps

Export the zpool and then power off the VM.

zpool export zroot
poweroff

Check that it works so far

Now it's time to boot the VM again, but without the mfsBSD media.

In order to boot EFI in QEMU we need some extra files from https://www.kraxel.org/repos/jenkins/edk2/ as mentioned at https://wiki.freebsd.org/UEFI and also https://joonas.fi/2021/02/uefi-pc-boot-process-and-uefi-with-qemu/

wget https://www.kraxel.org/repos/jenkins/edk2/edk2.git-ovmf-x64-0-20220719.209.gf0064ac3af.EOL.no.nore.updates.noarch.rpm
sha256sum edk2.git-ovmf-x64-0-20220719.209.gf0064ac3af.EOL.no.nore.updates.noarch.rpm
bc42937c5c50b552dd7cd05ed535ed2b8aed30b04060032b7648ffeee2defb8e  edk2.git-ovmf-x64-0-20220719.209.gf0064ac3af.EOL.no.nore.updates.noarch.rpm

Extract.

apt install -y rpm2cpio
rpm2cpio edk2.git-ovmf-x64-0-20220719.209.gf0064ac3af.EOL.no.nore.updates.noarch.rpm | cpio -idmv
./usr/share/doc/edk2.git-ovmf-x64
./usr/share/doc/edk2.git-ovmf-x64/README
./usr/share/edk2.git
./usr/share/edk2.git/ovmf-x64
./usr/share/edk2.git/ovmf-x64/MICROVM.fd
./usr/share/edk2.git/ovmf-x64/OVMF-need-smm.fd
./usr/share/edk2.git/ovmf-x64/OVMF-pure-efi.fd
./usr/share/edk2.git/ovmf-x64/OVMF-with-csm.fd
./usr/share/edk2.git/ovmf-x64/OVMF_CODE-need-smm.fd
./usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd
./usr/share/edk2.git/ovmf-x64/OVMF_CODE-with-csm.fd
./usr/share/edk2.git/ovmf-x64/OVMF_VARS-need-smm.fd
./usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd
./usr/share/edk2.git/ovmf-x64/OVMF_VARS-with-csm.fd
./usr/share/edk2.git/ovmf-x64/UefiShell.iso
./usr/share/qemu/firmware/80-ovmf-x64-git-need-smm.json
./usr/share/qemu/firmware/81-ovmf-x64-git-pure-efi.json
./usr/share/qemu/firmware/82-ovmf-x64-git-with-csm.json
37888 blocks

Boot

qemu-system-x86_64 \
    \
    -drive if=pflash,format=raw,unit=0,readonly=on,file=usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
    -drive if=pflash,format=raw,unit=1,file=usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd \
    \
    -drive format=raw,file=/dev/nvme0n1,if=virtio \
    -drive format=raw,file=/dev/nvme1n1,if=virtio \
    -drive format=raw,file=/dev/nvme2n1,if=virtio \
    -drive format=raw,file=/dev/nvme3n1,if=virtio \
    \
    -nic user,hostfwd=tcp::2222-:22 \
    -vnc 127.0.0.1:1,password=on -k en-us -monitor stdio \
    -boot d \
    -m 8G

From the QEMU console, use the command change vnc password to set a VNC password, as per https://wiki.archlinux.org/title/QEMU#VNC

Then forward port 5901 from the server to your machine over SSH and then connect to VNC over the forwarded port.

Run this command in a new terminal on your computer to forward the port:

ssh -L 25901:127.0.0.1:5901 yourserver.example.com

(Substitute your actual server DNS name or IP address for yourserver.example.com.)

Then connect to VNC from your machine using the forwarded port 127.0.0.1:25901.

VNC should show the FreeBSD console login prompt. Log in.

Check the pool info and the datasets that have been created so far.

zpool status
  pool: zroot
 state: ONLINE
config:

	NAME        STATE     READ WRITE CKSUM
	zroot       ONLINE       0     0     0
	  vtbd0p3   ONLINE       0     0     0

errors: No known data errors
zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
zroot                 810M   914G    96K  /zroot
zroot/ROOT            807M   914G    96K  none
zroot/ROOT/default    807M   914G   807M  /
zroot/home            224K   914G    96K  /home
zroot/home/erikboot   128K   914G   128K  /home/erikboot
zroot/tmp              96K   914G    96K  /tmp
zroot/usr             288K   914G    96K  /usr
zroot/usr/ports        96K   914G    96K  /usr/ports
zroot/usr/src          96K   914G    96K  /usr/src
zroot/var             616K   914G    96K  /var
zroot/var/audit        96K   914G    96K  /var/audit
zroot/var/crash        96K   914G    96K  /var/crash
zroot/var/log         136K   914G   136K  /var/log
zroot/var/mail         96K   914G    96K  /var/mail
zroot/var/tmp          96K   914G    96K  /var/tmp

Export the zpool and shut down the VM. Then boot with the mfsBSD media again.

zpool export zroot
poweroff

Note

Depending on what services you chose to run when you installed FreeBSD, it might not be possible to export the zpool at this point. For example, it might say that /var/log is busy. In that case, don't worry – power off the machine with the poweroff command even if you were not able to export the zpool.

Boot with mfsBSD again

qemu-system-x86_64 \
    -cdrom mfsbsd-14.1-RELEASE-amd64.iso \
    \
    -drive format=raw,file=/dev/nvme0n1,if=virtio \
    -drive format=raw,file=/dev/nvme1n1,if=virtio \
    -drive format=raw,file=/dev/nvme2n1,if=virtio \
    -drive format=raw,file=/dev/nvme3n1,if=virtio \
    \
    -display curses \
    -boot d \
    -m 8G

Once the console reaches the login screen, log in with the same mfsBSD credentials as before:

  • login: root
  • password: mfsroot

Initial ZFS datasets

Import pool

zpool import -o altroot=/mnt -f zroot

Get rid of the datasets that we don't want

zfs destroy zroot/var/tmp
zfs destroy zroot/var/mail
zfs destroy zroot/var/log
zfs destroy zroot/var/crash
zfs destroy zroot/var/audit
zfs destroy zroot/var
zfs destroy zroot/usr/src
zfs destroy zroot/usr/ports
zfs destroy zroot/usr
zfs destroy zroot/tmp
zfs destroy -r zroot/home
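The destroys above can also be scripted. Order matters: child datasets must go before their parents. This sketch (the helper name plan_destroys is mine) only prints the commands, so you can review them before piping to sh, which is the destructive step.

```shell
# plan_destroys: print the zfs destroy commands in a safe order
# (children before parents). Dry run by default.
plan_destroys() {
    for ds in zroot/var/tmp zroot/var/mail zroot/var/log zroot/var/crash \
              zroot/var/audit zroot/var zroot/usr/src zroot/usr/ports \
              zroot/usr zroot/tmp; do
        echo "zfs destroy $ds"
    done
    echo "zfs destroy -r zroot/home"
}

plan_destroys          # review the commands first
# plan_destroys | sh   # then execute for real (destructive!)
```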

Now we are left with only the datasets we want to have

zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zroot                809M   914G    96K  /mnt/zroot
zroot/ROOT           807M   914G    96K  none
zroot/ROOT/default   807M   914G   807M  /mnt

Unmount the zroot dataset.

mount
/dev/md0 on / (ufs, local, read-only)
devfs on /dev (devfs)
tmpfs on /rw (tmpfs, local)
devfs on /rw/dev (devfs)
zroot on /rw/mnt/zroot (zfs, local, noatime, nfsv4acls)
zfs umount zroot

Of course, a number of files we want are now gone, since bsdinstall had placed them on the datasets we just destroyed.

Let's fix that.

Restore the files we want to keep

zfs mount zroot/ROOT/default
cd /mnt/tmp/
fetch https://download.freebsd.org/releases/amd64/14.1-RELEASE/base.txz
sha256sum base.txz
bb451694e8435e646b5ff7ddc5e94d5c6c9649f125837a34b2a2dd419732f347  base.txz
cd /mnt/
tar xv --keep-old-files -f /mnt/tmp/base.txz
cd /
chroot /mnt/
getent passwd
[...]
erikboot:[...]:1001:1001:Boot user:/home/erikboot:/bin/sh
mkdir /home/erikboot
chown erikboot:erikboot /home/erikboot
chmod 751 /home/erikboot
chmod 1777 /tmp

Exit chroot

exit

Export pool and power off QEMU.

zpool export zroot
poweroff

Prepare system

Boot into VM without mfsBSD

qemu-system-x86_64 \
    \
    -drive if=pflash,format=raw,unit=0,readonly=on,file=usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
    -drive if=pflash,format=raw,unit=1,file=usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd \
    \
    -drive format=raw,file=/dev/nvme0n1,if=virtio \
    -drive format=raw,file=/dev/nvme1n1,if=virtio \
    -drive format=raw,file=/dev/nvme2n1,if=virtio \
    -drive format=raw,file=/dev/nvme3n1,if=virtio \
    \
    -nic user,hostfwd=tcp::2222-:22 \
    -vnc 127.0.0.1:1,password=on -k en-us -monitor stdio \
    -boot d \
    -m 8G

From the QEMU console, use the command change vnc password to set the VNC password as before.

Then, log in as root over forwarded VNC as before.

Install some packages in the outer system.

pkg install -y doas tree neovim zsh tmux

Create config file for doas in the outer system.

cat > /usr/local/etc/doas.conf <<EOF
permit nopass :wheel
EOF

Disallow password login over ssh by setting KbdInteractiveAuthentication to no in /etc/ssh/sshd_config in the outer system.

KbdInteractiveAuthentication no

Restart sshd

service sshd restart

Create ssh authorized keys file for non-root user

su - erikboot
mkdir .ssh
chmod 700 .ssh
cat > .ssh/authorized_keys <<EOF
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP60YiJ+4xoNz+DvcwBWV8WcjkRPnP+hOTnL4aSgH/Wd erikn@milkyway
EOF
exit

Reservation

Create a dataset that will reserve 20% of the capacity of the pool, as per recommendation from Michael W Lucas in the book FreeBSD Mastery: Advanced ZFS.

zfs create -o refreservation=200G -o canmount=off -o readonly=on -o mountpoint=none zroot/reservation
zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zroot                201G   714G    96K  /zroot
zroot/ROOT          1.11G   714G    96K  none
zroot/ROOT/default  1.11G   714G  1.11G  /
zroot/reservation    200G   914G    96K  none
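The 200G figure is roughly 20% of this particular pool. As a sketch of the arithmetic, assuming you feed it the pool size in bytes (on the live system that would come from zpool list -Hp -o size zroot; the helper name reserve_bytes is mine):

```shell
# reserve_bytes <pool-size-in-bytes>: print ~20% of the pool size,
# the recommended slack reservation. Illustrative helper, not a ZFS command.
reserve_bytes() {
    echo $(( $1 / 5 ))    # one fifth, i.e. 20%
}

reserve_bytes 1024000000000   # hypothetical ~1 TB pool; prints 204800000000
```

Round the result to a convenient figure (like 200G here) when creating the reservation dataset.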

Prepare and mount encrypted dataset for "inner"

zfs create -o mountpoint=none -o encryption=on -o keyformat=passphrase zroot/IROOT
Enter new passphrase:
Re-enter new passphrase:
zfs create -o mountpoint=none zroot/IROOT/default
zfs list -o name,used,avail,refer,mountpoint,encryption,keyformat
NAME                  USED  AVAIL  REFER  MOUNTPOINT  ENCRYPTION   KEYFORMAT
zroot                 201G   714G    96K  /zroot      off          none
zroot/IROOT           400K   714G   200K  none        aes-256-gcm  passphrase
zroot/IROOT/default   200K   714G   200K  none        aes-256-gcm  passphrase
zroot/ROOT           1.11G   714G    96K  none        off          none
zroot/ROOT/default   1.11G   714G  1.11G  /           off          none
zroot/reservation     200G   914G    96K  none        off          none
zfs set -u mountpoint=/mnt zroot/IROOT/default
zfs mount zroot/IROOT/default
mount
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs)
/dev/gpt/efiboot0 on /boot/efi (msdosfs, local)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
zroot/IROOT/default on /mnt (zfs, local, noatime, nfsv4acls)

Install "inner"

bsdinstall

Choose hostname as inner.

On the partitioning step, choose "Shell" ("Open a shell and partition by hand"). We've already done the partitioning and mounted the target, so simply exit:

exit

The installer will now extract the system.

After it finishes, exit the installer and have a look at the extracted files.

ls -al /mnt
total 180
drwxr-xr-x  19 root wheel   24 Oct 11 11:36 .
drwxr-xr-x  20 root wheel   25 Oct 11 11:09 ..
-rw-r--r--   2 root wheel 1011 May 31 09:00 .cshrc
-rw-r--r--   2 root wheel  495 May 31 09:00 .profile
-r--r--r--   1 root wheel 6109 May 31 09:39 COPYRIGHT
drwxr-xr-x   2 root wheel   49 May 31 09:00 bin
drwxr-xr-x  14 root wheel   70 Oct 11 11:36 boot
dr-xr-xr-x   2 root wheel    3 Oct 11 11:36 dev
-rw-------   1 root wheel 4096 Oct 11 11:36 entropy
drwxr-xr-x  30 root wheel  107 Oct 11 11:36 etc
drwxr-xr-x   3 root wheel    3 Oct 11 11:36 home
drwxr-xr-x   4 root wheel   78 May 31 09:08 lib
drwxr-xr-x   3 root wheel    5 May 31 08:58 libexec
drwxr-xr-x   2 root wheel    2 May 31 08:32 media
drwxr-xr-x   2 root wheel    2 May 31 08:32 mnt
drwxr-xr-x   2 root wheel    2 May 31 08:32 net
dr-xr-xr-x   2 root wheel    2 May 31 08:32 proc
drwxr-xr-x   2 root wheel  150 May 31 09:04 rescue
drwxr-x---   2 root wheel    7 May 31 09:39 root
drwxr-xr-x   2 root wheel  150 May 31 09:27 sbin
lrwxr-xr-x   1 root wheel   11 May 31 08:32 sys -> usr/src/sys
drwxrwxrwt   2 root wheel    2 May 31 08:32 tmp
drwxr-xr-x  15 root wheel   15 May 31 09:49 usr
drwxr-xr-x  24 root wheel   24 May 31 08:32 var

Give the inner system the same hostid as the outer one, so that zpool import will not think the pool was last used by a different system.

cp /etc/hostid /mnt/etc/hostid
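As an optional sanity check, the two hostid files should now be byte-identical. A minimal sketch (same_hostid is an illustrative helper name, not a FreeBSD command):

```shell
# same_hostid <file1> <file2>: succeed only if the files are byte-identical.
same_hostid() {
    cmp -s "$1" "$2"
}

# On the server, after the cp above:
# same_hostid /etc/hostid /mnt/etc/hostid && echo "hostid matches"
```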

And create authorized keys for the inner user.

mkdir /mnt/home/erikn/.ssh/
chown 1001:1001 /mnt/home/erikn/.ssh/
chmod 700 /mnt/home/erikn/.ssh/
cat > /mnt/home/erikn/.ssh/authorized_keys <<EOF
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDzZ5vJNPjptlO4boYHjSaegKtNc48JdxVEzeWrFI3TF erikn@milkyway
EOF

Note that we specified different ssh public keys to log in to the "outer" and "inner" systems.

Edit rc conf files of outer and inner.

nvim /etc/rc.conf
clear_tmp_enable="YES"
syslogd_flags="-ss"
hostname="stage4"

# Used when booting in QEMU
ifconfig_em0="DHCP"
ifconfig_em0_ipv6="inet6 accept_rtadv"

# Used when booting on hardware
ifconfig_re0_name="extif"
ifconfig_extif="DHCP"
#ifconfig_extif_ipv6="inet6 2f00::ba22 prefixlen 80"
ipv6_defaultrouter="fe80::1%extif"

local_unbound_enable="YES"

sshd_enable="YES"

ntpd_enable="YES"
ntpd_sync_on_start="YES"

moused_nondefault_enable="NO"

# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"

zfs_enable="YES"
nvim /mnt/etc/rc.conf
clear_tmp_enable="YES"
syslogd_flags="-ss"
hostname="inner"

# Used when booting in QEMU
ifconfig_em0="DHCP"
ifconfig_em0_ipv6="inet6 accept_rtadv"

# Used when booting on hardware
ifconfig_re0_name="extif"
ifconfig_extif="DHCP"
#ifconfig_extif_ipv6="inet6 2f00::1279:9d43 prefixlen 80"
ipv6_defaultrouter="fe80::1%extif"

local_unbound_enable="YES"

sshd_enable="YES"

ntpd_enable="YES"
ntpd_sync_on_start="YES"

moused_nondefault_enable="NO"

# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"

zfs_enable="YES"

wireguard_enable="YES"
wireguard_interfaces="wg0"

jail_enable="YES"

Power off the VM, and then power it on and ssh into it as your "outer" local user (whatever equivalent you have of my erikboot user).

Then, unset mountpoint for inner

doas zfs set mountpoint=none zroot/IROOT/default

Decrypt it

doas zfs load-key zroot/IROOT
Enter passphrase for 'zroot/IROOT':

And attempt to reboot into it

doas kenv vfs.root.mountfrom="zfs:zroot/IROOT/default"
doas reboot -r

If you're watching on VNC you'll see that it says

Trying to mount root from zfs:zroot/IROOT/default []...

and after a little bit of time you should see that it gives the login prompt with the hostname of the inner system

FreeBSD/amd64 (inner) (ttyv0)

login:

The outer and inner systems have different host keys for ssh.

In order to properly keep track of the known hosts for the outer and inner system on your client (such as your laptop), you can create entries similar to the following in your ~/.ssh/config on your client:

Host de4-recovery
	AddKeysToAgent yes
	UseKeychain yes
	HostName xxx.xxx.xxx.xxx
	HostkeyAlias "de4-recovery:22"
	IdentityFile ~/.ssh/host_specific/erik/hetzner/de4-recovery/id_ed25519_root
	User root
	RequestTTY yes

Host de4-stage4-qemu
	AddKeysToAgent yes
	UseKeychain yes
	HostName xxx.xxx.xxx.xxx
	HostkeyAlias "de4-stage4-qemu:2222"
	IdentityFile ~/.ssh/host_specific/erik/hetzner/de4-stage4/id_ed25519_erikboot
	Port 2222
	User erikboot
	RequestTTY yes

Host de4-stage4
	AddKeysToAgent yes
	UseKeychain yes
	HostName xxx.xxx.xxx.xxx
	HostkeyAlias "de4-stage4:22"
	IdentityFile ~/.ssh/host_specific/erik/hetzner/de4-stage4/id_ed25519_erikboot
	User erikboot
	RequestTTY yes

Host de4-inner-qemu
	AddKeysToAgent yes
	UseKeychain yes
	HostName xxx.xxx.xxx.xxx
	HostkeyAlias "de4-inner-qemu:2222"
	IdentityFile ~/.ssh/host_specific/erik/hetzner/de4-inner/id_ed25519_erikn
	Port 2222
	RequestTTY yes

Host de4-inner
	AddKeysToAgent yes
	UseKeychain yes
	HostName xxx.xxx.xxx.xxx
	HostkeyAlias "de4-inner:22"
	IdentityFile ~/.ssh/host_specific/erik/hetzner/de4-inner/id_ed25519_erikn
	RequestTTY yes

And then ssh using the relevant alias. In this case, for me, that's de4-inner-qemu.

ssh de4-inner-qemu

Switch to the root user.

su -

Install some packages in the inner system.

pkg install -y doas tree neovim zsh tmux

Create config file for doas in the inner system.

cat > /usr/local/etc/doas.conf <<EOF
permit nopass :wheel
EOF

Disallow password login over ssh by setting KbdInteractiveAuthentication to no in /etc/ssh/sshd_config in the inner system.

KbdInteractiveAuthentication no

Power off the QEMU VM.

Check if UEFI boot is enabled

On the host system, in the Hetzner Rescue environment, run:

efibootmgr

If the output says:

EFI variables are not supported on this system.

then you need to send a support ticket to Hetzner to ask them to turn on UEFI for you.

https://docs.hetzner.com/robot/dedicated-server/operating-systems/uefi/

In the meantime, power off the host machine.

Moment of truth

After Hetzner support has enabled UEFI, or if efibootmgr showed that it was already enabled, boot the machine. You should then be able to ssh into the outer system using your ssh host alias for it.

ssh de4-stage4

Rebooting into the inner system

Now that your machine is running FreeBSD on the metal, and you have logged in to the outer system via ssh, it's time to reboot into the inner system.

Decrypt the ZFS datasets for the inner system.

doas zfs load-key zroot/IROOT
Enter passphrase for 'zroot/IROOT':

And attempt to reboot into it

doas kenv vfs.root.mountfrom="zfs:zroot/IROOT/default"
doas reboot -r

Wait a bit for the system to reboot. Give it a minute or two. Then, ssh into the inner system using your ssh host alias for it.

ssh de4-inner

Creating zpool for data

TODO: Add this section.

Fixing problems

If problems arise booting into the system, for example after a system upgrade, boot the server into rescue mode again and ssh into it. Then

wget https://mfsbsd.vx.sk/files/iso/14/amd64/mfsbsd-14.1-RELEASE-amd64.iso

qemu-system-x86_64 \
    -cdrom mfsbsd-14.1-RELEASE-amd64.iso \
    \
    -drive format=raw,file=/dev/nvme0n1,if=virtio \
    -drive format=raw,file=/dev/nvme1n1,if=virtio \
    -drive format=raw,file=/dev/nvme2n1,if=virtio \
    -drive format=raw,file=/dev/nvme3n1,if=virtio \
    \
    -display curses \
    -boot d \
    -m 8G

And then once inside the VM, import the ZFS pool with altroot specified

zpool import -o altroot=/mnt -f zroot

Then take it from there.
