Hetzner no longer offers direct install of FreeBSD, but we can do it ourselves. Here is how :)
Boot the server into rescue mode
Boot the Hetzner server in Hetzner Debian based rescue mode. ssh into it.
The Hetzner rescue image will tell you hardware details about the server in the login banner.
For example, with one of my servers I see:
Hardware data:
CPU1: AMD Ryzen 7 3700X 8-Core Processor (Cores 16)
Memory: 64248 MB
Disk /dev/nvme0n1: 1024 GB (=> 953 GiB) doesn't contain a valid partition table
Disk /dev/nvme1n1: 1024 GB (=> 953 GiB) doesn't contain a valid partition table
Disk /dev/nvme2n1: 1024 GB (=> 953 GiB) doesn't contain a valid partition table
Disk /dev/nvme3n1: 1024 GB (=> 953 GiB) doesn't contain a valid partition table
Total capacity 3815 GiB with 4 Disks
Network data:
eth0 LINK: yes
MAC: xx:xx:xx:xx:xx:xx
IP: xxx.xxx.xxx.xxx
IPv6: xxxx:xxx:xxx:xxxx::x/64
RealTek RTL-8169 Gigabit Ethernet driver
(MAC, IPv4 and IPv6 address redacted by me in the above example output. You'll see actual values.)
You can also run lsblk to show the drives:
lsblk
Output:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 3.2G 1 loop
nvme2n1 259:0 0 953.9G 0 disk
nvme3n1 259:1 0 953.9G 0 disk
nvme1n1 259:2 0 953.9G 0 disk
nvme0n1 259:3 0 953.9G 0 disk
Install tmux.
apt install -y tmux
Open a tmux session, so that if we lose the connection to the server during setup, we can quickly reattach to the tmux session and pick up work right away.
tmux
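If the connection does drop, reattaching is a single command after you ssh back in:

```shell
# Reattach to the existing tmux session after reconnecting over ssh.
tmux attach
```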
Caution
The disadvantage of running in tmux is that it will mess up the text
shown in the FreeBSD installer a bit.
Tip
In a future update of this guide, I will check whether screen works better, or whether there are any steps we can take to stop bsdinstall from messing up the text when running in tmux.
Retrieve mfsBSD and run it in QEMU with raw drives attached
The idea is to basically have a mini VPS running mfsBSD with real disk passthrough and console access, just like a KVM, so that we can install as usual. We can then even test the installation directly by booting it the same way! When it works, we boot the server normally (i.e. directly into FreeBSD), and if we ever break something, we boot the Linux rescue image and run mfsBSD again!
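A sketch of what that looks like in the rescue system; the mfsBSD image file name, memory size, and port numbers here are assumptions to adapt (check https://mfsbsd.vx.sk/ for the current release):

```shell
# Run in the Hetzner rescue system. Fetch an mfsBSD image and boot it in
# QEMU with the real NVMe drives attached raw. The ISO file name, memory
# size and port numbers are assumptions -- adjust to your hardware.
apt install -y qemu-system-x86 ovmf wget
wget https://mfsbsd.vx.sk/files/iso/14/amd64/mfsbsd-14.2-RELEASE-amd64.iso

qemu-system-x86_64 \
    -enable-kvm -m 8192 -smp 4 \
    -bios /usr/share/ovmf/OVMF.fd \
    -cdrom mfsbsd-14.2-RELEASE-amd64.iso -boot d \
    -drive file=/dev/nvme0n1,format=raw \
    -drive file=/dev/nvme1n1,format=raw \
    -drive file=/dev/nvme2n1,format=raw \
    -drive file=/dev/nvme3n1,format=raw \
    -nic user,model=e1000,hostfwd=tcp::2222-:22 \
    -vnc 127.0.0.1:0
```

The OVMF firmware gives the VM UEFI boot to match the GPT (UEFI) partitioning used later. The VNC console listens on localhost only, so reach it through an SSH tunnel to the rescue system; the guest's sshd is forwarded to port 2222 on the host. To later boot the installed system in the VM without the mfsBSD media, reuse the same command minus `-cdrom ... -boot d`.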
QEMU provides an emulated NIC to the VM. So if the physical NIC in the host needs a different driver, the NIC name in the VM will differ from what it will be when running FreeBSD on the hardware.
The QEMU NIC will appear as em0.
In my case, however, the physical NIC in the machine uses a different driver and appears as re0 when running FreeBSD on the hardware.
The Hetzner Debian based rescue system will give you a minimal description of the NIC
in the machine when you ssh into it. Make note of that. If it's Intel, you can
put entries for both igb0 and em0 in your /etc/rc.conf; when you boot and
ssh into the machine, you will see which one was used and can update your
/etc/rc.conf accordingly.
If the NIC has a RealTek chipset, it'll probably be re0 that you should
put an entry for in your /etc/rc.conf.
If the NIC is neither Intel nor RealTek, you have to find out what Linux commands to use
in the Hetzner Debian based rescue system to show more details about your NIC,
and then you need to figure out which FreeBSD NIC driver is correct for that one
and edit your /etc/rc.conf accordingly.
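A couple of standard Linux commands that show more detail about the NIC from within the rescue system (assuming the rescue image ships lspci and ethtool, and that the interface is named eth0 as in the banner above):

```shell
# Show the NIC's vendor/model IDs and the Linux kernel driver in use,
# which helps map it to the corresponding FreeBSD driver.
lspci -nnk | grep -iA3 ethernet
ethtool -i eth0
```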
For reference, here is what the complete /etc/rc.conf from one of my Hetzner
servers looks like currently:
clear_tmp_enable="YES"
syslogd_flags="-ss"
hostname="de4"
# Used when booting in Qemu
ifconfig_em0="DHCP"
ifconfig_em0_ipv6="inet6 accept_rtadv"
# Used when booting on hardware
ifconfig_re0_name="extif"
ifconfig_extif="DHCP"
#ifconfig_extif_ipv6="inet6 2a01:4f9:5a:16cb:876a:bce7:b3c8:118a prefixlen 80"
ipv6_defaultrouter="fe80::1%extif"
local_unbound_enable="YES"
sshd_enable="YES"
ntpd_enable="YES"
ntpd_sync_on_start="YES"
moused_nondefault_enable="NO"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"
wireguard_enable="YES"
wireguard_interfaces="wg0"
jail_enable="YES"
Moment of truth
Reboot the host machine. All goes well, you'll be able to ssh into it and find
a running FreeBSD system :D
For many (most?) purposes, the standard install described above is sufficient.
It's straightforward, and easy to fix if/when something breaks.
The standard install described above however does not encrypt any parts of the system,
not even the home directories of users. And while you can add additional individual encrypted datasets
to your ZFS pool even with a standard install, you will not be able to turn on encryption
for any of the ZFS datasets that have been created by the installer.
Wouldn't it be nice if we could reduce the amount of data that is kept unencrypted at rest at least a bit?
One of the motivations of the custom install described here is to do exactly that.
Defining our goals
For my server there are some specific things I am interested in achieving:
Keep as much of the system as possible encrypted at rest. With data encrypted at rest, and the keys to decrypt
that data kept separate, we can recycle the harddrives in the future without needing to do overwrites
of the drives first. This is desirable for multiple reasons:
Big drives take a long time to fully overwrite. Especially so when you do one pass of writing zeros
followed by one or more passes of writing random data to completely cover the drives.
Hardware failures can leave us unable to fully, or even partially, overwrite
the data, meaning that safe disposal would hinge on being able to sufficiently destroy the drives physically.
The base system should be possible to throw away and set up again quickly and easily.
Corollary: None of the system directory trees should be included in backups.
Not even /usr/home as a whole. We'll get back to this.
Anything that is important should live in jails, with their own ZFS datasets.
This way, we can back up as well as restore or rollback to past versions of those "things"
mostly independently of the host system itself.
Initial install
We will start off with a standard install.
This will form the basis for our "outer" base system. We will use it to boot the server into a state where
we can ssh in to unlock our remaining datasets, and from there reboot into our "inner" base system.
On the server I am currently setting up while updating this guide, we have 4 drives total.
I go a bit back and forth from time to time, sometimes using separate pools for system and data of interest,
and sometimes setting up servers with one big pool for everything. On the last couple of servers
I set up, I was using one big pool for both system and data. This time around, I will set up the server
with a single drive for the system and a pool with 3 drives for data.
There are some tradeoffs, in both directions.
Disadvantages of having a separate pool for the system include:
If the system pool consists of a single drive, we lose out on some of the
ZFS healing properties for the system install itself.
If the total number of drives is low, we lose out on drives for our data
pool that could otherwise provide additional redundancy or capacity for our data.
The main advantage of having a separate pool for the system, as I see it, is this:
As long as you remember which drive or set of drives the system was installed to, you should
be able to completely reinstall the system overwriting all data you previously
had on that or those drives, while your important data you want to keep is safely kept
in its separate pool on its separate drives.
Note
When I say "remember which", I really mean "write it down somewhere obvious, where you can find it".
Which configuration to use, in terms of the number of pools and the setup
of the ZFS pool(s) themselves, will depend on the number of drives you have and
what your routines for managing backups and restores will be like.
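For the one-system-drive, three-data-drive layout chosen above, the data pool might be created along these lines once booted into FreeBSD; the pool name, device names, and raidz level are assumptions to adapt:

```shell
# Sketch: a raidz1 pool over the three non-system drives, encrypted at
# the pool root so all datasets inherit encryption. The pool name "tank"
# and the nda1..nda3 device names are assumptions -- list your drives
# first with: geom disk list
zpool create \
    -O compression=zstd -O atime=off \
    -O encryption=on -O keyformat=passphrase \
    tank raidz nda1 nda2 nda3
```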
A word on backups
Regardless of whether you choose to keep separate pools for system and data, or everything
on one pool, there is one thing that is more important than all else:
Important
Always backup your data! This means:
Having backups in other physical locations. For example:
One encrypted copy of your backups on a separate server, in a different data center, and
One encrypted copy of your backups at home (if the data is yours)
or office (if the data belongs to a business with an office), and
One encrypted copy of your backups in the cloud.
Regularly verifying that backups are kept up to date, and that the backups are complete and correct.
Regularly verifying that you can actually restore from the backups.
Occasionally verifying that you can set up a new server with the services
you need to replace the current server, so that whatever serving or
processing you are doing with your data can continue there, ideally with
as little interruption to service as possible.
If you can't afford to keep as many as three separate backup locations now,
start with just one of them. One is much better than none, even though more is better.
Configuring backups is beyond the scope of this guide. I will probably write a separate guide
on that topic in the future. When that happens I will add a link to that guide from here.
Performing the install
Run
bsdinstall
For the hostname I choose stage4, because the normal boot itself has 3 stages and this will be our fourth stage of booting, of sorts.
At the partitioning step we do guided root on ZFS, and we select:
Pool type/drives to consist of a single vdev with one drive
Encrypt disks: NO.
Remember, this is the "outer" base system. The "outer" base system is unencrypted,
but will hold none of our service configurations or any of our data short of
a default install running an SSH server.
Partition scheme "GPT (UEFI)"
At the user creation step, after you've created a password for root, create a user that has "boot" as part of its name,
to distinguish it from the kinds of users you normally make on your servers. For example, I usually make my user named
"erikn" but here I name it erikboot. When asked if you want to add the user to any additional groups,
make sure to add the user to the wheel group.
Keep ssh selected as a service to run.
For all other steps make whatever choices you'd normally make according to your preference.
Finish initial steps
Export the zpool and then power off the VM.
zpool export zroot
poweroff
Check that it works so far
Now it's time to boot the VM again, but without the mfsBSD media.
Once you have confirmed that it works, export the zpool and shut down the VM. Then boot with the mfsBSD media again.
zpool export zroot
poweroff
Note
Depending on what services you chose to run when you installed FreeBSD,
it might not be possible to export the zpool at this point. For example,
it might say that /var/log is busy. In that case, don't worry – power off
the machine with the poweroff command even if you were not able to export
the zpool.
Create a dataset that will reserve 20% of the capacity of the pool,
as per the recommendation from Michael W. Lucas in the book FreeBSD Mastery: Advanced ZFS.
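Creating the reservation, plus the encrypted dataset tree that will hold the inner system, looks roughly like this; the 200G size is an assumption matching the listing that follows:

```shell
# Reserve roughly 20% of the pool so it can never fill completely.
zfs create -o refreservation=200G -o mountpoint=none zroot/reservation
# Create the encrypted dataset tree for the "inner" system;
# you will be prompted for a passphrase. Children inherit encryption.
zfs create -o encryption=on -o keyformat=passphrase -o mountpoint=none zroot/IROOT
zfs create -o mountpoint=none zroot/IROOT/default
```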
zfs list -o name,used,avail,refer,mountpoint,encryption,keyformat
NAME USED AVAIL REFER MOUNTPOINT ENCRYPTION KEYFORMAT
zroot 201G 714G 96K /zroot off none
zroot/IROOT 400K 714G 200K none aes-256-gcm passphrase
zroot/IROOT/default 200K 714G 200K none aes-256-gcm passphrase
zroot/ROOT 1.11G 714G 96K none off none
zroot/ROOT/default 1.11G 714G 1.11G / off none
zroot/reservation 200G 914G 96K none off none
zfs set -u mountpoint=/mnt zroot/IROOT/default
zfs mount zroot/IROOT/default
mount
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs)
/dev/gpt/efiboot0 on /boot/efi (msdosfs, local)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
zroot/IROOT/default on /mnt (zfs, local, noatime, nfsv4acls)
Install "inner"
bsdinstall
Choose inner as the hostname.
At the partitioning step, choose "Shell" ("Open a shell and partition by hand"). We've already done the partitioning and mounted the target, so proceed to exit.
exit
The installer will now extract the system.
After it finishes, exit the installer and have a look at the extracted files.
ls -al /mnt
total 180
drwxr-xr-x 19 root wheel 24 Oct 11 11:36 .
drwxr-xr-x 20 root wheel 25 Oct 11 11:09 ..
-rw-r--r-- 2 root wheel 1011 May 31 09:00 .cshrc
-rw-r--r-- 2 root wheel 495 May 31 09:00 .profile
-r--r--r-- 1 root wheel 6109 May 31 09:39 COPYRIGHT
drwxr-xr-x 2 root wheel 49 May 31 09:00 bin
drwxr-xr-x 14 root wheel 70 Oct 11 11:36 boot
dr-xr-xr-x 2 root wheel 3 Oct 11 11:36 dev
-rw------- 1 root wheel 4096 Oct 11 11:36 entropy
drwxr-xr-x 30 root wheel 107 Oct 11 11:36 etc
drwxr-xr-x 3 root wheel 3 Oct 11 11:36 home
drwxr-xr-x 4 root wheel 78 May 31 09:08 lib
drwxr-xr-x 3 root wheel 5 May 31 08:58 libexec
drwxr-xr-x 2 root wheel 2 May 31 08:32 media
drwxr-xr-x 2 root wheel 2 May 31 08:32 mnt
drwxr-xr-x 2 root wheel 2 May 31 08:32 net
dr-xr-xr-x 2 root wheel 2 May 31 08:32 proc
drwxr-xr-x 2 root wheel 150 May 31 09:04 rescue
drwxr-x--- 2 root wheel 7 May 31 09:39 root
drwxr-xr-x 2 root wheel 150 May 31 09:27 sbin
lrwxr-xr-x 1 root wheel 11 May 31 08:32 sys -> usr/src/sys
drwxrwxrwt 2 root wheel 2 May 31 08:32 tmp
drwxr-xr-x 15 root wheel 15 May 31 09:49 usr
drwxr-xr-x 24 root wheel 24 May 31 08:32 var
Give the inner system the same hostid as the outer one has, so that zpool import will not think the pool has been used by a different system.
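Assuming the standard /etc/hostid location, this can be done by copying the file into the still-mounted inner system:

```shell
# Make the inner system report the same hostid as the outer one.
cp /etc/hostid /mnt/etc/hostid
```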
Note that we specified different ssh public keys to log in to the "outer" and "inner" systems.
Edit the rc.conf files of the outer and inner systems.
nvim /etc/rc.conf
clear_tmp_enable="YES"
syslogd_flags="-ss"
hostname="stage4"
# Used when booting in QEMU
ifconfig_em0="DHCP"
ifconfig_em0_ipv6="inet6 accept_rtadv"
# Used when booting on hardware
ifconfig_re0_name="extif"
ifconfig_extif="DHCP"
#ifconfig_extif_ipv6="inet6 2f00::ba22 prefixlen 80"
ipv6_defaultrouter="fe80::1%extif"
local_unbound_enable="YES"
sshd_enable="YES"
ntpd_enable="YES"
ntpd_sync_on_start="YES"
moused_nondefault_enable="NO"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"
nvim /mnt/etc/rc.conf
clear_tmp_enable="YES"
syslogd_flags="-ss"
hostname="inner"
# Used when booting in QEMU
ifconfig_em0="DHCP"
ifconfig_em0_ipv6="inet6 accept_rtadv"
# Used when booting on hardware
ifconfig_re0_name="extif"
ifconfig_extif="DHCP"
#ifconfig_extif_ipv6="inet6 2f00::1279:9d43 prefixlen 80"
ipv6_defaultrouter="fe80::1%extif"
local_unbound_enable="YES"
sshd_enable="YES"
ntpd_enable="YES"
ntpd_sync_on_start="YES"
moused_nondefault_enable="NO"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"
wireguard_enable="YES"
wireguard_interfaces="wg0"
jail_enable="YES"
Power off the VM, and then power it on and ssh into it as your "outer" local user
(whatever equivalent you have of my erikboot user).
Then, unset the mountpoint for the inner system
doas zfs set mountpoint=none zroot/IROOT/default
Decrypt it
doas zfs load-key zroot/IROOT
Enter passphrase for 'zroot/IROOT':
And attempt to reboot into it
doas kenv vfs.root.mountfrom="zfs:zroot/IROOT/default"
doas reboot -r
If you're watching on VNC you'll see that it says
Trying to mount root from zfs:zroot/IROOT/default []...
and after a little bit of time you should see that it gives the login prompt with the hostname of the inner system
FreeBSD/amd64 (inner) (ttyv0)
login:
The outer and inner systems have different host keys for ssh.
In order to properly keep track of the known hosts for the outer and inner system on your client (such as your laptop),
you can create entries similar to the following in your ~/.ssh/config on your client:
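For example, something along these lines, where the alias names, user names, key file names, and the server address are all assumptions to adapt:

```
Host de4-stage4
    HostName 203.0.113.10              # your server's address (example value)
    User erikboot
    IdentityFile ~/.ssh/id_ed25519_stage4
    HostKeyAlias de4-stage4

Host de4-inner
    HostName 203.0.113.10
    User erikn
    IdentityFile ~/.ssh/id_ed25519
    HostKeyAlias de4-inner
```

The HostKeyAlias option makes ssh record and verify the host key under the alias instead of the address, so the outer and inner systems can share one IP without triggering host key mismatch warnings.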
After you have UEFI enabled by Hetzner support, or if it was already enabled according to the output
of efibootmgr, boot the machine, and you should be able to ssh into outer system using
your ssh host alias for it.
ssh de4-stage4
Rebooting into the inner system
Now that your machine is running FreeBSD on the metal, and you have logged in to the outer system via ssh,
it's time to reboot into the inner system.
Decrypt the ZFS datasets for the inner system.
doas zfs load-key zroot/IROOT
Enter passphrase for 'zroot/IROOT':
And attempt to reboot into it
doas kenv vfs.root.mountfrom="zfs:zroot/IROOT/default"
doas reboot -r
Wait a bit for the system to reboot. Give it a minute or two. Then, ssh into the inner system
using your ssh host alias for it.