ZFS or UFS?

I started writing the last post as a discussion of ZFS and UFS, and it ended up as an explainer about how UFS was viable with gmirror. You need to read it to understand the issues if you want redundant storage. But in simple terms, as to which is better, ZFS is. Except when UFS has the advantage.

UFS had a huge problem. If the music stopped (the kernel crashed or the power was cut) the file system was left in a huge mess, as it wasn’t updated in a safe order as it went along. This file system was also known as FFS (the Fast File System) but it was more or less the same thing, and is now history. UFS2 came along (and JFS2 on AIX), with soft updates and later journaling so that if the music stopped it could probably catch up with itself when the power came back. We’re really talking about UFS2 here, which is pretty solid.

Then along comes ZFS, which combines a next generation volume manager and next generation file system in one. In terms of features and redundancy it’s way ahead. Some key advantages are built-in and very powerful RAID, Copy-on-Write for referential integrity following a problem, snapshots, compression, scalability – the list is long. If you want any of these good features you probably want ZFS. But there are two instances where you want UFS2.

Cost

The first problem with ZFS is that all this good stuff comes at a cost. It’s not a huge cost by modern standards – I’ve always reckoned an extra 2Gb of RAM for the cache and suchlike covers the resource and performance issues. But on a very small system, 2Gb of RAM is significant.

The second problem is more nuanced: Copy-on-Write. Basically, to get the referential integrity and snapshots, if you change the contents of a block within a file, ZFS doesn’t overwrite the block; it writes a new block in free space. If the old block isn’t needed as part of a snapshot it will be marked as free space afterwards. This means that if there’s a failure while the block is half written, no problem – the old block is there and the write never happened. Reboot and you’re at the last consistent state, no more than five seconds before some idiot dug up the power cable.

Holy CoW!

So Copy-on-Write makes sense in many ways, but as you can imagine, if you’re changing small bits of a large random access file, that file is going to end up seriously fragmented. And there’s no way to defragment it. This is exactly what a database engine does to its files. Database engines enforce their own referential integrity using synchronous writes, so they’re going to be consistent anyway – but if you’re insisting all transactions in a group are written in order, synchronously, and the underlying file system is spattering blocks all over the disk before returning, you’ve got a double whammy – fragmentation and slow write performance. You can put a lot of cache in to try and hide the problem, but you can’t cache a write if the database insists it won’t proceed until it’s actually stored on disk.

In this one use case, UFS2 is a clear winner. It also doesn’t degrade so badly as the disk becomes full. (The ZFS answer is that if the zpool is approaching 80% capacity, add more disks).
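You can keep an eye on this with zpool list – the CAP column is the percentage full (the pool name “tank” here is just an example):

```shell
# Show size, allocation and percent-full for the pool.
# Watch the CAP column; past 80% things get ugly.
zpool list tank
```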

Best of Both

There is absolutely nothing stopping you having ZFS and UFS2 on the same system – on the same drives, even. Just create a partition for your database, format it using newfs and mount it on the ZFS tree wherever it’s needed. You probably want it mirrored, so use gmirror. You won’t be able to snapshot it, or otherwise back it up while it’s running, but you can dump it to a ZFS dataset and have that replicated along with all the others.
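As a sketch of that hybrid setup (drive names, sizes, partition numbers and the mount point are all examples – adjust to your own layout):

```shell
# Carve out a matching UFS partition on each drive
# (assumes a GPT scheme already exists and p4 is free on both)
gpart add -t freebsd-ufs -s 100g ada0
gpart add -t freebsd-ufs -s 100g ada1

# Mirror the pair, format it, and mount it on the ZFS tree
gmirror label db /dev/ada0p4 /dev/ada1p4
newfs -U /dev/mirror/db                 # UFS2 with soft updates
mkdir -p /var/db/postgres
mount /dev/mirror/db /var/db/postgres
```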

You can also boot off UFS2 and create a zpool on additional drives or partitions if you prefer, mounting them on the UFS tree. Before FreeBSD 10 had full support for booting directly off ZFS this was the normal way of using it. The advantages of having the OS on ZFS (easy backup, snapshot and restore) mean it’s probably preferable to use it for the root.
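The other way round might look like this (a sketch; the pool and drive names are examples):

```shell
# Booted from UFS on ada0; build a mirrored pool on two spare drives
kldload zfs
zpool create -m /data tank mirror ada2 ada3   # pool mounted at /data
zfs create tank/backups                       # datasets mount under it
```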

UFS, gmirror and GPT drives

Spot the deliberate mistake

Over eight years ago I wrote a post ZFS is not always the answer. Bring back gmirror!, suggesting that writing off UFS in favour of ZFS wasn’t a clear-cut decision and reminding people how gmirror could be used to mirror drives if you needed redundancy. It’s still true, but it probably needs an update as things are done a little differently now.

MBR vs GPT

There have been various disk partition formats over the years. The original PDP-11 Unix contained only a boot block (512b) to kick-start the OS, but BSD implemented its own partitioning scheme – 8K long, consisting of a tiny boot1 stage that was just enough to find boot2 in the same slice, which was then able to read UFS and therefore load the kernel. This first appeared in 4.2BSD on the VAX and carried through to 386BSD and onwards.

Then from the early 1990s the “standard” partition scheme from the MS-DOS Master Boot Record (MBR) seemed like a great idea. Slices got replaced by partitions and you could co-exist with other systems on the same drive, and x86 systems were now really common.

The so-called MBR scheme had its problems (and workarounds) as Microsoft wasn’t exactly thinking ahead but these have been fixed with the wonderful GPT scheme, which was actually designed. However, GEOM Mirror and UFS predate GPT adoption and you have to be aware of a few things if you’re going to use them together. And you should be using GPT.

Why should you use GPT just because it’s “new”? It was actually dreamt up more than 25 years ago by Intel (for the IA-64, I believe). It has a backup header so if you lose the first blocks on your drive you’re not dead in the water – a favourite trick with DOS/Windows losing the entire drive for the sake of one sector. GPT allows drives to be more than 2Tb because it has 64-bit logical block addresses. If that’s not enough, it identifies partitions with a UUID so you can move them around physically without having problems, and if you’re mixing operating systems on the same disk the others are likely to be using GPT too, so they’ll play nice. As long as you have UEFI compatible firmware, you’re good to go. If all your drives are <2TB and you have old firmware, and only want to run FreeBSD, stick to MBR – and keep a backup of the boot block on a floppy just in case.

Gmirror and GPT

As I mentioned, GPT keeps a second copy of the partition information on the disk. In fact it stores a copy at the end of the drive, and if the one at the front is corrupt or unreadable it’ll use that instead. Specifically, GPT stores its header in LBA 1 and the partition table in LBAs 2-33 (an insanely large partition table, but Intel didn’t want to be accused of making the same limiting mistakes as Microsoft).

The backup GPT header is on the last block of the drive, with the partition table going backwards from that (for 33 LBAs).
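Should the primary copy get trashed, gpart can rebuild it from that backup (it will report the table as CORRUPT until you do):

```shell
# Rebuild a damaged primary GPT from the backup copy at the end
gpart recover ada0
```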

GMirror, meanwhile, stores its metadata on the last 512-byte sector of the drive. CRUNCH.

So what to do? You may see the -h switch suggested when setting up with gmirror:

gmirror label -h m0 da0 da1

but be careful: according to gmirror(8), -h merely hardcodes the provider names in the metadata – it doesn’t move the metadata to the front of the disk, so the clash with the backup GPT header remains. Personally, I wouldn’t be inclined to take the risk of mirroring whole GPT disks at all unless I’m dedicating the drive to FreeBSD.

The safe method is to NOT mirror the entire disk, only the partitions we’re interested in. Conventionally, and in the 2017 post, you mirrored the entire drive and therefore the drives were functionally identical without any further work. The downside was that if you replaced a drive you needed one exactly the same size (or larger), and not all 500Gb drives are the same number of blocks (although there’s a pretty good chance these days).
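You can check what you’ve actually got with diskinfo before committing – compare the media sizes and sector counts of the two drives:

```shell
# Print mediasize, sectorsize and sector count for both drives;
# the numbers must match (or the second be larger) to mirror whole disks
diskinfo -v ada0 ada1
```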

GEOMs and disks?

I’ve explained how to mirror a single partition already, but not gone into the technicalities. If you’re new to FreeBSD you might not have cottoned on to what a GEOM is. It’s short for “geometry”, which probably doesn’t help with understanding it one bit.

It gets the name from disk geometry, but don’t worry about the name. It’s an abstraction layer added to FreeBSD between the physical drive (provider) and higher level functions of the OS such as filing systems (consumers). You can add GEOM classes between the provider and consumer to provide RAID, mirroring, encryption, journaling, volume management and suchlike. Before ZFS, this was how you got fancy stuff done. Now, not so much. But the GEOM mirror class (aka gmirror) is still very useful indeed.

But the bottom line is that a disk partition can be a provider in just the same way as the whole disk, so what works for a disk will also work for a partition. Chances are the installer has partitioned up your drive thus:

=>        40  5860533088  ada0  GPT  (2.7T)
          40        1024     1  freebsd-boot  (512K)
        1064         984        - free -  (492K)
        2048     4194304     2  freebsd-swap  (2.0G)
     4196352  5856335872     3  freebsd-ufs (2.7T)
  5860532224         904        - free -  (452K)

This means /dev/ada0p3 is the UFS partition we’re interested in mirroring. Believe it or not, partition numbers start at one, not zero!

How to actually do it

So if you’ve installed your system and now want to add a GEOM mirror, proceed as follows. Let’s assume your second drive is ada1, which would be logical.

You’ll have to partition it so it has at least one partition the same size as the one you want to mirror. Chances are you’ll want all partitions. The quickest way to achieve this is to simply copy the partition table:

gpart backup ada0 | gpart restore -F ada1

You can sanity check this with gpart show ada1, which should be the same as gpart show ada0.

Load the geom_mirror module

kldload geom_mirror
echo 'geom_mirror_load="YES"' >> /boot/loader.conf

The second line adds it to loader.conf to make it load each time, but only do it if it’s not there already. The kldload will complain if it’s already loaded, which is a good clue you don’t need the second line.

Create the mirror

gmirror label ufsroot /dev/ada0p3 /dev/ada1p3

The “label” subcommand simply writes the metadata to the disks or partitions – remember they’re all the same to GEOM. The name “ufsroot” is chosen by me to be meaningful. Manuals use things like gm0 for GEOM mirrors and people have come to think it’s important they’re named this way, when the opposite is true. You already know it’s a GEOM mirror because the device is in /dev/mirror – it’s more helpful to know what it’s used for, e.g. UFS root, or swap, or var or whatever.

You can, while you’re at it, mirror as many partitions as you wish if you have separate ones for other purposes. You can even mirror a zfs partition without it knowing if you’re crazy enough. Mirroring the swap partitions is something you should definitely consider.

You can check it’s worked with gmirror status, which should output something like this:

  Name         Status   Components
mirror/ufsroot COMPLETE ada0p3 (ACTIVE)
                        ada1p3 (SYNCHRONIZING)

Wait until it’s finished synchronising, which will take a long time on a large disk. Perhaps go to bed.
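If you’d rather script the wait than keep checking, a loop like this does the job (a sketch, using the “ufsroot” mirror from the example above):

```shell
# Poll every minute until the mirror has finished synchronising
while gmirror status ufsroot | grep -q SYNCHRONIZING; do
        sleep 60
done
echo "Mirror synchronised"
```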


Mount the mirror

This process will have created a new device called /dev/mirror/ufsroot but you still have to mount it in place of the “old” UFS partition. This is controlled in the normal way by /etc/fstab, so make a backup and fire up your favourite editor.

Look for the entry for /dev/ada0p3 and change it to /dev/mirror/ufsroot:

/dev/mirror/ufsroot / ufs rw 1 1

Reboot and you should be good.

Boot code

Although your UFS partition is mirrored, if ada0 fails now the system won’t boot as ada1 lacks the boot code. You can add this easily enough:

gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada1

Finally, what about swap partitions? For robustness, mirror them too in the same way:

gmirror label swap /dev/ada0p2 /dev/ada1p2

Then edit fstab to swap on /dev/mirror/swap

Alternatively you can edit fstab to swap on ada1p2 as well (best for performance). Or you can just leave it as it is – if ada0 fails and you reboot you’ll have no swap until you fix it, but you’ll probably be worrying about other things.
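For reference, the two fstab variants might look like this (partition numbers as in the example above):

```
# Mirrored swap:
/dev/mirror/swap  none  swap  sw  0  0

# Or independent swap on each drive (better performance, no redundancy):
/dev/ada0p2  none  swap  sw  0  0
/dev/ada1p2  none  swap  sw  0  0
```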

Set up FreeBSD in two mirrored drives using UFS

I’ve written about the virtues of Geom Mirror (gmirror) in the past. Geom Mirror was probably the best way of implementing redundant storage from FreeBSD 5.3 (2004) until ZFS was introduced in FreeBSD 7.0 in 2008. Even then, ZFS was heavyweight and Geom Mirror remained tried, tested and more practical for many years afterwards.

The Geom system also has a RAID3 driver. RAID3 is weird. It’s the one using a separate parity drive. It works, but it wasn’t popular. If you had a big FreeBSD system and wanted an array it was probably better to use an LSI host bus adapter and have that manage it with mptutil. But for small servers, especially remotely managed, Geom Mirror was the best. I’m still running it on a few twin-drive servers, and will probably continue for some time to come.

The traditional Unix File System (now UFS2) actually has a couple of advantages over ZFS. Firstly it has much lower resource requirements. Secondly, and this is a big one, it has in-place updates. This is a big deal with random access files, such as databases or VM hard disks, as the Copy-on-Write system ZFS uses fragments the disk like crazy. To maintain performance on a massively fragmented file system, ZFS requires a huge amount of cache RAM.

What you need for random access read/write files are in-place updates. Database engines handle transaction groups themselves to ensure that the data structure’s integrity is maintained. ZFS does this at the file level instead of application level, which isn’t really good enough as the application knows what is and what isn’t required. There’s no harm in ZFS doing it too, but it’s a waste. And the file fragmentation is a high price to pay.

So, for database type applications, UFS2 still rules. There’s nothing wrong with having a hybrid system with both UFS and ZFS, even on the same disk. Just mount the UFS /var onto the ZFS tree.

But back to the twin drive system: The FreeBSD installer doesn’t have this as an option. So here’s a handy dandy script wot I rote to do it for you. Boot off a USB stick or whatever and run it.

Script to install FreeBSD on gmirror

Use as much or as little as you like.

At the beginning of the script I define the two drives I will be using. Obviously change these! If the disks are not blank it might not work. The script tries to destroy the old partition data but you may need to do more if you have it set up with something unusual.

Be careful – it will delete everything on both drives without asking!

Read the comments in the script. I have set it up to use an 8g UFS partition, but if you leave out the “-s 8g” the final partition will use all the space, which is probably what you want. For debugging I kept it small.

I have put everything on a single UFS partition. If you want separate / /usr /var then you need to modify it to what you need and create a mirror for each (and run newfs for each). The only thing is that I’ve created a swap partition on each drive that is NOT mirrored, and configured it to use both.

I have not set up everything on the new system, but it will boot and you can configure other stuff as you need by hand. I like to connect to the network and have an admin user so I can work on a remote terminal straight away, so I have created an “admin” user with password “password” and enabled the ssh daemon. As you probably know, FreeBSD names its Ethernet adapters by manufacturer and you don’t know what you’ll have so I just have it try DHCP on every possible interface. Edit the rc.conf file how you need it once it’s running.

The script downloads base.txz and kernel.txz at present; if you already have them in the current directory, comment out the fetch lines.

And finally, I call my mirrors m0, m1, m2 and so on. Some people like to use gm0. It really doesn’t matter what you call them.

#!/bin/sh
# Install FreeBSD on two new disks set up as a gmirror
# FJL 2025
# Edit stuff in here as needed. At present it downloads
# FreeBSD 14.2-RELEASE and assumes the disks
# in use are ada0 and ada1

# Fetch the OS files if needed (and as appropriate)
fetch https://download.freebsd.org/ftp/releases/amd64/14.2-RELEASE/kernel.txz
fetch https://download.freebsd.org/ftp/releases/amd64/14.2-RELEASE/base.txz

# Disks to use for a mirror. All will be destroyed! Edit these. The -xxxx
# is there to save you if you don't
D0=/dev/da1-xxxxx
D1=/dev/da2-xxxxx

# User name and password to set up initial user.
ADMIN=admin
ADMINPASS=password

# Make sure the geom mirror module is loaded.
kldload geom_mirror

# Set up the first drive
echo Clearing $D0
gpart destroy -F $D0
dd if=/dev/zero of=$D0 bs=1m count=10

# Then create p1 (boot), p2 (swap) and p3 (ufs)
# Note the size of the UFS partition is set to 8g. If you delete
# the -s 8g it will use the rest of the disk by default. For testing
# it's better to have something small so newfs finishes quick.

echo Creating gpt partition on $D0
gpart create -s gpt $D0
gpart add -t freebsd-boot -s 512K $D0
gpart add -t freebsd-swap -s 4g $D0
gpart add -t freebsd-ufs -s 8g $D0

echo Installing boot code on $D0
# -b installs protective MBR, -i the Bootloader.
# Assumes partition 1 is freebsd-boot created above.
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 $D0

# Set up second drive
echo Clearing $D1
gpart destroy -F $D1
dd if=/dev/zero of=$D1 bs=1m count=10

# Copy partition data to second drive and put on boot code
gpart backup $D0 | gpart restore $D1
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 $D1

# Mirror partition 3 on both drives
gmirror label -v m0 ${D0}p3 ${D1}p3

echo Creating file system
newfs -U /dev/mirror/m0
mkdir -p /mnt/freebsdsys
mount  /dev/mirror/m0 /mnt/freebsdsys

echo Decompressing Kernel
tar -x -C /mnt/freebsdsys -f kernel.txz
echo Decompressing Base system
tar -x -C /mnt/freebsdsys -f base.txz

# Tell the loader where to mount the root system from
echo 'geom_mirror_load="YES"' > /mnt/freebsdsys/boot/loader.conf
echo 'vfs.root.mountfrom="ufs:/dev/mirror/m0"' \
>> /mnt/freebsdsys/boot/loader.conf

# Set up fstab so it all mounts.
echo $D0'p2 none swap sw 0 0' > /mnt/freebsdsys/etc/fstab
echo $D1'p2 none swap sw 0 0' >> /mnt/freebsdsys/etc/fstab
echo '/dev/mirror/m0 / ufs rw 1 1' >> /mnt/freebsdsys/etc/fstab

# Enable sshd and make ethernet interfaces DHCP configure
echo 'sshd_enable="YES"' >/mnt/freebsdsys/etc/rc.conf
for int in em0 igb0 re0 bge0 alc0 fxp0 xl0 ue0 cxgbe0 bnxt0 mlx0
do
echo 'ifconfig_'$int'="DHCP"' >>/mnt/freebsdsys/etc/rc.conf
done

# Create initial user suitable for ssh login
pw -R /mnt/freebsdsys useradd $ADMIN -G wheel -m
echo "$ADMINPASS" | pw -R /mnt/freebsdsys usermod -n $ADMIN -h 0

# Tidy up
umount /mnt/freebsdsys
echo Done. Remove USB stick or whatever and reboot.

ZFS is not always the answer. Bring back gmirror!

The ZFS bandwagon has momentum, but ZFS isn’t for everyone. UFS2 has a number of killer advantages in some applications.

ZFS is great if you want to store a very large number of normal files safely. Its copy-on-write (COW) is a major advantage for backup, archiving and general data safety, and datasets allow you to fine-tune almost any way you can think of. However, in a few circumstances, UFS2 is better. In particular, large random-access files do badly with COW.

Unlike traditional systems, a block in a file isn’t overwritten in place, it always ends up at a different location. If a file started off contiguous it’ll pretty soon be fragmented to hell and performance will go off a cliff. Obvious victims will be databases and VM hard disk images. You can tune for these, but to get acceptable performance you need to throw money and resources to bring ZFS up to the same level. Basically you need huge RAM caches, possibly an SLOG, and never let your pool get more than 50% full. If you’re unlucky enough to end up at 80% full ZFS turns off speed optimisations to devote more RAM to caching as things are going to get very bad fragmentation-wise.

If these costs are a problem, stick with UFS. And for redundancy, there is still good old GEOM Mirror (gmirror). Unfortunately the documentation of this now-poor relation has lagged a bit, and what once worked as standard, doesn’t. So here are some tips.

The most common use of gmirror (with me anyway) is a twin-drive host. Basically I don’t want things to fail when a hard disk dies, so I add a second redundant drive. Such hosts (often 1U servers) don’t have space for more than two drives anyway – and it pays to keep things simple.

Setting up a gmirror is really simple. You create one using the “gmirror label” command. There is no “gmirror create” command; it really is called “label”, and it writes the necessary metadata label so that mirror will recognise it (“gmirror destroy” is present and does exactly what you might expect).

So something like:

gmirror label gm0 ada1 ada2

will create a device called /dev/mirror/gm0 and it’ll contain ada1’s contents mirrored on to ada2 (once it’s copied it all in the background). Just use /dev/mirror/gm0 as any other GEOM (i.e. disk). Instead of calling it gm0 I could have called it gm1, system, data, flubnutz or anything else that made sense, but gm0 is a handy reminder that it’s the first geom mirror on the system and it’s shorter to type.
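Once created, the mirror takes a file system like any plain disk – a minimal sketch (the mount point is an example, and only do this on a brand-new mirror, as newfs wipes it):

```shell
newfs -U /dev/mirror/gm0      # create UFS2 with soft updates
mkdir -p /data
mount /dev/mirror/gm0 /data
```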

The eagle eyed might have noticed I used ada1 and ada2 above. You’ve booted off ada0, right? So what happens if you try mirroring yourself with “gmirror label gm0 ada0 ada1”? Well this used to work, but in my experience it doesn’t any more. And on a twin-drive system, this is exactly what you want to do. But it is still possible, read on…

How to set up a twin-drive host booting from a geom mirror

First off, before you do anything (even installing FreeBSD) you need to set up your disks. Since the IBM XT, hard disks have been partitioned using an MBR (Master Boot Record) at the start. This is really old, naff, clunky and Microsoft. Those in the know have been using the far superior GPT system for ages, and it’s pretty cross-platform now. However, it doesn’t play nice with gmirror, so we’re going to use MBR instead. Trust me on this.

For the curious, know that GPT keeps a copy of the partition table at the beginning and end of the disk, but MBR only has one, stored at the front. gmirror keeps its metadata at the end of the disk, well away from the MBR but unfortunately in exactly the same spot as the spare GPT. You can hack the gmirror code so it doesn’t do this, or frig around with mirroring geoms rather than whole disks and somehow get it to boot, but my advice is to stick to MBR partitioning or BSDlabels, which is an extension. There’s not a lot of point in ever mounting your BSD boot drive on a non-BSD system, so you’re not losing much whatever you choose.

Speaking of metadata, both GPT and gmirror can get confused if they find any old tables or labels on a “new” disk. GPT will find old backup partition tables and try to restore them for you, and gmirror will recognise old drives as containing precious data and dig its heels in when you try to overwrite it. Both gpart and gmirror have commands to erase their metadata, but I prefer to use dd to overwrite the whole disk with zeros anyway before re-use. This checks that the disk is actually good, which is nice to know up-front. You could just erase the start and end if you were in a hurry and wanted to calculate the offsets.
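My usual belt-and-braces clean-up looks like this (“da9” is an example device – triple-check it, because this destroys everything on the drive):

```shell
gpart destroy -F da9                  # drop any old partition table
gmirror clear da9                     # wipe old gmirror metadata, if any
dd if=/dev/zero of=/dev/da9 bs=1m     # zero the lot; takes hours on a big drive
```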

The next thing you’ll need to do is load the geom_mirror kernel module. Either recompile the kernel with it added, or if this fills you with horror, just add ‘geom_mirror_load="YES"’ to /boot/loader.conf. This brings it in early enough in the boot process to let you boot from it. The loader will boot from one drive or the other and then switch to mirror mode when it’s done.

So, at this point, you’ve set up FreeBSD as you like on one drive (ada0), selecting BSDlabels or MBR as the partition method and UFS as the file system. You’ve set it to load the geom_mirror module in loader.conf.  You’re now looking at a root prompt on the console, and I’m assuming your drives are ada0 and ada1, and you want to call your mirror gm0.

Try this:

gmirror label gm0 ada0

Did it work? Well it used to once, but now you’ll probably get an error message saying it could not write metadata to ada0. If (when) this happens I know of one answer, which I found after trying everything else. Don’t be tempted to try everything else yourself (such as seeing if it works with ada1). Anything you do will either fail if you’re lucky, or make things worse. So just reboot, and select single-user mode from the loader menu.

Once you’re at the prompt, type the command again, and this time it should say that gm0 is created. My advice is to now reboot rather than getting clever.

When you do reboot it will fail to mount the root partition and stop, asking for help to find it. Don’t panic. We know where it’s gone. Mount it with “ufs:/dev/mirror/gm0s1a” or whatever slice you had it on if you’ve tried to be clever. Forgot to make a note? Don’t worry, somewhere in the boot log visible on the screen it actually tells you the name of the partition it couldn’t find.

After this you should be “in”. And to avoid this inconvenience next time you boot you’ll need to tweak /etc/fstab using an editor of your choice, although real computer nerds only use vi. What you need to do is replace all references to the actual drive with the gm0 version. Therefore /dev/ada0s1a should be edited to read /dev/mirror/gm0s1a. On a current default install, which no longer sub-partitions the drive, this will only apply to the root mount point and the swap file.
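One way to make the swap, assuming all the old entries start with /dev/ada0 (check first, and keep the backup):

```shell
# Back up fstab, then rewrite every /dev/ada0... entry
# as the /dev/mirror/gm0... equivalent
cp /etc/fstab /etc/fstab.bak
sed 's|/dev/ada0|/dev/mirror/gm0|g' /etc/fstab.bak > /etc/fstab
```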

Save this, reboot (to test) and you should be looking good. Now all that remains is to add the second drive (ada1 in the example) with the line:

gmirror insert gm0 ada1

You can see the effect by running:

gmirror status

Unless your drive is very small, gm0 will be DEGRADED and it will say something about being rebuilt. The precise wording has changed over time. Rebuilding takes hours, not seconds so leave it. Did I mention it’s a good idea to do this when the system isn’t busy?