Set up FreeBSD on two mirrored drives using UFS

I’ve written about the virtues of Geom Mirror (gmirror) in the past. It was probably the best way of implementing redundant storage from FreeBSD 5.3 (2004) until ZFS arrived in FreeBSD 7.0 in 2008. Even then, ZFS was heavyweight, and Geom Mirror remained better tested and more practical for many years afterwards.

The Geom system also has a RAID3 driver. RAID3 is weird: it’s the scheme that uses a dedicated parity drive. It works, but it was never popular. If you had a big FreeBSD system and wanted an array, it was probably better to use an LSI host bus adapter and have that manage it with mptutil. But for small servers, especially remotely managed ones, Geom Mirror was the best option. I’m still running it on a few twin-drive servers, and will probably continue to for some time to come.

The traditional Unix File System (UFS2) actually has a couple of advantages over ZFS. Firstly, it has much lower resource requirements. Secondly, and this is a big one, it updates files in place. That matters a great deal for random-access files such as databases or VM disk images, because the copy-on-write scheme ZFS uses fragments the disk like crazy. To maintain performance on a massively fragmented file system, ZFS needs a huge amount of cache RAM.

What random-access read/write files need is in-place updates. Database engines handle transaction grouping themselves to ensure that their data structures stay consistent. ZFS does the same thing at the file system level rather than the application level, which isn’t really good enough, because only the application knows what is and isn’t required. There’s no harm in ZFS doing it too, but it’s redundant effort, and the file fragmentation is a high price to pay.

So, for database type applications, UFS2 still rules. There’s nothing wrong with having a hybrid system with both UFS and ZFS, even on the same disk. Just mount the UFS /var onto the ZFS tree.
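
For example (the partition name da0p4 and the mount point are placeholders, not from any real setup), a single fstab line is all it takes to hang a UFS file system off a ZFS root:

/dev/da0p4   /var   ufs   rw   2   2

ZFS datasets mount themselves, so only the UFS partition needs an fstab entry. If the pool already has its own /var dataset you would want to stop it mounting (zfs set canmount=off on that dataset) so the two don’t fight over the mount point.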

But back to the twin-drive system: the FreeBSD installer doesn’t offer this as an option. So here’s a handy dandy script wot I rote to do it for you. Boot off a USB stick or whatever and run it.

Script to install FreeBSD on gmirror


Use as much or as little as you like.

At the beginning of the script I define the two drives I will be using. Obviously change these! If the disks are not blank it might not work: the script tries to destroy the old partition data, but you may need to do more if the drives were previously set up with something unusual.

Be careful – it will delete everything on both drives without asking!
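
If a disk has been used for ZFS or an old mirror before, something along these lines will usually clear the leftovers first. This is just a sketch, not part of the script, and the commands will print harmless errors on a disk that never carried the metadata:

gmirror clear $D0        # wipe any old gmirror metadata
zpool labelclear -f $D0  # wipe any old ZFS labels
gpart destroy -F $D0     # then destroy the partition table, as the script does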

Read the comments in the script. I have set it up to use an 8G UFS partition, but if you leave out the “-s 8g” the final partition will use all the remaining space, which is probably what you want. For debugging I kept it small.
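
For instance, to let the final partition take whatever space is left on the disk, the corresponding line in the script simply becomes:

gpart add -t freebsd-ufs $D0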

I have put everything on a single UFS partition. If you want separate /, /usr and /var then you will need to modify the script to suit, creating a mirror for each partition (and running newfs on each); there is a rough sketch below. The only other thing to note is that I’ve created a swap partition on each drive that is NOT mirrored, and configured the system to use both.
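
As a rough sketch of that (the sizes are arbitrary examples and I haven’t tested this variant as written), the extra partitions, mirrors and file systems would look something like this:

# Three UFS partitions on the first drive (then copied to the
# second with gpart backup/restore, exactly as in the script):
gpart add -t freebsd-ufs -s 4g $D0     # p3: /
gpart add -t freebsd-ufs -s 16g $D0    # p4: /usr
gpart add -t freebsd-ufs $D0           # p5: /var, rest of the disk

# One mirror and one file system per partition:
gmirror label -v m0 ${D0}p3 ${D1}p3
gmirror label -v m1 ${D0}p4 ${D1}p4
gmirror label -v m2 ${D0}p5 ${D1}p5
newfs -U /dev/mirror/m0
newfs -U /dev/mirror/m1
newfs -U /dev/mirror/m2

# And the matching fstab entries:
# /dev/mirror/m0  /     ufs  rw  1 1
# /dev/mirror/m1  /usr  ufs  rw  2 2
# /dev/mirror/m2  /var  ufs  rw  2 2

The loader.conf line in the script already points vfs.root.mountfrom at m0, which is still the root file system here.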

I have not set up everything on the new system, but it will boot and you can configure the rest by hand as you need it. I like to be able to connect over the network and log in as an admin user straight away, so the script creates an “admin” user with the password “password” and enables the ssh daemon. As you probably know, FreeBSD names its Ethernet interfaces after the driver, and you don’t know in advance which one you’ll get, so I just have it try DHCP on every likely interface. Edit the rc.conf file to suit once it’s running.
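
Once you know which interface actually came up, a typical static setup (the interface name and addresses here are made up) would replace the DHCP lines in rc.conf with something like:

ifconfig_em0="inet 192.168.1.10 netmask 255.255.255.0"
defaultrouter="192.168.1.1"
sshd_enable="YES"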

If base.txz and kernel.txz are already in the current directory, fine, but at present the script simply downloads them regardless.
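
If you would rather it only downloaded what is missing, the two fetch lines could be guarded like this (a small tweak, not in the version below):

[ -f kernel.txz ] || fetch https://download.freebsd.org/ftp/releases/amd64/14.2-RELEASE/kernel.txz
[ -f base.txz ]   || fetch https://download.freebsd.org/ftp/releases/amd64/14.2-RELEASE/base.txz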

And finally, I call my mirrors m0, m1, m2 and so on. Some people like to use gm0. It really doesn’t matter what you call them.

#!/bin/sh
# Install FreeBSD on two new disks set up as a gmirror
# FJL 2025
# Edit stuff in here as needed. At present it downloads
# FreeBSD 14.2-RELEASE and assumes the disks
# in use are ada0 and ada1

# Fetch the OS files if needed (and as appropriate)
fetch https://download.freebsd.org/ftp/releases/amd64/14.2-RELEASE/kernel.txz
fetch https://download.freebsd.org/ftp/releases/amd64/14.2-RELEASE/base.txz

# Disks to use for a mirror. All will be destroyed! Edit these. The -xxxx
# is there to save you if you don't
D0=/dev/da1-xxxxx
D1=/dev/da2-xxxxx

# User name and password to set up initial user.
ADMIN=admin
ADMINPASS=password

# Make sure the geom mirror module is loaded.
kldload geom_mirror

# Set up the first drive
echo Clearing $D0
gpart destroy -F $D0
dd if=/dev/zero of=$D0 bs=1m count=10

# Then create p1 (boot), p2 (swap) and p3 (ufs)
# Note the size of the UFS partition is set to 8g. If you delete
# the -s 8g it will use the rest of the disk by default. For testing
# it's better to have something small so newfs finishes quick.

echo Creating GPT partition scheme on $D0
gpart create -s gpt $D0
gpart add -t freebsd-boot -s 512K $D0
gpart add -t freebsd-swap -s 4g $D0
gpart add -t freebsd-ufs -s 8g $D0

echo Installing boot code on $D0
# -b installs protective MBR, -i the Bootloader.
# Assumes partition 1 is freebsd-boot created above.
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 $D0

# Set up second drive
echo Clearing $D1
gpart destroy -F $D1
dd if=/dev/zero of=$D1 bs=1m count=10

# Copy partition data to second drive and put on boot code
gpart backup $D0 | gpart restore $D1
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 $D1

# Mirror partition 3 on both drives
gmirror label -v m0 ${D0}p3 ${D1}p3

echo Creating file system
newfs -U /dev/mirror/m0
mkdir -p /mnt/freebsdsys
mount  /dev/mirror/m0 /mnt/freebsdsys

echo Decompressing Kernel
tar -x -C /mnt/freebsdsys -f kernel.txz
echo Decompressing Base system
tar -x -C /mnt/freebsdsys -f base.txz

# Tell the loader where to mount the root system from
echo 'geom_mirror_load="YES"' > /mnt/freebsdsys/boot/loader.conf
echo 'vfs.root.mountfrom="ufs:/dev/mirror/m0"' \
>> /mnt/freebsdsys/boot/loader.conf

# Set up fstab so it all mounts.
echo $D0'p2 none swap sw 0 0' > /mnt/freebsdsys/etc/fstab
echo $D1'p2 none swap sw 0 0' >> /mnt/freebsdsys/etc/fstab
echo '/dev/mirror/m0 / ufs rw 1 1' >> /mnt/freebsdsys/etc/fstab

# Enable sshd and make ethernet interfaces DHCP configure
echo 'sshd_enable="YES"' >/mnt/freebsdsys/etc/rc.conf
for int in em0 igb0 re0 bge0 alc0 fxp0 xl0 ue0 cxgbe0 bnxt0 mlx0
do
echo 'ifconfig_'$int'="DHCP"' >>/mnt/freebsdsys/etc/rc.conf
done

# Create initial user suitable for ssh login, in the wheel
# group so it can su to root. The password hash is generated
# with openssl and fed to pw on stdin (-H 0).
pw -R /mnt/freebsdsys useradd $ADMIN -G wheel -m
echo "$ADMINPASS" | openssl passwd -6 -stdin | pw -R /mnt/freebsdsys usermod -n $ADMIN -H 0

# Tidy up
umount /mnt/freebsdsys
echo Done. Remove USB stick or whatever and reboot.
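
Once the new system has booted, it’s worth checking that the mirror is healthy; both providers should show as COMPLETE:

gmirror status m0

If a drive ever fails, gmirror forget and gmirror insert are the usual tools for swapping in a replacement; see gmirror(8).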

Western Digital Red Series review

I’ve got a SATA drive throwing bad sectors. Not good. It’s a WD Caviar Green, about a year old, but I’ve hammered it, and it was cheap. A drive throwing bad sectors is never good: once the problem is visible, it’s on the way out. I doubt WD would replace it under warranty, as it’s not in a Windows box and I therefore can’t download and run their diagnostic, but we’ll see about that.

And anyway, Western Digital launched the ideal replacement two weeks ago: the Red series. Unlike the Green, it’s actually designed to run 24/7, cool and reliable. They’re pitching it squarely at the NAS market, for RAID systems with five or fewer drives, they say. Perfect, then. And a sensible market to aim at, given that last month’s IDC low-end storage forecast predicted 80% growth in the small/home office NAS market over the next five years.

The Red series launches with 1TB, 2TB and 3TB versions, with 1TB per platter.

I checked the specifications with scan.co.uk, and they list 2ms access times too! Lovely! Hang on, that’s too damn good. I suspect someone at Scan went through the specification sheet to add the access time to their database and grabbed the only figure measured in milliseconds; it’s actually the duration of the shock the drive can withstand. WD doesn’t quote access times, or the spindle speed come to that (about 5,400rpm, judging by the hum).

Well, having now checked one of these beasts out, the access times are obviously something they’d want to keep quiet about. Compared with the Caviar Green, which is supposed to be a low-impact desktop drive, it’s about the same on writes and about 30% slower on reads. However, once the heads are in position it streams about 30% faster. That would be handy where single large files are being read, but not so brilliant if you’re jumping about the disk at the behest of multiple users, which is the intended market for this thing. Real-world performance remains to be seen, but I don’t think it’s going to be as quick as a Caviar Green, and they’re slow enough.

So why would anyone want one of these?

Compared to the Caviar Green, the Red is rated for 24/7 use. Compared to the Black series, or anyone else’s nearline drives, its performance is terrible, but it is cheaper, runs much cooler and draws less power.

If you want performance at this price point, the Seagate Barracuda drives are cheaper and a lot faster, but Seagate don’t rate them for continuous use. The Hitachi Deskstar, on the other hand, is rated for 24/7 operation even though it’s a desktop drive, and it outperforms the WD Red by quite a margin too. But hang on: WD recently acquired Hitachi’s hard drive operation, so that’s a WD drive too. So for performance go for the Barracuda, and for the best performance running 24/7 go for the Deskstar.

The WD Red is basically a low-performance nearline drive, except that it’s not actually rated as being as reliable as the real nearline drives. But it is claimed to be more reliable than the Green series, and the two run just as cool and just as quiet (subjectively). Is it worth the 35% price premium over the Green? Well, sitting here with a failing Green, the Red with its three-year warranty is looking attractive for my data warehousing application. This isn’t NAS; it’s specialised, and I need low-power (cool), reliable drives to stream large files on and off. They could be just the job for that.

As an afterthought, comparing them with the Black series, they also lack the vibration sensors that protect a drive in a data centre environment or a box chock-full of other drives. The idea of putting them in a rack server as a low-power alternative looks less attractive than it did.