FreeBSD, ZFS and Denial of Service

I’ve been using ZFS since FreeBSD 8, but I must be missing something. I know it’s supposed to be wonderful and all that, but I was actually pretty happy with UFS.

So what’s the upside to ZFS? Well, you get more error checking and correction. And it’s more “auto” when it comes to allocating disk space. But call me old-fashioned if you like; I don’t like “auto”.

Penguinistas might not “get” this next bit, but on a UNIX system you didn’t normally have One Big Disk. Instead you had several, and even if you only had one, you’d partition or slice it up so it looked like several. And then, of course, you’d mount disks or partitions onto the root file system wherever you wanted them to appear.
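For the record, that looked something like this on FreeBSD (a rough sketch; the disk name ada1 and the mount points are made up):

    gpart create -s gpt ada1                   # put a GPT table on the second disk
    gpart add -t freebsd-ufs -s 50G -l home ada1
    gpart add -t freebsd-ufs -l data ada1      # rest of the disk
    newfs -U /dev/gpt/home                     # a UFS filesystem on each partition
    newfs -U /dev/gpt/data
    mount /dev/gpt/home /home                  # mounted wherever you want them
    mount /dev/gpt/data /data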

For reliability, you could also create mirrors and striped RAIDs, put an FS on them and mount them wherever you wanted. And demount them, and mount them somewhere else, and so on.
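With GEOM that’s only a handful of commands (again a sketch, with made-up device names and mount points):

    gmirror load                        # geom_mirror kernel module
    gmirror label -v gm0 ada1 ada2      # two-disk mirror
    newfs -U /dev/mirror/gm0            # put an FS on it
    mount /dev/mirror/gm0 /srv          # mount it where you want it
    umount /srv                         # or demount it...
    mount /dev/mirror/gm0 /mnt/backup   # ...and mount it somewhere else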

ZFS does all this good stuff, but automatically, and often as One Big Disk. A good thing? Well… no.
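For comparison, the ZFS way looks roughly like this (pool and dataset names are made up). Note that the datasets aren’t partitions; they all draw on the same pool of space:

    zpool create tank mirror ada1 ada2   # one pool spanning both disks
    zfs create tank/home                 # datasets, not partitions
    zfs create tank/var
    zfs list                             # they all share the pool's free space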


First off, I like to know where and on which disk my data actually resides. I’m really uneasy with ZFS deciding for me. If ZFS loses it, I want to know where to find it. I also like having an FS on each drive or partition, so I can pull the drive out and mount it wherever I want to get data off – or move it from machine to machine. It’s my data, I’ll do what I want to with it, dammit!
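To be fair, a ZFS pool can be moved between machines too; it just has to travel as a unit. Roughly (pool name made up):

    # UFS: pull the drive, plug it into the other box, mount it
    mount /dev/ada2p1 /mnt
    # ZFS: export the pool on the old machine (if it's still alive),
    # then import it on the new one (-f if it wasn't exported cleanly)
    zpool export tank
    zpool import tank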

Secondly, with UFS I get to decide what hardware is used for each kind of file. Parts of the FS that are rarely used can be put on slow, cheap, huge disks. The database goes on a velociraptor or better, and the swap partitions – well! Okay, you can use a ZFS cache drive to automatically speed up the things that are actually used a lot, but I feel better doing it myself. I’m never really convinced that the ZFS cache drives are working.
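For what it’s worth, adding a cache device and checking whether it’s earning its keep looks roughly like this (pool and device names made up):

    zpool add tank cache ada3    # add an L2ARC cache device
    zpool iostat -v tank 5       # per-vdev stats every 5 seconds; the cache
                                 # line shows whether it's actually being hit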

And then you get the management issues with One Big Disk. With the old way, when an FS on a drive fills up, it is full. You can’t create more files on it. You either have to delete unwanted stuff or mount a bigger drive in its place. With One Big Disk, when it’s full, it’s also full. The difference is that you can’t write any data anywhere on the entire FS.
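You can see the difference at a glance (a sketch; the commands are standard, the pool layout is whatever you happen to have):

    df -h                                      # UFS: each mount has its own capacity
    zfs list -o name,used,avail,mountpoint     # ZFS: every dataset reports the same
                                               # AVAIL, because they all share the pool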

Take, for example, /var/log. Any UNIX admin with a bit of sense will have this in its own partition. If some script kiddie then did something that caused a lot of log file activity, eventually you’d run out of space in /var/log, but the rest of the system would still be alive. Yes, you can set a limit using ZFS dataset quotas, but who does? With UFS, the default installation process created partitions with sensible sizes; ZFS systems install with no quotas whatsoever. Working on the One Big Disk principle, ZFS satisfies the requests of any disk-eating process until there isn’t a single byte left anywhere, and then rolls over, saying the zpool is full. Or it would say it, if there were a monitor connected to the server in a data centre miles away and someone there to look at it.
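If you do bother, it’s only a couple of commands (zroot/var/log is the dataset name the FreeBSD installer creates by default; adjust for your own layout, and the sizes here are just examples):

    zfs set quota=2G zroot/var/log           # cap what the log dataset can eat
    zfs set reservation=1G zroot/var         # guarantee space for the rest of /var
    zfs get quota,reservation zroot/var/log  # check what's set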

Okay, most of this has perfectly good solutions using ZFS, and I’ve yet to have a disaster with a ZFS system that’s required me to move drives around, so I don’t really know how possible it is when the chips are down. And ZFS does have a nice unified way of doing stuff, rather than fiddling around with geom and the FS separately. But after a couple of years with FreeBSD 10, where it became practical to boot from ZFS, shouldn’t I be feeling a bit more enthusiastic about it?
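Replacing a failed disk is the obvious example of that unified way of doing stuff (sketched with made-up pool and device names):

    # ZFS: one tool covers the RAID layer and the filesystem
    zpool replace tank ada1 ada4
    # GEOM + UFS: the mirror is managed separately from the FS on top of it
    gmirror forget gm0
    gmirror insert gm0 ada4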

 

One Reply to “FreeBSD, ZFS and Denial of Service”

  1. I always use disk partitions for ZFS vdevs. Some data may need x-way mirrors; for other data, raidz or raidz2 will do.
    Hard drive speed also varies between the start and the end of the disk.
    On top of that, an “operator mistake” can ruin things: with just a single pool you get a total disaster, while with multiple pools things aren’t that bad.
    ZFS for the root FS isn’t that good an idea either; there have been multiple cases where “something changed” and the ZFS components went out of sync, with “no way” to mount the pools. I prefer multiple independent hard drives; should anything happen, it’s easy to swap the two drives and get a bootable environment again.
    newsyslog with size limits is your friend against logs overflowing the FS (two entries for each log, time and size, in newsyslog.conf; see the sketch below). ZFS is also good at compressing logs on the fly, and lz4 gives minimal overhead.
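Roughly what that looks like (a sketch; myapp.log is a made-up log file, and I’ve folded the size and time thresholds into a single entry, which newsyslog.conf(5) says rotates on whichever is hit first – fields are mode, count, size in KB, rotation time, flags):

    # /etc/newsyslog.conf
    # logfilename          mode count size when  flags
    /var/log/myapp.log     644  7    1024 @T00  C     # rotate at 1 MB, or nightly

And for the lz4 part: zfs set compression=lz4 zroot/var/log (dataset name as per the default FreeBSD layout).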
