Installing Apache 2.4 with PHP on FreeBSD for Drupal 8. It’s a Nightmare

I’ve been playing about with Drupal 8 (still in Beta), and one of its features is that it needs a recent version of PHP (5.5.9 or later). I have a server I keep for testing the latest whatever, and this includes Apache 2.4. So how hard can it be to compile in PHP?

Actually, it’s not straightforward. Apache 2.4 is fine, but PHP is another matter. First off, installing lang/php55 does not include mod_php for Apache. It’s not that the option to compile it hasn’t been set – the option has gone. With a bit of digging around you can find it elsewhere – in www/mod_php55. Don’t be fooled into thinking you just need to build and install that though…

You’ll probably end up with stuff like this in your httpd error log:

Call to undefined function session_name()
Call to undefined function hash()

Digging further you’ll find www/php55-session and security/php55-hash in there, and go off to build those too. Then wonder why it still isn’t working.

The clue can be found with this log file error:

PHP Warning: PHP Startup: Unable to load dynamic library '/usr/local/lib/php/20121212-zts/session.so' - Cannot open &quote;/usr/local/lib/php/20121212-zts/session.so&quote; in Unknown on line 0

(NB. The &quote appears in the log file itself!)

Basically, mod_php expects you to compile the ZTS (Zend Thread Safe) version of everything. And why wouldn’t you? Well it turns out that this important option is actually turned off by default so you need to configure the build to include it. Any extensions you’ve compiled up until now will not have been placed in a directory tagged with -zts, which is why it’s looking in the wrong place as shown by the error log.
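If you want to see where PHP is actually looking for its extensions, the command-line interpreter will tell you (a quick check, assuming the CLI was built from the same port options as the Apache module):

php -i | grep extension_dir
# a ZTS build reports a directory ending in -zts (e.g. /usr/local/lib/php/20121212-zts);
# a non-ZTS build reports the same path without the suffix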

If you’re reading this following a Google search, you’ve probably already fallen down the Pooh trap. You need to go back to lang/php55 and start again with the correct options. The best way to do this (in case you didn’t know) is:

make clean
make config
make
make install

When you run make config it’ll give you a chance to select ZTS, so do it.

Repeat this for www/mod_php55, and then go back and recompile www/php55-session, security/php55-hash and anything else you got wrong the first time. You don’t get the option to configure them, but they must be compiled again once the core of PHP has been built with ZTS.
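A quick sanity check once it’s all rebuilt (again assuming the CLI shares the port options) is:

php -i | grep "Thread Safety"
# should report: Thread Safety => enabled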

Incidentally, if you haven’t had this pain before, you will probably need to switch to using the new pkg system if you haven’t already. If you try to build without it, the port will put up a curt little note about it and go into sulk mode until you do. Unfortunately, on an older FreeBSD, any attempt to compile pkg will result in an O_CLOEXEC symbol undefined error in pkg.c. O_CLOEXEC is a flag to the open() system call that was added to POSIX in 2008. What it means is that if your process subsequently makes an exec call, the file handle will be closed automatically. It saves leaking fds if your execution path goes awry. But what’s the solution?

Well, if you’re using an older version of the kernel then it won’t support O_CLOEXEC anyway, so my fix is to delete it from the source and try again. It only appears once, and if the code is so sloppy that it doesn’t close the handle, it’s not the end of the world. The official answer is, of course, to upgrade your kernel.

If you are running Drupal 8, here’s a complete list of the ports you’ll need to compile:

lang/php55 (select ZTS option in the configuration dialogue)
www/mod_php55 (select ZTS option in the configuration dialogue)
www/php55-session
security/php55-hash
security/php55-filter
devel/php55-json
devel/php55-tokenizer (for Drupal 8)
databases/php55-pdo
databases/php55-pdo_mysql
textproc/php55-ctype
textproc/php55-dom
textproc/php55-simplexml
graphics/php55-gd
converters/php55-mbstring (not tested during setup)
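Once lang/php55 and www/mod_php55 have been built with ZTS, the rest of the list can be knocked off in one go with a loop along these lines (a sketch, assuming the standard /usr/ports tree and that the default options for each extension are acceptable):

for port in www/php55-session security/php55-hash security/php55-filter \
    devel/php55-json devel/php55-tokenizer databases/php55-pdo \
    databases/php55-pdo_mysql textproc/php55-ctype textproc/php55-dom \
    textproc/php55-simplexml graphics/php55-gd converters/php55-mbstring
do
    (cd /usr/ports/$port && make clean && make && make install) || break
done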

All good fun! This relates to Drupal 8.0.0 RC1 – it may be different with the final release, of course.

Docker on FreeBSD

Docker is available on FreeBSD. Yeah! Er. Hang on a minute – what’s the point?

People are talking about Docker a lot in the Linux world. It’s a system that allows a configured piece of software, together with all its ancillaries, to run in its own closed environment on any machine you choose. It’s not a VM – no emulation required. Well, not much. It’s much more efficient than running multiple kernels on a hypervisor (as VirtualBox or VMWare do).

But isn’t this one of the things Jails are for? Well, yes. It’s a kind of poor man’s jail system for the poor deprived Linux users. Solaris and FreeBSD have been doing this kind of thing for years with kernel support (i.e. out of the box, and a lot more efficiently).

So why should anyone be interested that FreeBSD also has Docker? Well, one of the things the Docker community has put together is a library of preconfigured applications you can just download and run. Given what a PITA it can be getting something running on a Linux box, which lacks a UNIX-like base system you can rely on, this does make sense. And running these pre-configured server applications on FreeBSD may be of interest, especially if you lack the in-house expertise to set them up yourself. But it won’t be all plain sailing. You need FreeBSD 11 (not yet released) to do it, together with the 64-bit Linux emulation library.

This does kind-of make sense. Stuff that’s currently Linux-only may be easier to deal with – I’m thinking Oracle here.

FreeBSD hr utility – human readable number filter (man page)

Several years ago I wrote a utility to convert numeric output into human-readable format – you know the kind of thing – 12345678 becomes 12M and so on. Although it was very clever in the way it dealt with really big numbers (zettabytes), and in spite of ZFS having really big numbers as a possibility, no really big numbers have actually come my way.

It was always a dilemma as to whether I should use the same humanize_number() function as most of the FreeBSD utilities, which is limited to 64-bit numbers as its input, or stick with my own rolling conversion. In this release, actually written a couple of years ago, I’ve decided to go for standardisation.

You can download it from here. I’ve moved it (24-10-2021) and it’s not on a prettified page yet, but the file you’re looking for is “hr.tar”.

This should work on most current BSD releases, and quite a few Linux distributions. If you want binaries, leave a note in comments and I’ll see what I can do. Otherwise just download, extract and run make && make install


Extracted from the man page:

NAME

hr — Format numbers in human-readable form

SYNOPSIS

hr [-b] [-p] [-ffield] [-sbits] [-wwidth] [file ...]

DESCRIPTION
The hr utility formats numbers taken from the input stream and sends them
to stdout in a format that’s human readable. Specifically, it scales the
number and adds an appropriate suffix (e.g. 1073741824 becomes 1.0G)

The options are as follows:

-b      Put a ‘B’ suffix on a number that hasn’t been scaled (for Bytes).

-p     Attempt to deal with input fields that have been padded with spaces for formatting purposes.

-wwidth      Set the field width to width characters. The default is four
(three digits and a suffix). Widths less than four are not normally useful.

-sbits  Shift the number being processed left by bits bits, i.e. multiply by 2^bits. This is useful if the number has already been scaled into units. For example, if the number is in 512-byte
blocks then -s9 will multiply the output number by 512 before scaling it. If the number was already in Kb use -s10, and so on.
In addition to specifying the number of bits to shift as a number you may also use one of the SI suffixes B, K, M, G, T, P, E
(upper or lower case).

-ffield      Process the number in the numbered field, with fields being numbered from 0 upwards and separated by whitespace.

The hr utility currently uses the humanize_number() function in the System Utilities Library (libutil, -lutil) to format the numbers. This will repeatedly divide the input number by 1024 until it fits into a width of three digits (plus suffix), unless the width is modified by the -w option. Depending on the number of divisions required it will append a k, M, G, T, P or E suffix as appropriate. If the -b option is specified it will append a ‘B’ if no division is required.

If no file names are specified, hr will get its input from stdin. If ‘-‘ is specified as one of the file names hr will read from stdin at this point.

If you wish to convert more than one field, simply pipe the output from one hr command into another.
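For example, something like this should humanise both the size and used columns of df output (fields 1 and 2, which df -k reports in kilobytes, hence the -s10):

df -k | hr -f1 -s10 | hr -f2 -s10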

By default the first field (i.e. field 0) is converted, if possible, and the output will be four characters wide including the suffix.

If the field being converted contains non-numeral characters they will be passed through unchanged.

Command line options may appear at any point in the line, and will only take effect from that point onwards. This allows different options to apply to different input files. You may cancel an option by appending a ‘-‘ to it. For consistency, you can also set an option explicitly by appending a ‘+’. Options may also be combined in a string. For example:

hr -b file1 -b- file2

Will add a ‘B’ suffix when processing file1 but cancel it for file2.

hr -bw5f4p file1

Will set the B suffix option, set the output width to 5 characters, process field 4 and remove excess padding from in front of the original digits.

EXAMPLES
To format the file size column of an ls -l listing use:

ls -l | hr -p -b -f4

This output will be very similar to that of “ls -lh”. However, the -h option isn’t available with the -ls option of the “find” command; you can use hr to achieve the same effect:

find . -ls | hr -p -f6

Finally, if you wish to produce a sorted list of directories by size in human format, try:

du -d1 | sort -n | hr -s10

This assumes that the output of du is the disk usage in kilobytes, hence the need for the -s10

DIAGNOSTICS
The hr utility exits 0 on success, and >0 if an error occurs.

FreeBSD ports build fails because of gfortran

I’ve been having some fun. I wanted to install the latest ported versions of Apache and PHP for test purposes, so set the thing compiling. There are a couple of gotchas!

First off, the current ports tree will throw errors on the Makefile due to invalid ‘t’ options and other fun things. That’s because make has been updated. In order to prevent you from using old “insecure” versions of FreeBSD, it’s considered “a good thing” to cause the build to break. I’m not kidding – it’s there in the bug reports.

You can get around this by extracting the new version of make from the 8.4 ISO image (the oldest updated version) – just copy it over the old one.

Some of the ports also require unzip, which you can build and install from its port in archivers.

Now we get to the fun part. Because the current system uses CLANG but some of the ports disagree, when you go to build things like php5-extensions (I think the gd library in particular) it depends on gcc, the GNU C compiler, and other GNU tools – so it tries to build them. The preferred version appears to be 4.7, so off it goes. Until it goes crunch. On inspection it was attempting to build Fortran at the time. Fortran? It wasn’t obvious why it broke, but I doubted I or anyone else wanted stodgy old Fortran anyway, so why was it being built?

If you look in the config options you can choose whether or not you want Java (no thanks). But in the Makefile it lists:

LANGUAGES:=    c,c++,objc,fortran

I’m guessing that’s Objective C in there – no thanks to that too. Unfortunately, removing them from this assignment doesn’t solve the problem, but it helps. The next problem comes when, thanks to the new binary package system, it tries to make a tarball of the Fortran stuff it never compiled. I haven’t found out how this mechanism works, but if you create a couple of empty directories and an empty file for the man page it’ll proceed, oblivious. I haven’t noticed any adverse effects yet.

A final Pooh trap if you’re trying to build Apache 2.4, mod_php5 and php5-extensions is the Zend Thread-Safe option (ZTS). If you’re not consistent with these then Apache/mod_php will fail to load the extensions and print a warning in httpd-error.log. If you build www/mod_php5 you’ll see a warning like:

/!\ WARNING /!\
!!! If you have a threaded Apache, you must build lang/php5 with ZTS support to enable thread-safety in extensions !!!

Naturally, this was scary enough to make me stop the build and run “make config” to select the option. Unfortunately it’s also an option on lang/php5, and if you didn’t set it there then it’ll go crunch. Many, many thanks to Matthew Seaman from FreeBSD.org, who figured out what I’d done wrong.

How to hack UNIX and Linux using wildcards

Leon Juranic from Croatian security research company Defensecode has written a rather good summary of some of the nasty tricks you can play on UNIX sysadmins by the careful choice of file names and the shell’s glob functionality.

The shell is the UNIX/Linux command line, and globbing is the shell’s wildcard argument expansion. Basically, when you type in a command with a wildcard character in the argument, the shell will expand it into any number of discrete arguments. For example, if you have a directory containing the files test, junk and foo, specifying cp * /somewhere-else will expand to cp test junk foo /somewhere-else when it’s run. Go and read a shell tutorial if this is new to you.

Anyway, I’d thought most people knew about this kind of thing but I was probably naïve. Leon Juranic’s straw poll suggests that only 20% of Linux administrators are savvy.

The next alarming thing he points out is as follows:
Another interesting attack vector similar to previously described 'chown'
attack is 'chmod'.
Chmod also has --reference option that can be abused to specify arbitrary permissions on files selected with asterisk wildcard.

Chmod manual page (man chmod):
--reference=RFILE
use RFILE's mode instead of MODE values

 

Oh, er! Imagine what would happen if you created a file named “--reference=myfile”. When the root user ran “chmod 700 *” it’d end up setting the access permissions on everything to match those of “myfile”. chown has the same option, allowing you to take ownership of all the files as well.
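If you want to see it for yourself, something along these lines demonstrates the principle (GNU chmod only; the directory and file names are just for the demonstration):

# in a scratch directory, as an unprivileged user
mkdir /tmp/globdemo && cd /tmp/globdemo
touch myfile victim1 victim2
chmod 444 myfile                 # the mode the "attacker" wants to impose
touch -- '--reference=myfile'    # the booby-trapped file name
chmod 700 *                      # expands to: chmod 700 --reference=myfile myfile victim1 victim2
ls -l                            # victim1 and victim2 should now be mode 444, not 700
# (chmod will probably also grumble that there's no file called '700')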

It’s funny, but I didn’t remember seeing those options to chmod and chown. So I checked. They don’t actually exist on any UNIX system I’m aware of (including FreeBSD). On closer examination they’re an enhancement in the GNU versions of chmod and chown shipped with Linux – the shell just does the wildcard expansion – and many a good idea there turns out to be a new vulnerability. That said, I know of quite a few people using the GNU tools on UNIX.

This doesn’t detract from his main point – people should take care over the consequences of wildcard expansion. The fact that those cool Linux guys didn’t see this one coming proves it.

This kind of stuff is (as he acknowledges) nothing new. One of the UNIX administrators I work with insists on putting a file called “-i” in every directory to stop wild-card file deletes (-i as an argument to rm forces an “Are you sure?” prompt on every file). And then there’s the old chestnut of how to remove a file with a name beginning with a ‘-‘. You can easily create one with:

echo test >-example
Come back tomorrow and I’ll tell you how to get rid of it!

Update 2nd July:

Try this:
rm ./-example
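Most modern utilities also understand ‘--’ as an end-of-options marker, so this should work too:

rm -- -example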

Restoring cPanel backup to system without cPanel

cPanel is a web front end for “reseller” hosting accounts, and it’s very popular with web designers reselling hosting services. It’s very simple to use, and allows the web designers to set up virtual hosting accounts without giving them any real control over the server – self-service and foolproof. It’s also an expensive thing to license. It makes sense for a self-service low-cost hosting provider, where the customers do all the work, but for small-scale or “community” hosting providers you’re talking big money.

I’ve just had to rescue a number of web sites from a developer using one of these hosting services, and they’ve got a lot of sites. And the only access to the virtual server is through cPanel (and FTP to a home directory). I logged in to cPanel and there’s an option to create a backup of everything in one big tarball, and this looked like just what I wanted to get them all at once. However, it was designed to upload and unpack in another cPanel environment.

Getting the home directories out is pretty straightforward. They end up in a directory called “homedir”, and you just move it to where you want them – i.e. ~username/www/. But how about restoring the dump of the MySQL databases? Actually, that’s pretty simple too. They’re in a directory called “mysql”, but instead of one big dump, each database is in its own file – and without the create commands, which are in another file with the extension “.create” instead of “.sql”. Loading them all manually is going to be a time-wasting PITA, but I’ve worked out that the following shell script will do it for you if you run it while in the backup’s mysql directory:

for name in `find . -name "*.create"`; do
cat $name `echo $name | sed s/.create/.sql/` | mysql
done

You obviously have to be in the directory with the files (or edit find’s specification) and logged in as root (or add the root login as a parameter to the mysql utility).
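If you’re not already logged in as root, the same loop with explicit credentials would look something like this (user and password are placeholders, and be aware that a password on the command line is visible to other users via ps):

for name in `find . -name "*.create"`; do
cat $name `echo $name | sed s/.create/.sql/` | mysql -u root -p'secret'
done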

You’ll also want to set the user/password combination on these. The tarball will have a file called mysql.sql in its root directory. Just feed it in thus:

mysql < mysql.sql

Please be aware that I figured this out looking at the files in the dump and NOT by reading any magic documentation. It works on the version of cPanel I encountered, and I was restoring to FreeBSD. By all means add a comment if you have a different experience when you try it, and don’t go this way if you’re not sure how to operate a MySQL database or you could do a lot of damage!

The final hurdle is configuring Apache for all these new sites. cPanel creates a directory in the dump called “userdata”, and this seems to contain a file with information about each web site. I decided to automate and wrote the following script:


#!/bin/sh

# Convert cPanel dump of "userdata" in to a series of Apache .conf files
# (c) F J Leonhardt 17 April 2014 - www.fjl.co.uk
# You may use this script for your own purposes, but must not distribute it without the copyright message above left intact

# Directory to write config files
# Normally /usr/local/etc/apache22/Include but you might want to write
# them somewhere else to check them first!

confdir=/usr/local/etc/apache22/Include

# oldhome and newhome are the old and new home directories (where the web sites are stored)
# oldtestname and newtestname are used (together with a sub-domain) to implement test web sites before
# they have a real domain name pointed at them. They will be substituted in server names and aliases

oldhome=/data03/exampleuser/public_html
newhome=/home/exampleuser/www
oldtestname=exampleuser.oldisp.co.uk
newtestname=newuser.fjl.org.uk

# Now some static information to add to all virtual hosts
# vhost is the IP address or hostname you're using for virtual hosting (i.e. the actual name of the server)
# serveradmin is the email address of the server admin
# logdir is the directory you want to put the log files in (assuming you're doing separate ones). If
# you don't want separate logs, remove the ErrorLog/CustomLog lines from the section that writes the .conf file

vhost=web.exampleuser.com
serveradmin=yourname@example.com
logdir=/var/log

getvalue()
{
grep ^$1: $name | sed s!$1:\ !! | sed s!$oldtestname!$newtestname!
}

# Start of main loop We DO NOT want to process a special file in the directory called "main" so
# a check is made.

for name in `ls`; do
if [ "$name" != "main" ]
then
echo -n "Processing $name "

if grep ^servername: $name >>/dev/null
then

# First we get some info from the file

sitename=`getvalue servername`
serveralias=`getvalue serveralias`
documentroot=`getvalue documentroot`

# Below we're setting the .conf pathname based on the first part of the file name (up to the first '.')
# This assumes that the file names are in the form websitename.isp.test.domain.com
#
# If the sitename in the source file is actually the name of the site (rather than a test alias) use
# this instead with something like:
#
# Basically, you want to end up with $givensitename as something meaningful when you see it
#
#givensitename=$sitename

givensitename=`echo $name | cut -d \. -f1`

confname=$confdir/$givensitename.conf

echo to $confname

echo "" >$confname
echo -e \\tServerAdmin $serveradmin >>$confname
echo -e \\tServerName $sitename >>$confname
for aname in $serveralias; do
echo -e \\tServerAlias $aname >>$confname
done
echo -e \\tDocumentRoot `echo $documentroot | sed s!$oldhome!$newhome!` >>$confname
echo -e \\tErrorLog $logdir/$givensitename-error.log >>$confname
echo -e \\tCustomLog $logdir/$givensitename-access.log combined >>$confname
echo "
" >>$confname

#from check that servername present
else
echo "- ignoring file - no servername therefore wrong format?"
fi

#fi from check it wasn't called "main"
fi
done

All of the above assumes you’re familiar with setting up virtual hosting on an Apache 2.2 http server in a UNIX-like environment. It’s just too complicated to explain that in a single blog post. Drop me a line if you need assistance.
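For what it’s worth, the way I ran it was roughly this (the script name is whatever you saved it as, and the paths follow the example above):

cd /path/to/backup/userdata
sh /root/makeconfs.sh      # writes one .conf file per site into $confdir
apachectl configtest       # check Apache is happy with the new files
apachectl graceful         # reload the configuration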

FreeBSD 10.0 and ZFS

It’s finally here: FreeBSD 10.0 with ZFS. I’ve been pretty happy for many years with twin-drive systems protected using gmirror and UFS. It does what I want. If a disk fails it drops it out and sends me an email, but otherwise carries on. When I put in a replacement blank disk it can rebuild the mirror. If I take one disk out, put it into another machine and boot it, it’ll wake up happy. It’s robust!

So why mess around with ZFS, the system that puts your drives into a pool and decides where things are stored, so you don’t have to worry your pretty little head about it? The snag is that the old ways are dying out, and sooner or later you’ll have no choice.

Unfortunately, the transition hasn’t been that smooth. First off you have to consider 2Tb+ drives and how you partition them. MBR partition tables have difficulties with the number of sectors, although AF drives with larger sectors can bodge around this. It can get messy though, as many systems expect 512b sectors, not 4k, so everything has to be AF-aware. In my experience, it’s not worth the hassle.

The snag with the new and limitless “GPT” scheme is that it keeps safe copies of the partition table at the end of the disk, as well as the start. This tends to be where gmirror stores its metadata too. You can’t mix gmirror and GPT. Although the code is hackable, I’ve got better things to do.

So the good news is that it does actually work as a replacement for gmirror. To test it I stuck two new 3Tb AF drives into a server and installed 10.0 using the new procedure, selecting the “ZFS on root” menu option and GPT partitioning. This is shown in the menu as “Experimental”, but seems to work. What you end up with, if you select two drives and say you want a ZFS mirror, is just that.

Being the suspicious type, I pulled each of the drives in turn to see what happened, and the system continued without missing a beat, just like gmirror did. There were also a couple of nice surprises when I stuck the drives back in and “onlined” them:

First off, the rebuild was almost instant. Secondly, HP’s “non-hot-swap” drive bays work just fine for hot-swap under FreeBSD/ZFS. I’d always suspected this was a Windoze nonsense. All good news.
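For reference, the put-it-back part amounts to something like this (a sketch; the pool name zroot and the device da1p3 are just what a default install might use):

zpool status zroot            # shows the pool degraded, with one device missing
zpool online zroot da1p3      # after re-inserting the disk
zpool status zroot            # watch the (very quick) resilver complete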

So why is the rebuild so fast? It’s obvious when you consider what’s going on. The GEOM system works at block level. If the mirror is broken it has no way of telling which blocks are valid, so the only option is to copy them all. A major feature of ZFS, however, is that the directories and files have validation codes in the blocks above, going all the way to the root. Therefore, by starting at the root and chaining down, it’s easy to find the blocks containing changed data and copy just those. Nice! Getting rid of separate volume managers and file systems has its advantages.

So am I comfortable with ZFS? Not yet, but I’m a lot happier with it now it’s a complete, integrated solution. Previously I’d only been using it on data drives in multi-drive configurations, as although it was possible to install root on ZFS, it was a real PITA.

Pipe stdout to more than one process on FreeBSD

There are odd times when you may wish to use the stdout of a program as the stdin to more than one follow-on process. bash lets you do this using a > to a command instead of just a file. I think the idea is that you just have to make sure the command is in ( ) and it works. One up to bash, but what about something that will work on standard shells?

The answer is to use a named pipe or fifo, and it’s a bit more hassle – but not too much more.

As an example, let’s stick to the “hello world” theme. The command sed s/greeting/hello/ will replace the word “greeting” on stdin with “hello” on stdout – everything else will pass through unchanged. Are you okay with that? Try it if you’re not comfortable with sed.

Now I’m going to send the same stdout to two different sed instances at once:

sed s/greeting/hello/
sed s/greeting/world/

To get my stdout for test purposes I’ll just use “echo greeting”. To pipe it to a single process we would use:

echo greeting | sed s/greeting/hi/

Our friend for the next part is the tee command (as in a T-piece in plumbing). It copies stdin to two different places: stdout and (unfortunately for us) a file. Actually it can copy it to as many files as you specify, so it should probably have been called “manifold”, but this is too much to ask of an OS design that spells create without the trailing ‘e’.

Tee won’t send output to another process’s stdin directly, but the files can be fifos (named pipes). In older versions of UNIX you created a pipe with the mknod command, but since FreeBSD moved to character-only device files this is deprecated and you should use mkfifo instead. Solaris also uses mkfifo, and it came in as far back as 4.4BSD, but if you’re using something old or weird check the documentation. It’ll probably be something like mknod <pipename> p.

Here’s an example of it in action, solving our problem:

mkfifo mypipe
sed s/greeting/hello/ < mypipe &
echo greeting | tee mypipe | sed s/greeting/world/  
rm mypipe

It works like this: First off we create a named pipe called mypipe. Next (and this is the trick), we run the first version of sed, specifying its input to come from “mypipe”. The trailing ‘&’ is very important. In case it had passed you by until now, it means run this command asynchronously – or in background mode. If we omitted it, sed would sit there waiting for input it would never receive, and we wouldn’t get the command prompt back to enter the further commands.

The third line has the tee command added to send a copy of stdout to the pipe (where the first sed is still waiting). The other copy is piped in the normal way to the second sed.

Finally we remove the pipe. It’s good to be tidy, but it doesn’t matter if you want to leave it there and use it again.
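Incidentally, the bash shortcut mentioned at the top boils down to something like this – process substitution, which isn’t portable to a plain sh:

echo greeting | tee >(sed s/greeting/hello/) | sed s/greeting/world/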

As a refinement, pipes with names like “mypipe” in the working directory could lead to trouble if you forgot to delete it or if another job picks the same name. Therefore it’s better to create them in the /tmp directory and add the current process ID to the name in order to avoid a clash. e.g.:

mkfifo /tmp/mypipe.1.$$

$$ expands to the process-ID, and I added a .1. in the example so I can expand the scheme to have multiple processes receiving the output – not just one. You can use tee to send to as many pipes and files as you wish.
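Putting all that together, a sketch that fans one stdout out to three commands looks something like this:

p1=/tmp/mypipe.1.$$
p2=/tmp/mypipe.2.$$
mkfifo $p1 $p2
sed s/greeting/hello/ <$p1 &
sed s/greeting/world/ <$p2 &
echo greeting | tee $p1 $p2 | sed s/greeting/again/
rm $p1 $p2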

If you run the original two-pipe example, you’ll probably get “hello world” output on two lines, but you might get “world hello”. The jobs have equal status, so there’s no way of knowing which one will complete first, unless you decide to put a sleep at the start of one to force the issue.

When I get around to it a more elaborate example might appear here.

Using MX records to create backup mail server

There’s a widely held misunderstanding about “main” and “backup” MX records in the web developer world. The fact is that there is no such thing! Anyone who tells you different is plain wrong, but a lot of web developers believe this is the case, and some ISPs give in and provide them as it’s simpler than arguing. It’s possible to use two MX records in some crazy scheme that looks like a backup server, but in practice it does very little to help and quite possibly rather a lot to hinder. It won’t make your email more robust in practical terms.

If you are using an email server at a data centre, with reasonable expectation of an always-on connection, you need a single MX record. If your processing requirements are great you can have multiple records at the same level to spread the load between peered servers, but none would be a backup any more than any other. Senders simply get one server at random. I have a single MX record.

“But you must have a backup!”, is the usual response. I do, of course, but it has nothing to do with having multiple MX records. Let me explain:

A domain’s MX record gives the address of the server to which its email should be sent. In practice, this means the company’s mail server; or if they have multiple servers, the incoming one. Most companies have one mail server address, and this is fine. If that mail server dies it needs to be repaired or replaced, and the replacement gets the same address.

But what of having a second MX record with an alternative, lower-priority server? It may sound good, but it’s nuts. Think about it – the company’s mail server is where the mail ends up. It’s where the users expect to log in and read it. If you have an alternative server, the mail will go there instead, but the users won’t be able to read it. This assumes that the backup is on a different site, available if the first site goes down. If it’s on the same site it’s even more pointless, as it will be affected by the same connectivity issues that took the first one offline. Users will have their existing mail on the broken server, new mail will be on a different server, and you’ll be in a real bugger’s muddle trying to reconcile the two later. It makes much more sense to just fix the broken one, or switch in a backup at the same location and on the existing IP address. In extremis, you can change the MX record to point to a replacement server elsewhere.

There’s a vague idea that if you don’t have a second MX, mail will be lost. Nothing could be further from the truth. If your one and only mail server is off-line, the sender’s server will queue up the message and keep trying until it comes back – it will normally do this for a week. It won’t lose it. Most mail servers will report back to the sender if they haven’t been able to get through for four hours, so the sender will know there’s a problem and won’t worry that you haven’t replied.

If you have two mail servers, one on a different site, the secondary server will start receiving emails when the first one goes off-line. It’ll just queue them up, waiting to forward them to the primary one, but in this case the sender won’t get notification of the delay. Okay, if the primary server is off-line for more than a week it will prevent mail loss – but why would the primary server possibly be off-line for a week? The company won’t function unless it’s repaired quickly.

In the old days of dial-up, before POP3 came in to being, some people did use SMTP in a way where a server in a data centre forwarded to the remote site when it connected. I remember Cliff Stanford had a PC mail client called Turnpike that did just this in the early days of Demon. But SMTP was designed for always-on connections and POP3 was designed for dial-up, so POP3 won out.

Let’s get real: There are two likely scenarios for having a mail server off-line. Firstly, the hardware could be dead. If so, get it repaired, and in less than a week. Secondly, the line to the server could be down, and this could be medium-term if someone with a JCB has done a particularly “good job” on it. This gives you a week to make alternative arrangements and direct the mail down another line, which is plenty of time.

So, the idea of having a “backup” MX is pointless. It can only send mail to an off-site server; it doesn’t prevent any realistic mail loss and your email ends up where your users can’t get it until the primary server is repaired. But is there any harm in having one if it makes you feel better?

Actually, in practice, yes. It does make matters worse. In theory mail will just pile up on a spare server and get forwarded later. However, this spare server probably isn’t going to be up to the same specification as the primary one. They never are – they sit there idling, with nothing to do nearly all the time. They won’t necessarily have the fastest line; their spam and virus filtering will be out-of-date or non-existent; and they have a finite amount of disk space to absorb mail. This can really matter if you end up storing and forwarding a large amount of spam, as is often the case these days. The primary server can be configured to discard it quickly, but this isn’t a job appropriate for the secondary one. So it builds up until its ancient and meagre disk space is exhausted, and then it tells the sender to give up trying due to a “disk full” error – and the email is bounced off into the ether. It’d have been much better to leave it on the sender’s server in the first place.

There are other security issues with having a secondary server. One problem comes with spam filtering. This is best done at the end of the line; it’s not the place of a secondary server to determine what gets delivered and what doesn’t. For starters, it doesn’t see the corpus of legitimate emails, so it won’t be so adept at comparing and sorting. It’s probably going to be some old spare kit that’s under-powered for modern spam processing anyway. Worse, when it stores and forwards the mail, the primary server will see it as coming from a “friend” rather than from a dubious source in a lawless part of the Internet. Spammers do use secondary MX records as a back door to get around virus and spam filters for this very reason.

You could, of course, specify and configure a secondary mail server to be up to the job, with loads of disk space to prevent a DoS attack and fully functional spam filters, regularly maintained and sharing Bayesian analysis data and local rules with the actual server. And then have this expensive resource sitting there doing nothing all day but converting electricity in to heat. Realistically, it’s not going to happen.

By now you may be wondering: if multiple MX records are so pointless, why do they exist? It’s one of those Internet myths; a paradigm that users feel comfortable with, without questioning the technology behind it. There is a purpose, but it’s not “backup”.

When universal Internet email was new, messages would be sent to a user “@” computer, and computers were normally shared, so each would have multiple possible users. The computer would receive the email and put it in the mailbox corresponding to the user part of the address.

When the idea of sending email to a domain rather than a specific server came into being, MD and MF records also came into being. An MD record gave the IP address of the server where mail should end up (the Mail Destination). An MF record, if it existed, allowed the mail to be forwarded through another machine first (Mail Forwarder). This was sometimes necessary, for example if the MD was on a dial-up connection or behind a firewall and unable to accept direct connections over the Internet. The mail would go to the MF instead, and the MF would send it on to the MD – presumably by batching it up and opening a line, transiting a firewall or using some other non-public mechanism.

In the mid 1980s it was felt that having both MD and MF records placed double the load on DNS servers, so unified MX records, which could be read with a single lookup, were born. To allow for multiple levels of mail forwarding through firewalls they were given a preference value (a 16-bit number, so plenty of levels), although if you need more than three for any scheme you’re just being silly.

Unfortunately, the operation of MX records, rather than the explicitly named MF and MD, is a bit subtle. So subtle it’s often very misunderstood.

The first thing you need to understand is that email delivery should be controlled from the DNS for the domain, NOT from the individual mail servers that exist on that domain. This may not be obvious, but this is how it’s designed to work, and when you think of it, a central point of control is a good thing.

Secondly, DNS records should be universal. Every computer on the Internet, making the same DNS query, should get the same result. With the later addition of asymmetric NAT, there is now an excuse for varying this, but you can come unstuck if you get it wrong and that’s not what it was designed for.

If you want to reconfigure the route that mail takes to a domain, you do it by editing the single master DNS record (zone file) for that domain – you leave the multiple mail servers alone.

Now consider this problem: an organisation (called “theorganisation”) has a mail server called A. It’s inside the theorganisation’s firewall, for its own protection. Servers on the Internet can’t talk to A directly, because the firewall won’t let them through, but local users send and receive mail through it all day long. To receive external mail there’s another server called B, this time outside the firewall. A rule on the firewall allows specific traffic from B to get to A. The relevant part of the zone file may look something like this (at least logically):

MX 1 A.theorganisation
MX 2 B.theorganisation

So how do these simple lines tell the world, and servers A and B, how to operate? You need to understand the rules…

When another server, which I shall call C, sends a message to alice@theorganisation it will look up the MX records for theorganisation, and see the records above. C will then attempt to contact alice at the lowest numbered MX it finds, which points to server A. If C is located within the same department, it will be within the firewall and mail can be delivered directly; otherwise the firewall will block it. If C can’t contact A because of the firewall it will try the next highest on the list, in this case B. B is on the Internet, and will accept connections from C (and anyone else). The message arrives at B for Alice, but alice is not a user of B. However, B knows that it’s not the final destination for mail to theorganisation because the MX record says there’s a lower numbered server called A, so it attempts to forward it there. As B is allowed through the firewall, it can deliver the message to A, where it’s finally put in alice’s mailbox.

This may sound a bit complicated, but the rules for MX records can be made to route mail through complex paths simply by editing the DNS zone file, and this is how it’s supposed to work. The DNS zone file MX records control the path the mail will take. If you try to use the system for some contrary purpose (like a poor-man’s backup), you’re going to come unstuck.
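Incidentally, you can see exactly what a sending server sees by querying the MX records for a domain yourself – something like this (the domain is a placeholder; on a stock FreeBSD 10 box use drill, as host is no longer in the base system):

host -t mx example.com
drill mx example.com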

There is another situation where you might want multiple MX records: if your mail server has multiple network interfaces on different (redundant) lines. If the MX priority value is the same for both, each IP address will (or should) be used at random, but if you have high-cost and low-cost lines you can change the priority to favour one route over another. With modern routing this artifice may not be necessary – let the router choose the line and mangle the IP addresses into one for you. But sometimes a simple solution works just as well.

In summary, MX record forwarding is not designed for implementing backup mail servers, and any attempt to use it for that purpose is going to do more harm than good. The idea that this is what they’re all about is a myth.