Pipe stdout to more than one process on FreeBSD
There are odd times when you may wish to use the stdout of a program as the stdin to more than one follow-on. bash lets you do this by redirecting to a command instead of just a file – the trick is that the command goes inside >( ) and bash treats it like a file. One up to bash, but what about something that will work on standard shells?
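For the record, the bash version looks something like this – a sketch only, where some_command, first_filter and second_filter are placeholder names:
some_command | tee >(first_filter) >(second_filter) > /dev/null
Each >( ) is replaced by a file-like object connected to the stdin of the command inside it, and tee does the copying.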
The answer is to use a named pipe or fifo, and it’s a bit more hassle – but not too much more.
As an example, let’s stick to the “hello world” theme. The command sed s/greeting/hello/ will replace the word “greeting” on stdin with “hello” on stdout – everything else will pass through unchanged. Are you okay with that? Try it if you’re not comfortable with sed.
Now I’m going to send one stdout to two different sed instances at once:
sed s/greeting/hello/
sed s/greeting/world/
To get my stdout for test purposes I’ll just use “echo greeting”. To pipe it to a single process we would use:
echo greeting | sed s/greeting/hi/
Our friend for the next part is the tee command (as in a T in plumbing). It copies stdin to two different places: stdout and (unfortunately for us) a file. Actually it can copy it to as many files as you specify, so it should probably have been called “manifold”, but this is too much to ask of an OS design that spells creat without the trailing ‘e’.
Tee won’t send output to another process’s stdin directly, but the files can be fifos (named pipes). In older versions of UNIX you created a pipe with the mknod command, but since FreeBSD moved to character-only device files this is deprecated and you should use mkfifo instead. Solaris also uses mkfifo, and it came in as far back as 4.4BSD, but if you’re using something old or weird check the documentation. It’ll probably be something like mknod <pipename> p.
Here’s an example of it in action, solving our problem:
mkfifo mypipe
sed s/greeting/hello/ < mypipe &
echo greeting | tee mypipe | sed s/greeting/world/
rm mypipe
It works like this: First off we create a named pipe called mypipe. Next (and this is the trick), we run the first version of sed, specifying its input to come from “mypipe”. The trailing ‘&’ is very important. In case it had passed you by until now, it means run this command asynchronously – or in background mode. If we omitted it, sed would sit there waiting for input it would never receive, and we wouldn’t get the command prompt back to enter the remaining commands.
The third line has the tee command added to send a copy of stdout to the pipe (where the first sed is still waiting). The other copy is piped in the normal way to the second sed.
Finally we remove the pipe. It’s good to be tidy, but it doesn’t matter if you want to leave it there and use it again.
As a refinement, pipes with names like “mypipe” in the working directory could lead to trouble if you forget to delete them or if another job picks the same name. Therefore it’s better to create them in the /tmp directory and add the current process ID to the name in order to avoid a clash. e.g.:
mkfifo /tmp/mypipe.1.$$
$$ expands to the process-ID, and I added a .1. in the example so I can expand the scheme to have multiple processes receiving the output – not just one. You can use tee to send to as many pipes and files as you wish.
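Putting that together, a fan-out to two receivers using /tmp pipes might look something like the sketch below, where some_command, receiver1 and receiver2 are placeholder names:
mkfifo /tmp/mypipe.1.$$ /tmp/mypipe.2.$$
receiver1 < /tmp/mypipe.1.$$ &
receiver2 < /tmp/mypipe.2.$$ &
some_command | tee /tmp/mypipe.1.$$ /tmp/mypipe.2.$$ > /dev/null
rm /tmp/mypipe.1.$$ /tmp/mypipe.2.$$
This time tee’s own stdout isn’t wanted, so it’s discarded to /dev/null rather than piped to a third command.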
If you run the example, you’ll probably get “hello world” output on two lines, but you might get “world hello”. The jobs have equal status, so there’s no way of knowing which one will complete first, unless you decide to put sleep at the start of one to force the issue.
When I get around to it a more elaborate example might appear here.
Using MX records to create a backup mail server
There’s a widely held misunderstanding about “main” and “backup” MX records in the web developer world. The fact is that there is no such thing! Anyone who tells you different is plain wrong, but there are a lot of web developers who believe this is the case, and some ISPs give in and provide them as it’s simpler than arguing. It’s possible to use two MX records in some crazy scheme that looks like a backup server, but in practice it does very little to help and quite possibly rather a lot to hinder. It won’t make your email more robust in practical terms.
If you are using an email server at a data centre, with reasonable expectation of an always-on connection, you need a single MX record. If your processing requirements are great you can have multiple records at the same level to spread the load between peered servers, but none would be a backup any more than any other. Senders simply get one server at random. I have a single MX record.
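For illustration, load-sharing between peered servers just means records at the same priority – a sketch, with made-up names:
example.com.    IN  MX  10  mail1.example.com.
example.com.    IN  MX  10  mail2.example.com.
A sending server picks one of the equal-priority records more or less at random.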
“But you must have a backup!”, is the usual response. I do, of course, but it has nothing to do with having multiple MX records. Let me explain:
A domain’s MX record gives the address of the server to which its email should be sent. In practice, this means the company’s mail server; or if they have multiple servers, the incoming one. Most companies have one mail server address, and this is fine. If that mail server dies it needs to be repaired or replaced, and the replacement gets the same address.
But what of having a second MX record with an alternative, lower-priority server? It may sound good, but it’s nuts. Think about it – the company’s mail server is where the mail ends up. It’s where the users expect to log in and read it. If you have an alternative server, the mail will go there instead, but the users won’t be able to read it. This assumes that the backup is on a different site, available if the first site goes down. If it’s on the same site it’s even more pointless, as it will be affected by the same connectivity issues that took the first one offline. Users will have their existing mail on the broken server, new mail will be on a different server, and you’ll be in a real bugger’s muddle trying to reconcile the two later. It makes much more sense to just fix the broken one, or switch in a backup at the same location and on the existing IP address. In extremis, you can change the MX record to point to a replacement server elsewhere.
There’s a vague idea that if you don’t have a second MX, mail will be lost. Nothing could be further from the truth. If your one and only mail server is off-line, the sender’s server will queue up the message and keep trying until it comes back – it will normally do this for a week. It won’t lose it. Most mail servers will report back to the sender if they haven’t been able to get through after four hours, so they’ll know there’s a problem and won’t worry that you haven’t replied.
If you have two mail servers, one on a different site, the secondary server will start receiving emails when the first one goes off-line. It’ll just queue them up, waiting to forward them to the primary one, but in this case the sender won’t get notification of the delay. Okay, if the primary server is off-line for more than a week it will prevent mail loss – but why would the primary server be off-line for a week? The company won’t function unless it’s repaired quickly.
In the old days of dial-up, before POP3 came into being, some people did use SMTP in a way where a server in a data centre forwarded to the remote site when it connected. I remember Cliff Stanford had a PC mail client called Turnpike that did just this in the early days of Demon. But SMTP was designed for always-on connections and POP3 was designed for dial-up, so POP3 won out.
Let’s get real: There are two likely scenarios for having a mail server off-line. Firstly, the hardware could be dead. If so, get it repaired, and in less than a week. Secondly, the line to the server could be down, and this could be medium-term if someone with a JCB has done a particularly “good job” on it. This gives you a week to make alternative arrangements and direct the mail down another line, which is plenty of time.
So, the idea of having a “backup” MX is pointless. It can only send mail to an off-site server; it doesn’t prevent any realistic mail loss and your email ends up where your users can’t get it until the primary server is repaired. But is there any harm in having one if it makes you feel better?
Actually, in practice, yes. It does make matters worse. In theory mail will just pile up on a spare server and get forwarded later. However, this spare server probably isn’t going to be up to the same specification as the primary one. They never are – they sit there idling, with nothing to do nearly all the time. They won’t necessarily have the fastest line; their spam and virus filtering will be out-of-date or non-existent and they have a finite amount of disk space to absorb mail. This can really matter if you end up storing and forwarding a large amount of spam, as is often the case these days. The primary server can be configured to discard it quickly, but this isn’t a job appropriate for the secondary one. So it builds up until its ancient and meagre disk space is exhausted, and then it tells the sender to give up trying due to a “disk full” error – and the email is bounced off into the ether. It’d have been much better to leave it on the sender’s server in the first place.
There are other security issues with having a secondary server. One problem comes with spam filtering. This is best done at the end of the line; it’s not the place of a secondary server to determine what gets delivered and what doesn’t. For starters, it doesn’t see the corpus of legitimate emails, so it won’t be so adept at comparing and sorting. It’s probably going to be some old spare kit that’s under-powered for modern spam processing anyway. However, when it stores and forwards the mail, the primary server will see it comes from a “friend” rather than a dubious source in a lawless part of the Internet. Spammers do use secondary MX records as a back door to get around virus and spam filters for this very reason.
You could, of course, specify and configure a secondary mail server to be up to the job, with loads of disk space to prevent a DoS attack and fully functional spam filters, regularly maintained and sharing Bayesian analysis data and local rules with the actual server. And then have this expensive resource sitting there doing nothing all day but converting electricity into heat. Realistically, it’s not going to happen.
By now you may be wondering why, if multiple MX records are so pointless, they exist at all. It’s one of those Internet myths; a paradigm that users feel comfortable with, without questioning the technology behind it. There is a purpose, but it’s not for “backup”.
When universal Internet email was new, messages would be sent to a user “@” a computer, and computers were normally shared, so each would have multiple possible users. The computer would receive the email and put it in the mailbox corresponding to the user part of the address.
When the idea of sending email to a domain rather than a specific server came into being, MD and MF records also came into being. An MD record gave the IP address of the server where mail should end up (the Mail Destination). An MF record, if it existed, allowed the mail to be forwarded through another machine first (Mail Forward). This was sometimes necessary, for example if the MD was on a dial-up connection or behind a firewall and unable to accept direct connections over the Internet. The mail would go to the MF instead, and the MF would send it on to the MD – presumably by batching it up and opening a line, transiting a firewall or using some other non-public mechanism.
In the mid-1980s it was felt that having both MD and MF records placed double the load on DNS servers, so unified MX records, which could be read with a single lookup, were born. To allow for multiple levels of mail forwarding through firewalls, they were prioritised to 99 levels, although if you need more than three for any scheme you’re just being silly.
Unfortunately, the operation of MX records, rather than the explicitly named MF and MD, is a bit subtle. So subtle it’s often very misunderstood.
The first thing you need to understand is that email delivery should be controlled from the DNS for the domain, NOT from the individual mail servers that exist on that domain. This may not be obvious, but this is how it’s designed to work, and when you think of it, a central point of control is a good thing.
Secondly, DNS records should be universal. Every computer on the Internet, making the same DNS query, should get the same result. With the later addition of asymmetric NAT, there is now an excuse for varying this, but you can come unstuck if you get it wrong and that’s not what it was designed for.
If you want to reconfigure the route that mail takes to a domain, you do it by editing the single master DNS record (zone file) for that domain – you leave the multiple mail servers alone.
Now consider this problem: an organisation (called “theorganisation”) has a mail server called A. It’s inside theorganisation’s firewall, for its own protection. Servers on the Internet can’t talk to A directly, because the firewall won’t let them through, but local users send and receive mail through it all day long. To receive external mail there’s another server called B, this time outside the firewall. A rule on the firewall allows specific traffic from B to get to A. The relevant part of the zone file may look something like this (at least logically):
MX 1 A.theorganisation
MX 2 B.theorganisation
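In real BIND zone file syntax that might look something like this, assuming theorganisation.com is the actual domain name:
theorganisation.com.    IN  MX  1  A.theorganisation.com.
theorganisation.com.    IN  MX  2  B.theorganisation.com.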
So how do these simple lines tell the world, and servers A and B, how to operate? You need to understand the rules…
When another server, which I shall call C, sends a message to alice@theorganisation it will look up the MX records for theorganisation, and see the records above. C will then attempt to contact alice at the lowest-numbered MX it finds, which points to server A. If C is located within the same department, it will be within the firewall and mail can be delivered directly; otherwise the firewall will block it. If C can’t contact A because of the firewall it will try the next highest on the list, in this case B. B is on the Internet, and will accept connections from C (and anyone else). The message arrives at B for alice, but alice is not a user of B. However, B knows that it’s not the final destination for mail to theorganisation because the MX records say there’s a lower-numbered server called A, so it attempts to forward the message there. As B is allowed through the firewall, it can deliver the message to A, where it’s finally put in alice’s mailbox.
This may sound a bit complicated, but the rules for MX records can be made to route mail through complex paths simply by editing the DNS zone file, and this is how it’s supposed to work. The DNS zone file MX records control the path the mail will take. If you try to use the system for some contrary purpose (like a poor-man’s backup), you’re going to come unstuck.
There is another situation where you might want multiple MX records: if your mail server has multiple network interfaces on different (redundant) lines. If the MX priority value is the same for both, each IP address will (or should) be used at random, but if you have high-cost and low-cost lines you can change the priority to favour one route over another. With modern routing this artifice may not be necessary – let the router choose the line and mangle the IP addresses into one for you. But sometimes a simple solution works just as well.
In summary, MX record forwarding is not designed for implementing backup mail servers, and any attempt to use it for that purpose is going to do more harm than good. The idea that this is what they’re all about is a myth.
FreeBSD 8.4 released today
FreeBSD 8.4 has just been released. But I thought we were up to 9.1? Actually version 8 is still being maintained for those who don’t want to change too much in one go, as is the way for FreeBSD.
Given this conservatism, why bother upgrading from 8.3 to 8.4? If you want the latest, why not go straight to 9.1; otherwise be conservative and leave well alone? This time I might upgrade, because 8.4 contains fixed versions of BIND and OpenSSL. Certain high-profile DDoS attacks amplified by a feature of BIND are a good enough reason to suggest everyone keeps up with the latest BIND release.
You could, of course, update BIND and OpenSSL by just pulling them from the repository, but there are a number of other good bug fixes in there anyway, especially in several of the Ethernet drivers. And ZFS has been improved, if you want crazy disks.
I’m not expecting 9.2 to show up until early next year, if convention holds, which is a pity because some of the BIND and OpenSSL problems were found after 9.1 was released. Don’t wait until January, apply the patches now! (Follow the links above).
Rename file extensions in UNIX/Linux/FreeBSD
I had a directory with thousands of files from a Windoze environment with inconsistent file extensions. Some ended in .hgt, others in .HGT. They all needed to be in lower case, for some Windows-written cross-compiled software to find them. UNIX is, of course, case-sensitive on such things, but Windoze with its CP/M-like file system used upper case only, and when the shift key was invented, decided to ignore case.
Anyway, rather than renaming thousands of files by hand I thought I’d write a quick script. Here it is. Remember, the old extension was .HGT, but I needed them all to be .hgt:
for oldname in `find . -name "*.HGT"`
do
    newname=`echo $oldname | tr .HGT .hgt`
    mv $oldname $newname
done
Pretty straightforward, but I’d almost forgotten the tr (translate) command existed, so I’m now feeling pretty smug and thought I’d share it with the world. It’ll do more than a simple substitution – you could use “[A-Z]” “[a-z]” to convert all upper-case characters in the name to lower case, but I wanted only the extensions done. Strictly speaking, tr translates characters rather than strings, so this only leaves the rest of the name alone because my base names happened to contain no H, G or T. I could probably have used -exec on the find command, but I’ll leave this as an exercise for the reader!
It could be more compact if you remove the $newname variable and substitute directly, but I used to have an echo line in there giving me confirmation I was doing the right thing.
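Incidentally, if your base names might themselves contain an H, G or T, a safer sketch (for a single directory) is to use the shell’s own suffix stripping, which really does touch only the extension:
for oldname in *.HGT
do
    mv "$oldname" "${oldname%.HGT}.hgt"
done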
FreeBSD, Wake-on-LAN and HP Microservers – WOL compatible Ethernet
I’ve been having some difficulties getting Wake-on-LAN (WOL) to work with an HP Microserver thanks to its Broadcom Ethernet adapter not doing the business with the FreeBSD drivers – setting WOL in the Microserver BIOS doesn’t have any effect. I’m pleased to report a solution that works.
The on-board Broadcom Ethernet adaptor still refuses to play ball, for reasons described in my earlier post. The pragmatic solution is to use a better supported chip set and I’ve had no difficulties with Realtek (at the low end of the market) so it was an obvious choice. Just bung a cheap Realtek-based card in and use it as a remote “on” switch – what could possibly go wrong?
First off, the HP Microserver has PCI-Express slots, but weird-looking ones. I’d assumed one was PCI when I’d glanced at it, but it’s a PCIe 1-channel slot with something strange behind it – possibly a second 1-channel slot. The documentation says it’s for a remote management card; presumably one which doesn’t need access to the back. There’s a 16-channel PCIe next to it. All very curious but irrelevant here. The point is that you’ll need a PCIe Ethernet card – a surplus 100M PCI one with a well-supported, bog-standard chip won’t do. The PCIe cards tend to be 1Gb, and are therefore not as cheap.
The first card I bought was a TP-Link TG-3458, which has a standard Realtek 8168B adapter chip. Or at least mine did; I note that there is a Mk2 version out there. Mine’s definitely a revision 1.2 PCB, but if you buy one now it may have the newer chip (which is a problem – read on below). Anyway, this Mk1 card worked like a charm. Send it the magic packet and the Microserver bursts into life. There’s only one snag: it has a full-height bracket and the Microserver has a half-height slot, so you have to leave the card floating in its socket. This works okay as long as no one trips over the cable.
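For what it’s worth, the magic packet itself can be sent from another FreeBSD box on the same LAN using the wake(8) utility – the interface name and MAC address here are just placeholders:
wake re0 00:11:22:33:44:55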
My second attempt was an Edimax EN-9260TX-E, ordered because it was (a) cheap-ish; (b) had a Realtek chip; and (c) had the all-important half-height bracket. It fitted in the Microserver all right, but refused to act on a WOL, at least to begin with…
It turns out there was a little bug-ette in the driver code (prior to 8.3 or 9.1), spotted and fixed by the maintainer about a year ago. If you want to fix it yourself the patch is here. I decided I might as well use the latest drivers rather than re-working those shipped with 8.2, so pulled them, compiled a new if_re.ko and copied it to /boot/kernel in place of the old one. It didn’t work. Ha! Was I naive!
Further investigation revealed that it was completely ignoring this kernel module, as it was using a driver compiled into the kernel directly. There was no point having the module there; all it does is trick you into believing that it’s installed. I only realised “my” mistake when, to my astonishment, removing the file completely didn’t disable the network interface. I solved the problem by compiling a new kernel with the built-in Realtek driver commented out, and I’m currently loading the new driver specifically in loader.conf. It works a treat. I could have changed the kernel Realtek driver, but while it’s under review it’s easier to have it loaded separately. Incidentally, the driver is for 9.1 onwards but it works fine on 9.0 so far.
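For reference, loading the module from loader.conf is just a one-liner, assuming the rebuilt driver is installed as if_re.ko in /boot/kernel or /boot/modules:
if_re_load="YES"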
The next task is to fix the Broadcom driver so it works. I may be gone some time…
Lighttpd in a FreeBSD Jail (and short review)
Lighttpd is an irritatingly-named http daemon that claims to be light, compared to Apache. Okay, the authors probably have a point, although this puppy seems to like dragging perl into everything and there’s nothing minuscule about that.
I thought it might be worth a look, as Apache is a bit creaky. Its configuration is certainly a lot simpler than httpd.conf, although strangely, you tend to end up editing the same number of lines. But is it lighter? Basically, yes. If you want figures, it’s currently running (on AMD64) at a size of 16M compared to Apache httpd instances of 196M.
But we’re not comparing like for like here, as Lighttpd doesn’t have PHP; only CGI. If you’re worried about that being slow, there’s FastCGI, which basically keeps instances of the CGI program running, and Lighttpd hands tasks off to an instance when they crop up. Apache can do this (there’s the inevitable mod), but most people seem happy using the built-in PHP these days so I don’t think FastCGI is very popular. It’s a pity, as I’ve always felt CGI is under-rated and I’m very comfortable passing off to programs written in ‘C’ without there being any noticeable performance issues. Using CGI to run a perl script and all that entails is horrendous, of course. But FastCGI should level the playing field and allow instances of perl or any other script language of your dreams to remain on standby in much the same way PHP currently remains on standby in Apache. That doesn’t make perl or PHP good, but it levels their use with PHP on Apache, giving you the choice. And you can also choose high-performance ‘C’.
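As a sketch, wiring a FastCGI handler into lighttpd.conf looks something like this – the PHP binary path, socket name and process count are assumptions for illustration:
server.modules += ( "mod_fastcgi" )
fastcgi.server = ( ".php" =>
    (( "bin-path"  => "/usr/local/bin/php-cgi",
       "socket"    => "/tmp/php-fastcgi.socket",
       "max-procs" => 2 ))
)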
This is all encouraging, but I haven’t scrapped Apache just yet. One simple problem, with no obvious solution, is the lack of support for the .htaccess file much loved by the web developers and their content management systems. Another worry for me is security. Apache might be big and confusing, but it’s been out there a long time and has a good track record (lately). If it has holes, there are a lot of people looking for them.
Lighttpd doesn’t have a security pedigree. I’m not saying it’s got problems; it’s just that it hasn’t been thrashed in the same way as Apache and I get the feeling that the development team is much smaller. Sometimes this helps, as it’s cleaner code, but it’s statistically less likely to have members adept at spotting security flaws too. I’m a bit concerned about the FastCGI servers all running on the same level, for example.
Fortunately you can mitigate a lot of security worries by running in a jail on FreeBSD (it will also chroot on Linux, giving some degree of protection). It was fairly straightforward to compile from the ports collection, but it does have quite a few dependencies. Loads of dependencies, in fact. I saw it drag m4 in for some reason! Also the installation script didn’t work for me, but it’s easy enough to tweak manually (find the directory with the script and run make in it to get most of the job done). The other thing you have to remember is that it will store local configurations in /usr/local on BSD, instead of the base system directories.
To get it running you’ll need to edit /usr/local/etc/lighttpd/lighttpd.conf, and if you’re running in a jail be sure to configure the IP addresses to bind to correctly. Don’t be fooled: there’s a line at the bottom that sets the IP address and port, but you must also find the entry server.bind in the middle of the file and set that to the address you’ve configured for the jail to have passed through. This double entry is a real pooh trap, especially as it otherwise tries to bind to the loopback interface and barfs with a mysterious message. Other than that, it just works – and when it’s in the jail it will happily co-exist with Apache.
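For the record, the two relevant settings end up looking something like this – the address is an example; yours will be whatever the jail has been given:
server.port = 80
server.bind = "192.168.0.10"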
I’ve got it running experimentally on a production server now, and I’ve also cross-compiled to ARM and it runs on Raspberry Pi (still on FreeBSD), but it was more fun doing that with Apache.
When I get time I’ll do a full comparison with Hiawatha.
Using ISO CD Images with Windows – Burn.Now problems
When CD-R drives first turned up you needed special software to write anything – originally produced by Adaptec, but they were soon overtaken by Nero, with NTI and Ulead having lower-cost options. Now, when you get a PC it will usually come with one of the above bundled, and Microsoft has added the functionality to Windows since XP (for CD, if not DVD). This is not good news for the independent producers, but Microsoft’s offering doesn’t quite cut the mustard, so most people will want something better.
My new Lenovo PC came bundled with Corel Burn.Now. Corel recently bought the struggling Ulead, and this is fundamentally the same product as Ulead burn.now. Unfortunately Burn.Now is also pretty feeble – it just can’t do the basics.
To duplicate a CD you need to copy all the data on it. Pretty obvious really. If you’re not copying drive-to-drive it makes sense to copy the data to a .ISO image on your hard disk. You can then transfer it to another machine, back it up or whatever; and write it to a new blank disk later. Burn.Now will create a CD from an ISO image, but if you ask it to copy a disk it uses its own weird and whacky .ixb format. Some versions of Burn.Now gave you the choice, but not the new Corel. It’s .ixb or nothing. This matters, because whilst everyone can write .ISO files, only Burn.Now can write from .IXB format.
Burn.Now is crippled. What about Microsoft’s current built-in options? You can actually write an ISO image using Windows 7 – just right-click on the file and select “Burn disc image”. Unfortunately there is no way to create such a file with Windows. To do this you need to add Alex Feinman’s excellent ISO Recorder, which basically does the opposite: right-click on the CD drive and select Create Image from CD/DVD.
Unfortunately ISO Recorder doesn’t read all disks – it won’t handle Red Book for a start. This is a bit of a limitation – was its author, Mr Feinman, concerned about music piracy? Given Windows Media Player can clone everything on an Audio CD without difficulty, his conscientious efforts won’t make a lot of difference.
So – Windows is its usual painful self. If you just want to simply create an image of a CD or DVD with no bells and whistles, go to UNIX where it’s been “built in” since the 1980’s (when CD-ROMs first appeared). Just use the original “dd” command:
# dd if=/dev/acd0 of=my-file-name.iso bs=2048
An ISO file is simply a straight copy of the data on the disk, so this will create one for you. You can write it back using:
# burncd -f /dev/acd0 data my-file-name.iso fixate
Or
# cdrecord dev=1,2,3 my-file-name.iso
Burncd is built in to FreeBSD (and Linux, IIRC), but only works with ATAPI drives. In the example it assumes the CD recorder is on /dev/acd0 (actually the default).
Cdrecord works with non-ATAPI drives too, but has to be built from ports on FreeBSD, and for other platforms it’s available here – along with lots of other good stuff. The example assumes the device is 1,2,3 – which is unlikely! Run cdrecord -scanbus to locate the parameters for your drive.
Once you have your ISO file, of course, you could use Windows to write it. The choice depends on whether you have strongly held views on whether Windows is a worthy desktop operating system. Corel Burn.Now is, however, a long way from being a worthy CD/DVD writing utility.
Can’t get PuTTY and FreeBSD with OpenSSH to do a Certificate Login – Myths
Following yesterday’s post about issues getting “Server Refused Our Key” errors when trying to use PuTTY to log in to FreeBSD with a certificate, I thought I’d just lay to rest a few myths I’ve seen on various web sites where people have tried to explain how to do this. It’s easy to see how these myths develop – I’ve laboured for years under the misapprehension that I needed to do something or other when it was just a coincidence it had started working the first time the idea came to me. So here goes with a few of the myths. If you’re not getting this to work, it’s not for one of these reasons:
Myth: You need to specify 0600 permissions for the authorized_keys file (or the .ssh directory)
Simply not true. It may be a good idea to stop others from reading your keys, although they are “public” keys and won’t let anyone else in anyway (unless they have a suitable cracking tool and a lot of processing power – and I mean a lot). Only your private key needs to be a secret. The only stipulation is that they must only be writeable by the user – 0644 is okay, 0664 or 0666 isn’t.
But as I mentioned yesterday, you MUST ensure that your home directory is also not world-writable! You mustn’t have 0777 permissions! 0755 is okay, as is 0711. I’ve not seen this documented anywhere, but it’s true for FreeBSD 7.0 to 9.0.
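In other words, something like this is perfectly acceptable – assuming a home directory of /usr/home/fred:
chmod 755 /usr/home/fred
chmod 644 /usr/home/fred/.ssh/authorized_keys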
Myth: OpenSSH requires the authorized_keys file to be owned by the user trying to log in
Again no – it simply doesn’t. It has to be readable to that user (not just root) – this may be because it’s world-readable or group-readable for the user in question. It might as well be owned by root:wheel as long as its Other read bit is set.
Myth: If you’re using SSH2, you need a file called authorized_keys2
This might be true on some installations, but not current ones! I’ve no reason to believe that this file would even be considered, never mind required. The file used is defined in /etc/ssh/sshd_config, and on current versions of FreeBSD (7.0-9.0) it’s definitely authorized_keys.
Myth: You must generate the keys using the OpenSSH keygen utility on FreeBSD – puttygen doesn’t work
Well, there’s a bit of truth in this, but not much. Put simply, the format is different, but this only extends as far as the header and comment.
OpenSSH keys look like this:
ssh-rsa AAAAB3NzaC1y… very long line … sXi+fF noone@example.com
PuTTYGen Keys look like this:
---- BEGIN SSH2 PUBLIC KEY ----
Comment: "no one@example.com"
AAAAB3NzaC1y … long line, possibly with breaks … sXi+fF
---- END SSH2 PUBLIC KEY ----
You can convert one to the other using any text editor of your choice, as long as it handles long lines properly (like vi).
I can see there could be all sorts of fun and games if you simply cut/pasted these and ended up with extra line breaks, spaces or truncation – but the key data and its encoding is exactly the same, and that’s the bit that makes it work or not.
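If you’d rather not edit by hand, OpenSSH’s ssh-keygen will do the conversion for you – something like this, where puttygen_key.pub is whatever you saved the PuTTYGen public key as:
ssh-keygen -i -f puttygen_key.pub >> ~/.ssh/authorized_keys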
If you generate your key using OpenSSH tools you will need to load it into PuTTY Gen and write a Private .ppk key on your Windoze box. Or not. It’s just a text file and you could put the appropriate wrapper on it, but you might as well just use PuTTY Gen.
Myth: You need to edit /etc/ssh/sshd_config to enable certificate login
No you don’t. The default values as shipped work just fine. Because the file consists of commented-out parameters with their default values, I suspect some people have been confused about whether the ‘#’ needs to be removed before a parameter comes into effect. It doesn’t – you only need to remove the comment if you want to change the default value. If you do remove the comment but don’t edit the value, it’ll make no difference to anything.
What’s Real
In my experience, problems are almost always down to either directory permissions (see above) or errors transcribing public keys from one machine to another – and chaos and confusion caused by the abovementioned myths!
PuTTY, FreeBSD and SSH certificate logins
I’ve just gone crazy trying to figure out why PuTTY kept getting a “Server Refused Our Key” error when I tried to log in to a host using a certificate for the first time. Looking around the web, there are a lot of interesting theories about how to generate the certificates, and out of desperation I tried them all – nothing worked. So, for what it’s worth, here’s what does.
Generate your certificate on FreeBSD using the OpenSSH utility:
ssh-keygen -t rsa
With the default options this will create a couple of files in the .ssh directory within your home directory, and by default they’ll be called “id_rsa” and “id_rsa.pub”. In other words, if your user ID is fred the files will be in /usr/home/fred/.ssh/ with the above names. One’s private, the other is public.
You need to add the public key to the list of authorised keys in the .ssh directory:
cat id_rsa.pub >> ~/.ssh/authorized_keys
(The name authorized_keys with the American spelling is set in /etc/ssh/sshd_config)
Next you need to get the private key back to the machine running PuTTY. It’s just text – you can cut/paste it into a text editor and save it. For PuTTY to use it, however, it needs to be converted into PuTTY’s own format, which you do using the PuTTY Key Generator, puttygen.exe. Run this, click on the Load button and read in your text file, then use the Save private key button to write the .ppk file somewhere safe. You may wish to set a passphrase on it if there’s any chance someone else can get hold of it!
You may now get rid of the id_rsa.* files on the FreeBSD host, although you might want to add the public key to more than one user on more than one host – it’s a “public” key so there’s no harm in using it all over the place.
It is possible to use PuttyGen to make the keys and copy them to the FreeBSD host instead. A lot of people seem to have had trouble with this in the past (myself included), and it’s probably easier not to, especially if you’re going to use the keys in OpenSSH format for other purposes on the FreeBSD host anyway.
You’ll see a lot about setting the files in .ssh in some very restricted ways – basically all you need to do is ensure that they’re only writable by you. You can make your .ssh directory only readable by you if you wish, but it won’t stop it from working. Also, the default /etc/ssh/sshd_config file is fine, and you don’t need to uncomment anything (in spite of what you might read). The default settings are all good, and all commented out, as it says at the top of the file. (Not quite true now – see the 2024 update below.)
Now, here’s the trick! What will cause a problem, as I eventually figured out, is if your home directory is writable by others. Don’t ask me how or why this should be true, but I tried this after eliminating everything else by comparing working and non-working boxes. I know this for sure with FreeBSD 8.1 – ensure your home directory is drwxr-xr-x (or possibly less).
The final stage is to set up a session profile in PuTTY. This isn’t a tutorial for PuTTY, so I’ll be brief. In the options category, open Connection/Data and set the auto-login username you wish to use (if you haven’t already). Then under Connection/SSH/Auth select the private (.ppk) file you want to use. Remember, you can use this file with as many hosts and user accounts as you’ve added the public key to, via the .ssh/authorized_keys file. Save the session, and that’s it done. If it doesn’t work for you, take a look in /var/log/auth.log.
Update 2024:
And finally, twelve years later, there’s a problem. Newer versions of OpenSSH will barf at old ssh-rsa keys. You’ll get a “The server refused our key” message and something like this in auth.log…
sshd[1539]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Don’t worry – there’s a quick fix. In /etc/ssh/sshd_config add the following line somewhere that makes sense.
PubkeyAcceptedAlgorithms +ssh-rsa
You might want to use something other than RSA keys going forward, but this is an update to a 2012 article – watch out for a new one.