Amazon is about to shed 14,000 jobs to "stay nimble". Would it be rude to tell the Senior Vice President of People Experience and Technology at Amazon that this bird has already flown? Engineers trying to use AWS will tell you that the days of Amazon's amazing technical prowess are well behind them. And it's across the board – from their smart devices (has anyone used the new Alexa app?) to AWS services that only half work. Only their retail system remains best-in-class.
Amazon blamed the recent DNS outage that took so many of their customers offline a week ago on a “race condition” where the DynamoDB back-end to their unified DNS solution simply failed. They explained it in great detail here:
What they didn't say is why it failed; why the race condition existed. Why they screwed up. I'll hazard a guess that they made the people who really knew how their systems worked redundant by mistake, replacing them with new and cheaper hires to fill a hole in a spreadsheet. There was no one left to spot the flaw until it became manifest. Engineers are interchangeable, right? And you just need the right number of them to fit the project plan.
You don't need large teams of qualified people to make this stuff work. You need small teams of experienced people who stick with the job and are treated with enough respect that they're empowered to do it. The good ones are not going to stick around to play redundancy roulette every six months, hoping that HR actually understand they're necessary to keep the show on the road. HR take pride in saying they're "people people". Good engineers are not people people; they're more likely to be neurodiverse. Their managers are unlikely to understand what they do, and HR certainly won't.
I dare say there is a lot of dead wood in Amazon. They were recruiting like crazy during the pandemic – anyone with a pulse who looked vaguely like an engineer. The trick is identifying who to keep; if, indeed, there is anyone left who they haven't made redundant already, or who simply got too spooked and left.
To use AIX you really need working function keys on your keyboard – programs like SMIT use them a lot. But if you're using Guacamole you'll notice that F1..F5 don't work. You can verify you have the problem by pressing them: they appear to produce the letters A..E. The workaround is to key ESC-1 for F1, ESC-2 for F2 and so on, but it's a pain.
If you do a hex dump of the keyboard input you’ll discover F1 is actually sending 1b 5b 5b 41, which are the ASCII codes for the Escape key followed by ‘[[‘ followed by upper-case A. What?
Normal terminals output ^[[11~ for F1, ^[[12~ for F2, ^[[13~ for F3 and so on. (^[ generates 0x1b – i.e. the same as the ASCII Escape key; ^ conventionally means type the next character with the Ctrl key held down.) Guacamole uses the conventional definitions from F6 onwards, but not for the first five. Function keys are programmable on real terminals, if you remember those, but no one ever programmed them, because an application wouldn't recognise the macro you'd changed a key to – it would be expecting the default sequence. The application on the host could reprogram them, of course, but it would have to do this for every terminal type – including new ones it didn't know about – and the whole thing got silly. So anyone with any sense left them sending the standard, out-of-the-box sequences.
So why on earth does Guacamole send something completely different for the first five? It's something to do with Linux, where a long time ago someone who didn't understand what was going on broke the convention. And, it turns out, Guacamole is emulating a Linux keyboard/terminal by default. There is a plan to fix this, but it hasn't made it into the current 1.6.0 version. Incidentally, I'm talking about AIX 7.3 (and not expecting a new version of that any time ever).
No problem – you can fix this by setting the terminal-type parameter in user-mapping.xml to something AIX knows about, right? VT220 perhaps. Right? Wrong! It turns out that Guacamole ignores this – it just feeds it to the host when it asks for a terminal ID. It doesn’t change the way it behaves itself at all – the function keys are still wonky.
To fix this issue you'll need to fix the terminfo on AIX, creating a Guacamole-specific set of mappings, and then (ideally) get Guacamole to ask AIX to select it for you.
There are two ways to fix the terminfo: DIY, or cut/paste from the one I'll include below. It's possibly better to get used to doing it yourself, as you might find some other things that need tweaking.
First dump the existing xterm information, which is otherwise “close enough”:
infocmp xterm >xterm-guac.ti
Next open xterm-guac.ti in the editor of your choice (vi, or vi with AIX) to find the function key mappings and change them to what Guacamole actually sends. The best way to see what's actually being sent by any key is to use the standard Unix utility "hd" to hex dump stdin, but it's not on AIX so you'll have to use "od -x" instead. Come on IBM! We stopped using octal when we went from 12-bit to 16-bit (PDP-11, 1970).
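If od's default output offends you, "od -tx1" prints one byte per column, which is easier to read. Press F1, then Enter, then Ctrl-D, and you should see something like this (the 0a on the end is the Enter):

$ od -tx1
0000000  1b 5b 5b 41 0a
0000005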
The first (non-comment) line of the xterm-guac.ti file defines which terminal this is – it's not taken from the filename. Having just dumped the xterm definitions, it will say it's "xterm". We want to define a new terminal rather than redefine xterm, because although that would work, the next time someone logs on using, say, PUTTY or an actual xterm they'll be swearing at whoever broke the function keys for them.
So the top line should read something like:
xterm-guac|Guacamole terminal emulator
Further down you'll find entries like this (for F1): "kf1=\E[11~". Change them to what's actually sent, i.e. "kf1=\E[[A". The "\E" is the way the terminfo system specifies the Escape character, because you can never have enough conventions. The definitions are comma separated – the ',' is not part of the sequence!
Once you’ve saved the file you need to compile and install it, which is really easy:
tic -v xterm-guac.ti
The -v is optional – it just outputs a couple of lines so you know it’s done something:
Working in /usr/share/lib/terminfo
Created x/xterm-guac
And now you’re good to go – AIX has a new type of terminal that matches Guacamole. All you need to do now is tell Guacamole it’s there by editing the connection data in user-mapping.xml (or the equivalent if you’re using a different authentication module). Something like this works well to get that IBM 3270 feeling.
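Here's a sketch of the sort of connection entry I mean, assuming an ssh connection – the hostname and username are placeholders, terminal-type is the entry we've just compiled, and the green-black colour scheme is what gives it the 3270 look:

<connection name="AIX">
    <protocol>ssh</protocol>
    <param name="hostname">aixhost.example.com</param>
    <param name="port">22</param>
    <param name="username">frank</param>
    <param name="terminal-type">xterm-guac</param>
    <param name="color-scheme">green-black</param>
    <param name="font-size">12</param>
</connection>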
You'll notice that the shifted function keys look a bit suspect (e.g. kf13=…) but fixing these is left as an exercise for any reader who actually uses shifted function keys. I can't find an AIX program that uses them to test with!
Finally, if you’re trying to get the backspace key to work on-screen as well as in the input buffer, try adding “stty echoe” to your .profile.
Incidentally, you may need to fix the standard xterm terminfo to work with a real XTERM (e.g. PUTTY), as that's wonked on the AIX side – it depends on how you have PUTTY set up. But that's another story.
Researchers at the University of California, Irvine have discovered that high-resolution gaming mice with a high enough sample rate can act as a microphone, using special software installed on the PC. Vibrations in the air, transferred to the mouse mat, can be picked up by the sensor, filtered, and turned into recognisable speech in the right circumstances.
Users of normal mice have little to worry about but, as the researchers point out, anti-malware vendors don't currently treat mouse input as an attack vector.
Read all about it here:
Invisible Ears at Your Fingertips: Acoustic Eavesdropping via Mouse Sensors
https://arxiv.org/html/2509.13581v1
Unix/BSD users are not in danger, as the attack won’t work with a keyboard.
I’ve written about how bad passwords are for years, and also how to set up certificate login on Unix. But nothing for over ten years – because it hasn’t changed. So today, to make a change, let’s connect Windows 1x command line to AIX (IBM’s Unix).
Windows actually has command line ssh built-in these days. Type ‘ssh’ to make sure yours has it. It also has the standard program to generate ssh keys, called ssh-keygen. So we should be good to go. You can just run it with no options as the defaults are sensible. It looks like this:
C:\Windows> ssh-keygen
Generating public/private ed25519 key pair.
Enter file in which to save the key (C:\Users\FrankL/.ssh/id_ed25519):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in C:\Users\FrankL/.ssh/id_ed25519
Your public key has been saved in C:\Users\FrankL/.ssh/id_ed25519.pub
The key fingerprint is:
SHA256:x5cMyeR7OoIRl1OrD5hFbPyMYQ+J6dLqqwGAzKeKDeE frankL@FrankPC
The key's randomart image is:
+--[ED25519 256]--+
| =..o |
|+ o.O* o |
|+o . o.o=B* |
|o.o . o*.+++ . |
|oE o+ S + = |
|o+ . o + + |
|o o. . . + |
| .. . . |
| …. |
+----[SHA256]-----+
What you can’t see above is that I’ve actually entered a passphrase (i.e. password) twice. This is highly recommended if you’re not sure your certificates are in a safe place, and as they’re stored on a Windows PC it’s a pretty safe bet they’re not! This password will be required each time the certificate is used. If you’re sure the certificate is safe, you can leave it blank.
Running ssh-keygen will have created a pair of certificates in C:\Users\FrankL\.ssh\ – where the directory name will depend on your username (mine is ‘FrankL’, in case that isn’t obvious). Note the full stop before .ssh – it’s a Unix thing!
There are two files – id_ed25519 and id_ed25519.pub. The .pub one is the public certificate: it's safe (and even necessary) for everyone to be able to read its contents. The other one is your private certificate, and must be kept completely secure.
If you open id_ed25519.pub in a text editor you'll see the key is all on one line, in three parts. The first part (up to the space) is the type of key, the middle part is a magic number that can be used with the private key to prove it's genuine, and the last bit (frankL@FrankPC) is just a comment recording where the key came from. You can edit the comment to something that makes more sense if required.
Next you need to log in to your AIX system using your password. You’ll be in your home directory. SSH keys are kept in a hidden directory called .ssh, which may or may not exist already.
mkdir .ssh
cd .ssh
cat >>authorized_keys
You may already have an authorized_keys file, but that doesn't matter as this appends the new key. Paste the public key into the window and type Ctrl-D to end. authorized_keys is a text file and can be modified using an editor of your choice if required.
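One gotcha: sshd is fussy about permissions, and will quietly ignore the file if it, or the directory, is writable by anyone else. If the key doesn't work, this is the first thing to check:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys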
You can then log out of AIX and return to Windows.
To use your new ssh keys to log in we're going to have to deal with the fact that your username on Windows may not be the same as on AIX/Unix/Linux, and therefore tell ssh which certificate it needs to use. Let's assume the AIX machine is called "unixhost" and your user-id on it is just frank (without the L).
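The command then looks something like this:

ssh -i C:\Users\FrankL\.ssh\id_ed25519 frank@unixhost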
The "-i C:\Users\FrankL\.ssh\id_ed25519" here tells ssh which certificate to use, and putting "frank@" in front of the hostname forces the user name to be frank. If, however, all your user IDs match, ssh will just go to your home directory and use the key it finds there – it will, by default, try to log you in using the same username on both machines.
If you've set a password on your certificate you'll be prompted for it when you connect. This prevents a stolen certificate from being used. If you haven't set a password, just make sure no one else can get hold of your private certificate! If they do, find the matching public key in the authorized_keys file on the host and delete it.
You may also want to move your private certificate to a more convenient location, and you can rename it anything you like.
Using a certificate login is far more secure and convenient than a password – a win-win. For added security, disable password login on the AIX/Unix machine. Just don't lose your private key! It's not a bad idea to keep a backup admin account on a host with a very long password that's only kept on a piece of paper in a sealed envelope in a safe. Because this password isn't used except in an emergency, there's no chance it can be pilfered by a keylogger.
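If you do disable password login, it's a couple of lines in /etc/ssh/sshd_config on the host – a sketch, so check how sshd is started on your system before bouncing it:

# /etc/ssh/sshd_config
PasswordAuthentication no
ChallengeResponseAuthentication no

# then restart sshd - on AIX it's typically run under the SRC:
stopsrc -s sshd && startsrc -s sshd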
In Part 1 I described how to set up PPP and the pf firewall to provide NAT with port forwarding and other good things. In Part 2 I'll add DHCP and, as a bonus, configuration for an IP address block, if you have that kind of ISP. If you want that kind of ISP but can't find one, I can point at a few that offer it. In Part 3 I'll cover DNS and BIND.
DHCP
There's never been a DHCP server in the FreeBSD base system, but one is easily installed by compiling the port or installing the package. Your best bet for FreeBSD is the DHCP daemon written for OpenBSD, AKA the ISC dhcpd. But beware – the OpenBSD one, although called version 6.6, lags behind the other package, isc-dhcp44, as it doesn't support peer servers. If you've only got one DHCP server on your network it's fine. If you want primary and secondary servers, or to load balance between them, look at the latest ISC one instead. I'll deal with that in another post.
pkg install dhcpd
Before you kick it off you really ought to edit the configuration file, /usr/local/etc/dhcpd.conf. There's usually a second copy of it postfixed with .sample, and it's pretty self-documenting. I'm posting the basics from a real configuration, which I shall annotate to death. But first, something about the network we're defining:
I'm going to have a LAN on 192.168.1.0/24 – which means IP addresses in the range 192.168.1.1 to 192.168.1.254. This isn't a tutorial on routing – just leave the first and last addresses (0 and 255) alone for now. The network will have a domain. This is optional, but if you're doing your own DNS you'll want one. You don't have to register this domain externally – you can make it up (please end it in .local!) – but let's assume you have a real one: "example.com". You've created a subdomain for this site called mysite.example.com and it has an A record to prove it, and you'll probably want to delegate the DNS for it later. But if you're not worried about domain names, don't worry about any of this.
The router (i.e. the FreeBSD box) is going to be on 192.168.1.2, which is set up in rc.conf. It can’t be assigned automatically by DHCP because, well, we’re also the DHCP server and that would be silly.
Assuming your LAN-side network interface is bge0 (remember the modem is on bge1 in Part 1) the following line would do it:
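# in /etc/rc.conf
ifconfig_bge0="inet 192.168.1.2 netmask 255.255.255.0"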
Obviously change bge0 to the name of your actual Ethernet interface! You might wonder why I'm putting the router on 192.168.1.2 instead of 192.168.1.1, which is the common convention. It's simple: so many home network appliances come with 192.168.1.1 as their default IP address that if you plug one into your LAN the clash will cause merry hell before you've had a chance to get to its web interface and configure it to something else.
I want some devices to have a fixed IP address supplied by DHCP, and other things to have dynamically allocated ones – friends using the guest WiFi, for example. Having network infrastructure like switches and WAPs on static addresses, defined by DHCP, is a good way to go. Connecting network printers to Windoze is smoother if they're on a fixed IP too. But going around and setting addresses on each device is a pain, so do it by DHCP, where it's defined in one place and can be managed in one place. It works by recognising the MAC address in the request and handing back whatever IP address you have chosen.
As a final tip, keep your network address plan as comments in dhcpd.conf – it’s where you want the information anyway. And with that, here’s the sample file:
# This is the domain name that will be supplied to everything on
# the LAN by default. This is the domain that will be searched if you
# enter a host name. For example, if you want to connect to "fred-pc" it
# will look for it as fred-pc.mysite.example.com, which if you have
# your DNS set up correctly, will find it quickly.
option domain-name "mysite.example.com";
# This specifies the DNS server(s) the machines on the LAN
# will use. We're specifying the same as the router, because
# we'll be running DNS there. If you don't want to, just use the
# IP address of DNS server supplied by your ISP.
option domain-name-servers 192.168.1.2;
# These just specify the time a machine on the LAN gets to hold
# on to a dynamic address before it needs to renew it.
default-lease-time 43200;
max-lease-time 86400;
# This defines our pool of dynamically allocated addresses,
# and I've chosen the range 100..199. Options here override the
# options above (outside the {...}) in the way you might expect.
# I've set the default lease time to 900 seconds (15 minutes)
# for testing purposes only. 2h is normal but it's up to you.
# I normally go for 12h.
subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.100 192.168.1.199;
        option broadcast-address 192.168.1.255;
        default-lease-time 900;
}
# The next block is assigning a fixed IP address to
# a switch, because I don't want it to move. This just needs the
# MAC address of the device and the fixed-address you want to give it.
# You can have as many of these as you like. The name "switch1" is really
# just for your own reference.
host switch1 {
        hardware ethernet 00:02:FC:CB:1E:7D;
        fixed-address 192.168.1.3;
}
For more information see this post about assigning names, and the dhcpd.conf.sample, which has scenarios far more complex than you’ll need on a simple LAN.
Enable it on reboot with:
sysrc dhcpd_enable=yes
You can then start it manually with service dhcpd start.
If you want to make changes to dhcpd.conf you can at any time, but they won't take effect until you restart dhcpd (with service dhcpd restart) – there's no way of having it just do a reload. Details of the leases it has issued are in /var/db/dhcpd.leases, which is just a text file you can easily read.
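Each lease in that file looks roughly like this (the times are in UTC, and client-hostname only appears if the client sent one):

lease 192.168.1.103 {
        starts 3 2025/10/29 09:14:32;
        ends 3 2025/10/29 21:14:32;
        hardware ethernet aa:bb:cc:dd:ee:ff;
        client-hostname "fred-pc";
}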
Routing a whole subnet
Supposing you have more than one IP address coming down the PPPoE tunnel at you? This is a service you can get from your ISP, giving you multiple IP addresses for various purposes – such as running servers. Other ISPs give you a single dynamic address, or worse, an IP address generated by CG-NAT. I’d argue this ceases to meet the definition of “Internet Service” at this point.
But assuming you have a block of static addresses, how do you get ppp to use them? I haven’t seen this documented ANYWHERE and figuring it out involved a great deal of trial and error. Shout out to shurik for encouraging me to keep going where ppp.linkup was concerned.
The easy way to add an alias to your tunnel (which you’ll recall we called wan0) is to use ifconfig and simply add it. But the trick with tunnels is to add the alias IP address and the remote tunnel address (i.e. HISADDR). You can find out what HISADDR is using ifconfig:
# ifconfig wan0
wan0: flags=1008051<UP,POINTOPOINT,RUNNING,MULTICAST,LOWER_UP> metric 0 mtu 1492
options=80000<LINKSTATE>
inet 1.2.3.4 --> 44.33.22.11 netmask 0xffffffff
groups: tun
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
Opened by PID 658
In the output above, 1.2.3.4 is the IP address supplied by LCP – i.e. your public IP address. 44.33.22.11 is the IP address of the other end of the tunnel. In the parlance of the PPP utility, HISADDR. Earlier we set the default route to HISADDR. There are good reasons why HISADDR is dynamic, not least of which is having a pool of gateways for redundancy, so you have to check what it actually IS today before you assign an alias public address to the tunnel.
Then it’s a simple matter of adding further addresses using ifconfig:
ifconfig wan0 alias 1.2.3.41/32 44.33.22.11
Yes, it's not quite the same format as adding an alias to an Ethernet interface – the remote address follows the local one.
You can write a little script to do them automatically:
#!/bin/sh
# Find the current remote end of the tunnel (HISADDR). Field 5 allows for
# the leading whitespace on the inet line; adjust the grep for your own address.
HISADDR=$(ifconfig wan0 | grep "inet 1.2.3.4" | cut -w -f 5)
# The extra public addresses to add as aliases
ALIASES="1.2.3.41 1.2.3.42 1.2.3.43 1.2.3.44"
for a in $ALIASES
do
        # Remove any stale alias first, then re-add it against today's HISADDR
        ifconfig wan0 delete $a
        ifconfig wan0 alias $a/32 $HISADDR
done
Note that I'm using grep to find the correct inet line based on the static address I know the interface has. Fiddle this to suit your static address or, if you don't have one, grep for inet and hope the first one it finds is correct. I'm also deleting the old aliases first, as they might need to be recreated using the new HISADDR.
This is all well and good, but when do you run the script? Automating it is the trick. Fortunately there's a hook in ppp: it processes the file /etc/ppp/ppp.linkup when the link comes up. As far as I can tell it's the same format as ppp.conf, and you have to label the section with the service name in the same way. What's not documented is how you add alias addresses, but I've found a way by getting it to run ifconfig for you. If you start a line with " !bg ", what follows is run. It's run without an environment, so you have to specify the full path to whatever you want to run, but it does work and it does expand macros like HISADDR. The space in front of the ! is important! Incidentally, there's also a ppp.linkdown.
Here’s my /etc/ppp/ppp.linkup
cloudscape:
 !bg /sbin/ifconfig wan0 alias 1.2.3.40/32 HISADDR
 !bg /sbin/ifconfig wan0 alias 1.2.3.41/32 HISADDR
 !bg /sbin/ifconfig wan0 alias 1.2.3.42/32 HISADDR
 !bg /sbin/ifconfig wan0 alias 1.2.3.43/32 HISADDR
I would very much like to find the documentation for this, but the author (Brian Somers) has moved on to other things and the documentation that's out there appears to be all there is. It was written for dial-up connections and wasn't really designed for fixed lines with multiple public IP addresses.
Meanwhile the other PPP daemon, mpd5, which is supposed to be better, was listed in the FreeBSD Handbook as being for PPPoA, pushing user-ppp for PPPoE. This isn't actually the case, and I may revisit this using mpd5 at some point, because it's faster and more efficient and I don't need all the extra wonderful NAT and firewall features of user-ppp.
Prepending is, of course, adding stuff to the front of a file, whereas appending is sticking it on the end. Prepending is an affront to sequential file handling and should never be attempted, but the question of how you'd do it anyway got a few of us thinking.
Appending stuff to a file is easy:
echo New stuff >> existing-file
The first thing that came into our heads was to reverse the file, append the new stuff and then reverse it again. Except "rev" only reverses the characters within each line – it doesn't touch the order of the lines – and the stuff you want to add would have to be reversed too so it came out the right way and… it was getting messy.
Someone suggested tac to reverse the file line by line, which was better, but tac???? Apparently it's a Linux reverse "cat" that writes the file out backwards, line by line. On Unix the equivalent appears to be "tail -r", but the -r option doesn't exist in the GNU version of tail. I see that a lot on Linux – they have a cut-down version of some Unix utility and spawn another utility to make up for the shortcomings rather than adding the missing option.
But even with tail -r (or tac) it gets a bit messy – you really need a temp file as far as I can figure it. Does anyone have a devious way to avoid it?
This was only going to work with bash, which has an extension that captures the output of a process and feeds it in as if it were a file: <(...). You can do the same in the Bourne shell using a named pipe, but it's just making things more complicated when there's a simpler solution whenever a utility accepts a '-' to stand for stdin in a list of file arguments:
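# a sketch - the cat does the prepending, but you're still writing out a second file
echo New stuff | cat - existing-file > existing-file.new && mv existing-file.new existing-file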
Then it was suggested that sed -i might do it "in place". Of course it's not really in-place on the disk, but it looks neater and you can pretend it is.
echo New Stuff | sed -i "1i $(cat)" existing-file
This was a team effort – Stefan persevered with sed, I added the stdin capture using cat. Note that this is GNU sed – the original sed would require you to specify a backup file extension, i.e. stick a '' after the -i (that's two single quotes for an empty string). Actually, specifying ".backup" might be wise.
I'm looking at the media reporting of the disruption caused to airports by the latest ransomware attack, and I'm once again struck by the lack of detail. The victims are, as always, tight-lipped about it, and this translates in the media to "we don't know what happened apart from it was an attack".
Anyone who knows how this stuff works will have a pretty good idea what went down. So let's look at the Collins Aerospace system at the heart of it. It's reported as being MUSE, but it's actually cMUSE.
cMUSE stands for common-use Multi-User System Environment, and it allows airlines to share check-in desks. It's what's known as a common-use passenger processing system, or CUPPS. When the self-loading cargo presents itself at the check-in it tracks their bags using integration with systems like BagLink, sorts out boarding stuff and so on. Its main competitor, if you look at it that way, is SITA's BagManager, but that only handles and tracks luggage.
Now here's the thing – cMUSE makes a big thing of being cloud-based. It runs on AWS. A SaaS product. It is possible to run it on your own infrastructure, but they sell the benefits of not needing your own servers and expensive IT people to manage them – just let Collins do it for everyone on AWS.
So what went wrong? They haven't said, but a penny to a pound it's the AWS version that got hit. This is why so many airlines got their check-in hijacked in one go – a nice juicy target for the ransomware gangs. At Heathrow I believe it's deployed on over 1,500 terminals on behalf of more than 80 airlines. It's used in over 100 airports worldwide, which isn't a huge share of the total number (there are over 2,000 big ones according to the ACI), but it's been sold extensively to the big European ones – high-traffic, multi-carrier hubs. The ones that matter. Heathrow renewed for another six-year contract this April.
Collins claims going to AWS will save $100K per airport, but that must seem like a false economy right now. Its predecessor, vMUSE, dates from before cloud-mania, and users of the legacy system must be feeling quite smug. Many airports run a hybrid of cMUSE and vMUSE and it's hard to know the mix.
Ottawa International went cloud with a fanfare in 2017, and Shannon Airport chugged down the kool-aid, renewing cloud-only in 2025. Heathrow is likely mostly cloud. Cincinnati/Northern Kentucky and Indira Gandhi International (Delhi) are publicly known to be cloud users. What's the bet Brussels and Berlin Brandenburg are on the list too? Lesser problems at Dublin and Cork, which use the system, suggest they're hybrid or still on vMUSE.
Subscribing to a cloud service for anything important is such a bad idea. You're only as safe as your cloud provider. There's no such thing as a virtual air-gap, and large-scale attacks are only possible because everyone's using the same service. If switching only saves an airport $100K, it would be much better off having servers on-site and paying someone to look after them – part-time, if that's all the money will stretch to.
If you want a games server in the cloud go ahead. If my business depended on it, I’d want to know where my data was and who could get at it.
So the FCA wants to scrap the £100 contactless limit. This means that if someone steals a contactless card they can spend as much as they like from your bank account. But don't worry, the FCA says the banks will have to refund you if it happens.
Part of their justification is that digital wallets (Apple Pay and Google Pay) already allow contactless transactions much higher than the current £100 limit. For anything over £100 the card system asks for a PIN to prove it's really you. Even with that safeguard, criminals can make a series of transactions of around £90 each before the bank's fraud system detects something suspicious.
You might remember that the contactless limit was £30 from 2015 until the pandemic, after which it was raised to £45 and then to £100 in 2021, to reduce the amount of contaminated cash in circulation. It was never reduced again, which some say was a mistake.
The difference between physical cards and Apple Pay/Google Wallet is that the latter require you to unlock the phone first, which is arguably more secure than a four-digit PIN. Claiming that because these are unlimited, PIN security should be stripped from physical cards too is the craziest thing I've heard in years. And the FCA is going out of its way to blame the government.
This morning I woke up to an expired TLS certificate on this blog. This is odd, as it’s automatically renewed from LetsEncrypt using acme.sh, kicked off by a cron job. So what went wrong?
I don’t write about LetsEncrypt or ACME much as I don’t understand everything about it, and it keeps surprising me. But I had discovered a problem with FreeBSD running the latest Apache 2.4 in a jail. As I run my web servers in jails, this applies to me.
I like acme.sh. It's a shell script. Very clever. No dependencies. Dependencies are against my religion. Why would anyone use a more complex system when there's something simple that works?
For convenience reasons the certificates are renewed outside of a jail, and the sites are created using a script that sets it all up for me. One source of certificates for multiple jails; it’s easier to manage. It manages sites on other hosts using a simple NFS mount.
When you use acme.sh to renew a certificate for Apache it needs to be able to plonk something on the web site. This is easy enough – the certificate host (above the jails) can get direct access either through the filing system or via NFS. It then gets the new certificate and copies it into the right place. When you first issue yourself a certificate you specify the path you want the certificate to go to, and the path to the web site. You also specify the command needed to get your web server to reload. It magically remembers all this, so the cron job just goes along and does them all. But that's where the fun starts.
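To make that concrete, issuing and installing a certificate goes roughly like this – the paths are only examples, borrowed from the scheme in the script further down:

acme.sh --issue -d domain.name -w /jail/web/data/www/domain.name
acme.sh --install-cert -d domain.name \
        --cert-file /jail/web/data/certs/domain.name/cert.pem \
        --key-file /jail/web/data/certs/domain.name/cert.key \
        --fullchain-file /jail/web/data/certs/domain.name/Fullchain.pem \
        --reloadcmd "service -j web apache24 restart"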
I rehosted the blog on a new instance of Apache, and created a temporary website to make sure SSL worked – getting acme.sh to issue it a certificate in the process. All good, except I noticed that inside a jail the new version of Apache stops but doesn't restart after an "apachectl graceful". The same goes for "apachectl reload". Not great, so I tried "service -j whatever apache24 restart" instead. A bit drastic, but it worked, and I've yet to figure out why other methods like "jexec whatever apachectl graceful" stall.
So what happened this morning at 6am? There were some certificates to renew and acme.sh --cron accidentally KOed Apache. It's the first time any had expired.
Running acme.sh manually and then restarting Apache manually worked, but it's hardly the dream of automation promised by Unix. Debugging the script, I found it was issuing a graceful restart command, whereas I thought I'd specified something more emphatic. So I started grepping for the line it was using, assuming it must be in a config file somewhere. Nothing.
Long story short, I eventually found where it had hidden the command: in .acme.sh/domain.name/domain.name.conf, in spite of having looked there already. It turns out to be the line "Le_ReloadCmd=", and it's unique for each domain (sensible idea), but it's base64 encoded instead of being plain text! And it's wrapped between "_ACME_BASE64__START_" and "_ACME_BASE64__END_". I assume this is done to avoid difficulties with certain characters in shell scripts, but it makes it a pain to edit. You can create a new command by piping it through base64 and editing very carefully, but readable it ain't.
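If you do decide to edit it in place, you can at least generate the base64 for a new command easily enough – you then have to wrap it in the START/END markers yourself:

printf '%s' 'service -j web apache24 restart' | openssl base64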
There is another way – just recopy the certificate. Unfortunately you need to know, and use, the same options as when you originally created it – you can't just issue a different --reloadcmd. You can check these by looking at the domain.name.conf file, where fortunately they are stored in plain text. Assuming they're all the same, this little script will do them all for you at once. Adjust as required.
#!/bin/sh
# Make sure you're in the right directory
cd ~/.acme.sh
# Jail containing web site, assumed all the same.
WJAIL=web
# Extra options to pass to acme.sh (e.g. --debug); normally left empty.
TEST=""
# This assumes every directory under ~/.acme.sh is a domain - adjust if yours isn't.
for DOM in $(find . -type d -depth 1 | sed "s|^\./||")
do
echo acme.sh $TEST -d $DOM --install-cert \
--cert-file /jail/$WJAIL/data/certs/$DOM/cert.pem \
--key-file /jail/$WJAIL/data/certs/$DOM/cert.key \
--fullchain-file /jail/$WJAIL/data/certs/$DOM/Fullchain.pem \
--reloadcmd "service -j $WJAIL apache24 restart"
done
You will notice that this only echoes the command needed, so if anyone's crazy enough to copy/paste it then it won't do any damage. Remove the "echo" when you're satisfied it's doing the right thing for you.
Or you could just edit all the conf files and replace the Le_ReloadCmd= line – you only have to generate it once, after all.
Following on from Basic UNIX file commands, here's a bit there wasn't time for: changing the metadata on files.
These two commands change a file's permissions and ownership. Permissions are information associated with a file that decides who can do what with it. This was once called the file mode, which is why the command is chmod (CHange MODe). Files also have owners, both an individual user and a group of users, and the command to change these is chown (CHange OWNer).
chown is easiest, so I’ll start there. To make a file belong to fred the command is:
chown fred myfile
To change the owning group to accounts:
chown :accounts myfile
And to change both at once:
chown fred:accounts myfile
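And to change a whole directory tree in one go, -R makes it recursive:

chown -R fred:accounts myproject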
Changing a file's permissions is more tricky, and there are several ways of doing it, but this is probably the easiest to remember. You'll recall that each file has three sets of permissions: Owner, Group and Other. The permissions themselves are read, write and execute (i.e. it's an executable program).
chmod can set or clear a load of permissions in one go. The format is basically who the permissions apply to (u, g or o), a '+' or '-' for set or clear, and then the permissions themselves. What? It's probably easier to explain with a load of examples:
chmod u+w myfile
Allows the owner of the file to write to it (u means user, i.e. the owner).
chmod g+w myfile
Allows any user in the file's group to write to it (g means group).
chmod o+r myfile
Allows any user who is not the owner or in the file's group to read it (o means "other").
You can combine these options
chmod ug+rw myfile
Allows the owner and the group to read and write the file.
chmod go-w myfile
Prevents anyone but the user from being able to modify the file.
If you want to run a program you've just written called myprog, make it executable with:
chmod +x myprog
If you don't specify anything before the +/-, chmod applies the change to user, group and other all at once (subject to your umask).
You might notice an 'x' permission on a directory – in this case it means the directory is searchable by whoever has that permission.
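To see what permissions and ownership a file currently has, use ls -l. After the leading dash the first column shows the owner, group and other permissions in that order, followed by the owner and group names:

$ ls -l myfile
-rw-r--r--  1 fred  accounts  1024 Oct 29 10:15 myfile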