Freeloaders step in to fund Open Source thanks to OpenSSL fiasco

Some good has come out of the Heartbleed bug – some of the larger organisations using OpenSSL have decided to put some money into its development. Quite a lot, in fact. It’s through an initiative of the Linux Foundation, and is supported by the likes of Microsoft, Cisco, Amazon, Intel, Facebook, Google and IBM. The idea is to fund some critical open source projects.

While this is welcome news for the open source community in general, and certainly vindicates the concept, I have to question its effectiveness. The vulnerability was actually reported by the community two years ago, and has since been fixed; however, it persisted in several releases in the meantime. One could blame the volunteers who developed it for sloppy coding, for not spotting it themselves, or for not fixing it when it was pointed out to them earlier. But I can’t blame volunteers.

It’s up to people using Open Source to check that it’s fit for purpose. They should have carried out their own code reviews anyway. At the very least, they should have read the bug reports, which would have told them that these versions were dodgy. Yet none of them did, relying instead on the community to make sure everything was all right.

I dare say that the code in OpenSSL, and other community projects, is at least as good as much of the commercially written stuff. And on that basis alone, it’s good to see the freeloading users splashing a bit of cash.

I wonder, however, what will happen when Samba (for example) comes under the spotlight. Is Microsoft really going to fund an open-source competitor to its server platform? Or will VMware pay to check the security of VirtualBox? Oracle isn’t on the current list of donors, incidentally, but they’re doing more than anyone to support the open source model already.

Restoring cPanel backup to system without cPanel

cPanel is a web front end for “reseller” hosting accounts, and it’s very popular with web designers reselling hosting services. It’s very simple to use, and allows the web designers to set up virtual hosting accounts without giving them any real control over the server – self-service and foolproof. It’s also an expensive thing to license. The cost makes sense for a self-service low-cost hosting provider, where the customers do all the work, but for small-scale or “community” hosting providers you’re talking big money.

I’ve just had to rescue a number of web sites from a developer using one of these hosting services, and they’ve got a lot of sites. The only access to the virtual server is through cPanel (and FTP to a home directory). I logged in to cPanel and found an option to create a backup of everything in one big tarball, which looked like just what I wanted to grab them all at once. However, it was designed to be uploaded and unpacked in another cPanel environment.

Getting the home directories out is pretty straightforward. They end up in a directory called “homedir”, and you just move it to where you want them – i.e. ~username/www/. But how about restoring the dump of the MySQL databases? Actually, that’s pretty simple too. They’re in a directory called “mysql”, but instead of one big dump, each database is in its own file – and without the create commands, which are in another file with the extension “.create” instead of “.sql”. Loading them all manually would be a time-wasting PITA, but I’ve worked out that the following shell script will do it for you if you run it while in the backup’s mysql directory:

for name in `find . -name "*.create"`; do
    cat $name `echo $name | sed s/.create/.sql/` | mysql
done

You obviously have to be in the directory with the files (or edit find’s specification) and logged in as root (or add the root login as a parameter to the mysql utility).
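As a slightly safer variant (a sketch, assuming the same .create/.sql layout and a root login – adjust the credentials to suit), the pairing logic can go in a little function that prints each load command for inspection before anything touches the database:

```shell
# Sketch, not gospel: pair each .create file with its .sql twin and
# print the load command rather than running it, so you can check the
# pairings first. Pass the backup's "mysql" directory as the argument.
list_loads()
{
    for create in "$1"/*.create; do
        [ -e "$create" ] || continue          # no .create files at all
        sql="${create%.create}.sql"           # the matching data dump
        printf 'cat %s %s | mysql -u root -p\n' "$create" "$sql"
    done
}
```

Run `list_loads .` from inside the backup’s mysql directory, and pipe the output to sh once you’re happy with what it’s about to do.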

You’ll also want to set the user/password combination on these. The tarball will have a file called mysql.sql in its root directory. Just feed it in thus:

mysql < mysql.sql

Please be aware that I figured this out looking at the files in the dump and NOT by reading any magic documentation. It works on the version of cPanel I encountered, and I was restoring to FreeBSD. By all means add a comment if you have a different experience when you try it, and don’t go this way if you’re not sure how to operate a MySQL database or you could do a lot of damage!

The final hurdle is configuring Apache for all these new sites. cPanel creates a directory in the dump called “userdata”, and this seems to contain a file with information about each web site. I decided to automate and wrote the following script:


#!/bin/sh

# Convert cPanel dump of "userdata" into a series of Apache .conf files
# (c) F J Leonhardt 17 April 2014 - www.fjl.co.uk
# You may use this script for your own purposes, but must not distribute it without the copyright message above left intact

# Directory to write config files
# Normally /usr/local/etc/apache22/Include but you might want to write
# them somewhere else to check them first!

confdir=/usr/local/etc/apache22/Include

# oldhome and newhome are the old and new home directories (where the web sites are stored)
# oldtestname and newtestname are used (together with a sub-domain) to implement test web sites before
# they have a real domain name pointed at them. They will be substituted in server names and aliases

oldhome=/data03/exampleuser/public_html
newhome=/home/exampleuser/www
oldtestname=exampleuser.oldisp.co.uk
newtestname=newuser.fjl.org.uk

# Now some static information to add to all virtual hosts
# vhost is the IP address or hostname you're using for virtual hosting (i.e. the actual name of the server)
# serveradmin is the email address of the server admin
# logdir is the directory you want to put the log files in (assuming you're doing separate ones)

vhost=web.exampleuser.com
serveradmin=yourname@example.com
logdir=/var/log

getvalue()
{
    grep ^$1: $name | sed s!$1:\ !! | sed s!$oldtestname!$newtestname!
}

# Start of main loop. We DO NOT want to process a special file in the directory called "main" so
# a check is made.

for name in `ls`; do
    if [ "$name" != "main" ]
    then
        echo -n "Processing $name "

        if grep ^servername: $name >/dev/null
        then

            # First we get some info from the file

            sitename=`getvalue servername`
            serveralias=`getvalue serveralias`
            documentroot=`getvalue documentroot`

            # Below we're setting the .conf pathname based on the first part of the file name (up to the first '.')
            # This assumes that the file names are in the form websitename.isp.test.domain.com
            #
            # If the sitename in the source file is actually the name of the site (rather than a test alias) use
            # this instead with something like:
            #
            #givensitename=$sitename
            #
            # Basically, you want to end up with $givensitename as something meaningful when you see it

            givensitename=`echo $name | cut -d \. -f1`

            confname=$confdir/$givensitename.conf

            echo to $confname

            echo "<VirtualHost $vhost>" >$confname
            echo -e \\tServerAdmin $serveradmin >>$confname
            echo -e \\tServerName $sitename >>$confname
            for aname in $serveralias; do
                echo -e \\tServerAlias $aname >>$confname
            done
            echo -e \\tDocumentRoot `echo $documentroot | sed s!$oldhome!$newhome!` >>$confname
            echo -e \\tErrorLog $logdir/$givensitename-error.log >>$confname
            echo -e \\tCustomLog $logdir/$givensitename-access.log combined >>$confname
            echo "</VirtualHost>" >>$confname

        #from check that servername present
        else
            echo "- ignoring file - no servername therefore wrong format?"
        fi

    #fi from check it wasn't called "main"
    fi
done
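For each userdata file, the loop should leave you with a small .conf file in $confdir looking something like the following (using the placeholder values set at the top of the script; the site name “shop” here is invented for illustration):

```
<VirtualHost web.exampleuser.com>
	ServerAdmin yourname@example.com
	ServerName shop.newuser.fjl.org.uk
	ServerAlias www.shop.newuser.fjl.org.uk
	DocumentRoot /home/exampleuser/www/shop
	ErrorLog /var/log/shop-error.log
	CustomLog /var/log/shop-access.log combined
</VirtualHost>
```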

All of the above assumes you’re familiar with setting up virtual hosting on an Apache 2.2 HTTP server in a UNIX-like environment. It’s just too complicated to explain in a single blog post. Drop me a line if you need assistance.

Heartbleed bug not as widespread as thought

Having tested a few servers I’m involved with, many of which are using old or very old versions of OpenSSL, I can’t say I’ve found many with the problem. You can test a server at http://filippo.io/Heartbleed/, a site recommended by Bruce Schneier.

So what’s going on? Does this affect only very specific, nearly-new releases? This could turn out to be a serious but solvable problem that has nonetheless sparked a media panic. I recall spending most of 1999 doing interviews on how the “year 2000 bug” was going to be a damp squib, but it’s early days yet.

Heartbleed bug

Someone’s finally found a serious bug in OpenSSL. It allows a remote attacker to snoop around in a process’s memory, and this is seriously bad news because that’s where you will find the private keys it’s using. They’re called “private keys” because, unlike public keys, they need to remain private.

This is going to affect most web sites using https, and secure email (if you’re using it – most aren’t). But before users rush off to change their passwords (which are different for each site, aren’t they?), note that there’s no point in doing so while an attacker is watching. The popular press reckons your passwords are compromised; I don’t. If I understand it correctly, this exploit theoretically allows an attacker to intercept encrypted traffic by pretending to be someone else, and in doing so read everything you send – including your password. So don’t log in until the server is fixed. They can’t read your password until you use it.

To cure this bug you need a new version of OpenSSL, which is going to be a complete PITA for server operators who aren’t on-site. Hell, it’ll be a PITA even if you are on-site with the servers. Once this is done you’ll also need new certificates, and the certificate authorities aren’t geared up for everyone in the world changing at once.
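For what it’s worth, once you can actually reach the box, the update itself is mechanical. On a FreeBSD server it comes down to something along these lines (package and service names are examples – adjust to your own setup):

```shell
# Check what's installed: versions 1.0.1 through 1.0.1f are affected,
# 1.0.1g carries the fix
openssl version

# Pull in the fixed build (FreeBSD pkg shown; use your platform's equivalent)
pkg update && pkg upgrade openssl

# Restart anything linked against the old library, then re-key and
# re-issue certificates - the old private keys must be assumed leaked
service apache22 restart
```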

But the big fun one is when you can’t update OpenSSL. It’s used everywhere, including in embedded systems for which there was never any upgrade route. I’m talking routers, smart TVs – everything.

I believe that SSH isn’t affected by this, which is one good thing, but I’m waiting for confirmation. Watch this space.

But, if you’re using a secure web site to log in over SSL, consider the password compromised if you’ve used it in the last few days and be prepared to change it soon.

NSPCC claims that 6% of teenage boys read Pornhub

Peter Wanless, CEO of the NSPCC, has made a fool of himself and the organisation he represents by calling for unworkable restrictions to be placed on porn websites to prevent access by minors. This is on the back of some dubious-looking research from avstat, who have made similar headline-grabbing claims that 6% of males aged 12-16 have been looking at a site called Pornhub during the course of one month. This is based on a survey of traffic, apparently.

It’s pretty obvious to anyone in a position to see net traffic that this is most improbable, and it’s only a matter of time before the research is ripped to shreds. That the NSPCC is taking it seriously raises further questions about the organisation’s competence. Time for a new CEO, methinks.

MH370 – One week later, wreckage found. Really?

So, an Australian satellite has spotted debris in the Indian Ocean at the far end of the arc on which MH370’s engine data placed the aircraft for seven hours. There’s now going to be a rush to find it, no doubt.

Fuzzy picture of what Australia hopes is wreckage of MH370
Is it a plane? Is it a wave? It is a statistical certainty

Apparently these images are four days old and have only just come back from analysis.

I think this could well be a wild goose chase. What we’re looking at is a cluster of white dots in a texture of black and white. Experts have declared it likely debris; to me it looks more like waves. Or perhaps it’s a container washed off a ship, or who knows what? That it’s part of MH370 seems very unlikely. Probability is against it.

Let’s look at that probability. Firstly, why is the aircraft presumed to be on this arc leading north and south from Malaysia? It’s actually the line of equal distance (more or less) from the Inmarsat satellite collecting the data from the engines, and this is based on a one-dimensional fix – namely the elevation, which I believe is known to be 40 degrees declination from the satellite. That’s sound.

The arc ends where the aircraft stopped transmitting, which is also when it is likely to have run out of fuel, and the maximum distance it could have flown along the arc.

However, to get to the far end of the arc, someone would have to have flown it there – or set the autopilot to follow THAT course. Not any of the other courses it could have taken from the point, but that precise arced course. It’s not impossible; it could have taken this course. But is it likely? Probability says “no”.

What seems more probable to me is that the aircraft hung around in a holding pattern close to where it was lost. That’s where to look. If the satellites have found it, great – and the explanation as to why it followed that precise course will be interesting, but I’m not hopeful.

If you’re working on a conspiracy theory, the data sent to Inmarsat could have come from a ground-based transmitter; it could be fake, to throw investigators off the scent.

Missing Malaysian Airliner

I’ve got more interest than usual in this, as I happened to be on a ‘plane in the same airspace a few hours afterwards. It makes you think while waiting to board in Singapore.

Three days later, no wreckage has been found and there are rumours of the aircraft changing course. Hijack? That’s what it looks like to me, based on the facts released. First off, there was no distress call. The same was true of Air France 447 in 2009 (discounting the automated transmissions), but that was way out over the ocean, a long way into the flight; MH370 had only recently departed and was in crowded airspace, in range of ATC and showing up on civil radar.

Much was made of the passengers travelling on stolen passports; given that part of the world I’d be surprised if there weren’t several on every flight out of KL. If it was a terrorist attack, someone would have claimed it by now anyway. And if it was external hijackers, the crew would have raised the alarm.

So what could have happened? The release of the final radio message is a huge clue – they were handing over from Malaysia to Vietnam, mid-way across the sea. Hand-overs are important: you say goodbye, change frequency and say hello. Only the goodbye happened.

If the aircraft had suffered a very sudden and catastrophic failure, the wreckage would be floating on the ocean below at that point. So that leaves the aircrew. They could have turned off the transponder and done what they liked.

If external agents had hijacked an aircraft the pilots would have triggered the hijack alarm on the transponder and made a distress call. They were in radar range, and radio range. And the security on the cockpit door would have allowed them time.

If I was flying an aircraft and wanted to take it over, mid-sea on ATC handover would be the obvious place to do it. Malaysia wouldn’t expect contact because they’d left; Vietnam wouldn’t notice loss of contact because none had been made; they’d assume they were still talking to Malaysia. Just speculating out loud…

Only military radar would be taking any interest in the aircraft, and in that part of the world you bet they were watching but don’t really want to talk about it.

Criminals using self-assessment tax filing deadline to drop Trojans

I’ve intercepted rather a lot of these:

From: <gateway.confirmation@gateway.gov.uk>
To: <**************>
Date: Mon, 3 Feb 2014 20:33:49 +0100
Subject: Your Online Submission for Reference 485/GB6977453 Could not process

The submission for reference 485/GB6977453 was successfully received and was not processed.

Check attached copy for more information.

This is an automatically generated email. Please do not reply as the email address is not monitored for received mail.

Someone (via France, and the sender certainly does not speak proper English) is taking advantage of people’s panic about getting self-assessment tax forms in before the 31st January deadline to avoid a fine. The attached ZIP file contains an executable with a .scr extension. It doesn’t show up as anything recognisably nasty, so someone’s planned this well. Be careful; this is slipping through ISP malware scanners (and all the Windoze desktop scanners I’ve checked it against).

 

FreeBSD 10.0 and ZFS

It’s finally here: FreeBSD 10.0 with ZFS. I’ve been pretty happy for many years with twin-drive systems protected using gmirror and UFS. It does what I want. If a disk fails it drops it out and sends me an email, but otherwise carries on. When I put in a replacement blank disk it re-builds the mirror. If I take one disk out, put it into another machine and boot it, it’ll wake up happy. It’s robust!

So why mess around with ZFS, the system that puts your drives into a pool and decides where things are stored, so you don’t have to worry your pretty little head about it? The snag is that the old ways are dying out, and sooner or later you’ll have no choice.

Unfortunately, the transition hasn’t been that smooth. First off you have to consider 2Tb+ drives and how you partition them. MBR partition tables have difficulties with the number of sectors, although AF drives with larger sectors can bodge around this. It can get messy though, as many systems expect 512b sectors, not 4k, so everything has to be AF-aware. In my experience, it’s not worth the hassle.

The snag with the new and limitless “GPT” scheme is that it keeps safe copies of the partition table at the end of the disk, as well as the start. This tends to be where gmirror stores its meta-data too. You can’t mix gmirror and GPT. Although the code is hackable, I’ve got better things to do.

So the good news is that it does actually work as a replacement for gmirror. To test it I stuck two new 3Tb AF drives into a server and installed 10.0 using the new procedure, selecting the “zfs on root” option and GPT partitioning from the menu. This is shown in the menu as “Experimental”, but seems to work. What you end up with, if you select two drives and say you want a zfs mirror, is just that.

Being the suspicious type, I pulled each of the drives in turn to see what happened, and the system continued without missing a beat, just like gmirror did. There were also a couple of nice surprises when I stuck the drives back in and “onlined” them:

First off, the re-build was almost instant. Secondly, HP’s “non-hot-swap” drive bays work just fine for hot-swap under FreeBSD/ZFS. I’d always suspected this was Windoze nonsense. All good news.

So why is the re-build so fast? It’s obvious when you consider what’s going on. The GEOM system works at block level. If the mirror is broken it has no way of telling which blocks are valid, so the only option is to copy them all. A major feature of ZFS, however, is that directories and files have validation codes in the blocks above them, going all the way up to the root. Therefore, by starting at the root and chaining down, it’s easy to find the blocks containing changed data and copy only those. Nice! Getting rid of separate volume managers and file systems has its advantages.
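The pull-and-replace test above comes down to a handful of zpool commands (pool and device names here are examples; a default 10.0 “zfs on root” install names the pool zroot):

```shell
# Watch pool health and resilver progress after pulling a drive
zpool status zroot

# Bring a re-inserted drive back into the mirror ("onlining" it)
zpool online zroot ada1p3

# For a brand-new blank disk, partition it to match and use replace instead
zpool replace zroot ada1p3 ada2p3
```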

So am I comfortable with ZFS? Not yet, but I’m a lot happier with it now that it’s a complete, integrated solution. Previously I’d only been using it on data drives in multi-drive configurations, as although it was possible to install root on ZFS, it was a real PITA.

Advertorial in Process Engineering Control & Maintenance

The relationship between journals and advertisers has always been tricky, with many of them forced to say nice things, or at least avoid saying anything bad, about major advertisers. In my day as an editor I was free to say what I liked: before the Internet, no advertiser could afford to stop advertising, because it was the best route to reaching potential customers.

Times have certainly changed, and today marks a new low. We’ve intercepted several spammed messages offering to sell editorial in Process Engineering Control and Maintenance. Normally I wouldn’t draw attention to this, but they were sent to a spamming list and picked up by no fewer than six honeypots – addresses that no legitimate sender of bulk mail should be using. Therefore they’re fair game.

Dear Public Relations Manager

I deal with the editorial content for the Process Engineering Control & Maintenance publication, and are just putting together our editorial feature pages within our February edition, this is a very special edition as this will not only be distributed to our exclusive 100,000 named circulation but an extra 5,000 copies will also be distributed at MAINTEC, Sustainability Live & National Electronics Week to the wide range of purchasing professionals that attend.

I wanted to contact you to see if you would be able to provide some editorial content for this special edition.

The only cost to include a press release within this special edition would be a small editorial set up fee of just £85…

…As I am only able to offer this editorial opportunity to the first few companies to respond to this offer, please email me the editorial content that you would like to include, and please confirm that you would be happy to pay the £85 set up fee.

Kind Regards

******* ******** CIE

[name and telephone number deleted]

If you’re one of the 105,000 people “lucky” enough to get a copy of the magazine, you have been warned.