Thoughts on Infosec, 2014 – first day

I usually post a show report about Infosec somewhere, and for various painful reasons, this year it has to go here. And this year I’m at a bit of a loss.

Normally there’s a theme to the show; the latest buzzword and several companies doing the same thing. I wasn’t able to spend as long as normal there today, thanks to the RMT, but I think it’s probably “Cloud Security” this year. As with “cloud” anything, this is a pretty nebulous term.

Needless to say, the first day of the show lacked its usual buzz: a smaller than usual number of visitors, haggard from disrupted journeys, mooched around the booths.

I was a bit surprised to see very little on the “heartbleed bug”, although there were a couple of mentions. Either the marketing people didn’t understand it, or they had uncharacteristically been put in their place.

One stand that’s always interesting is Bit9, a company after my own heart with alternatives to simple virus scanning. They went on a spending spree earlier in the year and have purchased and integrated Carbon Black. This is technology to allow their customers to monitor exactly what’s happening on all their (Windows) computers: which applications launch which others, what initiates a network connection and so on. It’s all very impressive; a GUI allows you to drill down and see exactly what’s happening in excruciating detail. What worries me is the volume of data it’s likely to generate if it’s being used for IDS. There will be so much of it that it’ll be hard to see the wood for the trees. When I questioned this I was told that software would analyse the “big data”, which is a good theory. It’s one to watch.

Plenty of stands were offering the usual firewalls. Or is that “integrated solutions for unified threat management”? Nothing has jumped out yet.

At the end of the day there was a very sensible keynote address by Google’s Dr Peter Dickman that was definitely worth a listen. All solid stuff, but from Google’s perspective as an operator of some serious data centre hardware. He pointed out that Google’s own company is run on its cloud services, so they’re going to take care of everyone’s data as they would their own. Apparently they also have an alligator on guard duty at one of their facilities.

I was a bit saddened to see a notice saying that next year’s show will be in early June, at Olympia. I’ve got fond memories of Earls Court going back more than thirty years to the Personal Computer World show. And Earls Court just has better media facilities!

 

US judge tells Microsoft to hand over data on foreign servers

Yesterday, a judge in a New York court ordered Microsoft to hand over information stored on a server in Ireland following a US search warrant. Magistrate Judge James Francis reckons a search warrant for servers is different to a search warrant for anywhere else – more of a subpoena to hand over documents. Unsurprisingly, Microsoft plans to roll the dice again with a Federal judge this time.

Microsoft, of course, has recently been soothing its cloud customers by saying that if the data is held outside the US, Uncle Sam won’t be able to plunder it in violation of the users’ local rights. This matters particularly for the EU legislation being drafted to prevent companies from sharing EU citizens’ data with foreign powers unless explicitly allowed by international treaty or another EU law; the NSA, or US corporations, would not be allowed simply to look at whatever they wanted.
This plays right into Angela Merkel’s proposal for an EU communications network that can’t be legally snooped on by the yanks, by avoiding the use of US-based servers.

In a statement to Reuters, Microsoft said:

“A U.S. prosecutor cannot obtain a U.S. warrant to search someone’s home located in another country, just as another country’s prosecutor cannot obtain a court order in her home country to conduct a search in the United States. (Microsoft) thinks the same rules should apply in the online world, but the government disagrees.”

Is Microsoft really so naive? Although the ruling followed its challenge of a search warrant concerning a Microsoft account, its implications apply to all US cloud service providers. Although Microsoft intends to appeal, in the meantime any US company holding your data off-shore might as well have its servers in America – they’ll be forced to hand over all your data either way.

This isn’t to say that data held in the UK, for example, is any more secure. There’s RIPA to worry about – the Act allows the authorities to plunder what they like, although it does make it illegal for anyone other than the State to do so.

 

Infosec 2014 set to be disrupted by tube strike

It could hardly come at a worse time for Infosec, the UK’s best information security show, due to take place at Earls Court next week. The RMT is planning a tube strike through the middle of it. Infosec 2014 runs from 29th April to 1st May; the strike starts the evening before and services aren’t expected to resume until 1st May. As many exhibitors shut up early on that day and head for home, and the real networking happens in the evenings at the hostelries around Earls Court, this is something of a disaster.

On a personal note, the largest outlet for my scribblings on the show in recent years shut up shop at the end of 2013; I’ll be putting the trade stuff in the Extreme Computing newsletter and probably blogging a lot more of it here. If I can get there. I shall try my best, and blog live as the show continues.

Freeloaders step in to fund Open Source thanks to OpenSSL fiasco

Some good has come out of the heartbleed bug – some of the larger organisations using OpenSSL have decided to put some money into its development. Quite a lot, in fact. It’s through an initiative of the Linux Foundation, and is supported by the likes of Microsoft, Cisco, Amazon, Intel, Facebook, Google and IBM. The idea is to fund some critical open source projects.

While this is welcome news for the open source community in general, and certainly vindicates the concept, I have to question its effectiveness. The vulnerability was actually reported by the community two years ago, and has since been fixed; however, it persisted through several releases before that happened. One could blame the volunteers who developed it for sloppy coding – for not spotting it themselves and not fixing it when it was pointed out to them earlier – but I can’t blame volunteers.

It’s up to people using Open Source to check that it’s fit for purpose. They should have carried out their own code reviews anyway. At the very least, they should have read the bug reports, which would have told them that these versions were dodgy. Yet none of them did, relying on the community to make sure everything was alright.

I dare say that the code in OpenSSL, and other community projects, is at least as good as much of the commercially written stuff. And on that basis alone, it’s good to see the freeloading users splashing a bit of cash.

I wonder, however, what will happen when Samba (for example) comes under the spotlight. Is Microsoft really going to fund an open-source competitor to its server platform? Will VMware pay to check the security of VirtualBox? Oracle isn’t on the current list of donors, incidentally, but it’s already doing more than anyone to support the open source model.

Restoring cPanel backup to system without cPanel

cPanel is a web front end for “reseller” hosting accounts, and it’s very popular with web designers reselling hosting services. It’s very simple to use, and allows the web designers to set up virtual hosting accounts without giving them any real control over the server – self-service and foolproof. It’s also an expensive thing to license. It makes sense for a self-service, low-cost hosting provider, where the customers do all the work, but for small-scale or “community” hosting providers you’re talking big money.

I’ve just had to rescue a number of web sites from a developer using one of these hosting services, and they’ve got a lot of sites. The only access to the virtual server is through cPanel (and FTP to a home directory). I logged in to cPanel and found an option to create a backup of everything in one big tarball, which looked like just what I wanted to grab them all at once. However, it’s designed to be uploaded and unpacked in another cPanel environment.

Getting out the home directories is pretty straightforward. They end up in a directory called “homedir”, and you just move it to where you want them – i.e. ~username/www/. But how about restoring the dump of the MySQL databases? Actually, that’s pretty simple too. They’re in a directory called “mysql”, but instead of one big dump, each database is in its own file – and without the create commands, which are in a separate file with the extension “.create” instead of “.sql”. Loading them all manually is going to be a time-wasting PITA, but I’ve worked out that the following shell script will do it for you if you run it while in the backup’s mysql directory:

for name in `find . -name "*.create"`; do
    cat $name `echo $name | sed s/.create/.sql/` | mysql
done

You obviously have to be in the directory with the files (or edit find’s specification) and logged in as root (or add the root login as a parameter to the mysql utility).
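If you’re not logged in as root, here’s a minimal sketch of the same loop with explicit credentials (substitute your own MySQL admin user and password – and bear in mind that a password given on the command line is visible to other users in the process list):

for name in `find . -name "*.create"`; do
    cat $name `echo $name | sed s/.create/.sql/` | mysql -u root -p'YourPasswordHere'
done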

You’ll also want to set the user/password combination on these. The tarball will have a file called mysql.sql in its root directory. Just feed it in thus:

mysql < mysql.sql
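Depending on how cPanel wrote that file – GRANT statements take effect immediately, whereas rows inserted directly into the mysql.* tables don’t – you may also need to flush the privilege tables. A quick sanity check, assuming root access to MySQL, is:

mysql -u root -p -e "FLUSH PRIVILEGES; SELECT User, Host FROM mysql.user;"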

Please be aware that I figured this out looking at the files in the dump and NOT by reading any magic documentation. It works on the version of cPanel I encountered, and I was restoring to FreeBSD. By all means add a comment if you have a different experience when you try it, and don’t go this way if you’re not sure how to operate a MySQL database or you could do a lot of damage!

The final hurdle is configuring Apache for all these new sites. cPanel creates a directory in the dump called “userdata”, and this seems to contain a file with information about each web site. I decided to automate the job and wrote the following script:


#!/bin/sh

# Convert cPanel dump of "userdata" in to a series of Apache .conf files
# (c) F J Leonhardt 17 April 2014 - www.fjl.co.uk
# You may use this script for your own purposes, but must not distribute it without the copyright message above left intact

# Directory to write config files
# Normally /usr/local/etc/apache22/Include but you might want to write
# them somewhere else to check them first!

confdir=/usr/local/etc/apache22/Include

# oldhome and newhome are the old and new home directories (where the web sites are stored)
# oldtestname and newtestname are used (together with a sub-domain) to implement test web sites before
# they have a real domain name pointed at them. They will be substituted in server names and aliases

oldhome=/data03/exampleuser/public_html
newhome=/home/exampleuser/www
oldtestname=exampleuser.oldisp.co.uk
newtestname=newuser.fjl.org.uk

# Now some static information to add to all virtual hosts
# vhost is the IP address or hostname you're using for virtual hosting (i.e. the actual name of the server)
# serveradmin is the email address of the server admin
# logdir is the directory you want to put the log files in (assuming you're doing separate ones). If
# you don't want separate log files, remove or comment out the ErrorLog/CustomLog lines below

vhost=web.exampleuser.com
serveradmin=yourname@example.com
logdir=/var/log

getvalue()
{
grep ^$1: $name | sed s!$1:\ !! | sed s!$oldtestname!$newtestname!
}

# Start of main loop. We DO NOT want to process the special file in the directory called "main",
# so a check is made.

for name in `ls`; do
if [ "$name" != "main" ]
then
echo -n "Processing $name "

if grep ^servername: $name >>/dev/null
then

# First we get some info from the file

sitename=`getvalue servername`
serveralias=`getvalue serveralias`
documentroot=`getvalue documentroot`

# Below we're setting the .conf pathname based on the first part of the file name (up to the first '.')
# This assumes that the file names are in the form websitename.isp.test.domain.com
#
# If the sitename in the source file is actually the name of the site (rather than a test alias) use
# this instead with something like:
#
# Basically, you want to end up with $givensitename as something meaningful when you see it
#
#givensitename=$sitename

givensitename=`echo $name | cut -d \. -f1`

confname=$confdir/$givensitename.conf

echo to $confname

echo "" >$confname
echo -e \\tServerAdmin $serveradmin >>$confname
echo -e \\tServerName $sitename >>$confname
for aname in $serveralias; do
echo -e \\tServerAlias $aname >>$confname
done
echo -e \\tDocumentRoot `echo $documentroot | sed s!$oldhome!$newhome!` >>$confname
echo -e \\tErrorLog $logdir/$givensitename-error.log >>$confname
echo -e \\tCustomLog $logdir/$givensitename-access.log combined >>$confname
echo "
" >>$confname

#from check that servername present
else
echo "- ignoring file - no servername therefore wrong format?"
fi

#fi from check it wasn't called "main"
fi
done
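For reference, each generated file should come out looking roughly like the sketch below – the site name “somesite” and the paths are hypothetical, based on the example variables above. Once the files look sane, check the syntax and reload Apache:

# Roughly what one generated .conf file looks like (hypothetical site "somesite"):
#
# <VirtualHost web.exampleuser.com>
#     ServerAdmin yourname@example.com
#     ServerName somesite.newuser.fjl.org.uk
#     ServerAlias www.somesite.co.uk
#     DocumentRoot /home/exampleuser/www/somesite
#     ErrorLog /var/log/somesite-error.log
#     CustomLog /var/log/somesite-access.log combined
# </VirtualHost>

# Make sure httpd.conf actually Includes the directory, then test and reload:
grep -n "^Include" /usr/local/etc/apache22/httpd.conf
apachectl configtest && apachectl graceful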

All of the above assumes you’re familiar with setting up virtual hosting on an Apache 2.2 HTTP server in a UNIX-like environment. It’s just too complicated to explain all that in a single blog post. Drop me a line if you need assistance.

Heartbleed bug not as widespread as thought

Having tested a few servers I’m involved with, many of which are using old or very old versions of OpenSSL, I can’t say I’ve found many with the problem. You can test a server at http://filippo.io/Heartbleed/, a site recommended by Bruce Schneier.

So what’s going on? Does this only affect very specific, nearly-new releases? This story could turn out to be a serious but solvable problem, and a media panic. I recall spending most of 1999 doing interviews on how the “year 2000 bug” was going to be a damp squib, but it’s early days yet.
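For what it’s worth, the answer appears to be yes: the TLS heartbeat code only arrived in the 1.0.1 branch of OpenSSL, so the older 0.9.8 and 1.0.0 releases never had it, and 1.0.1g carries the fix. If you have shell access to a server, a quick first check is simply:

# Vulnerable: OpenSSL 1.0.1 through 1.0.1f
# Not affected: the 0.9.8 and 1.0.0 branches
# Fixed: 1.0.1g and later, or a vendor build with the fix back-ported
openssl version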

Heartbleed bug

Someone’s finally found a serious bug in OpenSSL. It allows a remote attacker to snoop around in the process’s memory, and this is seriously bad news because that is where you will find the private keys it’s using. They’re called “private keys” because, unlike public keys, they need to remain private.

This is going to affect most web sites using https, and secure email (if you’re using it – most aren’t). But before users rush off to change their passwords (which are different for each site, aren’t they?) – there’s no point in doing this if an attacker is watching. The popular press reckons your passwords are compromised; I don’t. If I understand it correctly, this exploit theoretically allows an attacker to intercept encrypted traffic by pretending to be someone else, and in doing so read everything you send – including your password. So don’t log in until the server is fixed. They can’t read your password until you use it.

To cure this bug you need a new version of OpenSSL, which is going to be a complete PITA for server operators who aren’t on-site. Hell, it’ll be a PITA even if you are on-site with the servers. Once this is done you’ll also need new certificates, and the certificate authorities aren’t geared up for everyone in the world changing at once.
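As a rough sketch of what that involves on a FreeBSD box like the ones I look after (adjust to taste for your own platform and package system – this is not a universal recipe):

# Patch the base-system OpenSSL, then rebuild any ports-built copy in /usr/local
freebsd-update fetch install
portsnap fetch update
cd /usr/ports/security/openssl && make deinstall reinstall clean
openssl version               # expect 1.0.1g or a patched vendor build
service apache22 restart      # restart anything still linked to the old library
# ...and then re-key and replace the SSL certificates, which is a separate job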

But the big fun one is when you can’t update OpenSSL. It’s used everywhere, including in embedded systems for which there was never any upgrade route. I’m talking routers, smart TVs – everything.

I believe that SSH isn’t affected by this, which is one good thing, but I’m waiting for confirmation. Watch this space.

But, if you’re using a secure web site to log in over SSL, consider the password compromised if you’ve used it in the last few days and be prepared to change it soon.

NSPCC claims that 6% of teenage boys read Pornhub

Peter Wanless, CEO of the NSPCC, has made a fool of himself and the organisation he represents by calling for unworkable restrictions to be placed on porn websites to prevent access by minors. This is on the back of some dubious-looking research from avstat, who have made similar headline-grabbing claims, including that 6% of males aged 12-16 looked at a site called Pornhub during the course of one month. This is based on a survey of traffic, apparently.

It’s pretty obvious to anyone in a position to see net traffic that this is most improbable, and it’s only a matter of time before the research is ripped to shreds. That the NSPCC is taking it seriously raises more questions about the organisation’s competence. Time for a new CEO, methinks.

MH370 – One week later, wreckage found. Really?

So, an Australian satellite has spotted debris in the Indian Ocean at the far end of the arc on which MH370’s engine data placed the aircraft for seven hours. There’s now going to be a rush to find it, no doubt.

[Fuzzy satellite picture of what Australia hopes is wreckage of MH370. Is it a plane? Is it a wave? It is a statistical certainty.]

Apparently these images are four days old and have only just come back from analysis.

I think this could well be a wild goose chase. What we’re looking at is a cluster of white dots in a texture of black and white. Experts have declared it likely debris; to me it looks more like waves. Or perhaps it’s a container washed off a ship, or who knows what? That it’s part of MH370 seems very unlikely. Probability is against it.

Let’s look at that probability. Firstly, why is the aircraft presumed to be on this arc leading north and south from Malaysia? It’s actually the line of equal distance (more or less) from the Inmarsat satellite collecting the data from the engines, and is based on a one-dimensional fix: namely the elevation. I believe it’s known to be about 40 degrees from the satellite. That’s sound.

The arc ends where the aircraft stopped transmitting, which is also when it is likely to have run out of fuel, and the maximum distance it could have flown along the arc.

However, to get to the far end of the arc, someone would have to have flown it there – or set the autopilot to follow THAT course. Not any of the other courses it could have taken from the point, but that precise arced course. It’s not impossible; it could have taken this course. But is it likely? Probability says “no”.

What seems more probable to me is that the aircraft hung around in a holding pattern close to where it was lost. That’s where to look. If the satellites have found it, great – and the explanation as to why it followed that precise course will be interesting, but I’m not hopeful.

If you’re working on a conspiracy theory, the data sent to Inmarsat could have come from a ground-based transmitter; it could be fake, designed to throw investigators off the scent.

Missing Malaysian Airliner

I’ve got more interest than usual in this, as I happened to be on a ‘plane in the same airspace a few hours afterwards. It makes you think while waiting to board in Singapore.

Three days later, no wreckage has been found and there are rumours of the aircraft changing course. Hijack? That’s what it looks like to me, based on the facts released. First off, there was no distress call. The same was true of Air France 447 in 2009 (discounting the automated transmissions), but that was way out over the ocean, a long way into the flight; MH370 had only recently departed and was in crowded airspace, in range of ATC and showing up on civil radar.

Much was made of the passengers travelling on stolen passports; given that part of the world I’d be surprised if there weren’t several on every flight out of KL. If it was a terrorist attack, someone would have claimed it by now anyway. And if it was external hijackers, the crew would have raised the alarm.

So what could have happened? The release of the final radio message is a huge clue – they were handing over from Malaysia to Vietnam, mid-way across the sea. Hand-overs are important: you say goodbye, change frequency and say hello. Only the goodbye happened.

If the aircraft had suffered a very sudden and catastrophic failure, the wreckage would be floating on the ocean below at that point. So that leaves the aircrew. They could have turned off the transponder and done what they liked.

If external agents had hijacked the aircraft, the pilots would have triggered the hijack alarm on the transponder and made a distress call. They were in radar range, and radio range. And the security on the cockpit door would have allowed them time.

If I was flying an aircraft and wanted to take it over, mid-sea on ATC handover would be the obvious place to do it. Malaysia wouldn’t expect contact because they’d left; Vietnam wouldn’t notice loss of contact because none had been made; they’d assume they were still talking to Malaysia. Just speculating out loud…

Only military radar would be taking any interest in the aircraft, and in that part of the world you bet they were watching but don’t really want to talk about it.