FBI VoIP system conference call intercepted by Anonymous?

Major embarrassment today as Anonymous intercepts a conference call between several European and American law enforcement agencies, according to a report I’ve just seen on the BBC. The recording is on YouTube right now if you want to hear it for yourself.

It got my attention – someone breaking into a VoIP system would. But on further investigation it’s pretty obvious to me that it wasn’t an intercept at all. The clues are in the intercepted email and the start of the recording – Anonymous had simply read an email circular inviting people to the conference call, which gave out the access number and password.

This makes the authorities concerned seem even more incompetent than if they’d had their VoIP service compromised.


Certificate “Errors” on Internet Explorer 9 – and how to stop them

Like other recent versions of Internet Explorer, version 9 has a Microsoft-style way of handling SSL certificates. It won’t let lusers access anything over a secure connection if there’s anything wrong with the certificate the remote end has presented. On the face of it this is all very reasonable, as you don’t want the lusers being tricked by nasty criminals. But in reality it’s not as simple as that.

A bit of background, because everyone should make an informed choice about this…

SSL (or TLS) has two purposes – authentication and encryption. When you send data over SSL two things happen. Firstly, it’s only readable by the receiving computer (i.e. it’s encrypted), and secondly you know you’re talking to the right server (the link is authenticated – both computers recognise each other). The computers don’t exactly exchange passwords, but they have a way of recognising each other’s SSL certificate. Put simply, if two computers need to talk they each have a copy of the other’s certificate stored on disk, and they use it to make sure they’re not talking to an impostor (a gross over-simplification, but it’s a paradigm that works). Should one computer not have the certificate needed to authenticate the other end, the certificate will be supplied, and this supplied certificate is checked to see if it’s “signed” by a “signing authority” using a certificate the computer does already have. In other words, the unknown remote certificate arrives, and the computer checks it against a “signing authority” certificate to see whether it’s been signed and is therefore to be trusted. If it’s okay, it’s stored and used.
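If you want to see this exchange for yourself, openssl will happily show you what a server presents and who signed it. A rough sketch (with example.com standing in for whatever server you’re curious about):

# dump the certificate (and any chain) the server hands out on port 443
echo | openssl s_client -connect example.com:443 -showcerts
# or just pull out who issued (i.e. signed) the certificate
echo | openssl s_client -connect example.com:443 2>/dev/null | openssl x509 -noout -issuer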

Now here’s where it breaks in Microsoft-land: For your computer’s certificate (the one it sends) to be signed by a “signing authority”, money has to change hands. Quite a lot of money, in fact. If it’s not signed, the recipient will have no way of knowing it’s really you.

In the rest of the world (where SSL came from), on receipt of an unknown certificate you’d see a message saying, in effect, that the remote computer can be recognised using the supplied certificate, but it’s never been seen before: do we trust it? In most cases the answer would be “yes”, and the two computers then recognise each other on subsequent connections. It’s okay to do this – it’s normal. Something like this happens on Windows with Firefox and other browsers, but not, apparently, Internet Explorer. Not until you dig a bit deeper, anyway. Actually, Internet Explorer 9 can be made to recognise unsigned security certificates, and here’s how.

First off, we really need to know what we’re about to do. What are the symptoms? The address bar goes red and you get a page saying there’s a problem with the certificate every time you visit a “site”. You can click on something to proceed anyway, but the implication is that you’re heading for your doom. The “error” message you see is normally for one of three reasons, and reading it might be enlightening. On a bad day you might get all three! But taking them in turn:

“The security certificate presented by this website was not issued by a trusted certificate authority.”

This just means that no one has paid to have this certificate signed by a signing authority of Microsoft’s liking. It may be a private company-wide certificate, or one belonging to a piece of network equipment such as a router. If it’s a web site belonging to your bank or an on-line shop, then you should be worried! Otherwise, if there’s a reason why someone isn’t paying to have their certificate approved (indirectly) by Microsoft, make your own decision as to whether you trust it.


So how do you get around it? Actually it’s pretty simple but Microsoft aren’t giving out any clues! The trick is to run Internet Explorer as Administrator (not just when logged in as Administrator).  In current versions of Windows you do this by right-clicking on IE in the start menu and selecting “Run as Administrator” from the pop-up menu. If you don’t, the following won’t work.

Go to the site whose certificate you wish to import, and proceed to view the site in spite of the warnings. Then in the address bar you’ll see “Certificate error”. Click on this and you’ll see an option to “View Certificate”, and (assuming you’re in Administrator mode) there’ll be a button on the “General” tab to “Install Certificate”. Follow the prompts. For maximum effectiveness(!) choose the option to “Place all certificates in…” and browse to the “Trusted Root Certification Authorities” store. This probably isn’t necessary in most cases, but if you do it it’ll cover you for pretty much every use. Your PC will happily accept anything from the remote machine hereafter, so make sure you’re importing the right certificate!

“The security certificate presented by this website has expired or is not yet valid.”

This means the certificate is out of date or, exceptionally, too new. In most cases, encountering a certificate that isn’t yet valid suggests that your computer’s clock has reset itself to 1980. If this sounds plausible, just proceed to use the certificate anyway (there’s a clear option on the screen to do this). You’ll still get a scary red address bar, and then it’s up to the server operator to fix it – but before you get on the ’phone and give them what for, make sure your computer’s idea of the time and date is actually correct.
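If you want to check whether it really is the certificate rather than your clock, something like this (example.com again standing in for the real server) shows the certificate’s validity dates alongside your own idea of the time:

date
echo | openssl s_client -connect example.com:443 2>/dev/null | openssl x509 -noout -dates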

“The security certificate presented by this website was issued for a different website’s address”

This third case is a bit more tricky. Basically, the name of the computer is embedded in the certificate, but you might be referring to it by another name (i.e. an alias). Or it could be using a pinched certificate. If you’re talking to a network router like a Draytek 2820 by going to its IP address and it’s giving you a built-in certificate, it would have had no way of knowing what name or address the router was ultimately going to end up on. The certificate is bound to be wrong in this respect. However, fishing around in the Internet Explorer options, under Advanced (and right down near the bottom) there’s a check-box – “Warn about certificate name mismatches”. Un-check it and it’ll stop squawking. Unfortunately it’s either on or off; you can’t set it to ignore a mis-match for particular names only. Because of the risk that someone might be impersonating your bank, you’d probably be best to leave this one checked and put up with the red warnings.
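You can see exactly what name a certificate was issued for with the same openssl trick (example.com standing in for whatever you typed into the address bar), and compare it with what you’re using:

echo | openssl s_client -connect example.com:443 2>/dev/null | openssl x509 -noout -subject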

Final word of warning

Some people reading this will reckon this advice is reckless. Why circumvent a security feature? Simple – if the authentication part of SSL isn’t working, you still want it for the encryption. In an ideal world everyone would have signed certificates, so you could verify everything you talk to and know it’s what it claims to be the first time you meet it. Subsequent visits will be authenticated against your newly installed certificate, so if something turns up impersonating it later, it’ll be detected. In the real world you probably want your data encrypted regardless. A signed certificate is better, but not that much better.

Hassling everyone over security certificates, as Microsoft is doing, may be justifiable on some levels, but as far as I’m concerned, anything that makes encrypted data paths more difficult or expensive to use than they need to be is a bad thing. They’re throwing the baby out with the bathwater.


PAM authentication in PHP on FreeBSD

I have several groups of lusers who want to be able to set/change their mail vacation settings but aren’t up to using ssh to edit their .forward and .vacation.msg files. I thought I’d write a quick PHP application to allow them to do it in a luser-friendly way using a web browser. If this isn’t what PHP is for, I don’t know what good it is. The snag: you need to make sure the right user is editing the right file.

The obvious answer is to authenticate them with their mail user-name and password pair using PAM. (This is the system that checks user-name/password combinations against whatever authentication back-end you see fit – by default /etc/passwd.)

PHP has a module available for doing just this – it’s called “PAM” and there’s even a FreeBSD port of it you can install from /usr/ports/security/pecl-pam. If you want to use it, just “make” and “make install” – it’ll add it to the PHP extensions automatically, but don’t forget to restart Apache if you’re planning to use it there.
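For the record, the install is the usual ports routine (the rc script name below assumes the Apache 2.2 port – adjust for whatever you’re actually running):

cd /usr/ports/security/pecl-pam
make install clean
# restart Apache so it picks up the new extension
/usr/local/etc/rc.d/apache22 restart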

You’ll also have to configure PAM itself. This involves listing the authentication methods applicable to your module in /etc/pam.d/. In this case the php module will have the default name ‘php’ unless you’ve changed it in /etc/php.ini using a line like pam.servicename = "php";

Adding the above line obviously does nothing, as it’s the default, but it’s useful as a reminder of what the default is set to. I don’t like implicit defaults, but then again I don’t like a lot of the shortcuts taken by PHP.

The only thing you need to do to get it working is to add a PAM module definition file called /etc/pam.d/php. The easy way to create this is to copy an existing one, such as /etc/pam.d/ftp. This will be about right for most people, but read /etc/pam.d/README if you want to understand exactly what’s going on.
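For what it’s worth, a minimal /etc/pam.d/php along these lines (essentially the ftp one trimmed down to pam_unix) is enough to check against the system password database – but do read the README rather than take my word for it:

# /etc/pam.d/php – minimal sketch using the standard UNIX password database
auth      required    pam_unix.so    try_first_pass
account   required    pam_unix.so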

So – to test it. A quick PHP program such as the following will do the trick:

<?php
// Check the user-name/password pair against PAM; $error is set on failure.
// Note: no '&' in front of $error – the extension takes it by reference itself.
$error = '';
var_dump(pam_auth('auser', 'theirpassword', $error, 0));
print $error;
?>

If there’s an entry in /etc/passwd that matches then it’ll return true, otherwise false, and $error will contain the reason. Actually, it checks the file /etc/master.passwd – the one that isn’t world readable and therefore can contain the MD5 password hashes. And there’s the rub…

This works fine when run as root, but not as any other user; it always returns false. This makes it next to useless. It might be a bug in the code, but even if it isn’t, it leads to interesting questions about security. For example, it would allow a PHP user to hammer away trying to brute-force guess passwords. I’ve seen it suggested that Linux users can overcome the need to run as root by making their shadow password file group- or world-readable. Yikes!
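You can see this for yourself from the command line (assuming the snippet above is saved as, say, pamtest.php – the name is just for illustration):

# run as root: prints bool(true) for a valid user-name/password pair
php pamtest.php
# run as the Apache user instead: always prints bool(false)
su -m www -c 'php pamtest.php'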

If you’re going to use this with PHP inside Apache, you’re talking about giving the “limited” Apache user access to one of the most critical system files as far as security goes. I can see the LAMP lusers clamouring for me to let them do this, but the answer is “no!” Pecl-pam is not a safe solution to this, especially on a shared machine. You could probably persuade it to use a different password file, but what’s the point? If the www user can read it, all web hosting users can, and you might just as well read it from the disk directly (or use a database). PAM only makes sense for using system-wide passwords to authenticate real users.

I do now have a work-around: if you want your Apache PHP script to modify files in a user’s home directory you can do this using FTP. I’ve written some code to achieve this (not hard) and I’ll post it here if there’s any interest, and after I’ve decided it’s not another security nightmare.


Spamassassin, spamd, FreeBSD and “autolearn: unavailable”

I recently built a mail server using FreeBSD 8.2 and compiled spamassassin from the current ports collection, to run globally. spamd looked okay and it was adding headers, but after a while I noticed the Bayesian filtering didn’t seem to be working, in spite of it having had enough samples through.

A closer look at the added headers showed “autolearn: no”, or “autolearn: unavailable” but never “autolearn: ham/spam”.

What was going on? RTFM and you’ll see spamassassin 3.0 and onwards has added three new autolearn return codes: disabled, failed and unavailable. The first two are pretty self-explanatory: either you’d set bayes_auto_learn 0 in the config file or there was some kind of error thrown up by the script. But I was getting the last one:

unavailable: autolearning not completed for any reason not covered above. It could be the message was already learned.

I knew perfectly well that the messages hadn’t already been learned, so was left with “any reason not covered by the above”. Unfortunately “the above” seemed to cover all bases already. There wasn’t any clue in /var/maillog or anywhere else likely.

I don’t much care for perl scripts, especially those that don’t work, so after an unpleasant rummage I discovered the problem. Simply put, it couldn’t access its database due to file permissions.

The files you need to sort are at /root/.spamassassin/bayes_* – only root will be able to write to them, not spamd – so a chmod is in order.

A better solution is to move the Bayesian database out of /root – /var would probably be more appropriate. You can achieve this by adding something like this to /etc/spamd.cf (which should link to /usr/local/etc/mail/spamassassin/local.cf):

bayes_path /var/spamassassin/bayes/bayes
bayes_file_mode 0666
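You’ll also need to create the new location and hand it to whatever user spamd runs as; something along these lines (the owner here is a guess – check the -u flag in spamd_flags in /etc/rc.conf):

mkdir -p /var/spamassassin/bayes
chown -R spamd:spamd /var/spamassassin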

I suspect that the lower-security Linux implementation avoids these problems by setting group write access by default, but FreeBSD, being a server OS, doesn’t. It’s also a bug in the error handling for the milter – it should clearly report this as “failed” and write something to the log file to tell you why.

You should be able to restart spamd after the edit with /usr/local/sbin/spamdreload, but to be on the safe side I use the following after shutting down Sendmail first.

/usr/local/etc/rc.d/spamass-milter restart
/usr/local/etc/rc.d/sa-spamd restart

I don’t know if Sendmail can cope well with having spamass-milter unavailable, but why take the risk?


Phone hacking gets serious

A committee of MPs are currently grilling the management of News International trying to find someone to blame for the ‘phone “hacking” scandal. It has to be someone convenient; definitely not the people who are actually responsible. That’d lose them votes. This is because those ultimately responsible are the readers of the tabloid newspapers with their insatiable appetite for the personal details of anyone famous, or in the news.

Readers of the Daily Mirror and the Sun/News of the Screws are mostly to blame, together with the Daily Mail, Express and “celebrity” magazines. They’re creating the demand; the publishers are in business to satisfy a demand. This isn’t to say I approve of the business – the cult of celebrity is one of the most rotten things about modern society – but blaming those making a living by never underestimating the public’s bad taste is like condemning a lion for eating an antelope. The tabloids are profitable; proper newspapers are a money pit.

But the politicians don’t want to blame the tabloid readers (aka most of the electorate), and neither does the news media want to blame their best customers. Instead they’re nervously jostling for position in a circular firing squad.

Politically, blaming the Murdoch Press is the best answer. Politicians would love to control the media, but in the west this is a tricky position to engineer. The fact that an investigator sub-contracted to one tabloid accessed the voice-mail of a missing person who subsequently turned out to have been murdered is a pretty flimsy pretext, but they appear to be making the most of it. Oh yes – they messed with a police investigation by deleting old messages. Hmm. My mobile ’phone voicemail does this automatically – why blame the hack? It’s just convenient, it makes the whole thing seem more shocking, and no one is going to mention this obvious explanation as a possibility. This morning I heard Neil Kinnock suggesting the press needed regulating. Well, it worked for Castro, Stalin and Kim Jong-il – his socialist role models, perhaps?

Last weekend the News of the World was forced to close; a newspaper (in the broad sense of the word) was muzzled to cheers of delight. They were doing something illegal, and they had to go. Actually it was only made illegal in 2000 by Blair’s government (arguably it only came into force in 2002). Prior to this it was dodgy ground, but there was always a public interest defence. This is key. Journalists used to be able to snoop on whoever they chose as long as it was in the public interest. Each individual case had to be argued on its merits, but the public interest defence kept them safe. Now journalists face a very real risk of prosecution simply for looking into the dealings of corrupt politicians, organised criminals and dodgy police officers (especially). New Labour’s idea was that only the police and security services should be allowed to do anything like this – i.e. the state should have a monopoly on snooping. This is the same model used by the Gestapo, the KGB, the OVRA and the Stasi. It’s used in various countries in the modern world, where there is no free press to hold the secret police and politicians to account.

Does this mean Blair and New Labour deserve to be lumped in with the dictatorial heads of police states? Probably not – they produced a large amount of stupid legislation in a hurry, and I could well believe this was simple incompetence. However, it’s notable that politicians now are hardly lining up to condemn these totalitarian laws. Why would they? One of the major beneficiaries has been the politicians themselves, who like to have a protected “private life” outside the glare of publicity.

As a final note, watch for the Mirror – they were the subject of more complaints about illegal intercepts (by a long way) than The Sun, Screws or anyone else on Fleet Street (or Wapping). So far they’re being protected. If you think this is a conspiracy theory, check the complaints for yourself on the Ofcom web site. Don’t expect the news media to report it – not in their interests!

Infosec Europe 2011 – worrying trend

Every Infosec (the Information Security show in London) seems to have a theme. It’s not planned, it just happens. Last year it was encrypted USB sticks; in 2009 it was firewalls. 2011 was the year of standards.

As usual there were plenty of security related companies touting for business. Most of them claimed to do everything from penetration testing to anti-virus. But the trend seemed to be related to security standards instead of the usual technological silver bullets. Some of the companies were touting their own standards, others offering courses so you could get a piece of paper to comply with a standard, and yet others provided people (with aforementioned paper) to tick boxes for you to prove that you met the standard.

This is bad news. Security has nothing to do with standards; proving security has nothing to do with ticking boxes. Security is turning into an industry reminiscent of Total Quality Assurance in the 1990s.

One thing I heard a lot was “there is a shortage of 20,000 people in IT security”, and the response appears to be to dumb the job down enough that you can put someone on a training course and qualify them as a box-ticker. The people hiring “professionals” such as this won’t care – they’ll have a set of ticked boxes and a certificate proving that any security breach was “not their fault”, as they met the relevant standard.

Let’s hope the industry returns to actual security in 2012 – I might even find merit in the technological fixes.

Google Phishing Tackle

In the old days you really needed to be a bit technology-savvy to implement a good phishing scam. You needed a way of sending out emails, a web site for them to link back to that wouldn’t be blacklisted or traced, plus the ability to create an HTML form to capture and record the results.

(Screenshot: a bank phishing scam form created using Google Apps – creating one really is this easy.)

These inconvenient barriers to entry have been swept away by Google Apps.

A few days back I received a phishing scam email pointing to a form hosted by Google. Within a couple of minutes of its arrival an abuse report was filed with the Google Apps team. You might expect them to deal with such matters promptly, but the report still hadn’t been actioned two days later.

If you want to have a go, the process is simple. Get a Gmail account, go to Google Docs and select “Create New… Form” from the menu on the left. You can set up a data capture form for anything you like in seconds, and call back later to see what people have entered.

Such a service is simply dangerous, and Google doesn’t appear to be taking this at all seriously. Given their “natural language technology” it shouldn’t be hard for them to spot anything looking like a phishing form, so I decided to see how easy it was and tried something blatant – the form in the screenshot above is the result.

No problem! Last time I checked the form was still there, although I haven’t asked strangers to fill it in.

Christmas Hackers 2010

The 2010/2011 cybercrime season has been one of the most prolific I can remember. There have been the usual script-kiddie attacks, wasting bandwidth. These largely consist of morons trying to guess passwords using an automated script, and they’re doomed to failure because no serious UNIX administrator would have left guessable passwords on proper accounts. Besides which, they’re guessing system account names you’d only find on Windows or Linux.

What seems to be a bigger feature this year is compromised “web developer” software written in PHP. This is set up by designers, not systems people, and they really don’t understand security – hence they’re a soft target.

This year it appears that phpMyAdmin has been hit hard. This seems to be down to poor installation (leaving the configuration pages up after use) and the use of a vulnerable version of the code that was actually fixed a year ago. When I looked I found several copies of the old version, still active, dating from the time when the web designer had initially commissioned the site.

The criminals appear to be using a mechanism that’s slightly different from the original exploit documentation, but it’s fairly obvious to any programmer looking at the setup.php script. It allows arbitrary uploads to any directory that Apache has write access to.

The nature of the attacks has also been interesting. I’ve seen scripts dropping .htaccess files into all likely directories, redirecting accesses elsewhere using the mod_rewrite mechanism. This appears to be intended as a simple DoS attack, overloading target servers (homelandsecurity.gov and fbi.gov being favourite targets).
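The dropped files are nothing sophisticated – something along these lines, reconstructed for illustration rather than copied from an actual payload:

# .htaccess dropped by the attacker: bounce every request at the target site
RewriteEngine On
RewriteRule .* http://www.fbi.gov/ [R=302,L]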

That this is the work of script kiddies there is no doubt. They’ve left botnet scripts written in perl and python all over the place on honeypot machines. Needless to say this makes them really easy to decode and trace, and you can probably guess which part of the world they seem to be controlled from.

My advice to users of phpMyAdmin (a web-based front end for administering MySQL) is to learn how to use SQL properly from the command line. If you can’t do that (or your hosting company won’t let you, which is a problem with low-cost web hosts), at least secure it properly. Upgrade to the latest version, keep it upgraded, and remove it from the server when not in use. If you don’t want to remove it, at least drop a .htaccess file in the directory to disable it, or make it password protected.
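If it has to stay put, a .htaccess along these lines (Apache 2.2 syntax; the address range is just an example) will at least keep casual visitors out:

# block everyone except a trusted range...
Order deny,allow
Deny from all
Allow from 192.168.1.0/24
# ...or password-protect it instead
# AuthType Basic
# AuthName "phpMyAdmin"
# AuthUserFile /usr/local/etc/apache22/htpasswd
# Require valid-user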

chkrootkit finds bindshell infected on port 465

The current version of chkrootkit will throw up a warning that bindshell is INFECTED on port 465 in circumstances where this is nothing to worry about. What it’s actually doing (in case you can’t read shell scripts, and why should you when there’s a perfectly good ‘C’ compiler available) is running netstat and filtering the output, looking for ports that shouldn’t be in use. Port 465 is SMTP over SSL, and in my opinion should very definitely be used, but it is disabled by default.

Whether this should worry you depends on whether you’re using secure SMTP, probably with sendmail. If you set up the server you should know this. If someone else set it up and you’re not too familiar with sendmail, the tell-tale line in the .mc file is DAEMON_OPTIONS(`Port=smtps, Name=TLSMTA, M=s')dnl. Note the ‘s’ on the end of smtp.
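A quick way to confirm something really is listening on 465 (and what it is) on FreeBSD is sockstat; netstat -an will tell you much the same elsewhere:

sockstat -4 -l | grep 465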

Assuming you are using SMTPS, you can easily stop chkrootkit from printing an error (or returning an error code) simply by modifying the bindshell() subroutine to remove 465 from the list of ports to check. It’s on line 269 of the current (0.49) version of the script.

I’m not so convinced that chkrootkit is any substitute for an experienced operator, but it’s out there, people use it, and it’s better than nothing.

FBI hacks every VPN on the planet

Can VPN’s be trusted?

I got wind of an interesting rumour yesterday, passed to me by a fairly trustworthy source. I don’t normally comment on rumours until I’ve had a chance to check the facts for myself, but this looks like it’s going to spread.

Basically, the FBI paid certain developers working on the OpenBSD IPsec stack and asked for back-doors or key-leaking mechanisms to be added. This occurred in 2000/2001. Allegedly.

The code in question is open source and is likely to have been incorporated in various forms in a lot of systems, including VPN and secure networking infrastructure.

Whilst I have the names of the developers in question and the development company concerned, it wouldn’t be fair to mention them publicly, at least until such code is found. If you’re using the IPsec stack in anything, you might want to take a good look at the code, just in case.

However, if the code has been there for nearly ten years in open source software, how come no one has noticed it before?