Blocking script kiddies with PF

OpenBSD’s PF firewall is brilliant. Not only is it easy to configure with a straightforward syntax, but it’s easy to control on-the-fly.

Supposing we had a script that scanned through log files and picked up the IP address of someone trying random passwords to log in. It’s easy enough to write one. Or we noticed someone trying it while logged in. How can we block them quickly and easily without changing /etc/pf.conf? The answer is a pf table.

You will need to edit pf.conf to declare the table, thus:

# Table to hold abusive IPs
table <abuse> persist

“abuse” is the name of the table, and the <> are important! persist tells pf you want to keep the table even if it’s empty. It DOES NOT persist the table through reboots, or even restarts of the pf service. You can dump and reload the table if you want to, but you probably don’t in this use case.

Next you need to add a line to pf.conf to blacklist anything in this table:

# Block traffic from any IP in the abuse table
block in quick from <abuse> to any

Make sure you add this in the appropriate place in the file (near or at the end). The quick keyword stops rule evaluation as soon as a packet matches, so traffic from listed addresses is dropped regardless of any pass rules further down.

And that’s it.

To add an IP address (example 1.2.3.4) to the abuse table you need the following:

pfctl -t abuse -T add 1.2.3.4

To list the table use:

pfctl -t abuse -T show

To delete entries or the whole table use one of the following (flush deletes all):

pfctl -t abuse -T delete 1.2.3.4
pfctl -t abuse -T flush
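If you do want entries to survive a reboot or a pf restart, you can dump the table to a file and reload it later. A sketch, with the file path being my own choice:

```sh
# Save the current contents of the table
pfctl -t abuse -T show > /var/db/abuse.table

# Restore it later, e.g. from /etc/rc.local after a reboot
pfctl -t abuse -T replace -f /var/db/abuse.table
```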

Now I prefer to use a clean interface, and on all systems I implement a “blackhole” command that takes any number of miscreant IP addresses and blocks them using whatever firewall is available. It’s designed to be used by other scripts as well as on the command line, and allows for a whitelist so you don’t accidentally block yourself! It also logs additions.

#!/bin/sh

/sbin/pfctl -sTables | /usr/bin/grep '^abuse$' >/dev/null || { echo "pf.conf must define an abuse table" >&2 ; exit 1 ; }

whitelistip="44.0 88.12 66.6" # Class B networks that shouldn't be blacklisted

for nasty in "$@"
do
        echo "$nasty" | /usr/bin/grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$' >/dev/null || { echo "$nasty is not a valid IPv4 address" >&2 ; continue ; }

        classb=$(echo "$nasty" | /usr/bin/cut -d . -f 1-2)

        case " $whitelistip " in
                *" $classb "*)
                echo "Whitelisted Class B $nasty"
                continue
                ;;
        esac

        if /sbin/pfctl -t abuse -T add "$nasty"
        then
                echo "Added new entry $nasty"
                echo "$(date "+%b %e %H:%M:%S") Added $nasty" >>/var/log/blackhole
        fi
done
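The whitelist check above uses a handy POSIX shell idiom: pad both the list and the candidate with spaces, then let case do the substring match. Pulled out on its own (the function name is mine):

```sh
#!/bin/sh
# Membership test for a space-separated list, as used in the whitelist check.
# $1 = candidate, $2 = the list
in_list() {
        case " $2 " in
                *" $1 "*) return 0 ;;   # present, delimited by spaces
                *)        return 1 ;;   # absent
        esac
}

in_list "88.12" "44.0 88.12 66.6" && echo "whitelisted"
in_list "8.12"  "44.0 88.12 66.6" || echo "not whitelisted"
```

The surrounding spaces are what stop “8.12” matching inside “88.12”.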

That’s all there is to it. Obviously my made-up whitelist should be set to something relevant to you.

So how do you feed this blackhole script automatically? It’s up to you, but here are a few examples:

/usr/bin/grep "checkpass failed" /var/log/maillog | /usr/bin/cut -d [ -f3 | /usr/bin/cut -f1 -d ] | /usr/bin/sort -u

This goes through the mail log and produces a list of IP addresses where people have used the wrong password with sendmail.

/usr/bin/grep "auth failed" /var/log/maillog | /usr/bin/cut -d , -f 4 | /usr/bin/cut -c 6- | /usr/bin/sort -u

The above does the same for dovecot. Beware, these are brutal! In reality I have an additional grep in the chain that detects invalid usernames, as most of the script kiddies are guessing at these and are sure to hit on an invalid one quickly.

Both of these examples produce a list of IP addresses, one per line. You can pipe this output into blackhole using xargs, like this:

findbadlogins | xargs -r blackhole

The -r flag deals with the case where there’s no output: xargs will then not run blackhole at all – a slight efficiency saving.
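To feed it unattended, a root cron job is the obvious glue – something like this, with the script locations being hypothetical:

```sh
# Hypothetical /etc/crontab entry: sweep the logs every ten minutes
*/10    *       *       *       *       root    /usr/local/sbin/findbadlogins | /usr/bin/xargs -r /usr/local/sbin/blackhole
```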

If you don’t have pf, the following also works (replace the /sbin/pfctl line in the script with it):

/sbin/route -q add "$nasty" 127.0.0.1 -blackhole 2>/dev/null

This adds the nasty IP address to the routing table and directs packets from it to somewhere the sun don’t shine. pf is probably more efficient than the routing table, but only if you’re using it. This is a quick and dirty way of blocking a single address out-of-the-box.
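For completeness, the route-based block can be undone as easily as it’s applied – both directions need root, of course:

```sh
# Block: send the miscreant’s packets somewhere the sun don’t shine
/sbin/route -q add "$nasty" 127.0.0.1 -blackhole 2>/dev/null

# Unblock again later
/sbin/route -q delete "$nasty" 2>/dev/null
```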

Google shoots own foot in war on child abuse images

If you believe the Daily Mail and the BBC, Google and Microsoft have buckled under pressure from the Government to block images of child abuse on the Internet. What they’ve actually done is block around 100,000 search terms that are used by paedophiles looking for material, whether such search terms could be used to locate other content or not. Great.

Actually, this is rubbish. Google (about which I know more) has not even been indexing such sites, so search terms won’t have found any that it knew about anyway. I’m sure the other search engines have similar programmes in place. This is a public relations exercise, complete with a piece by Eric Schmidt in the Mail today – a desperate PR stunt that will backfire on Google.

Eric Schmidt of Google, seeming desperate (from Wikipedia)

The fact is that household names like Google don’t have a case to answer here. They’re not ISPs, they’re not providing hosting space for illegal material and they’re not actually responsible for it in any way. The only thing they can do is spend their money researching such sites, dropping them from their indices and alerting the relevant authorities to their research. This they already do. So when the likes of Mr Cameron criticize them, as an easy target, the correct response is “Don’t be silly, it’s not us, and it’s the job of your Police to catch the criminals whether they’re using the Internet or not”. What Google has done with this move is give legitimacy to the original false accusation.

As anyone concerned with cybercrime will tell you, the major criminal activity takes place in areas outside the World Wide Web – areas not indexed by Google or any legitimate company. It travels around the Internet, encrypted and anonymous; and the paedophiles seem to be able to find it anyway. All this move will achieve is pushing the final remnants underground, where they’ll be much harder to track.

Looking at the comments that have appeared on the Daily Mail site since it was published is depressing. They’re mostly from people who have been taken in by this line (originally spun by the Daily Mail, after all), and they clearly don’t understand the technical issues behind any of this. I can’t say I blame them, however, as the majority of the population has little or no understanding of what the Internet is or how it works. They simply see a web browser, normally with Google as a home-page, and conflate the Internet with Google. The Prime Minister’s advisors are either just as simple-minded, or are cynically exploiting the situation.

 

Don’t use your real birthday on web sites

You’d have to be completely crazy to enter your name, address and date-of-birth when registering on a web site if you had any inkling of the security implications. Put simply, these are security questions commonly used by your bank, and you really don’t want such information falling into the wrong hands. So security-savvy people use a fake DOB on different web sites. If you want to play fair with a site that’s asking this for demographic research, use approximately the correct year by all means, but don’t give them your mother’s real maiden name or anything else used by banks or government agencies to verify your identity, or the criminals will end up using it for their own purposes (i.e. emptying your bank account).

That banks, or anyone else, use personal details that can be uncovered with a bit of research at the public record office is a worry in itself. It’s only a minor hindrance to fraudsters unless you provide random strings and insist to your bank that your father married a Miss Iyklandhqys. The bank might get uppity about it, but they should be more interested in security than genealogy.

This common-knowledge, common-sense advice was repeated by a civil servant from the Cabinet Office, Andy Smith, at the Parliament and the Internet Conference at Portcullis House a few days ago. I’ve never met him, but he seems to have a better grasp of security than most of the government and civil service.

Enter Ms Goodman – Labour MP for Bishop Auckland. She heard this and declared his advice “totally outrageous”, going on to say: “I was genuinely shocked that a public official could say such a thing.”

I wish I was genuinely shocked at the dangerous ignorance of many MPs, but I can’t say that I am. Her political masters (New Labour) haven’t acted nearly quickly enough to suppress this foolish person. In her defence, she framed her objection in the context of people using anonymous accounts to bully others. This doesn’t bear any scrutiny at all.

When are we going to find a politician with the faintest clue about how cyber security works? The fact that this ignoramus hasn’t disappeared under a barrage of criticism suggests that this isn’t an isolated problem – they’re all as culpable. Her biography shows just how qualified she is to talk about cyber security (or life outside of the Westminster bubble). I’ve no idea what she’s like as a person or MP, but a security expert she isn’t.

I do hope they listen to Andy Smith.

 

TLS used in web browsers is vulnerable to cookie grabbing

I heard something really worrying yesterday – someone’s got a proof-of-concept that defeats TLS (previously known as SSL) encryption. Security researchers Thai Duong and Juliano Rizzo are planning to demonstrate this at Ekoparty in Argentina this week.

Fortunately this isn’t as bad as it sounds, as it doesn’t actually involve breaking TLS, but it’s still pretty bad. It only applies to some web browsers, but it does allow the theft of supposedly encrypted login cookies, and it seems to me a very practical method, although details aren’t officially published as yet. Basically, it involves putting some Javascript on a web site which causes the browser to fetch something from the site being targeted – say Paypal. The browser sends the request along with the login cookie, compressed and then encrypted using TLS. You can’t read what’s in the TLS packets, but you can see how long they are.

Fundamentally, compression works by removing repeated information from the uncompressed data: the more repetition, the better it compresses. The unknown cookie and the known file request are compressed into the same packet, so by making a number of requests for differing data (bogus image file names, say) and watching the size of the compressed packets, you can tell when data in your request repeats data in the login cookie – the combined packet gets shorter. Apparently you need to make as few as six bogus requests per character of the login cookie to work out its contents.
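You can see the principle with nothing more than gzip on the command line. Below, a string stands in for a request carrying a login “cookie”; when the guessed file name repeats part of the cookie, the compressed output is smaller. This is only a toy sketch of the length side-channel, not the attack itself:

```sh
#!/bin/sh
# Compressed length (in bytes) of a fake request
clen() { printf '%s' "$1" | gzip -c | wc -c; }

cookie="Cookie: session=secret123"
good=$(clen "$cookie; GET /img_secret123")    # guess repeats the cookie value
bad=$(clen "$cookie; GET /img_qzwvxjkpm")     # guess shares nothing with it

echo "matching guess: $good bytes, non-matching guess: $bad bytes"
```

The matching request deflates to fewer bytes because the second “secret123” is replaced by a back-reference; comparing lengths over many guesses recovers the cookie a character at a time.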

You obviously need to be eavesdropping on the line, and your victim must be running your Javascript on their browser, but as TLS is there to prevent eavesdropping then this is a serious failure. It’s not the fault of TLS, but that of browser protocol writers, hoping that implementing TLS gives them security without further consideration.

Some people have suggested that this attack would be difficult to implement in practice, but I disagree. Why not simply hijack the DNS at an Internet cafe (with a fake DHCP server), force everyone to run the Javascript from the first web site they try to open, and either snoop the WiFi or sniff the packets off the wire using traditional methods of circumventing switches?

Apparently this flaw doesn’t affect IE, but the other browsers were vulnerable until their developers were tipped off about it. Make sure you’re running a current version.

PAM authentication in PHP on FreeBSD

I have several groups of lusers who want to be able to set/change their mail vacation settings but aren’t up to using ssh to edit their .forward and .vacation.msg files. I thought I’d write a quick PHP application to allow them to do it in a luser-friendly way using a web browser. If this isn’t what PHP is for, I don’t know what good it is. The snag: you need to make sure the right user is editing the right file.

The obvious answer is to authenticate them with their mail user-name and password pair using PAM. (This is the system that will check user-name/password combinations against whatever authentication you see fit – by default /etc/passwd).

PHP has a module available for doing just this – it’s called “PAM” and there’s even a FreeBSD port of it you can install from /usr/ports/security/pecl-pam. If you want to use it, just “make” and “make install” – it’ll add it to the PHP extensions automatically, but don’t forget to restart Apache if you’re planning to use it there.
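For reference, that boils down to the usual ports routine (the Apache restart command will depend on how you run it):

```sh
cd /usr/ports/security/pecl-pam
make install clean

# Pick up the new extension
apachectl graceful
```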

You’ll also have to configure PAM itself. This involves listing the authentication methods applicable to your module in /etc/pam.d/. In this case the php module will have the default name ‘php’ unless you’ve changed it in /etc/php.ini using a line like pam.servicename = "php";

Adding the above line obviously does nothing as it’s the default, but it’s useful as a reminder of what the default is set to. I don’t like implicit defaults, but then again I don’t like a lot of the shortcuts taken by PHP.

The only thing you need to do to get it working is to add a PAM module definition file called /etc/pam.d/php. The easy way to create this is to copy an existing one, such as /etc/pam.d/ftp. This will be about right for most people, but read /etc/pam.d/README if you want to understand exactly what’s going on.
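A minimal /etc/pam.d/php along those lines might look like the following – a sketch assuming you only want plain Unix password authentication; crib from /etc/pam.d/ftp and the README if your setup differs:

```
# /etc/pam.d/php – authenticate against the system password database
auth    required    pam_unix.so    no_warn try_first_pass
account required    pam_unix.so
```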

So – to test it. A quick PHP program such as the following will do the trick:

<?php
// pecl-pam takes the third argument by reference itself – no call-time '&'
// (call-time pass-by-reference is an error in current PHP anyway)
var_dump(pam_auth('auser', 'theirpassword', $error, false));
print $error;
?>

If there’s an entry in /etc/passwd that matches then it’ll return true, otherwise false, and $error will contain the reason. Actually, it checks the file /etc/master.passwd – the one that isn’t world readable and therefore can contain the MD5 password hashes. And there’s the rub…

This works fine when run as root, but not as any other user; it always returns false. This makes it next to useless. It might be a bug in the code, but even if it isn’t, it leads to interesting questions about security. For example, it would allow a PHP user to hammer away trying to brute-force guess passwords. I’ve seen it suggested that Linux users can overcome the need to run as root by making their shadow password file group- or world-readable. Yikes!

If you’re going to use this with PHP inside Apache, you’re talking about giving the “limited” Apache user access to one of the most critical system files as far as security goes. I can see the LAMP lusers clamouring for me to let them do this, but the answer is “no!” Pecl-pam is not a safe solution to this, especially on a shared machine. You could probably persuade it to use a different password file, but what’s the point? If the www user can read it, all web hosting users can, and you might just as well read it from the disk directly (or use a database). PAM only makes sense for using system-wide passwords for authenticating real users.

I do now have a work-around: if you want your Apache PHP script to modify files in a user’s home directory you can do this using FTP. I’ve written some code to achieve this (not hard) and I’ll post it here if there’s any interest, and after I’ve decided it’s not another security nightmare.

 

Infosec Europe 2011 – worrying trend

Every Infosec (the Information Security show in London) seems to have a theme. It’s not planned, it just happens. Last year it was encrypted USB sticks; in 2009 it was firewalls. 2011 was the year of standards.

As usual there were plenty of security related companies touting for business. Most of them claimed to do everything from penetration testing to anti-virus. But the trend seemed to be related to security standards instead of the usual technological silver bullets. Some of the companies were touting their own standards, others offering courses so you could get a piece of paper to comply with a standard, and yet others provided people (with aforementioned paper) to tick boxes for you to prove that you met the standard.

This is bad news. Security has nothing to do with standards; proving security has nothing to do with ticking boxes. Security is moving towards an industry reminiscent of Total Quality Assurance in the 1990s.

One thing I heard a lot was “there is a shortage of 20,000 people in IT security”, and the response appears to be to dumb the work down enough that you can put someone on a training course and qualify them as a box-ticker. The people hiring “professionals” such as this won’t care – they’ll have a set of ticked boxes and a certificate proving that any security breach was “not their fault” because they met the relevant standard.

Let’s hope the industry returns to actual security in 2012 – I might even find merit in the technological fixes.