Email addresses used by comment spammers on WordPress

While studying the behaviour of comment spammers I became interested in the email addresses they used. Were they genuine, and where were they from? Of course they’re not likely to be genuine, but it is possible to force spammers to register with an address if they want their comments to appear – even if the comments never do. Here’s what I found:

When the spammers were required to register, these are the domain names they registered with:

Domain Percent
hotmail.com 25%
mailnesia.com 19%
Others (unique) 16%
gmail.com 7%
o2.pl 7%
outlook.com 5%
emailgratis.info 4%
gmx.com 2%
poczta.pl 2%
yahoo.com 2%
more-infos-about.com 1%
aol.com 1%
go2.pl 1%
katomcoupon.com 1%
tlen.pl 1%
acity.pl 1%
dispostable.com 1%
live.com 1%
mail.ru 1%
se.vot.pl 1%
acoustirack.com <1%
butala.htsail.pl <1%
cibags.com <1%
eiss.xoxi.pl <1%
justmailservice.info <1%
laposte.net <1%
pimpmystic.com <1%
twojewlasnem.pl <1%
wp.pl <1%

Where the authenticity of the address is more questionable, although the sample is a lot larger, the figures are as follows:

Domain Percent
gmail.com 40%
yahoo.com 11%
Other (unique) 6%
hotmail.com 6%
aol.com 4%
ymail.com 2%
googlemail.com 2%
gawab.com 2%
bigstring.com 1%
zoho.com 1%
t-online.de 1%
inbox.com 1%
web.de 1%
yahoo.de 1%
arcor.de 1%
live.com 1%
freenet.de 1%
yahoo.co.uk 1%
comcast.net 1%
mail.com 1%
gmx.net 1%
gmx.de 1%
outlook.com <1%
live.cn <1%
hotmail.de <1%
msn.com <1%
livecam.edu <1%
google.com <1%
live.de <1%
rocketmail.com <1%
gmail.ocm <1%
wildmail.com <1%
moose-mail.com <1%
hotmail.co.uk <1%
care2.com <1%
certify4sure.com <1%
snail-mail.net <1%
1701host.com <1%
cwcom.net <1%
maill1.com <1%
wtchorn.com <1%
chinaadv.com <1%
noramedya.com <1%
o2.pl <1%
vegemail.com <1%
vp.pl <1%
24hrsofsales.com <1%
kitapsec.com <1%
peacemail.com <1%
whale-mail.com <1%
wp.pl <1%
aim.com <1%
animail.net <1%
bellsouth.net <1%
blogs.com <1%
email.it <1%
mailcatch.com <1%
rady24.waw.pl <1%
titmail.com <1%
fastemail.us <1%
btinternet.com <1%
harvard.edu <1%
onet.pl <1%
yahoo (various international) <1%
akogoto.org <1%
concorde.edu <1%
freenet.com <1%
leczycanie.pl <1%
mail15.com <1%
speakeasy.net <1%
yale.edu <1%
123inholland.co.nl <1%
SolicitorsWorld.com <1%
apemail.com <1%
buysellonline.in <1%
email.com <1%
help.com <1%
ipad2me.com <1%
ismailaga.org.tr <1%
live.fr <1%
myfastmail.com <1%
mymail.com <1%
ngn.si <1%
redpaintclub.co.uk <1%
stonewall42.plus.com <1%
traffic.seo <1%
xt.net.pl <1%
a0h.net <1%
accountant.com <1%
alphanewsroom.com <1%
att.net <1%
auctioneer.com <1%
brandupl.com <1%
canplay.info <1%
charter.net <1%
cluemail.com <1%
darkcloudpromotion.com <1%
earthlink.com <1%
earthlink.net <1%
eeemail.pl <1%
emailuser.net <1%
excite.com <1%
fastmail.net <1%
gmai.com <1%
gouv.fr <1%
h-mail.us <1%
hotmail.ca <1%
hotmailse.com <1%
hotmalez.com <1%
imajl.pl <1%
jmail.com <1%
juno.com <1%
live.co.uk <1%
mac.com <1%
mailandftp.com <1%
mailas.com <1%
mailbolt.com <1%
mailnew.com <1%
mailservice.ms <1%
modeperfect3.fr <1%
mymacmail.com <1%
nyc.gov <1%
op.pl <1%
peoplepc.com <1%
petml.com <1%
pornsex.com <1%
qwest.net <1%
rosefroze.com <1%
sbcglobal.net <1%
ssl-mail.com <1%
t-online.com <1%
thetrueonestop.com <1%
turk.net <1%
virgilio.it <1%
virginmedia.com <1%
windstream.net <1%
yaahoo.co.uk <1%
yahoo.com.my <1%
yazobo.com <1%
yopmail.com <1%
zol.com <1%

A few words of warning here. First, these figures are taken from comments that made it through the basic spam filter. Currently 90% of comments are rejected using a heuristic, and even more are blocked by their IP address, so these are probably from real people who persisted rather than bots. They’re also sorted in order of hits and then alphabetically. In other words, they are ranked from worst to best, so zol.com has the fewest, or joint-fewest, multiple uses.

It’s interesting to note that Gmail was by far the most popular choice (40%) when spammers were simply asked to provide a valid email address, but when the address had to be used to register this dropped to 7%, with Hotmail being the favourite, followed by other freemail services popular in Eastern Europe and Russia (many single-use and counted under “Other”). Does this mean that Gmail users get more hassle from Google when they misbehave? The use of outlook.com saw an even bigger reduction in percentage terms – again suggesting it’s a favourite with abusers.

Another one worth noting is that mailnesia.com was clearly popular as a real address for registering spammers, but was not used even once as a fake address. This is another of those disposable email address web sites, Panamanian registered – probably worth blacklisting. emailgratis.info is also Panamanian registered but heads to anonymous servers that appear to be in North Carolina.

Where you see <1% it means literally that, but it’s not insignificant. It could still mean hundreds of hits, as this is a sample of well over 20K attempts.

If you have a WordPress blog and wish to extract the data, here’s how. This assumes that the MySQL database you’re using is called myblog, which of course it isn’t. The first file we’ll create is the one for registered users. It will consist of lines in the form email domain <tab> hit count:


echo 'select user_email from wp_users ;' | mysql myblog | sed 1d | tr @ ' ' | awk '{ print $2 }' | sed '/^$/d' | sort | uniq -c | sort -n | awk '{ print $2 "\t" $1}' > registered-emails.txt

I have about a dozen registered users, and thousands of spammers, so there’s no real need to exclude the genuine ones for the statistics, but if it worries you, this will get a list of registered users who have posted valid comments:

select distinct user_email from wp_users join wp_comments on wp_users.ID = wp_comments.user_id where not comment_approved='spam';

To get a file of the email addresses of all those people who’ve posted a comment you’ve marked as spam, the following command is what you need:

echo "select comment_author_email from wp_comments where comment_approved='spam';" | mysql myblog | sed 1d | tr @ ' ' | awk '{ print $2 }' | sed '/^$/d' | sort | uniq -c | sort -n | awk '{ print $2 "\t " $1}' > spammer-emails.txt

If you want a list of IP addresses instead, try:

echo "select comment_author_IP from wp_comments where comment_approved='spam';" | mysql myblog | sed 1d | sort | uniq -c | sort -n | awk '{ print $2 "\t " $1}' > spammer-ip-addresses.txt

As I firewall out the worst offenders, there’s no point in me publishing the results.

If you find out any interesting stats, do leave a comment.

David Cameron on Google Porn

I’ve been watching with dismay David Cameron’s statements on the Andrew Marr show at the weekend; he’s attacked Google and other big companies for not blocking illegal pornography. Let’s be clear: Google et al, already do, as far as is possible. The Prime Minister is simply playing politics, and in doing so is exposing his complete lack of understanding about matters technological and social.

It’s not just the coalition government; Edward Miliband trumped him in stupidity by saying that the proposed plans “didn’t go far enough”, which is his usual unthinking response to anything announced by the government that might be popular.

Cameron’s latest announcement is to force ISPs to turn on “no porn” filters for all households (optionally removed, so it’s not State censorship). I’d be fascinated to hear him explain how such a filter could possibly work, but as my understanding of quantum mathematics isn’t that good, I may yet be convinced. Don’t hold your breath waiting.

The majority of the population won’t be able to understand why this is technical nonsense, so let’s look at it from the social point-of-view. People using the Internet to distribute child-abuse images do not put them on web sites indexed by Google. If Google finds any, they will remove them from search results and tell the police, as would everyone else. Paedophiles simply don’t operate in the open – why would they? They’re engaged in a criminal activity and don’t want to be caught, and therefore use hidden parts of the Internet to communicate, and not web sites found by Google!

Examining the illegal drugs trade is a useful model. It’s against the law, harmful and regarded as “a bad thing” by the overwhelming majority. The police and border security spend a lot of time and money tackling it, but the demand remains and criminal gangs are happy to supply that demand. So how successful has 100 years of prohibition been? Totally ineffective, by any metric. With 80% of the prison population on drugs IN PRISON it should be obvious that criminals will continue to supply drugs under any circumstances, if there’s a demand. If anything, proscribing drugs has made it more difficult to deal with the collateral effects by making the trade and users much more difficult to track.

So, if we can’t stop drugs (a physical item) getting into prisons (presumably amongst the most secure buildings in the country), does anyone seriously think it’s possible to beat the criminals and prevent illegal porn being transmitted electronically to millions of homes across the country? David Cameron’s advisors don’t appear to have been able to get him to understand this point.

Another interesting question is whether I should opt to have the porn filter removed from my connection. The only way such a filter could possibly be effective is if it banned everything on its creation, and then only allowed through what was proven safe. There are generally considered to be over 500 million web sites out there, with 20,000 being added every month. That’s sites, not individual pages. The subset that can realistically be examined and monitored to make sure they are safe is going to be quite small, and as a security researcher, I need to retrieve everything. So am I going to have to ’phone my ISP and say “yes please, I want to look at porn”? Actually, that won’t be a problem for me because I am my own ISP. The government doesn’t even know I exist; there is no register of ISPs (or even a definition of the term). There are probably tens of thousands in the country. So I shall await a call from Mr Cameron’s office with a full technical explanation of this filtering scheme with interest.

Fortunately for the Prime Minister, his live speech on the subject scheduled for 11am has been displaced by a load of royal reporters standing outside a hospital and Buckingham Palace saying “no news yet” on the supposed imminent arrival of the Duke and Duchess of Cambridge’s first child.

 

New kind of distraction email bomb attack


I got an interesting note from AppRiver, in which Fred Touchette, one of their analysts, explains a technique used by criminals which they first noticed in January. I haven’t seen it, nor any evidence of specific cases, but it’s food for thought.

The idea is to mail-bomb a user with thousands of spam emails containing random content over a period of several hours. Mr Touchette’s theory is that this is done to cause the user to delete the whole lot unread, and in doing so to miss an important email from their bank or similar, and therefore fail to notice a fraud attempt.

I’m not so convinced about this MO to cover bank fraud, but it would certainly be useful to someone stealing a domain name. A registrar will contact the administrative contact with a chance to block the transfer of a domain when any attempt to move it is made. This is a weak system; banks would normally require positive confirmation and not rely on the receipt and reading of an email before doing anything drastic.

If the criminals have your email login, necessary to manage something like a bank account, they will have no need to prevent you from reading emails with a mail-bomb. They just have to make sure they read and delete your mail before you do, which isn’t hard if they’re keen. AppRiver’s advice, nonetheless, is to call all your banks to warn them someone might be attempting to compromise your account. I’m sure they’ll thank you politely if you do.
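
If you’re ever on the receiving end of such a flood, it’s worth fishing out anything from senders that actually matter before bulk-deleting the rest. A minimal sketch, assuming a local Maildir and that your bank mails from @yourbank.example (both placeholders):

# List the messages in the flood whose From: header mentions your bank
grep -l -i 'From:.*@yourbank\.example' ~/Maildir/new/* 2>/dev/null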

You can read the AppRiver Threatscape Report for yourself. Most of it’s unsurprising if you follow threats yourself, but this distraction technique as an attack vector is worth taking seriously, regardless of its prevalence in the wild. AppRiver is based in Florida and provides web and email security and filtering services. I met them at a London trade show and they seemed like a decent bunch.

Using MX records to create backup mail server

There’s a widely held misunderstanding about “main” and “backup” MX records in the web developer world. The fact is that there is no such thing! Anyone who tells you different is plain wrong, but a lot of web developers believe this is the case, and some ISPs give in and provide them as it’s simpler than arguing. It’s possible to use two MX records in some crazy scheme that looks like a backup server, but in practice it does very little to help and quite possibly rather a lot to hinder. It won’t make your email more robust in practical terms.

If you are using an email server at a data centre, with reasonable expectation of an always-on connection, you need a single MX record. If your processing requirements are great you can have multiple records at the same level to spread the load between peered servers, but none would be a backup any more than any other. Senders simply get one server at random. I have a single MX record.

“But you must have a backup!”, is the usual response. I do, of course, but it has nothing to do with having multiple MX records. Let me explain:

A domain’s MX record gives the address of the server to which its email should be sent. In practice, this means the company’s mail server; or if they have multiple servers, the incoming one. Most companies have one mail server address, and this is fine. If that mail server dies it needs to be repaired or replaced, and the replacement gets the same address.

But what of having a second MX record with an alternative, lower-priority server? It may sound good, but it’s nuts. Think about it – the company’s mail server is where the mail ends up. It’s where the users expect to log in and read it. If you have an alternative server, the mail will go there instead, but the users won’t be able to read it. This assumes that the backup is on a different site, available if the first site goes down. If it’s on the same site it’s even more pointless, as it will be affected by the same connectivity issues that took the first one offline. Users will have their existing mail on the broken server, new mail will be on a different server, and you’ll be in a real bugger’s muddle trying to reconcile the two later. It makes much more sense to just fix the broken one, or switch in a backup at the same location and on the existing IP address. In extremis, you can change the MX record to point to a replacement server elsewhere.

There’s a vague idea that if you don’t have a second MX, mail will be lost. Nothing could be further from the truth. If your one and only mail server is off-line, the sender’s server will queue up the message and keep trying until it comes back – it will normally do this for a week. It won’t lose it. Most mail servers will also report back to the sender if they haven’t been able to get through after four hours, so the sender will know there’s a problem and won’t worry that you haven’t replied.

If you have two mail servers, one on a different site, the secondary server will start receiving emails when the first one goes off-line. It’ll just queue them up, waiting to forward them to the primary one, but in this case the sender won’t get notification of the delay. Okay, if the primary server is off-line for more than a week this will prevent mail loss – but why would the primary server ever be off-line for a week? The company won’t function unless it’s repaired quickly.

In the old days of dial-up, before POP3 came into being, some people did use SMTP in a way where a server in a data centre forwarded mail to the remote site when it connected. I remember Cliff Stanford had a PC mail client called Turnpike that did just this in the early days of Demon. But SMTP was designed for always-on connections and POP3 was designed for dial-up, so POP3 won out.

Let’s get real: There are two likely scenarios for having a mail server off-line. Firstly, the hardware could be dead. If so, get it repaired, and in less than a week. Secondly, the line to the server could be down, and this could be medium-term if someone with a JCB has done a particularly “good job” on it. This gives you a week to make alternative arrangements and direct the mail down another line, which is plenty of time.

So, the idea of having a “backup” MX is pointless. It can only send mail to an off-site server; it doesn’t prevent any realistic mail loss and your email ends up where your users can’t get it until the primary server is repaired. But is there any harm in having one if it makes you feel better?

Actually, in practice, yes. It does make matters worse. In theory mail will just pile up on a spare server and get forwarded later. However, this spare server probably isn’t going to be up to the same specification as the primary one. They never are – they sit there idling, with nothing to do nearly all the time. They won’t necessarily have the fastest line; their spam and virus filtering will be out-of-date or non-existent, and they have a finite amount of disk space to absorb mail. This can really matter if you end up storing and forwarding a large amount of spam, as is often the case these days. The primary server can be configured to discard it quickly, but this isn’t a job appropriate for the secondary one. So it builds up until its ancient and meagre disk space is exhausted, and then it tells the sender to give up trying due to a “disk full” error – and the email is bounced off into the ether. It’d have been much better to leave it on the sender’s server in the first place.

There are other security issues with having a secondary server. One problem comes with spam filtering. This is best done at the end of the line; it’s not the place of a secondary server to determine what gets delivered and what doesn’t. For starters, it doesn’t see the corpus of legitimate emails, so it won’t be so adept at comparing and sorting. It’s probably going to be some old spare kit that’s under-powered for modern spam processing anyway. However, when it stores and forwards mail, the primary server will see it coming from a “friend” rather than a dubious source in a lawless part of the Internet. Spammers do use secondary MX records as a back door to get around virus and spam filters for this very reason.

You could, of course, specify and configure a secondary mail server to be up to the job, with loads of disk space to prevent a DoS attack and fully functional spam filters, regularly maintained and sharing Bayesian analysis data and local rules with the actual server. And then have this expensive resource sitting there doing nothing all day but converting electricity into heat. Realistically, it’s not going to happen.

By now you may be wondering why, if multiple MX records are so pointless, they exist at all. It’s one of those Internet myths; a paradigm that users feel comfortable with, without questioning the technology behind it. There is a purpose, but it’s not “backup”.

When universal Internet email was new, messages would be sent to a user “@” computer, and computers were normally shared, so each would have multiple possible users. The computer would receive the email and put it in the mailbox corresponding to the user part of the address.

When the idea of sending email to a domain rather than a specific server came into being, MD and MF records also came into being. An MD record gave the IP address of the server where mail should end up (the Mail Destination). An MF record, if it existed, allowed the mail to be forwarded through another machine first (Mail Forward). This was sometimes necessary, for example if the MD was on a dial-up connection or behind a firewall and unable to accept direct connections over the Internet. The mail would go to the MF instead, and the MF would send it on to the MD – presumably by batching it up and opening a line, transiting a firewall or using some other non-public mechanism.

In the mid-1980s it was felt that having both MD and MF records placed double the load on DNS servers, so unified MX records, which could be read with a single lookup, were born. To allow for multiple levels of mail forwarding through firewalls, each record carries a preference value so they can be prioritised – although if you need more than three levels for any scheme you’re just being silly.

Unfortunately, the operation of MX records, rather than the explicitly named MF and MD, is a bit subtle. So subtle it’s often very misunderstood.

The first thing you need to understand is that email delivery should be controlled from the DNS for the domain, NOT from the individual mail servers that exist on that domain. This may not be obvious, but this is how it’s designed to work, and when you think of it, a central point of control is a good thing.

Secondly, DNS records should be universal. Every computer on the Internet, making the same DNS query, should get the same result. With the later addition of asymmetric NAT, there is now an excuse for varying this, but you can come unstuck if you get it wrong and that’s not what it was designed for.

If you want to reconfigure the route that mail takes to a domain, you do it by editing the single master DNS record (zone file) for that domain – you leave the multiple mail servers alone.

Now consider this problem: an organisation (called “theorganisation”) has a mail server called A. It’s inside theorganisation’s firewall, for its own protection. Servers on the Internet can’t talk to A directly, because the firewall won’t let them through, but local users send and receive mail through it all day long. To receive external mail there’s another server called B, this time outside the firewall. A rule on the firewall allows specific traffic from B to reach A. The relevant part of the zone file may look something like this (at least logically):

MX 1 A.theorganisation
MX 2 B.theorganisation

So how do these simple lines tell the world, and servers A and B, how to operate? You need to understand the rules…

When another server, which I shall call C, sends a message to alice@theorganisation it will look up the MX records for theorganisation, and see the records above. C will then attempt to contact alice at the lowest numbered MX it finds, which points to server A. If C is located within the same department, it will be within the firewall and mail can be delivered directly; otherwise the firewall will block it. If C can’t contact A because of the firewall it will try the next highest on the list, in this case B. B is on the Internet, and will accept connections from C (and anyone else). The message arrives at B for Alice, but alice is not a user of B. However, B knows that it’s not the final destination for mail to theorganisation because the MX record says there’s a lower numbered server called A, so it attempts to forward it there. As B is allowed through the firewall, it can deliver the message to A, where it’s finally put in alice’s mailbox.
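
You can see exactly what a sending server like C sees with a quick lookup of your own. A hedged example using dig – substitute any real domain, since theorganisation above is only illustrative:

# List the MX records sorted by preference; a sender tries the lowest number first
dig +short MX example.com | sort -n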

This may sound a bit complicated, but the rules for MX records can be made to route mail through complex paths simply by editing the DNS zone file, and this is how it’s supposed to work. The DNS zone file MX records control the path the mail will take. If you try to use the system for some contrary purpose (like a poor-man’s backup), you’re going to come unstuck.

There is another situation where you might want multiple MX records: if your mail server has multiple network interfaces on different (redundant) lines. If the MX priority value is the same for both, each IP address will (or should) be used at random, but if you have high-cost and low-cost lines you can change the priority to favour one route over another. With modern routing this artifice may not be necessary – let the router choose the line and mangle the IP addresses into one for you. But sometimes a simple solution works just as well.

In summary, MX record forwarding is not designed for implementing backup mail servers, and any attempt to use it for that purpose is going to do more harm than good. The idea that this is what they’re all about is a myth.

 

CIA and GCHQ implicated in spying shocker

I was somewhat surprised to hear that the news that the CIA (or NSA) has been snooping on Internet interconnections has surprised anyone (read that twice). In the computing world, this has been the assumption since the Internet became commercial, and probably before. There’s a widely held belief that Facebook was built on CIA money, and although I’ve not seen any evidence to prove it, strategically it all makes sense. Social networking promises a rich goldmine for the intelligence services. If they weren’t digging in it I’d want to know why I was funding them with my taxes. Amazon and IBM are currently in a spat over who gets a contract for a CIA “cloud” data centre. Of course they’re all connected!

GCHQ in Cheltenham

Here in the UK there’s now a kerfuffle about whether GCHQ is involved in using this data. Snooping without a warrant is considered “not cricket”, and I’ve just watched Sir Malcolm Rifkind (chairman of the Intelligence and Security Committee) backing up the Prime Minister in saying that the UK agency was acting within the law, and wasn’t listening in on conversations without a ministerial warrant. This has never been the issue. Tracking which conversations take place is not the same as listening in on them, and as I understand it, is perfectly legal. In fact ISPs are required to record the details of such conversations, but not the content unless they so wish. They know who calls whom, but not what was said.

The public has no right to get all precious about this invasion of privacy. They signed it away when they signed up to Twitter, Facebook or whatever other freebie social networking service they joined. These services exist to mine personal data on their users to sell advertising, or just to sell. If you’re happy about telling a multi-national corporation with dubious morals what you think and who you associate with, why should you be unhappy about your elected government knowing the same things? If you don’t trust them, vote them out.

Personally, I don’t use Yahoo, Facebook or any other service for “social networking” – and not just because I have a life. If you choose to, don’t be naïve about it.

FreeBSD 8.4 released today

FreeBSD 8.4 has just been released. But I thought we were up to 9.1? Actually version 8 is still being maintained for those who don’t want to change too much in one go, as is the way for FreeBSD.

FreeBSD 8.4 released

Given this conservatism, why bother upgrading from 8.3 to 8.4? If you want the latest, why not go straight to 9.1; otherwise, be conservative and leave well alone? This time I might upgrade, because 8.4 contains fixed versions of BIND and OpenSSL. Certain high-profile DDoS attacks amplified by a feature of BIND are a good enough reason to suggest everyone keeps up with the latest BIND release.

You could, of course, update BIND and OpenSSL by just pulling them from the repository, but there are a number of other good bug fixes in there anyway, especially in several of the Ethernet drivers. And ZFS has been improved, if you want crazy disks.
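
For anyone taking the binary upgrade route, here’s a minimal sketch using freebsd-update – assuming a GENERIC kernel, and that you’ve read the release notes and taken a backup first:

freebsd-update -r 8.4-RELEASE upgrade
freebsd-update install
shutdown -r now
# after the reboot, run install again to finish updating the userland
freebsd-update install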

I’m not expecting 9.2 to show up until early next year, if convention holds, which is a pity because some of the BIND and OpenSSL problems were found after 9.1 was released. Don’t wait until January, apply the patches now! (Follow the links above).

 

Logitech pulls plug on Vid HD and suggests users dismantle firewalls

One of the best things about Logitech USB web cameras was their video conferencing system called Vid HD. Unlike Skype, it’s secure (or can be). This was a great reason to use it, and why network administrators the world over would choose it over things like MSN Messenger and Skype.

If you want to know what’s wrong with Skype, see my chapter on VoIP in the Handbook of Electronic Security and Digital Forensics. Basically it’s a “stealth” protocol based on illegal file sharing technology (Kazaa) and is almost completely unmanageable at firewall level. Apart from its use as a conduit for malware through a firewall, its anarchic super-node structure is a menace. It was designed, of course, to make it impossible for the authorities to shut down peer-to-peer media sharing operations after Napster’s servers were clobbered, so the directory servers (super-nodes) can pop up anywhere you get a luser running Skype. In summary, no one who knows about security would be happy about Skype running on their corporate network, and home users can go to hell in a handcart.

So, it’s come as something of a shock to discover that Logitech, the supplier in question, plans to do the dirty on all those who bought their kit and signed up to the service. According to Joerg Tewes (their VP of digital home business group) on his blog, Logitech is going to withdraw the service on 1st July.

According to Tewes, “We launched Logitech Vid to make video calling easier and more approachable for our customers. We recognize that video calling has come a long way since then and there are now more widely used video calling solutions available, such as Skype.”

He continues by suggesting that users switch to Skype instead, as though this is some kind of decision made in the best interests of their hapless customers. There’s no hint of an apology.

Unless there is a change of heart from Logitech it’s going to leave a lot of people in the lurch. These will be people who understand about communications and security, not the home users that think Skype is cool. It’s going to hit the kind of people who specify product, and they’ll be loath to trust Logitech again as a result. I, for one, am certainly sorry I recommended them.

Deploying a replacement is going to be awkward and expensive, and there’s no obvious sensible replacement available.  Vid HD was simple, reliable and a good product. Logitech’s management may be simple, but they’re neither reliable nor good.

I have asked Logitech, through Joerg Tewes, for their comments on the above, but they have so far declined to comment.

 

Infosec 2013 – First Impressions

I’m here at Infosec 2013 at Earls Court, looking for the latest trends in Information Security. It feels a bit more sober this year, but this could be to do with the number of people turning up on the Tuesday. Hot topics? Well user privilege management seems to be headlining, at least a bit. That’s what the marketing people are aiming their guns at anyway, but it’s too early to tell what the real story will be.

I had a look at the “new” Firebox firewalls. Their big thing is application management, which is, apparently, a big selling point. Rather than just blocking particular web sites based on URL, they are using signatures on web pages to do the blocking. This approach allows companies, for example, to let people access profiles on Facebook but not play games. It’s a good idea, but I don’t see how it can get around the YouTube problem – a mixture of business and entertainment videos (often embedded in supplier and customer web sites) with no obvious way to distinguish between them. I’ll be taking a closer look.

New at the show is South Korean cyber security company AhnLab. Given my recent comments on the North Korean cyber-warfare claims, they’ll be interesting to talk to.

What’s going on in the cyber-security business-wise? Overseas outsourcing is a recurring theme. Scary!

 

Cybercriminals: Microsoft’s X-EIP is your friend.

Since January 2013, and without any fanfare, Microsoft has stopped including the originating IP address of Hotmail emails in the headers. Instead, an ominously named X-EIP header has appeared in its place, consisting of random characters.

Originating IP addresses are the only means of verifying the source of an email. This is important for preventing fraud, detecting crime and blocking spam. An IP address can’t be used by a recipient to positively identify a sender, but by contacting the relevant ISP about it, the location can be pinpointed relatively quickly and the ISP can take action against a customer based on a complaint. Even home users can check that the IP address their friend’s email came from is in the right country, rather than a cyber-café in some remote and lawless part of the world.
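
For what it’s worth, that check used to be trivial. A rough sketch, assuming you’ve saved the message as message.eml and that 203.0.113.45 stands in for whatever address the headers reveal (both placeholders):

# Pull out the headers that used to carry the originating address
grep -i -E '^(X-Originating-IP|Received):' message.eml
# Then ask whois who owns it and roughly where it is
whois 203.0.113.45 | grep -i -E 'country|netname|descr'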

So why has Microsoft done this? After much waiting for a reply, this is the best I have got:

My name is **** and I am a Senior Support Analyst for Microsoft. I am part of the Hotmail Escalations Team handling this issue.

In the pursuit of protecting the privacy of our users, Microsoft has opted to mask the X-Originating IP address. This is a planned change on the part of Microsoft in order to secure the well-being and safety of our customers.

Microsoft is in the path of continuously improving the online safety and security of its users. Any feedback regarding this concern would be treated with utmost attention.

We appreciate your patience and understanding regarding this matter.

Thank you.
Best Regards, etc.

Note the “wellbeing and safety of [their] customers” in the above. Which of their customers need this protection? Well, paedophiles wishing to transfer material to their mates anonymously will love it. As will fraudsters, cyber-bullies and anyone else wishing to send untraceable emails.

Having analysed the new encrypted codes, I can say they’re not a one-to-one encryption of an IP address. Two emails from the same address will have different codes, so decoding them won’t be easy at all. It’s likely that it’s a one-way hash, meaning Microsoft will need to go back through its records to find out where an email came from, and they’re only going to do that with a court order, I suspect.
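
To illustrate why a one-way hash would behave like this – and this is purely my speculation about what Microsoft actually does – salting the address with something unique per message means the same IP never produces the same code twice:

# Same IP address, two different per-message salts, two unrelated-looking codes (sha256 on FreeBSD)
printf '203.0.113.45:salt-one' | sha256sum
printf '203.0.113.45:salt-two' | sha256sum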

And that’s not good enough – tracking cybercrime is an immediate activity, so such things can be shut down quickly. The Internet is self-policing; there’s no time for court orders, and no point if you’re crossing international boundaries. If you know the IP address some malware came from, it’s possible to get hold of the sender’s ISP and have the feed quenched within minutes, or if it’s coming from a commercial or academic institution, the network administrators could be around to catch them in the act. Microsoft has extended this process from minutes to weeks, losing any reputation for responsibility it had with Hotmail (not much, I’ll grant you) and promoting its service to the cyber-criminal.

However, Microsoft is not alone. Google has been doing this for years with Gmail. Is this a cynical attempt by Microsoft to follow Google’s shameful lead?

There are some cases where anonymous email is a good idea, such as when sending emails from a country where free speech is aggressively discouraged. There is no need for this with a mainstream email service; it’s just a feature provided to encourage new users with something to hide.

 

Spamhaus vs. Cyberbunker

There’s a real, genuine cyber-war going on over the Internet between Spamhaus and a Dutch company called Cyberbunker, together with its connectivity provider A2B Internet. Spamhaus is a not-for-profit organisation that blacklists internet service providers that allow spammers to use their facilities, and Cyberbunker is an ISP which, according to its own web site, provides services to anyone for any purpose “except child porn and anything related to terrorism. Everything else is fine.” Spamming is okay by them; they’ve never denied it, and they basically take the view common among ISPs dealing with spammers: that it’s none of Spamhaus’ business what they do, and that launching a denial-of-service attack against Spamhaus is some kind of natural right. They’re known for hosting outfits like Pirate Bay when no one else would touch them, to give you some idea.

One of Cyberbunker’s more high-profile customers – The Pirate Bay.

The war started on 19th March when a DDoS attack was launched against the Spamhaus servers in retaliation for Spamhaus blacklisting a range of IP addresses provided to Cyberbunker by A2B Internet.

A2B Internet’s view is that it’s not responsible for what Cyberbunker’s customers do with the IP addresses, and that it’s no business of Spamhaus what anyone else on the Internet does. Spamhaus, and the users of the Spamhaus Block List (SBL), think it is – and after all, no one is forced to use the SBL; they use it to identify emails coming from outfits of the type often hosted by Cyberbunker. This didn’t stop A2B Internet going to the Dutch police in outrage, accusing Spamhaus of extortion by blacklisting some of its IP addresses. Quite how this amounts to extortion isn’t clear. It pressures A2B, which sells connectivity to Cyberbunker, to stop doing so, but Spamhaus would argue that it was listing IP addresses used to send spam, and that’s all there is to it.

Although the SBL isn’t easy to disable by such methods, it was nonetheless annoying, and Spamhaus called on the services of California-based CloudFlare to mitigate the attacks – who promptly got attacked themselves for their trouble. The attackers are using a feature of DNS to send gigabits of traffic towards the Spamhaus servers. Using a botnet, they’re sending short requests to poorly configured DNS servers, forged to claim that Spamhaus has requested data on a zone (domain). The request is short, but the data returned can be very large and is sent directly to Spamhaus. People running a DNS server should configure it such that it won’t answer such requests from “just anyone”, but many fail to do this – especially Microsoft installations, in my experience. By using a botnet to send the initial requests the attackers have been generating traffic said to be in excess of 300Gbps.
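
If you run a DNS server and want a quick sanity check that it isn’t one of the machines being abused like this, ask it a question it has no business answering, from outside your own network. A hedged sketch – ns.example.com is a placeholder for your own server, and a REFUSED status (or no answer at all) is what you want to see:

dig @ns.example.com www.google.com A +recurse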

But these attacks don’t just affect Spamhaus. The DNS servers hijacked for the purpose are consequently overloaded when legitimate requests get through, and the traffic heading to Spamhaus is going to squeeze other legitimate traffic en route. There are stories concerning disruption to Netflix and other high-bandwidth Internet services. Whether this is any great loss is a matter of opinion.

But is it fair to blame Cyberbunker for these attacks? Circumstantially they’re implicated. The New York Times quoted “Internet Activist” Sven Olaf Kamphuis, who claims to speak for the attackers, as saying that Cyberbunker was retaliating against Spamhaus for “abusing their influence”, in what was reported as one of the largest DDoS attacks the world had publicly seen. However, it’s my understanding that Mr Kamphuis is actually the Managing Director, and possibly owner, of Cyberbunker – so if the comments in the NYT are correct, it’s clearly them.

Kamphuis continued, “Nobody ever deputized Spamhaus to determine what goes and does not go on the Internet, they worked themselves into that position by pretending to fight spam.”

He has a point, but possibly not a very good one. About 75% of the spam filters in the world use the SBL to drop mail from dodgy sources. They don’t have to; they choose to. If the SBL was no good, they wouldn’t use it. It’s not really a case of Spamhaus determining what goes on the Internet, it’s a case of the majority of the Internet trusting Spamhaus more than they do Cyberbunker when it comes to deciding what’s spam and what isn’t.

But it means that the maintainers of the SBL have a lot of power, because incorrectly listing an IP address has a seriously negative effect on its owner. It depends on your point of view as to whether a listing is deserved or not. Spammers say they’re within the law (or their moral rights); the recipients of their marketing messages may disagree.

Cyberbunker is what its name suggests: a data centre in a disused NATO bomb-proof bunker

This disagreement has been going on for years, but A2B Internet’s complaint to the police and the subsequent DDoS attack are probably a game changer. They’ve crossed a line and “the authorities” can no longer ignore Cyberbunker’s activities. Subsequent action could be interesting, as Cyberbunker’s own web site boasts of having already defeated a raid by a Dutch “SWAT team” – a bunch of heavily armed police with battering rams, at least. As they’re holed up in an old NATO nuclear bunker with blast doors able to withstand a 20-megaton atomic bomb, a bunch of coppers with a sledgehammer isn’t going to have much effect.

Turning off the up-stream link might, however, have the desired effect. They may have buried themselves with enough food, water and diesel for their generators to withstand a long siege, but there’d be no point once they’d been disconnected. I understand that A2B Internet have decided to turn off the tap already. According to Spamhaus, Cyberbunker is getting feeds from elsewhere, but on checking they’re not terribly good feeds – or someone is currently attacking Cyberbunker.

As to the collateral damage, I suspect it’s being somewhat over-blown. Operators of a DNS server should configure it properly to prevent this nonsense, and ISPs really ought to take the initiative and check their customers are secure. But this could be a seminal event where spammers are concerned, and the world will be watching the Dutch authorities with interest.

And before condemning Cyberbunker completely, it’s worth noting they’re providing hosting for legitimate users being hounded by illegitimate governments around the world. In principle, they’re possibly as often right as they are wrong by ignoring what their customers do. There’s reputedly a lot of cyber-crime taking place on AWS, don’t forget, and the world isn’t clamouring to shut Amazon down. The difference may only be scale.