When is a Psychic not a Fraud?

The Daily Mail has had to pay £125,000 in compensation and costs to self-styled psychic Sally Morgan following an article that claimed she was perpetrating a fraud on her audience. Specifically, it said she got her messages from an earpiece rather than from beyond the grave. Going on reports of the proceedings, it appears that she didn’t use an earpiece (or at least the Daily Mail couldn’t prove it), and therefore the article was libellous. On the face of it, some poorly researched journalism, but whoever would have thought it mattered in the case of someone pretending to hear things from the spirit world?

What the Daily Mail got wrong was that they decided to publish something as fact that couldn’t be backed up – specifically that she used an earpiece. If I say Sally Morgan, and all psychics, are not receiving any supernatural help for their performances or readings, it’s up to them to prove otherwise. They’d have a hell of a job doing that, so I’m not worried in the slightest.

Ofcom, the broadcasting regulator, has guidelines about such things. Basically, you’re not allowed to pretend that psychics are real – every so often you have to say that it’s all a load of rubbish, and that it’s broadcast for entertainment purposes only. Majestic TV, the company behind Psychic Today, have today been fined £12.5K for not doing this, and for inciting credulous punters to call their premium rate telephone lines for “accurate readings”.

I’m at a bit of a loss as to why Mr Justice Tugendhat (the judge in Sally Morgan vs. Associated Newspapers) came to his decision. Sally Morgan is no more a psychic than a plank of wood, simply because there is no such thing. Exposing her methods incorrectly was wrong, but what’s the compensation all about? People who want to believe this rubbish will do so anyway, so her livelihood is safe. Whether a judge should be awarding damages to anyone whose livelihood depends on pretending something is true when it isn’t is another matter. He could have found in her favour and awarded her £1.

As it is, Sally Morgan’s web site waffles on about her being “vindicated”, without going into any detail about what the ruling was actually about. People reading it might well infer that the judge found her to be a genuine psychic, which isn’t the case at all.

Ofcom’s guidelines on these things seem eminently sensible (see ruling), but unfortunately their reach is limited to broadcasting and publishing. The growth of phone-in TV shows, known as audience participation, where the viewers are incited to dial premium rate numbers to hear a load of stuff made up about their future, is worrying, but at least it’s regulated. The Internet isn’t, and I fear this kind of activity will soon spread there.

If any psychic out there would care to prove they are genuine I will, of course, modify my view. Be warned, though: I’m a qualified magician and not as easy to fool as the usual marks.

 

Using MX records to create a backup mail server

There’s a widely held misunderstanding about “main” and “backup” MX records in the web developer world. The fact is that there is no such thing! Anyone who tells you different is plain wrong, but a lot of web developers believe there is, and some ISPs give in and provide them because it’s simpler than arguing. It’s possible to use two MX records in some crazy scheme that looks like a backup server, but in practice it does very little to help and quite possibly rather a lot to hinder. It won’t make your email more robust in practical terms.

If you are using an email server at a data centre, with reasonable expectation of an always-on connection, you need a single MX record. If your processing requirements are great you can have multiple records at the same level to spread the load between peered servers, but none would be a backup any more than any other. Senders simply get one server at random. I have a single MX record.
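For illustration, here’s a hedged sketch of what equal-priority records look like from the outside, using the made-up domain example.com and made-up hostnames; the two records spread load between peers, and neither is a backup of the other:

# Query the MX records for a made-up domain; equal preference values mean
# senders pick either server at random, neither one being a "backup".
dig +short MX example.com
# 10 mx1.example.com.
# 10 mx2.example.com.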

“But you must have a backup!”, is the usual response. I do, of course, but it has nothing to do with having multiple MX records. Let me explain:

A domain’s MX record gives the address of the server to which its email should be sent. In practice, this means the company’s mail sever; or if they have multiple servers, the incoming one. Most companies have one mail server address, and this is fine. If that mail server dies it needs to be repaired or replaced, and the replacement gets the same address.

But what of having a second MX record with an alternative, lower-priority server? It may sound good, but it’s nuts. Think about it – the company’s mail server is where the mail ends up. It’s where the users expect to log in and read it. If you have an alternative server, the mail will go there instead, but the users won’t be able to read it. This assumes that the backup is on a different site, available only if the first site goes down. If it’s on the same site it’s even more pointless, as it will be affected by the same connectivity issues that took the first one offline. Users will have their existing mail on the broken server, new mail will be on a different server, and you’ll be in a real bugger’s muddle trying to reconcile the two later. It makes much more sense to just fix the broken one, or switch in a backup at the same location and on the existing IP address. In extremis, you can change the MX record to point to a replacement server elsewhere.

There’s a vague idea that if you don’t have a second MX, mail will be lost. Nothing could be further from the truth. If your one and only mail server is off-line, the sender’s server will queue up the message and keep trying until it comes back – it will normally do this for a week. It won’t lose it. Most mail servers will also report back to the sender if they haven’t been able to get through for four hours, so they’ll know there’s a problem and won’t worry that you haven’t replied.
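As a hedged illustration of those timings from the sending side (assuming the sender happens to run Postfix; the parameter names are real Postfix ones, the values shown are just examples):

postconf -e "delay_warning_time = 4h"       # warn the sender after four hours of retrying
postconf -e "maximal_queue_lifetime = 7d"   # keep retrying for a week before bouncing
postfix reload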

If you have two mail servers, one on a different site, the secondary server will start receiving emails when the first one goes off-line. It’ll just queue them up, waiting to forward them to the primary one, but in this case the sender won’t get notification of the delay. Okay, if the primary server is off-line for more than a week it will prevent mail loss – but why would the primary server possibly be off-line for a week? The company won’t function unless it’s repaired quickly.

In the old days of dial-up, before POP3 came into being, some people did use SMTP in a way where a server in a data centre forwarded to the remote site when it connected. I remember Cliff Stanford had a PC mail client called Turnpike that did just this in the early days of Demon. But SMTP was designed for always-on connections and POP3 was designed for dial-up, so POP3 won out.

Let’s get real: There are two likely scenarios for having a mail server off-line. Firstly, the hardware could be dead. If so, get it repaired, and in less than a week. Secondly, the line to the server could be down, and this could be medium-term if someone with a JCB has done a particularly “good job” on it. This gives you a week to make alternative arrangements and direct the mail down another line, which is plenty of time.

So, the idea of having a “backup” MX is pointless. It can only send mail to an off-site server; it doesn’t prevent any realistic mail loss and your email ends up where your users can’t get it until the primary server is repaired. But is there any harm in having one if it makes you feel better?

Actually, in practice, yes. It does make matters worse. In theory mail will just pile up on a spare server and get forwarded later. However, this spare server probably isn’t going to be up to the same specification as the primary one. They never are – they sit there idling, with nothing to do nearly all the time. They won’t necessarily have the fastest line; their spam and virus filtering will be out-of-date or non-existent and they have a finite amount of disk space to absorb mail. This can really matter if you end up storing and forwarding a large amount of spam, as is often the case these days. The primary server can be configured to discard it quickly, but this isn’t a job appropriate for the secondary one. So it builds up until its ancient and meagre disk space is exhausted, and then it tells the sender to give up trying due to a “disk full” error – and the email is bounced off into the ether. It would have been much better to leave it on the sender’s server in the first place.

There are other security issues with having a secondary server. One problem comes with spam filtering. This is best done at the end of the line; it’s not the place of a secondary server to determine what gets delivered and what doesn’t. For starters, it doesn’t see the corpus of legitimate emails, so it won’t be so adept at comparing and sorting. It’s probably going to be some old spare kit that’s under-powered for modern spam processing anyway. Worse, when the secondary stores and forwards a message, the primary server will see it coming from a “friend” rather than from a dubious source in a lawless part of the Internet. Spammers do use secondary MX records as a back door to get around virus and spam filters for this very reason.

You could, of course, specify and configure a secondary mail server to be up to the job, with loads of disk space to prevent a DoS attack and fully functional spam filters, regularly maintained and sharing Bayesian analysis data and local rules with the actual server. And then have this expensive resource sitting there doing nothing all day but converting electricity into heat. Realistically, it’s not going to happen.
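If you did go down that road anyway, here’s a minimal sketch of the sort of parity the secondary would need, shown for Postfix purely as an example (the domain and the RBL shown are illustrative):

postconf -e "relay_domains = example.com"   # only relay for the domain you protect
postconf -e "smtpd_recipient_restrictions = reject_unauth_destination, reject_rbl_client zen.spamhaus.org"
postfix reload                              # apply the same checks the primary uses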

By now you may be wondering why, if multiple MX records are so pointless, they exist at all. It’s one of those Internet myths; a paradigm that users feel comfortable with, without questioning the technology behind it. There is a purpose, but it’s not “backup”.

When universal Internet email was new, messages would be sent to a user “@” a computer, and computers were normally shared, so each would have multiple possible users. The computer would receive the email and put it in the mailbox corresponding to the user part of the address.


When the idea of sending email to a domain rather than a specific server came into being, MD and MF records also came into being. An MD record gave the IP address of the server where mail should end up (the Mail Destination). An MF record, if it existed, allowed the mail to be forwarded through another machine first (Mail Forward). This was sometimes necessary, for example if the MD was on a dial-up connection or behind a firewall and unable to accept direct connections over the Internet. The mail would go to the MF instead, and the MF would send it on to the MD – presumably by batching it up and opening a line, transiting a firewall or using some other non-public mechanism.

In the mid-1980s it was felt that having both MD and MF records placed double the load on DNS servers, so unified MX records, which could be read with a single lookup, were born. To allow for multiple levels of mail forwarding through firewalls, each record carries a preference number (a 16-bit value, so plenty of levels), although if you need more than three for any scheme you’re just being silly.

Unfortunately, the operation of MX records, rather than the explicitly named MF and MD, is a bit subtle. So subtle that it’s often misunderstood.

The first thing you need to understand is that email delivery should be controlled from the DNS for the domain, NOT from the individual mail servers that exist on that domain. This may not be obvious, but this is how it’s designed to work, and when you think of it, a central point of control is a good thing.

Secondly, DNS records should be universal. Every computer on the Internet, making the same DNS query, should get the same result. With the later addition of asymmetric NAT, there is now an excuse for varying this, but you can come unstuck if you get it wrong and that’s not what it was designed for.

If you want to reconfigure the route that mail takes to a domain, you do it by editing the single master DNS record (zone file) for that domain – you leave the individual mail servers alone.
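As a hedged sketch of what that looks like in practice with BIND (the domain and path here are made up): edit the zone file, bump the serial number, then check and reload it:

named-checkzone example.com /etc/namedb/master/example.com.db   # sanity-check the edited zone
rndc reload example.com                                         # tell BIND to pick up the change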

Now consider this problem: an organisation (called “theorganisation”) has a mail server called A. It’s inside theorganisation’s firewall, for its own protection. Servers on the Internet can’t talk to A directly, because the firewall won’t let them through, but local users send and receive mail through it all day long. To receive external mail there’s another server called B, this time outside the firewall. A rule on the firewall allows specific traffic from B to get to A. The relevant part of the zone file may look something like this (at least logically):

MX 1 A.theorganisation
MX 2 B.theorganisation

So how do these simple lines tell the world, and servers A and B, how to operate? You need to understand the rules…

When another server, which I shall call C, sends a message to alice@theorganisation it will look up the MX records for theorganisation, and see the records above. C will then attempt to contact alice at the lowest numbered MX it finds, which points to server A. If C is located within the same organisation it will be inside the firewall and mail can be delivered directly; otherwise the firewall will block it. If C can’t contact A because of the firewall it will try the next highest on the list, in this case B. B is on the Internet, and will accept connections from C (and anyone else). The message arrives at B for alice, but alice is not a user of B. However, B knows that it’s not the final destination for mail to theorganisation because the MX records say there’s a lower numbered server called A, so it attempts to forward the message there. As B is allowed through the firewall, it can deliver the message to A, where it’s finally put in alice’s mailbox.
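To see the same thing from C’s point of view, here’s a hedged sketch using dig and nc against the made-up names above:

# What C sees when it looks up the MX records:
dig +short MX theorganisation
# 1 A.theorganisation.
# 2 B.theorganisation.
# From outside the firewall a connection to A on port 25 gets nowhere, so
# delivery falls through to B, which relays inwards to A:
nc -z -w 5 A.theorganisation 25 || nc -z -w 5 B.theorganisation 25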

This may sound a bit complicated, but the rules for MX records can be made to route mail through complex paths simply by editing the DNS zone file, and this is how it’s supposed to work. The DNS zone file MX records control the path the mail will take. If you try to use the system for some contrary purpose (like a poor-man’s backup), you’re going to come unstuck.

There is another situation where you might want multiple MX records: if your mail server has multiple network interfaces on different (redundant) lines. If the MX priority value is the same for both, each IP address will (or should) be used at random, but if you have high-cost and low-cost lines you can change the priority to favour one route over another. With modern routing this artifice may not be necessary – let the router choose the line and mangle the IP addresses into one for you. But sometimes a simple solution works just as well.

In summary, MX record forwarding is not designed for implementing backup mail servers, and any attempt to use it for that purpose is going to do more harm than good. The idea that this is what they’re for is a myth.

 

CIA and GCHQ implicated in spying shocker

I was somewhat surprised to hear that the news that the CIA (or NSA) has been snooping on Internet interconnections has surprised anyone (read that twice). In the computing world, this has been the assumption since the Internet became commercial, and probably before. There’s a widely held belief that Facebook was built on CIA money, and although I’ve not seen any evidence to prove it, strategically it all makes sense. Social networking promises a rich goldmine for the intelligence services. If they weren’t digging in it I’d want to know why I was funding them with my taxes. Amazon and IBM are currently in a spat over who gets a contract for a CIA “cloud” data centre. Of course they’re all connected!

GCHQ in Cheltenham

Here in the UK there’s now a kerfuffle about whether GCHQ is involved in using this data. Snooping without a warrant is considered “not cricket”, and I’ve just watched Sir Malcolm Rifkind (chairman of the Intelligence and Security Committee) backing up the Prime Minister in saying that the UK agency was acting within the law, and wasn’t listening in on conversations without a ministerial warrant. This has never been the issue. Tracking those conversations that take place is not the same as listening in on them, and as I understand it, is perfectly legal. In fact ISPs are required to record the details of such conversations, but not the content unless they so wish. They know who calls who, but not what was said.

The public has no right to get all precious about this invasion of privacy. They signed it away when they signed up to Twitter, Facebook or whatever other freebie social networking service they joined. These services exist to mine personal data on their users to sell advertising, or just to sell. If you’re happy about telling a multi-national corporation with dubious morals what you think and who you associate with, why should you be unhappy about your elected government knowing the same things? If you don’t trust them, vote them out.

Personally, I don’t use Yahoo, Facebook or any other service for “social networking” – and not just because I have a life. If you choose to, don’t be naïve about it.

FreeBSD 8.4 released today

FreeBSD 8.4 has just been released. But I thought we were up to 9.1? Actually version 8 is still being maintained for those who don’t want to change too much in one go, as is the way for FreeBSD.

FreeBSD 8.4 released

Given this conservatism, why bother upgrading from 8.3 to 8.4? If you want the latest, why not go straight to 9.1; otherwise be conservative and leave well alone? This time I might upgrade, because 8.4 contains fixed versions of BIND and OpenSSL. Certain high-profile DDoS attacks amplified by a feature of BIND are a good enough reason to suggest everyone keeps up with the latest BIND release.

You could, of course, update BIND and OpenSSL by just pulling them from the repository, but there are a number of other good bug fixes in there anyway, especially in several of the Ethernet drivers. And ZFS has been improved, if you want crazy disks.
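If you decide on the full 8.4 upgrade rather than just patching individual ports, here’s a minimal sketch of the binary route, assuming an 8.3-RELEASE system maintained with freebsd-update:

freebsd-update -r 8.4-RELEASE upgrade   # fetch and merge the 8.4 release files
freebsd-update install                  # install the new kernel
shutdown -r now
freebsd-update install                  # after the reboot, install the new userland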

I’m not expecting 9.2 to show up until early next year, if convention holds, which is a pity because some of the BIND and OpenSSL problems were found after 9.1 was released. Don’t wait until January, apply the patches now! (Follow the links above).

 

Beware ISPs offering Free Upgrades

If you’ve had an ADSL line since the early days, especially those with unlimited transit, you’ll probably be hearing from your ISP about now. They’ll be offering you a “free upgrade” to a new, faster service as the product you currently have is being discontinued by BT. This is a tad disingenuous.

What’s actually happening is that BT is changing its wholesale prices, making legacy products like Datastream and IPStream less profitable than the newer Wholesale Broadband Connect (WBC), and they will indeed be dropping IPStream and Datastream from exchanges starting in October 2013, although this won’t happen overnight. That doesn’t mean your provider couldn’t offer you an equivalent service, although this will depend on the equipment remaining at the exchange and who operates it. Most of London, for example, has Be or C+W available as an alternative. Or they could move you on to WBC.

The disadvantage with WBC is that it will probably require you to change your modem (or entire router, if it’s a modem/router combined). It’s not technically possible to programme WBC to connect at the older G.DMT standard, giving you the reliability you’re used to. Presumably if you’re using an old 512K line it’s for reliability rather than speed – the last thing you need is fast and flaky. You can clamp the modulation method on some modems, and if it’s a G.DMT-only modem it won’t attempt higher speeds, although this doesn’t guarantee it’ll be stable at the maximum 8Mbps it may try for. Unfortunately many ADSL2+ modems out there tend to get unstable if you turn up the wick, and there may be no way of turning it down from the modem’s side. This won’t have mattered on a G.DMT line, but these won’t exist any longer. In short, you’re probably going to need a new one.

One striking feature of this whole situation is the different way ISPs are treating their customers; and bear in mind that people on these old lines will have been loyal customers for very many years, paying every month at early-2000s rates. Zen Internet and EasyNet are good examples. If you had an unlimited IPStream service before, this is what you get now:

Zen: transit hard limited to 100GB; modem – tough, you must go and buy another; price – same; speed – max 8Mbps down, 448K up.
EasyNet: transit remains unlimited; modem – a pre-configured new one sent free of charge; price – reduced; speed – as fast as possible, up to 24Mbps down, 2Mbps up.
AAISP: transit shaped (no change); modem – depends on service level; price – TBC; speed – TBC.
4theNet: transit remains unlimited; modem – depends on service level; price – reduced; speed – 12Mbps down, 1Mbps up.

This isn’t comparing like-for-like; 4theNet is a lot cheaper to begin with and favoured by those in the know, whereas Zen and EasyNet charge more but do a lot of end-user hand-holding. AAISP has mind-boggling technology solutions, but has always charged for transit in their own way – but they don’t cut you off. Unless Zen has a change of heart, their users are going to walk away. You get the vibes that old customers are just too much trouble.

 

Airbus A319 Emergency Landing at Heathrow

It’s all over the news, with mobile phone pictures and everyone being interviewed. Although it’s clear one engine was in flames, one of the interviewees mentioned something really interesting that the main news media hasn’t picked up on yet…

Apparently the engine cowlings became detached from both engines, after which the pilot assessed the situation with both engines running properly without their covers. Only after one of the engines caught fire was the emergency landing made back at Heathrow. (This is reasonable – there are other places to land in less of an emergency, and the crew might have wanted to work out why they’d lost the covers before landing.)

To lose one cover is unfortunate; to lose both is starting to look like carelessness.

It could be that the passenger being interviewed was a poor observer, or it could be that the covers were simply not latched on properly. It’s been said by the BBC people that “the covers were blown off” – engine explosion? Not likely, as apparently the engines remained running.

Logitech pulls plug on Vid HD and suggests users dismantle firewalls

One of the best things about Logitech USB web cameras was their video conferencing system called Vid HD. Unlike Skype, it’s secure (or can be). This was a great reason to use it, and why network administrators the world over would choose it over things like MSN Messenger and Skype.

If you want to know what’s wrong with Skype, see my chapter on VoIP in the Handbook of Electronic Security and Digital Forensics. Basically it’s a “stealth” protocol based on illegal file sharing technology (Kazaa) and is almost completely unmanageable at firewall level. Apart from its use as a conduit for malware through a firewall, its anarchic super-node structure is a menace. It was designed, of course, to make it impossible for the authorities to shut down peer-to-peer media sharing operations after Napster’s servers were clobbered, so the directory servers (super-nodes) can pop up anywhere you get a luser running Skype. In summary, no one who knows about security would be happy about Skype running on their corporate network, and home users can go to hell in a handcart.

So, it’s come as something of a shock to discover that Logitech, the supplier in question, plans to do the dirty on all those who bought their kit and signed up to the service. According to Joerg Tewes (VP of their digital home business group) on his blog, Logitech is going to withdraw the service on 1st July.

According to Tewes, “We launched Logitech Vid to make video calling easier and more approachable for our customers. We recognize that video calling has come a long way since then and there are now more widely used video calling solutions available, such as Skype.”

He continues by suggesting that users switch to Skype instead, as though this is some kind of decision made in the best interests of their hapless customers. There’s no hint of an apology.

Unless there is a change of heart from Logitech it’s going to leave a lot of people in the lurch. These will be people who understand about communications and security, not the home users that think Skype is cool. It’s going to hit the kind of people who specify product, and they’ll be loath to trust Logitech again as a result. I, for one, am certainly sorry I recommended them.

Deploying a replacement is going to be awkward and expensive, and there’s no obvious sensible replacement available.  Vid HD was simple, reliable and a good product. Logitech’s management may be simple, but they’re neither reliable nor good.

I have asked Logitech, through Joerg Tewes, for their comments on the above, but they have so far declined to comment.

 

Rename file extensions in UNIX/Linux/FreeBSD

I had a directory with thousands of files from a Windoze environment with inconsistent file extensions. Some ended in .hgt, others in .HGT. They all needed to be in lower case, for some Windows-written cross-compiled software to find them. UNIX is, of course, case-sensitive on such things, but Windoze with its CP/M-like file system used upper case only, and when the shift key was invented, decided to ignore case.

Anyway, rather than renaming thousands of files by hand I thought I’d write a quick script. Here it is. Remember, the old extension was  .HGT, but I needed them all to be .hgt:

# Rename *.HGT files to end in .hgt. Note that tr translates character sets,
# not strings, so any stray H, G or T elsewhere in the path would also be
# lowercased – fine for my directory, but worth knowing.
for oldname in `find . -name "*.HGT"`
do
newname=`echo $oldname | tr .HGT .hgt`
mv "$oldname" "$newname"
done

Pretty straightforward, but I’d almost forgotten the tr (translate) command existed, so I’m now feeling pretty smug and thought I’d share it with the world. It’ll do more than a simple substitution – you could use “[A-Z] [a-z]” to convert all upper-case characters in the filename to lower case, but I wanted only the extensions done. I could probably have used -exec on the find command, but I’ll leave this as an exercise for the reader!

It could be more compact if you remove the $newname variable and substitute directly, but I used to have an echo line in there giving me confirmation I was doing the right thing.
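For anyone who doesn’t fancy the exercise, here’s a hedged sketch of the find -exec route, using shell suffix substitution so only the extension changes:

# Strip the .HGT suffix and re-append it in lower case, one file at a time.
find . -name "*.HGT" -exec sh -c 'mv "$1" "${1%.HGT}.hgt"' _ {} \;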

 

Infosec 2013 – First Impressions

I’m here at Infosec 2013 at Earls Court, looking for the latest trends in Information Security. It feels a bit more sober this year, but this could be to do with the number of people turning up on the Tuesday. Hot topics? Well user privilege management seems to be headlining, at least a bit. That’s what the marketing people are aiming their guns at anyway, but it’s too early to tell what the real story will be.

I had a look at the “new” Firebox firewalls. Their big thing is application management, which is, apparently, a big selling point. Rather than just blocking particular web sites based on URL, they use signatures on web pages to do the blocking. This approach allows companies, for example, to let people access profiles on Facebook but not play games. It’s a good idea, but I don’t see how it can get around the YouTube problem – a mixture of business and entertainment videos (often embedded in supplier and customer web sites) with no obvious way to tell them apart. I’ll be taking a closer look.

New at the show is South Korean cyber security company AhnLab. Given my recent comments on the North Korean cyber-warfare claims, they’ll be interesting to talk to.

What’s going on in the cyber-security business-wise? Overseas outsourcing is a recurring theme. Scary!

 

Lighttpd in a FreeBSD Jail (and short review)

Lighttpd is an irritatingly-named HTTP daemon that claims to be light, compared to Apache. Okay, the authors probably have a point, although this puppy seems to like dragging perl into everything and there’s nothing minuscule about that.

I thought it might be worth a look, as Apache is a bit creaky. Its configuration is certainly a lot simpler than httpd.conf, although strangely, you tend to end up editing the same number of lines. But is it lighter? Basically, yes. If you want the figures, it’s currently running (on AMD64) at a size of 16M compared to Apache httpd instances of 196M.

But we’re not comparing like for like here, as Lighttpd doesn’t have PHP; only CGI. If you’re worried about that being slow, there’s FastCGI, which basically keeps instances of the CGI program running, and Lighttpd hands tasks off to an instance when they crop up. Apache can do this (there’s the inevitable mod), but most people seem happy using the built-in PHP these days so I don’t think FastCGI is very popular. It’s a pity, as I’ve always felt CGI is under-rated and I’m very comfortable passing off to programs written in ‘C’ without any noticeable performance issues. Using CGI to run a perl script and all that entails is horrendous, of course. But FastCGI should level the playing field and allow instances of perl or any other script language of your dreams to remain on standby, in much the same way PHP currently remains on standby in Apache. That doesn’t make perl or PHP good, but it levels their use with PHP on Apache, giving you the choice. And you can also choose high-performance ‘C’.

This is all encouraging, but I haven’t scrapped Apache just yet. One simple problem, with no obvious solution, is the lack of support for the .htaccess file much loved by web developers and their content management systems. Another worry for me is security. Apache might be big and confusing, but it’s been out there a long time and has a good track record (lately). If it has holes, there are a lot of people looking for them.

Lighttpd doesn’t have a security pedigree. I’m not saying it’s got problems; it’s just that it hasn’t been thrashed in the same way as Apache and I get the feeling that the development team is much smaller. Sometimes this helps, as it’s cleaner code, but it’s statistically less likely to have members adept at spotting security flaws too. I’m a bit concerned about the FastCGI servers all running on the same level, for example.

Fortunately you can mitigate a lot of security worries by running in a jail on FreeBSD (it will also chroot on Linux, giving some degree of protection). It was fairly straightforward to compile from the ports collection, but it does have quite a few dependencies. Loads of dependencies, in fact. I saw it drag m4 in for some reason! Also the installation script didn’t work for me but it’s easy enough to tweak manually (find the directory with the script and run make in it to get most of the job done). The other thing you have to remember is that it will store local configurations in /usr/local on BSD, instead of the base system directories.

To get it running you’ll need to edit /usr/local/etc/lighttpd/lighttpd.conf, and if you’re running in a jail, be sure to configure the IP addresses to bind to correctly. Don’t be fooled: there’s a line at the bottom that sets the IP address and port, but you must also find the server.bind entry in the middle of the file and set that to the address you’ve configured for the jail. This double entry is a real pooh trap, especially as it otherwise tries to bind to the loopback interface and barfs with a mysterious message. Other than that, it just works – and when it’s in the jail it will happily co-exist with Apache.
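For what it’s worth, here’s a hedged sketch of checking the configuration and starting it inside the jail on FreeBSD, assuming the port’s default paths:

lighttpd -t -f /usr/local/etc/lighttpd/lighttpd.conf   # parse the config without starting it
echo 'lighttpd_enable="YES"' >> /etc/rc.conf           # in the jail's own rc.conf
service lighttpd start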

I’ve got it running experimentally on a production server now, and I’ve also cross-compiled to ARM and it runs on Raspberry Pi (still on FreeBSD), but it was more fun doing that with Apache.

When I get time I’ll do a full comparison with Hiawatha.