Why Jeremy Clarkson Matters

Jeremy Clarkson must feature in the worst nightmares of the trendy liberals who run the BBC. He’s intelligent, articulate and hugely popular, but not politically correct. Whether he’s right or wrong in what he says doesn’t matter. From what I’ve heard of his TV appearances, he comes across as shallow and misses the point 75% of the time. He’s written books and a column in the Sun “Newspaper”, which may turn out to pander less to the need to entertain; I don’t know, because I can’t be bothered to read them.

I hear more about Mr Clarkson from the news media, where there appears to be a vendetta against him based on the notion that he says things which, while part of English society for over a century, are no longer politically correct. They’re lambasting him for treading on cracks in the pavement.

The latest row seems to be about him losing his temper after a stressful day’s filming. This isn’t a good thing, but it’s part of life. If he were a celebrity chef, such behaviour would be encouraged.

We should really be sparing a thought for the poor producer on the receiving end of the self-important star’s bad mood and abuse: Oisin Tymon. He appears to have taken the matter professionally, in his stride. He works in an industry that puts celebrities with large egos into stressful situations, and from what little information there is in the public domain, it appears he’s taken the incident on the chin (literally, by some accounts) and just got on with it.

Unfortunately, it’s given Danny Cohen, the BBC Director of Television, the perfect excuse to over-react. Or so he seems to think. It’s clearly being used as an opportunity to silence a voice that doesn’t fit with the corporation’s left-wing, liberal agenda. I’ve no problem with a left-wing agenda, as long as it’s balanced. The BBC is paid for by society as a whole, and has no business censoring someone who reflects the views of a large part of that society, whether or not its management shares them.

Whether Mr Cohen is pandering to the views of his colleagues is something I can’t tell. There are calls for the wonder-boy of British television to go instead of Clarkson. One thing’s for sure: there’s always Noreena sitting over the breakfast table to keep him on the one true path. Her published works leave no doubt as to her political and philosophical leanings.

As I believe in hearing all views from our “uniquely funded” state broadcaster, I have no choice but to take a stand in defence of the oaf. Guido Fawkes started a petition, and I notice it has almost reached a million supporters. Sign it here.

Yahoo plans to give up passwords

The latest scheme from Yahoo’s Crazy Ideas Department is to dispense with login passwords. Are they going to replace them with a certificate login or something more secure? Nope! The security-gaffe-prone outfit from Sunnyvale, California has had the genius idea of sending a four-character one-time password to your mobile phone, according to an announcement they made at SXSW yesterday (or possibly today if you’re reading this in the USA).

According to Chris Stoner, their Product Development Director, the bright idea is to avoid the need to memorise difficult passwords by simply sending a new one, each time, to your registered mobile phone.

At first glance, this sounds a bit like the sensible two-factor authentication you find already: log in using your password, and an additional verification code is sent to your mobile. However, Yahoo has dispensed with the first part – logging in with your normal password. This means that anyone who has physical control of your mobile phone can now hijack your Yahoo account too. If your phone is locked, no matter – just move the SIM to another handset and retrieve the SMS that way. No need to pwn Yahoo accounts the traditional way.
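
To make the difference plain, here’s a minimal sketch of the two flows in Python. The function and field names are mine, not Yahoo’s – it illustrates the principle, not their implementation:

import secrets

USERS = {"alice": {"password": "correct horse", "phone": "+44 7700 900000"}}

def send_sms(phone, code):
    print(f"(SMS to {phone}) Your code is {code}")  # stand-in for a real SMS gateway

def make_code():
    return secrets.token_hex(2)  # four characters, e.g. '9f3a'

def login_two_factor(name, password):
    """Conventional 2FA: something you KNOW plus something you HAVE."""
    user = USERS.get(name)
    if user is None or password != user["password"]:  # factor 1: the password
        return None
    code = make_code()
    send_sms(user["phone"], code)  # factor 2: the phone
    return code  # the site then checks what the user types against this

def login_on_demand(name):
    """Yahoo's scheme: the password check is gone entirely."""
    user = USERS.get(name)
    if user is None:
        return None
    code = make_code()
    send_sms(user["phone"], code)  # whoever holds the phone IS the user
    return code

Strip out the password check and the SMS stops being a second factor; it’s the only factor.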

With an estimated 800,000 mobile phones nicked per year in the UK alone (Source inferred from ONS report) and about 6M handsets a year going AWOL in the USA, you’ve got to wonder what Yahoo was thinking.

Apart from the security risk, what are the chances of being locked out of your email simply because you’re out of mobile range (or your phone has gone missing)? Double whammy!

The Artificial Intelligence Conspiracy

The Truth about Artificial Intelligence

Last year I was asked, at short notice, to teach an undergraduate Artificial Intelligence module. I haven’t done any serious work in the field since the 1980s, when it was all the rage. Its proponents were anticipating that it would be a part of life within ten years; as the same claim had been made in the early 1970s I was always a bit dubious, but computer power was increasing exponentially and so I kept an eye on the field. LISP was the thing back then, although I could never see quite how a language that processed lists easily, but was awkward for anything much else, was going to lead to the breakthrough.

So, having had the AI module dumped on me, I did the obvious thing and ran to the library to get out every textbook on the subject. What was the latest? I was surprised to see how far the field had come in the intervening years. It had got nowhere. The textbooks on AI covered pretty much the same ground as any good book on applied algorithms. The current state-of-the-art in AI is, in fact, applied algorithms with a different name on the cover; no doubt to make it sound more exciting and its proponents more interesting than mere programmers.

Since then, of course, AI has been in the news. Dr Stephen Hawking came out with a statement that he was worried about AI machines displacing mankind once they got going. Heavy stuff – it’d make a good plot for a sci-fi movie. It was also splashed all over the news media a week before the release of his latest book. The man’s no fool.

With universities having had departments of artificial intelligence for decades now, and consumer products claiming to have embedded AI (from mobile telephones to fuzzy logic thermostats) you may be forgiven for thinking that a breakthrough is imminent. Not from where I’m sitting.

Teaching artificial intelligence is like teaching warp drive technology. If you’ve never seen Star Trek, this is the method by which the Starship Enterprise travels faster than the speed of light by using a warp engine to bend the space around it such that a small movement inside the warp field translates to a much larger movement through “flat” space. Great idea, except that warp generators only exist in science fiction. And so does AI. You can realistically teach quantum physics, but trying to teach warp technology is only for the lunatic fringe. The same is true of AI, although I’m certain those with a career, and research grants, based on the name will beg to differ.

So where are we actually at? How does artificial intelligence as we know it work, and is it going in the right direction? In the absence of the real thing, the term AI is now being used to describe a class of algorithm. A proper algorithm takes input values and produces THE correct answer. For example, a sort algorithm will take as its input an unordered list and produce as output a sorted list. If the algorithm is correct, the output will always be correct, and furthermore it is possible to say how long it will take (worst case) to get the answer, because there is a worst-case number of steps the program will have to take. These are known as “P problems”, to those who like to talk about how difficult things are to work out in terms of letters rather than plain old English.
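
To see what that means in practice, here’s a quick Python sketch of a textbook P problem – not how anyone sensible sorts in production, but the point is that the worst case is knowable in advance:

def insertion_sort(items):
    """Always terminates with THE correct answer; at worst n*(n-1)/2
    comparisons for n items, so the running time is bounded and knowable."""
    result = list(items)
    for i in range(1, len(result)):
        j = i
        while j > 0 and result[j - 1] > result[j]:
            result[j - 1], result[j] = result[j], result[j - 1]  # swap into place
            j -= 1
    return result

print(insertion_sort([5, 3, 8, 1]))  # [1, 3, 5, 8], every time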

Other problems are NP, which basically means that, although you might be able to produce an algorithm to solve them, the universe may have ended before you get the result. In some cases the computation may last an infinite amount of time. For example, one tricky problem would be working out the shortest route from London to Carlisle. Your satnav can work this out for you, of course, but how can you be sure it’s found the one correct answer: the absolute shortest route? In practice, you probably don’t care. You just want a route that works and is reasonably short. To know for sure that there was no shorter route possible you would have to examine every possible turn-after-turn in the complete road network. You can’t prove it’s not shorter to go via Penzance unless you try it. Realistically, we use heuristics to prune off crazy paths, concentrate on the promising ones, and get a result that’s “good enough”. There are a lot of problems like this.
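
The contrast is easy to demonstrate. Here’s a Python sketch on a toy road network (the towns are real, the mileages made up): an exhaustive search that provably finds the shortest route by trying everything, and a greedy heuristic that just takes the shortest next hop and settles for “good enough”:

ROADS = {
    "London":     {"Birmingham": 120, "Cambridge": 60},
    "Birmingham": {"London": 120, "Manchester": 90, "Cambridge": 100},
    "Cambridge":  {"London": 60, "Birmingham": 100, "Manchester": 160},
    "Manchester": {"Birmingham": 90, "Cambridge": 160, "Carlisle": 120},
    "Carlisle":   {"Manchester": 120},
}

def all_routes(src, dst, seen=()):
    """Exhaustive search: the only way to be SURE nothing shorter exists."""
    if src == dst:
        yield [dst]
        return
    for town in ROADS[src]:
        if town not in seen:
            for rest in all_routes(town, dst, seen + (src,)):
                yield [src] + rest

def greedy_route(src, dst):
    """Heuristic: always take the shortest next hop. Fast, merely 'good enough'."""
    route = [src]
    while route[-1] != dst:
        options = {t: d for t, d in ROADS[route[-1]].items() if t not in route}
        if not options:
            return None  # dead end - the price of pruning
        route.append(min(options, key=options.get))
    return route

def miles(route):
    return sum(ROADS[a][b] for a, b in zip(route, route[1:]))

best = min(all_routes("London", "Carlisle"), key=miles)
print(best, miles(best))                   # provably shortest, but we tried everything
print(greedy_route("London", "Carlisle"))  # instant answer, 40 miles longer here

Five towns and the exhaustive search is instant; scale up to the real road network and the number of possible routes explodes, which is rather the point.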

A heuristic algorithm sounds better to some people if it’s called an AI algorithm, and with no actual working AI, people like to have something to point to, to justify their job titles. But where does this leave genuine AI?

In the 1970s the world was seen as lists, or relations (structured data of some kind). If we played about with databases and array (list) processing languages, we’d ignite the spark. If it wasn’t working, it was just our failure to classify the world into relations properly.

When nothing caught fire, Object Oriented Programming became fashionable. Minsky’s idea was that if a computer language could map on to the real world, using code/data (or methods and attributes) to define real-world objects, AI would follow. I remember the debate (around 1989) well. When the “proper” version of C++ appeared, the one with the holy grail of multiple inheritance, the paradigm would take off. Until then, C++ was just a syntactical nicety to hide the pointer to the context in a library of functions acting on the same structure layout. We’ve had multiple inheritance for 25 years now, but any use I’ve seen made of it has been somewhat contrived. I always thought it was a bad idea, except for classes inheriting multiple interfaces, which I will concede; but that is hardly the same as inheriting methods and attributes – the stuff that was supposed to map the way the world worked.
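
For anyone who missed that debate, the distinction is easy to show in Python, which happily permits both (the classes are contrived examples of mine):

from abc import ABC, abstractmethod

# Inheriting multiple INTERFACES: just promises to implement, nothing to clash.
class Drivable(ABC):
    @abstractmethod
    def drive(self): ...

class Floatable(ABC):
    @abstractmethod
    def float_off(self): ...

class AmphibiousCar(Drivable, Floatable):
    def drive(self): return "driving"
    def float_off(self): return "floating"

# Inheriting multiple IMPLEMENTATIONS: whose horn() do we get?
class Car:
    def horn(self): return "beep"

class Boat:
    def horn(self): return "honk"

class DuckBoat(Car, Boat):  # horn() silently resolves to Car's, because Car
    pass                    # is named first - exactly the sort of ambiguity
                            # that made real-world mappings feel contrived

print(AmphibiousCar().drive(), DuckBoat().horn())  # driving beep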

The current hope seems to be “whole brain” emulation. If we can just build a large enough neural network, it will come to life. I have to admit that the only tangible reason why I don’t see this working is decades of disappointment. Am I right to be sceptical? Looking at it another way, medical science has progressed by leaps and bounds, but we’re no closer to creating life than when Mary Shelley first wrote about it. However clever we think we are with modern medicine, I don’t think we’re remotely close to reanimating even a single dead cell, never mind creating one.

Perhaps a better place to start is looking at the nature of AI, and how we’d know we’ve got it. One early test was along the lines of “I’ll be impressed if that thinking machine can play chess!”. This has fallen by the wayside, with Deep Blue finally beating Garry Kasparov in 1997 and settling that question once and for all. But no one now claims that Deep Blue was intelligent; it was simply able to calculate more possible outcomes in less time than its human opponent. One interesting point was the size of the machine required to do even this.

Another famous measure of AI success is Alan Turing’s test. A smart man, was Mr Turing. Unfortunately his test wasn’t valid (in my humble opinion). Basically, he reckoned that if you were communicating with a computer and couldn’t tell the difference between it and a human correspondent, then you had AI. No you don’t. We’ve all spoken to humans at call centres that do a pretty good impression of a machine; getting a machine to do a good impression of a human isn’t so hard. And it’s not intelligence.

In the late 1970s and early 1980s, computer conversation programs were everywhere (e.g. ELIZA). It’s no surprise; the input/output was basically a Teletype, or later a video terminal (a glass Teletype), so what else could you write? The pages of publications such as Creative Computing inspired me to write a few such programs myself, which I had running at the local library for the public to have a go at. Many had trouble believing the responses came from the computer rather than me behind a screen (this was in the early days, remember – most had never seen a computer). I called this simulated intelligence, and subsequently wrote about it in my PCW column. And that’s all it was – a simulation of intelligence. And all I’ve seen since has been a simulation; however good the simulation, it’s not the same as the real thing.
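
The trick was, and remains, nothing more than pattern-matching against canned replies. A few lines of Python capture the whole genre (the rules are my own, in the ELIZA style):

import random
import re

# Each rule: a pattern to spot, and replies that parrot the match back.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["Why are you {0}?", "Did you come to me because you are {0}?"]),
    (re.compile(r"\bmy (.+)", re.I),
     ["Tell me more about your {0}."]),
]
FALLBACKS = ["Go on.", "I see.", "What does that suggest to you?"]

def reply(line):
    for pattern, responses in RULES:
        match = pattern.search(line)
        if match:
            return random.choice(responses).format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)  # no pattern matched; stay noncommittal

print(reply("I feel tired of winter"))  # e.g. "Why do you feel tired of winter?"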

Science fiction writers have defined AI as a machine being aware of itself. I think this is possibly true, but it pushes the problem on to defining self-awareness. There’s still merit in the idea anyway; it’s one feature of intelligent life that machines currently lack. A house fly is moderately intelligent; as may be an amoeba. What about a bacterium? Bear in mind that we’ve not created an artificial or simulated intelligence that can do as much as a house fly yet, if you’re thinking of AI as having human-like characteristics. (There is currently research into simulating a fly brain – see Arena, P.; Patane, L.; Termini, P.S.; “An insect brain computational model inspired by Drosophila melanogaster: Simulation results” in The 2010 International Joint Conference on Neural Networks – IJCNN.)

Other AI definitions talk about a machine being able to learn: taking the results of previous decisions to alter subsequent decisions in pursuit of a goal. This was achieved, at high speed and with infinite resolution, many years ago. It’s called an analogue feedback loop. There’s a lot of bluster about AI systems being more complex and able to cope with a far wider range of input types than previous systems, but a feedback loop isn’t intelligent, however complex it is.
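
If that sounds dismissive, consider how little code a “learning” loop needs. A minimal sketch in Python – digital and crude where the analogue version is continuous – of the sort of feedback that has been running heating systems for decades:

def controller(target, reading, gain=0.5):
    """One step of a feedback loop: the last outcome alters the next action."""
    error = target - reading
    return gain * error  # heater power; positive means 'heat more'

room = 15.0
for step in range(8):
    power = controller(21.0, room)  # a 'decision' based on the previous result
    room += power                   # the result that feeds the next decision
    print(f"step {step}: room at {room:.2f} degrees")

It homes in on its goal from any starting point, and nobody would call it intelligent.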

So what have we actually got under the heading of AI? A load of heuristic algorithms that can produce answers to problems that can’t be computed for certain; systems that can interact with humans in a natural language; and with enough processing power you can build a complex enough heuristic system to drive a car. Impress your granny by calling this kind of thing AI if you like, and self-awareness doesn’t really matter if the machines do what we want of them. This is just as well, as AI is just as elusive as it was in the 1970s. All we have now is a longer list of examples that aren’t it.

The only viable route I can see to AI is in whole brain emulation, as alluded to above. We are getting to the point now where it is possible to build a neural network complex enough to match a brain. How, exactly, we could kick-start such a machine into thinking is an intriguing problem. Those talking loudest about this kind of technology are thinking in terms of uploading the contents of an existing brain, somehow. Personally, I see a few practical problems that will need solving before this will work, but if we could build such a complex neural network, and if we could find a way to teach it, we may just achieve a real artificial intelligence. There are two ifs and a may in there. Worrying too much about where AI technology may lead, however, is like worrying about the effects on human physiology of prolonged exposure to the warp coils on a starship.

More comment spammer email analysis

Since my earlier post, I decided to see what change there had been in the email addresses used by comment spammers to register. Here are the results:


Freemail Service  %
hotmail.com 22%
yahoo.com 20%
outlook.com 14%
mailnesia.com 8%
gmail.com 6%
laposte.net 6%
o2.pl 3%
mail.ru 2%
nokiamail.com 2%
emailgratis.info 1%
bk.ru 1%
gmx.com 1%
poczta.pl 1%
yandex.com 1%
list.ru 1%
mail.bg 1%
aol.com 1%
solar.emailind.com 1%
inbox.ru 1%
rediffmail.com 1%
live.com 1%
more-infos-about.com 1%
dispostable.com <1%
go2.pl <1%
rubbergrassmats-uk.co.uk <1%
abv.bg <1%
fdressesw.com <1%
freemail.hu <1%
katomcoupon.com <1%
tlen.pl <1%
yahoo.co.uk <1%
acity.pl <1%
atrais-kredits24.com <1%
conventionoftheleft.org <1%
iidiscounts.org <1%
interia.pl <1%
ovi.com <1%
se.vot.pl <1%
trolling-google.waw.pl <1%

As before, domains with <1% are still significant; it’s a huge sample. I’ve only excluded domains with <10 actual attempts.

The differences from 18 months ago are interesting. Firstly, mailnesia.com has dropped from 19% to 8% – though this is because the spam system has decided to block it! Hotmail is also slightly down, and Gmail and AOL are about the same. The big riser is Yahoo, followed by laposte.net (which had the highest proportional rise of them all). O2 in Poland is still strangely popular.

If you want to know how to extract the statistics for yourself, see my earlier post.
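
The gist of it is just a tally. Here’s a minimal Python sketch, assuming you’ve already dumped the registration addresses one per line into a file (the file name is my own invention):

from collections import Counter

# Count the domain part of each registered address.
with open("spam_registrations.txt") as f:
    domains = Counter(line.strip().rsplit("@", 1)[-1].lower()
                      for line in f if "@" in line)

total = sum(domains.values())
for domain, count in domains.most_common():
    if count >= 10:  # same cut-off as the table above
        print(f"{domain}\t{100 * count / total:.0f}%")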

Wii’ls come off BBC iPlayer

Those of us with suspicions about the BBC’s iPlayer project have been proven correct. The corporation has once again shown it’s properly out of touch with those who are forced to pay for it, by first pushing everyone on to using iPlayer, and then discontinuing support for it on the most widely installed platform in the country.

When the BBC obtained the funding for BBC3 and BBC4, part of the justification was to allow re-screening of significant programmes that were difficult to watch live in the multi-channel environment. This worked for a while, and then they stopped doing it and filled the airwaves with complete garbage, citing iPlayer as the way to catch up on everything you couldn’t see at the broadcast time. A lot of us were suspicious that this was more to plug iPlayer than anything else. Fortunately, in 2009 the corporation released iPlayer for the most popular games console – the one installed in more households than any other – the Nintendo Wii.


Although they had questionable motives, it worked well enough until late last year, then they messed with it. Then it didn’t work. And a few days ago it became apparent that they were dropping the service with the jaw-droppingly arrogant excuse that it was five years old and they wanted to concentrate their efforts on newer platforms.

This is complete nonsense, of course. The Wii platform remains the most widely available, by far. The Wii is tried and trusted, appreciated by families if not hard-core games fanatics, and is hardly an obsolete product. It’s still on sale, and at a reasonable price. As a platform for iPlayer it’s an obvious choice.

So what’s the BBC thinking? Are they stymied by simple technical incompetence, having no one available to work on the Wii code base following an “upgrade” to a new iPlayer version? Quite possibly, and they’re so out-of-touch that they don’t see a problem with this.

A feeble note on the BBC web site says they are concentrating efforts on producing a new player for the Wii U – the console no one wants. Hell is going to freeze over before that platform gets anywhere near the installed base of 100,000,000+ standard Wii consoles (worldwide, as at late 2014, based on Nintendo’s quarterly consolidated regional sales reports).

So what does this tell us about the BBC? If iPlayer is part of an important future broadcasting strategy, they’re not supporting it very well at all. All the house advertising suggests it’s important to the corporation. It’s a strange outfit – some of its R+D has always been groundbreaking, whereas recently a lot of it has been laughable, and the management is notoriously well insulated from the real world. Their failure to support common platforms in this arbitrary manner makes the whole concept unstable.

In the old days you could invest in a TV set in confidence, knowing that your licence fee was going to keep it supplied with content for as long as was reasonably possible. The BBC acted very honourably when it came to the switch from VHF to UHF; a bit less so with DVB-T – and they’ve used the extra channels to provide constant re-runs of their lowest quality output. Dropping iPlayer now, just as families were trusting that they could invest in the equipment needed to receive the service, is a continuation of a worrying trend.


jpmoryan.com malware spam

Since about 2pm (GMT) today FJL has been intercepting a nice new zero-day spammed malware from the domain jpmoryan.com (domain now deleted). Obviously just one letter different from J P Morgan, the domain was set up in a fairly okay manner – it would pass through the default spamassassin criteria, although no SPF record was published, as the mail is being sent out by a spambot.
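
Checking for yourself takes moments: dig TXT on the domain, or a few lines of Python with the third-party dnspython package. A sketch – and since the domain has now been deleted, expect no answer at all today:

import dns.resolver  # the third-party 'dnspython' package

def spf_record(domain):
    """Return a domain's published SPF policy, or None if there isn't one."""
    try:
        for answer in dns.resolver.resolve(domain, "TXT"):
            text = b"".join(answer.strings).decode()
            if text.startswith("v=spf1"):
                return text
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        pass  # no such domain, or no TXT records published
    return None

print(spf_record("jpmoryan.com"))  # None - no SPF, and now no domain either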

The payload was a file called jpmorgan.exe (spelled correctly!) with an icon similar to an Adobe PDF file. Is it malware? Well yes, but I’ve yet to analyse just what. It’s something new.


Text of the message is something like:


Please fill out and return the attached ACH form along with a copy of a voided check (sic).

Anna Brown
JPMorgan Chase
GRE Project Accounting
Vendor Management & Bid/Supervisor
Fax-602-221-2251
Anna.Brown@jpmchase.com
GRE Project Accounting

Be careful.


Update: 19:30

As a courtesy, I always let affected companies know they’re being attacked, with variable results. J P Morgan’s cyber security department in New York took about 30 minutes to get through to; they couldn’t cope with the idea that (a) I was not in America; and (b) I wasn’t even a customer of theirs. I eventually ended up speaking to someone from the “Global (sic) Security Team” who told me that if I was a customer I didn’t need to worry about it, but I could send it to abuse@… – and then put the phone down on me. This was an address for customers to send “suspicious” emails to. I doubt they’ll read it, or the malware analysis. If you’re a J P Morgan customer, you might want to have a word about their attitude.

Interesting security issue with Google Apps for Education

I’ve come across a feature of Google Apps for Education that people should really be aware of. It goes like this…

When a school or college signs up for Google Apps for Education, a single email account is used to register a local administrator. This administrator then has control over the sub-accounts, including creation, passwords and monitoring. This would be someone at the school you can trust, right? Because they have access to all your children’s data. And it’s only for school use, so where’s the problem?

Well, here’s the problem: that data will probably include a GMail account, and the pupils may not be using it solely for education-related matters. Creepy. Assuming you trust the monitor, do you snoop on the pupils for their own protection, or leave it completely unmoderated, with all the implications for child safety? You’re between a rock and a hard place. By forcing pupils to use an insecure channel you’re responsible for the consequences: if you look you could be accused of voyeurism; if you don’t you can be accused of allowing abuse which you could have prevented.

And it gets worse, because you’re basically logging in using a Google Account. How many people log out when they’re finished? And if a child logs in on a home computer and someone else uses it afterwards without realising, the administrator at the school gets to snoop on data inadvertently added to the account by other members of the household.

Are you a parent, and were you aware of this? You are now!

If you’re a school, my advice is to (a) monitor the monitor; (b) make sure children know to log out after use; and (c) make very sure that you have parents’ specific permission to allow their children to use the system, being aware of the above. If not, and you end up monitoring someone you don’t have permission to (i.e. not your pupil), you’re probably looking at an offence under the Computer Misuse Act 1990 in the UK, or a class-action lawsuit in the USA. Remember that school in Philadelphia that took snapshots using students’ MacBook webcams without telling anyone? (Robbins v. Lower Merion School District.) There was no suggestion of foul play, just naivety on the part of the school district. And it cost them $600K to settle, plus a great deal of embarrassment.

London Low Emissions Tax Grab on the Poor

The GLA has sprung a public consultation on us, trying to get us to agree to a tax on horrible polluting vehicles to improve the air quality in central London. It’s the kind of thing that gives environmentalists a bad name – a money grab in the guise of a clean-up.

The idea is that vehicles that don’t meet current emission standards, decided by age, will be clobbered with an additional £12 on top of the congestion tax for driving through London. Who’s it going to hit? Not the commercial users (generally speaking), as their vehicle fleets are going to be fairly modern. And not the Chelsea Tractors – they’re too new. It’s going to hit the people least able to afford it – those with an older family car that they keep going rather than scrapping, because either they can’t afford a shiny new one, or they simply think the conspicuous consumption of the new car market is immoral.

The consultation has some interesting, but cooked, figures for the source of the problem. Even then, it doesn’t stack up. A proper survey of pollutants, like this one, is even more revealing.

First off, about half of some pollutants comes from brake, tyre and road surface wear. Taxing older vehicles isn’t going to change that – it’s got nothing to do with the engine. Then about a third comes from burning gas, and most of that is commercial use. The GLA doesn’t mention this!

Then we get to the breakdown from vehicle NO2 emissions. The (current, measured) figures show that:

35% comes from lorries (articulated or rigid)
28% is from buses and coaches
21% is from taxis
16% is from cars and motorbikes.

Of that last figure, 90% is likely to be from diesel cars and 10% from petrol cars.

Hmm. So which type of vehicle is going to be caught by the tax the most? Probably the older cars, and these will probably be petrol (most cars are). Yet petrol cars are responsible for only about 2% of the problem (10% of the 16% is 1.6%).

Okay, if the GLA wishes to slap a £100 charge on coaches and lorries, this will work – it will hasten the replacement of ageing, clapped-out diesel engines, which will have done enough miles by the time this is introduced in 2020. People with older cars simply don’t operate this way. They’ll just have to pay up, proving this is just a money-generating exercise.
If the GLA was serious about reducing emissions, they’d go for the low-hanging fruit – banning diesel taxis and making them go electric would save 21% at a stroke. And the same with the LRT buses (possibly not coaches). The beauty of this approach is that it wouldn’t cost very much to run.

LGVs (big lorries) are more of a problem. They’re probably not going to head through central London unless they really have to, and the technology doesn’t exist (yet) to replace them. Emissions from these have already been reduced, but they still produce the largest share of the problem. And they’re not going to be taxed, because they meet modern standards. It needs some investment in clever solutions.

Instead, the plan appears to be to raise the money by taxing the low-income or occasional motorist (i.e. anyone with an older car). That’s not right. If you agree, and want to have your say, click here.


Sad to hear of aircraft down at Popham

So sad to hear of the loss of life at Popham today, when a small light aircraft came down south of the A303 in poor weather, almost certainly attempting a descent to land on runway 26. One of the three on board survived, and was driven to Southampton hospital in a critical condition. Apparently the aircraft wasn’t based at Popham, but had left from Bembridge and was presumably diverting there due to the weather.

Another aircraft came down in about the same place in September 2012, but with no loss of life.

I was flying yesterday in a similar aircraft, but thought better of it today due to the visibility; it’s both sad and sobering. My thoughts are with their relatives and everyone else at the Spitfire Club.


Update: 04-Jan-2015

The names of the occupants have been released as Lewis and Sally Tonkinson, with their six-year-old son as the sole survivor. Looking at the photographs of the crash site in the Isle of Wight County Press, the aircraft in question appears to be very “light”, consistent with a Pioneer 300 Hawk, registration G-OWBA, with which Mr Tonkinson is connected and on which 37 hours have been logged. Curiously, this is a two-seater with a 20kg luggage capacity. The LAA registration number is LAA 330-15155.

Update: 07-Jan-2015

I’ve seen reported elsewhere that the aircraft in question was a Pioneer 400, G-CGVO, but can’t tie this to Mr Tonkinson. The 400 is a “stretched” 300, with four seats, which would make more sense, but I’ve seen no official confirmation. There’s an AAIB report on G-CGVO (a door opened on takeoff), but that incident was in Herefordshire, and the aircraft was based in Wales. It’s obviously possible that it subsequently changed hands.

Do I have SoapSoap in my WordPress?

Apparently, 100,000 WordPress sites have been compromised by this nasty. It injects redirect code into WordPress themes.

According to an analysis posted by Tony Perez on his blog, it’s going to be easy to spot if you’re a server administrator, as it injects the code:

function FuncQueueObject()
{
    wp_enqueue_script("swfobject");
}
add_action("wp_enqueue_scripts", 'FuncQueueObject');

into wp-includes/template-loader.php

So,

find / -name template-loader.php -exec grep swfobject {} \;

should do the trick. I’m not a PHP nut, but I don’t think swfobject is common in that file.

Update: 06-Jan-2015

The web site linked to above has an on-line scanner that’s supposed to check for this problem, so I’ve just run it against this blog. It found something here. False positive, methinks! I’ve written to them pointing out that the search may be a little naive given the subject matter of that post! Fair play for providing such a tool free of charge, though. It’s a little hard to see how such a scanner could work at all without picking up text lifted from a compromised site.