The Truth about Artificial Intelligence
Last year I was asked, at short notice, to teach an undergraduate Artificial Intelligence module. I haven’t done any serious work in the field since the 1980s, when it was all the rage. Its proponents were anticipating that it would be a part of everyday life within ten years; as this claim had already been made in the early 1970s I was always a bit dubious, but computer power was increasing exponentially and so I kept an eye on the field. LISP was the thing back then, although I could never quite see how a language that processed lists easily, but was awkward for much else, was going to lead to the breakthrough.
So, having had the AI module dumped on me, I did the obvious thing and ran to the library to get out every textbook on the subject. What was the latest? I was surprised to see how far the field had come in the intervening years. It had got nowhere. The textbooks on AI covered pretty much the same ground as any good book on applied algorithms. The current state of the art in AI is, in fact, applied algorithms with a different name on the cover, no doubt to make it sound more exciting and its proponents sound more interesting than mere programmers.
Since then, of course, AI has been in the news. Dr Stephen Hawking came out with a statement that he was worried about AI machines displacing mankind once they got going. Heavy stuff – it’d make a good plot for a sci-fi movie. It was also splashed all over the news media a week before the release of his latest book. The man’s no fool.
With universities having had departments of artificial intelligence for decades now, and consumer products claiming embedded AI (from mobile telephones to fuzzy-logic thermostats), you may be forgiven for thinking that a breakthrough is imminent. Not from where I’m sitting.
Teaching artificial intelligence is like teaching warp drive technology. If you’ve never seen Star Trek, this is the method by which the Starship Enterprise travels faster than the speed of light by using a warp engine to bend the space around it such that a small movement inside the warp field translates to a much larger movement through “flat” space. Great idea, except that warp generators only exist in science fiction. And so does AI. You can realistically teach quantum physics, but trying to teach warp technology is only for the lunatic fringe. The same is true of AI, although I’m certain those with a career, and research grants, based on the name will beg to differ.
So where are we actually at? How does artificial intelligence as we know it work, and is it going in the right direction? In the absence of the real thing, the term AI is now being used to describe a class of algorithm. A proper algorithm takes input values and produces THE correct answer. For example, a sort algorithm will take as its input an unordered list and produce as output a sorted list. If the algorithm is correct, the output will always be correct, and furthermore it is possible to say how long it will take (worst case) to get the answer, because there is a worst-case number of steps the program will have to take. Problems that can be solved this way in a reasonable (polynomial) number of steps are known as “P problems”, to those who like to talk about how difficult things are to work out in terms of letters rather than plain old English.
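To make the distinction concrete, here is a minimal sketch (in Python – my choice, since the discussion above names no language) of a proper algorithm: merge sort. Give it any list and it produces the one sorted list, every time, within a worst-case number of steps known in advance.

```python
# A "proper" algorithm: merge sort. It always produces THE correct
# answer, and its worst case is bounded (O(n log n) comparisons),
# so we can say in advance how long it will take.

def merge_sort(items):
    """Return a new, sorted copy of items."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves back together.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1]))  # [1, 2, 5, 9] -- always the correct answer
```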
Other problems are NP, which basically means that, although you can check a proposed answer quickly, finding the best one by brute force could take so long that the universe may have ended before you get the result. A classic example: suppose you want the shortest route from London to Carlisle that also takes in a list of other towns along the way (the travelling salesman problem, to give it its Sunday name). Your satnav will offer you a route, of course, but how can you be sure it’s found the one correct answer, the absolute shortest route? In practice, you probably don’t care. You just want a route that works and is reasonably short. To know for sure that there was no shorter route possible you would have to examine every possible ordering of turns in the complete road network; you can’t prove it’s not shorter to go via Penzance unless you try it. Realistically, we use heuristics to prune off the crazy paths, concentrate on the promising ones, and settle for a result that’s “good enough”. There are a lot of problems like this.
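For flavour, here is the sort of “good enough” heuristic I mean, sketched in Python: the nearest-neighbour rule for touring a set of towns. The towns and mileages below are invented for illustration, and real satnavs use far cleverer methods; the point is only that an answer comes back quickly, with no guarantee that it is the shortest.

```python
# Nearest-neighbour heuristic: always hop to the closest unvisited town.
# Fast, usually reasonable, never guaranteed optimal -- proving optimality
# would mean trying every ordering.

distances = {  # symmetric distances in miles (made-up numbers)
    ("London", "Birmingham"): 120, ("London", "Manchester"): 200,
    ("London", "Carlisle"): 300, ("Birmingham", "Manchester"): 90,
    ("Birmingham", "Carlisle"): 210, ("Manchester", "Carlisle"): 120,
}

def dist(a, b):
    return distances.get((a, b)) or distances[(b, a)]

def nearest_neighbour(start, towns):
    """Visit every town, greedily choosing the closest next stop."""
    route, remaining = [start], set(towns) - {start}
    while remaining:
        here = route[-1]
        route.append(min(remaining, key=lambda t: dist(here, t)))
        remaining.discard(route[-1])
    return route

print(nearest_neighbour("London",
                        {"London", "Birmingham", "Manchester", "Carlisle"}))
```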
A heuristic algorithm sounds better to some people if it’s called an AI algorithm, and with no actual working AI, people like to have something to point to in order to justify their job titles. But where does this leave genuine AI?
In the 1970s the world was seen as lists, or relations (structured data of some kind). If we played about with databases and array (list) processing languages, we’d ignite the spark. If it wasn’t working, it was just our failure to classify the world into relations properly.
When nothing caught fire, Object Oriented Programming became fashionable. Minsky’s idea was that if a computer language could map on to the real world, using code/data (or methods and attributes) to define real-world objects, AI would follow. I remember the debate (around 1989) well. When the “proper” version of C++ appeared, the one with the holy grail of multiple inheritance, the paradigm would take off. Until then, C++ was just a syntactic nicety for hiding the pointer to the context in a library of functions acting on the same structure layout. We’ve had multiple inheritance for 25 years now, and every use I’ve seen made of it has been somewhat contrived. I always thought it was a bad idea, except for classes inheriting multiple interfaces, which I will concede; but that is hardly the same as inheriting methods and attributes – the stuff that was supposed to map the way the world worked.
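For those who want the interface concession made concrete, here is a minimal sketch. The original debate was about C++, but Python also allows multiple inheritance, so it serves to show the distinction between inheriting contracts and inheriting behaviour.

```python
# Inheriting multiple *interfaces* (contracts) rather than multiple
# implementations -- the one use of multiple inheritance conceded above.

from abc import ABC, abstractmethod

class Drawable(ABC):             # pure interface: no attributes, no behaviour
    @abstractmethod
    def draw(self): ...

class Serialisable(ABC):         # a second, independent interface
    @abstractmethod
    def to_bytes(self): ...

class Sprite(Drawable, Serialisable):   # inherits two contracts, no code
    def draw(self):
        print("drawing sprite")
    def to_bytes(self):
        return b"sprite"

s = Sprite()
s.draw()                 # nothing here "maps the way the world works";
print(s.to_bytes())      # the class merely promises to honour two contracts
```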
The current hope seems to be “whole brain” emulation: if we can just build a large enough neural network, it will come to life. I have to admit that the only tangible reason I don’t see this working is decades of disappointment. Am I right to be sceptical? Looking at it another way, medical science has progressed by leaps and bounds, but we’re no closer to creating life than when Mary Shelley first wrote about it. However clever we think we are with modern medicine, I don’t think we’re remotely close to reanimating even a single dead cell, never mind creating one.
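It is worth seeing what is actually at the bottom of a “neural network”. Here is a single artificial neuron in Python, with weights I have made up for illustration; the whole-brain-emulation bet is that enough of this arithmetic, suitably connected, adds up to a mind.

```python
# One artificial neuron: a weighted sum passed through a squashing
# function. Weights and inputs below are invented for illustration.
# Scaling this arithmetic up to brain size is the whole bet; nothing
# in the arithmetic itself obviously "comes to life".

import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs, squashed into (0, 1) by a sigmoid."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

print(neuron([0.5, 0.9], [0.4, -0.6], 0.1))   # a number, not a thought
```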
Perhaps a better place to start is the nature of AI, and how we would know when we’ve got it. One early test was along the lines of “I’ll be impressed if that thinking machine can play chess!”. This has fallen by the wayside, with Deep Blue finally beating Garry Kasparov in 1997 and settling that question once and for all. But no one now claims that Deep Blue was intelligent; it was simply able to calculate more possible outcomes in less time than its human opponent could. One interesting point is the size of the machine required to do even this.
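Deep Blue’s trick, in miniature, is exhaustive game-tree search. Chess is far too big for a sketch, so this toy plays a simple take-away game (players alternately remove one to three counters; whoever takes the last counter wins). It is pure calculation, which is exactly the point.

```python
# Game-tree search in miniature: try every line of play to the end.
# This is calculation, not intelligence -- Deep Blue just did it on a
# vastly bigger tree.

from functools import lru_cache

@lru_cache(maxsize=None)
def wins(pile):
    """True if the player to move can force a win from this pile."""
    if pile == 0:
        return False   # no counters left: the previous player already won
    # Winning if some move leaves the opponent in a losing position.
    return any(not wins(pile - take) for take in (1, 2, 3) if take <= pile)

def best_move(pile):
    for take in (1, 2, 3):
        if take <= pile and not wins(pile - take):
            return take
    return 1           # no winning move exists: stall and hope

print(wins(21), best_move(21))   # True 1 -- take one, leaving a multiple of 4
```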
Another famous measure of AI success is Alan Turing’s test. A smart man, was Mr Turing. Unfortunately his test wasn’t valid (in my humble opinion). Basically, he reckoned that if you were communicating with a computer and couldn’t tell the difference between it and a human correspondent, then you had AI. No, you don’t. We’ve all spoken to humans at call centres who do a pretty good impression of a machine; getting a machine to do a good impression of a human isn’t so hard. And it’s not intelligence.
In the late 1970s and early 1980s, computer conversation programs were everywhere (e.g. ELIZA). It’s no surprise; the input/output was basically a Teletype, or later a video terminal (a “glass Teletype”), so what else could you write? The pages of publications such as Creative Computing inspired me to write a few such programs myself, which I had running at the local library for the public to have a go at. Many people had trouble believing the responses came from the computer rather than from me behind a screen (this was in the early days, remember – most had never seen a computer). I called this simulated intelligence, and subsequently wrote about it in my PCW column. And that’s all it was – a simulation of intelligence. And everything I’ve seen since has been a simulation; however good the simulation, it’s not the same as the real thing.
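For the curious, a conversation program in the ELIZA mould boils down to pattern matching and canned reflections, roughly like this (the rules below are my own minimal examples, not the program I ran at the library):

```python
# Simulated intelligence, ELIZA-style: no understanding, just regular
# expressions and templates. Three rules and an all-purpose dodge.

import re

RULES = [
    (re.compile(r"\bi am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bi feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (.*)", re.I),     "Tell me more about your {0}."),
]

def respond(line):
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."   # said whenever nothing matches

print(respond("I am worried about AI"))
# -> How long have you been worried about AI?
```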
Science fiction writers have defined AI as a machine being aware of itself. I think this is possibly true, but it pushes the problem on to defining self-awareness. There’s still merit in the idea anyway; it’s one feature of intelligent life that machines currently lack. A house fly is moderately intelligent; so, perhaps, is an amoeba. What about a bacterium? Bear in mind that we’ve not yet created an artificial or simulated intelligence that can do as much as a house fly, if you’re thinking of AI as having human-like characteristics. (There is current research into simulating a fly brain – see Arena, P.; Patane, L.; Termini, P.S.; “An insect brain computational model inspired by Drosophila melanogaster: Simulation results”, in The 2010 International Joint Conference on Neural Networks – IJCNN.)
Other AI definitions talk about a machine being able to learn: taking the results of previous decisions to alter subsequent decisions in pursuit of a goal. This was achieved, at high speed and with infinite resolution, many years ago. It’s called an analogue feedback loop. There’s a lot of bluster about AI systems being more complex and able to cope with a far wider range of input types than previous systems, but a feedback loop isn’t intelligent, however complex it is.
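Here is that point in code: a toy proportional thermostat, with every constant invented for illustration. Its “decisions” about heating are continuously adjusted by the results of previous ones, in pursuit of a goal, and nobody would call it intelligent.

```python
# A feedback loop "learning" in the loosest sense: each heating decision
# is shaped by the outcome of the last one. All constants are made up.

def simulate(target=21.0, temp=15.0, gain=0.3, steps=10):
    for step in range(steps):
        error = target - temp                  # compare outcome with goal...
        heating = gain * error                 # ...and adjust the next action
        temp += heating - 0.05 * (temp - 10)   # heater input minus heat loss
        print(f"step {step}: {temp:.2f} C")

simulate()
```

Run it and the temperature settles just below the target – the classic offset of proportional-only control. You can pile on complexity (more sensors, more terms, fancier gains) without adding a gram of intelligence.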
So what have we actually got under the heading of AI? A load of heuristic algorithms that can produce answers to problems whose exact solutions can’t feasibly be computed; systems that can interact with humans in natural language; and, given enough processing power, heuristic systems complex enough to drive a car. Impress your granny by calling this kind of thing AI if you like, and self-awareness doesn’t really matter if the machines do what we want of them. That’s just as well, as AI is just as elusive as it was in the 1970s. All we have now is a longer list of examples that aren’t it.
The only viable route I can see to AI is whole brain emulation, as alluded to above. We are getting to the point where it is possible to build a neural network complex enough to match a brain. How, exactly, we could kick-start such a machine into thinking is an intriguing problem. Those talking loudest about this kind of technology are thinking in terms of somehow uploading the contents of an existing brain. Personally, I see a few practical problems that will need solving before that will work, but if we could build such a complex neural network, and if we could find a way to teach it, we may just achieve a real artificial intelligence. There are two ifs and a may in there. Worrying too much about where AI technology may lead, however, is like worrying about the effects on human physiology of prolonged exposure to the warp coils of a starship.
Interesting thoughts, Frank. I would also add that the problem with AI is the emphasis on Intelligence, whereas human consciousness is far more than intelligence as commonly understood (i.e. being good at maths, solving a crossword puzzle). Human consciousness includes other elements such as creativity and intuition. The philosopher Henri Bergson thought that creativity and intuition came not from the brain, but from contact with a super-sensory realm of spirit. If he is right, then there can never be a computer-created consciousness.
I agree; we can sleep safely at night in the meantime.
Good to see you blogging.