Artificial Intelligence and Ebby

What is AI? What implications does complexity theory have for AI? Give an example of something that is easy for humans but hard for computers. Explain.

There are many definitions of Artificial Intelligence. I think the most amusing is “AI is whatever humans can do, but computers cannot do yet” - a definition which reflects the idea that truly intelligent machines are not possible: there is always something that separates the machines from us.

There are four main types of systems that could be said to be intelligent - those that think like humans, those that act like humans, those that think rationally, and those that act rationally (Russell & Norvig). One typical test of an artificially intelligent agent is the Turing Test - where a machine and a human both communicate (using some sort of text-based interface, at this stage) with another human, the judge. If the judge cannot tell which of the candidates is the computer and which is the human, then the computer is said to have passed the Turing Test.

One task that is generally much harder for machines than for humans is language interpretation and acquisition. Steven Pinker, in his book “The Language Instinct”, argues that we are hardwired to develop language complete with the syntactic rules that most languages contain. Indeed, there are documented instances of children grouped together who developed - in one generation - a complete creole that was just as complex as any of their parents’ languages.

Even if we simplify the problem of understanding language to text-based communication (setting aside verbal communication and speech recognition), it is still a problem that is hard for computers to solve. Indeed, it appears that more progress has been made in the area of speech recognition than in language understanding in recent times. Google now lets an iPhone user speak words into their device and generates a search query over its database from the words that were spoken.

If we move back to the definition of AI as a machine thinking just like a human, then we arrive at the question: can a machine ever truly think like a human? Can it ever have a mind? Can it be intelligent? Are these the same thing?

I personally believe that in principle there is nothing preventing a computer, if it were complex enough itself, from being for all intents and purposes indistinguishable from a human mind. The so-called singularity point - where computing power reaches and then exceeds that of the human mind - is approaching quickly, and may provide us with untapped opportunity. If it walks like a duck and quacks like a duck…

I think that stating outright that a robot or computer cannot think is foolish. Even if I state that I can think (and wouldn’t a computer that was trying to make you believe it was a human do the same!), how do I really know that I do think? Even more to the point, how do I know that you can think? Some years ago, an article appeared in New Scientist about virtual worlds. I’m not talking about World of Warcraft, or even Second Life, I’m talking here about a complete simulation of a universe. Eventually, a civilisation will garner enough computing power to create such a simulation containing inhabitants that are self-aware (or programmed to believe they are self-aware, which might well be the same thing!). Once that point has been reached, then said technological society will be able to generate numerous such simulations. And some of those simulations will be able to create simulations, if they are left to run long enough.

So, once one civilisation reaches this tipping point, there will spring into existence a multitude of simulations, and perhaps only one real universe: statistically speaking, are we likely to be that special one universe that hasn’t reached that point yet? We’ve already had rude shocks when we discovered that Earth isn’t at the centre of the universe, or even necessarily that special. Why should this be the special universe?

So, we’ve decided that we are a simulation. Since you and I both think that we can think, clearly simulatrons can think. Thank you for your time.

In all seriousness (and I don’t think that I subscribe to the above argument), I’ll go back to the duck thing. Who cares if an intelligence is “real” or just simulated? As long as it makes decisions in a similar way to how they would otherwise be made (or perhaps makes better decisions - although we know from Asimov’s stories that the Robots will eventually overthrow us; I for one welcome our new Robot Overlords), who cares? Dismissing a mind because it runs on silicon is like saying that women are not as good as men. It’s just siliconism. Think racism or sexism, if you like.

If I assess most people I come across (or virtually all, if you read the YouTube comments), then they are not intelligent. They’d certainly fail the Turing Test when compared to some of the more advanced “AI” systems out there. They’d probably fail quite a few other tests while they’re at it, but that’s another story. I do recall reading some years ago that the amount of intelligence on a planet stays the same - it’s just that the number of people increases, so the average goes down. (I think this one was Herbert, not Asimov, but I can’t recall. It does feel a bit “Foundation”).

So, is the Turing Test a reasonable test? It certainly centres on language - something that lots of “ppl sm 2 not get now”. But I think it is still valid to some extent. Something that can have a level of success with the Turing Test, be it a human or a machine, clearly has some level of intelligence, whether that intelligence is “natural” or simulated. But I’m not sure that it is a really good measure of intelligence.

Other authors have examined a range of “other” intelligences. Damasio, for instance, discusses emotional intelligence and the “EQ”, although this for me was somewhat unfulfilling. IQ tests are out of fashion now; whilst in many cases they are culturally biased, this is no different to the bias towards language that we have built into the Turing Test. All of these tests are based around human intelligence, and the secret here is that we think this is the only type of intelligence - because we don’t know about the other types.

My dog is almost certainly intelligent, to some degree. She remembers things, and when she could see and hear properly, she was at times much more aware of her environment than I was. She still has a much better sense of smell than I ever could (or would want, but that’s another matter). She would fail a Turing Test, yet I would say she is smarter than many students I have taught…

Well, this started out as a practice essay for an exam I have tomorrow. Not sure that most of it will be that relevant. But it was fun.

On Strong AI and the Chinese Room.

This is a post, taken out of context, that I made to a discussion group in my AI subject. (Which another Matt and I have decided was falsely advertised, and should have been called “Expert Systems”). It’s a bit vague in a couple of places, and I will probably come back to this topic at some stage and write a clearer explanation of what I mean. But this is here for now.

I did some thinking about the Chinese Room thought experiment - where a guy sits in a room producing Chinese responses to Chinese questions by following a rule book - and the claim that “He will probably learn some characters over time.”, and how this is not the same as a machine.

I think it is the same. If a machine is able to recognise a word/character, that is still the same as the man learning it, rather than just memorising it. (I’d actually counter that memorising meanings and symbols is still a type of learning, and I come from an educational background, so I do know a little about learning).

But I want to push even further. The objection is that a human isn’t the same as a translating program because a human can learn new information that wasn’t there when the program was written. But if a program is able to incorporate new facts, develop new pathways, and perhaps even rewrite its own program, then this is a much closer analogy to what we define as learning.

There are systems out there that allow for dynamic reprogramming (I think some derivatives of Smalltalk are pretty good at this), and many systems that allow for dynamic alteration of code (perhaps by an outside source). Even the OLPC project is based on a system which can be changed while it is running.
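To make the idea concrete, here is a minimal sketch of what “a program rewriting its own behaviour while it runs” looks like - Python standing in for a Smalltalk-style dynamic system, and the function names being purely illustrative:

```python
def greet():
    # The program's original, "as-written" behaviour.
    return "hello"

def learn_new_behaviour():
    # The running program replaces one of its own functions -
    # no recompile, no restart, just a rebinding of the name.
    def improved_greet():
        return "hello, world"
    globals()["greet"] = improved_greet

print(greet())          # original behaviour: hello
learn_new_behaviour()
print(greet())          # changed at run time: hello, world
```

Nothing magical is happening here - in a dynamic language, a function is just a value bound to a name, so "rewriting the program" is as simple as rebinding that name while the program is live.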

Our brains are machines that can alter their physical structure in response to the environment - indeed, that is how we learn - and the pathways that are used often eventually end up working better than those that are used infrequently. A program could work the same way. Evolutionary programming is another example: an algorithm can be seeded, then a variety of mutations generated, and an “overseer” selects the mutations which result in a better algorithm.

I can’t provide any hard quotes, but I know this has been used to some extent to develop novel programs.

I know this is verging on Strong AI, which Darius says is a no-no, but consider this post as having taken place in a pub or cafe. In fact, I’m off to the pub now.