Thursday, December 15, 2005

AI is not brain dead, but perhaps it's mentally handicapped


My hero Marvin Minsky slammed me the other day: "AI has been brain-dead since the 1970s," he said, and with that slammed everyone else in my field. [see http://www.wired.com/news/print/0,1294,58714,00.html]

The main thrust of his problem with AI is that as a field it does not seem to be trying to tackle the hard problems. Specifically, he means general human-level intelligence. The example he points out is commonsense reasoning, of which CYC is the one great current project.

I have, particularly in the last year or so, grown dissatisfied with the kind of work a lot of people in AI are doing for the same reason. For shorthand, I will call the project of trying to replicate human-level intelligence on a machine "the Great Work" (see my blog posting called "Why Artificial Intelligence is the most important problem to be working on" for a justification of this grandiose name). The focus on small problems is so great that often the researchers do not even have a story of how their findings constrain or fit into a general theory of AI. I think we are lucky that our field has such a relatively clearly-defined objective, and I believe it's every AI researcher's responsibility to at least spend some time thinking about how their research contributes to the Great Work.

At the same time, it's very hard to do the Great Work. Not only is the mind a bogglingly thorny system, it's hard to get the Great Work funded, and it's hard to come up with testable hypotheses. It's hard to do it without borrowing systems from everybody else, which is a problem because people want to know what your contribution is. Interfacing systems together is seen as an inferior intellectual enterprise, even if it might be exactly what the Great Work needs. So people work on, say, tweaking A* (a search algorithm) in a particular domain. To that extent, I see where Minsky is coming from.
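For readers who haven't met it, A* is the kind of well-understood algorithm that attracts this narrow tweaking: a best-first graph search guided by a heuristic estimate of remaining cost. Here is a minimal generic sketch (the function and grid example are my own illustration, not any particular research variant):

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Generic A* search: returns a cheapest path from start to goal.

    neighbors(node) yields (next_node, step_cost) pairs;
    heuristic(node) must never overestimate the true remaining cost,
    or the returned path may not be optimal.
    """
    # Priority queue ordered by f = g (cost so far) + h (heuristic).
    open_heap = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry; a cheaper route was already found
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(
                    open_heap,
                    (ng + heuristic(nxt), ng, nxt, path + [nxt]),
                )
    return None  # goal unreachable

# Toy domain: a 5x5 grid with unit-cost moves and a
# Manhattan-distance heuristic (admissible for 4-connected grids).
def grid_neighbors(node):
    x, y = node
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

path = a_star((0, 0), (4, 4), grid_neighbors,
              lambda n: abs(n[0] - 4) + abs(n[1] - 4))
print(len(path) - 1)  # number of moves in the shortest path: 8
```

The "tweaking" I'm complaining about typically means swapping in a domain-specific heuristic or queue discipline here, which is honest work but rarely comes with a story about general intelligence.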

However, I disagree with his statement for two reasons. First, it's inflammatory, overly simplified, and dismissive of some excellent research. It's overstated. The years and years of research it took to get speech recognition up to snuff so your credit card company can understand your voice on the phone can hardly be classified as "brain-dead."

The other problem I have with it is that it focuses on high-level reasoning and ignores the importance of low-level sensorimotor and perceptual processing, which, in spite of how difficult it has proven to be, has made real progress. It's unfair to call that work "brain-dead" either.

So in summary, I am sympathetic to the thinking behind Minsky's statement, but it's overstated. We can't say "retarded" anymore unless we're rappers, so I guess I would say that AI is "mentally handicapped." So come on, people, let's get on with that Great Work. Oh wait, I am a rapper.

Ok, it's retarded.

2 comments:

Anonymous said...

"Artificial Intelligence is the most important problem to be working on"

Agreed, but there's not much hope to be placed in Minsky et al.; they are brain dead.

Quoting from
http://language.home.sprynet.com/lingdex/limtran2.htm

"Another cross-cultural example concerns a well-known wager AI pioneer Marvin Minsky has made with his M.I.T. students. Minsky has challenged them to create a program or device that can unfailingly tell the difference, as humans supposedly can, between a cat and a dog. Minsky has made many intriguing remarks on the relation between language and reality, (19) but he shows in this instance that he has unwittingly been manipulated by language-imposed categories. The difference between a cat and a dog is by no means obvious, and even `scientific' Linnaean taxonomy may not provide the last word. The Tzeltal Indians of Mexico's Chiapas State in fact classify some of our `cats' in the `dog' category, rabbits and squirrels as `monkeys,' and a more doglike tapir as a `cat,' thus proving in this case that whole systems of animals can be sliced differently. Qualified linguistic anthropologists have concluded that the Tzeltal system of naming animals—making allowance for the fact that they know only the creatures of their region—is ultimately just as useful and informative as Linnaean latinisms and even includes information that the latter may omit. (20) Comparable examples from other cultures are on record. (21)"

Anonymous said...

I forgot this one
http://language.home.sprynet.com/lingdex/autocars.htm

:-D

Yet, I DO believe that "Artificial Intelligence is the most important problem to be working on"
But not GOFAI, obviously...