Shame on you, Skeptic magazine!

I like Skeptic magazine, in general, but their article on how AI is a failure is very disappointing.

http://www.skeptic.com/the_magazine/featured_articles/v12n02_AI_gone_awry.html

Read it if you like, but in essence it says that we have not created anything close to human-level intelligence, despite a great deal of effort and a history of overblown predictions.

So what is this article saying that's interesting? True, we have not created anything near human-level intelligence. This is not news to anyone. Does this mean AI is a failure? Well, sort of, but the article seems to take the position that any AI progress doesn't count unless it achieves its final goal. But certainly there has been progress in AI. Just ten years ago you could not talk to a computer on the phone when you called your bank or credit card company. AIs land our airplanes. The article makes no mention of such progress.

It's also true that AI has a history of overblown claims. Specifically, things take much longer than anyone anticipates. But it's not that none of the goals have been met (AIs can play grandmaster-level chess, for example). I think it's unfair to judge the success of the field by the time predictions of the scientists in it. The fact is that intelligence is more complicated than anyone thought, and this keeps happening. We're still learning how hard AI is. Does that make the research project a failure?

Let's look, for example, at cancer research, which has been around longer than AI and, I would conjecture, has had a great deal more money thrown at it. Is cancer cured? Hell no. In fact, it seems we've hardly made a dent in it at all. The (slightly) growing survival rates are mostly due to earlier detection rather than novel treatment approaches (I read this somewhere but can't remember the reference; can someone help me out with this?). But where's the article in Skeptic magazine about how cancer research is a failure?

The point is that cancer and AI are incredibly important problems and are worthy of continued effort in spite of their difficulty. Though the article does not come out and say it, we are left with the incorrect notion that AI has made no progress at all. From there it's a short step to cutting its funding. Bad idea. AI is the most important thing in the world.

Comments

It is always trendy to badmouth ideas after the hype curve has passed. Part of the problem is that terms get abused to the point where they have little meaning. Nanotechnology is another example. But you are right - the most interesting and useful things happen after the hype curve, but few people care at that point.
Anonymous said…
You are right to complain. I'm with you.

Just the last sentence - AI is a big issue, no doubt, but the most important? Heavens, don't go over the top. I know AI might solve all our problems one day, but:

I suggest problems themselves for first place - we can't live without them, I think.
A philanthropist would say that making the results of any research available to all of mankind is of capital importance.
Jim Davies said…
I have followed this article up with another related to cancer research:

http://jimdavies.blogspot.com/2006/08/looks-like-war-on-cancer-has-failed-as.html
