The “Artificial Intelligence” Name is Doing Just Fine, Thank You
BigDog robots trot around in the shadow of an MV-22 Osprey.
Recently a distinguished professor from my alma mater, Georgia Tech, published an Atlantic article about how the term “artificial intelligence” has become meaningless.
I like Ian Bogost, and I’ve cited his research.
But he’s got this one wrong.
He lists a bunch of AI systems that don’t work very well, which is kind of like saying that the word “painting” is meaningless because there are bad paintings. Other systems are criticized because they don’t use AI techniques. One, for example, uses a “pattern matching filter.” Presumably, if it were using an actual AI technique, it wouldn’t be listed. To say that Google’s DeepDream isn’t AI because it only uses deep learning networks is absurd, because deep learning networks are about as AI as you can get.
But there’s the rub, isn’t there? Without some notion of what constitutes an AI technique, you can’t make the distinction. And if we know what AI techniques are, then how can the concept of AI be meaningless?
He derides one system as merely using “off-the-shelf computer vision,” not acknowledging that what counts today as “off the shelf” is the result of 60 years of hard work by AI researchers, who just happened to break off into a subfield called “computer vision.”
He asks artificial intelligence researcher (and friend of mine) Dr. Charles Isbell what AI is, and Dr. Isbell says that AI should do something that takes humans effort to learn--a curious requirement, because it excludes vast areas of traditional AI research, including vision, bodily motion, and language processing--all things humans, it turns out, learn effortlessly.
Then there’s a bit of talk about “true” AI, but this is a completely different topic, and one that constantly haunts discussions of the field. On one hand, people use AI to mean the field that develops techniques for machines to execute thinking; on the other, it’s used to describe what we now commonly call “general AI,” a single AI agent that can do pretty much every kind of thinking a human can. If the term “artificial intelligence” is problematic anywhere, it’s here, but the distinction between the two is never even made explicit in the article.
Is it fair to critique AI for being a hodge-podge of methods, which indeed it is? Considering the hodge-podge of ways the human mind processes things, perhaps not. What if a generally intelligent being has to use a hodge-podge of processes to do it all? It’s like saying that “medicine” is meaningless because its set of techniques includes things as disparate as talk therapy and radiation treatment.
We might apply the same reasoning to a subject of Bogost’s personal interest: games. Just as with artificial intelligence programs, I could trot out countless games that don’t work or aren’t fun. I could say the term “game” is meaningless because the set of game systems and rules is a hodge-podge. In fact, the game is the very example Wittgenstein used to argue that almost all words evade necessary and sufficient conditions, and that word meaning is a result of a kind of “family resemblance.”
Artificial intelligence has been a core part of computer science for as long as there have been computers. If AI is meaningless, I’m left to wonder in which classes we’d teach about Bayes nets, production systems, machine learning algorithms, and natural language processing. Compilers? Databases?
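To make that concrete, here is a minimal sketch, in Python, of the kind of thing a Bayes nets lecture actually covers: exact inference by enumeration in a three-variable network (Rain, Sprinkler, WetGrass). Every probability in it is a made-up illustrative number, not real data.

    # Toy Bayes net: Rain -> Sprinkler, and (Rain, Sprinkler) -> WetGrass.
    # All probabilities are illustrative assumptions.

    P_RAIN = {True: 0.2, False: 0.8}

    def p_sprinkler(sprinkler, rain):
        # The sprinkler is less likely to run when it's raining.
        p_on = 0.01 if rain else 0.4
        return p_on if sprinkler else 1.0 - p_on

    def p_wet(wet, sprinkler, rain):
        # Grass is probably wet if either the rain or the sprinkler made it so.
        if sprinkler and rain:
            p_true = 0.99
        elif sprinkler:
            p_true = 0.90
        elif rain:
            p_true = 0.80
        else:
            p_true = 0.05
        return p_true if wet else 1.0 - p_true

    def p_rain_given_wet_grass():
        """P(Rain=true | WetGrass=true), summing out the hidden Sprinkler variable."""
        def joint(rain):
            return sum(
                P_RAIN[rain] * p_sprinkler(s, rain) * p_wet(True, s, rain)
                for s in (True, False)
            )
        return joint(True) / (joint(True) + joint(False))

    print(f"P(rain | wet grass) = {p_rain_given_wet_grass():.3f}")  # about 0.34

The point isn’t the toy example; it’s that “Bayes net” names a specific, teachable technique with well-defined content, which is a strange property for a piece of a meaningless field.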
The term “artificial intelligence” is a gist with a fuzzy boundary, with nothing sacred or absolutely true about it, held together by a bunch of vague meanings. It gets abused by people who don’t know the field--or who are trying to use the term as a buzzword to sell us something.
But it’s only meaningless to the extent that most words are.