In the weeks following Garry Kasparov’s loss to IBM’s Deep Blue this May, editorial pages were full of breathless soul-searching about What It All Meant. Had machines crossed the intelligence barrier? Could Deep Blue really think? Was Homo sapiens obsolete?
That any of this happened at all is, I suppose, tribute to the PR staff at IBM; they managed to turn something as dull as a chess tournament into a full-on media spectacle. Recent headlines aside, though, artificial intelligence has been a hot topic since at least February of 1996, when Kasparov won his first match with Deep Blue. Today, at the midpoint of 1997, the books are arriving at a furious pace. Some of these are sharp and thoughtful, but the field is pretty uneven.
You might expect Kasparov Versus Deep Blue, an insider’s view of the 1996 match, to be rich with first-hand insight. It’s not. McGill University professor Monty Newborn is a big mover in international chess; he organized the tournament, and he’s on a first-name basis with both Kasparov and the IBM team. For all his intellectual credentials, though, his book is a dud. Most of its 322 pages are taken up with a turgid, move-by-move guide to computer chess matches through history, all told in arcane shorthand. Diehard chess geeks might find pages of lines like “58 Kg2 Be1 59 Kf1 Bc3 60 f4 exf3 61 exf3 Bd2 62 f4 Ke8 63 Qc8+ Ke7” illuminating, but there’s not much here for anyone else.
Likewise, Newborn’s analysis is limited to the tiniest details of chess protocol. He delights in pointing out that certain moves are distinctly unlike human play, but never explains, in a way that makes any sense to a lay reader, just what makes the difference. The substance of artificial intelligence – how it might change the game, never mind the outside world – eludes him entirely.
HAL’s Legacy is not one of the good ones either. It’s a compilation of essays by a number of authors, each comparing some aspect of modern AI technology with the demonic computer of 2001. Stanley Kubrick’s classic film was, among other things, a fable about the dangers of evolving technology. A sharp-but-playful comparison of the fiction with today’s state-of-the-art could have made for a fascinating book. But editor David Stork is too much of a scientist to let anything so pedestrian get in the way of his analysis. A few of these chapters are well-written, cleverly argued gems, but they’re the exceptions.
Overall, the book feels curiously defensive. Writer after writer takes time to praise Kubrick for his commitment to scientific realism: for not stooping to the rayguns-and-explosions depths of, say, Star Trek. What’s really at work here is obvious: these writers know that Trek fans play silly could-the-holodeck-be-built-today games all the time, and they want no part of that kind of speculative nonsense. This is Science, so please don’t laugh at us. The result is a book that’s both overly cautious and distractingly snobbish.
The Age of Androids isn’t exactly fun, either, but it has the advantage of being completely, unashamedly sure of itself. This is remarkable, because author William E. Datig actually claims to have invented an android: a conscious, thinking machine that uses the word “I” to refer to itself.
You’d imagine that building a conscious robot would require a pretty advanced understanding of consciousness itself. Well, Datig thinks so, too. The first two-thirds of the book outline what he immodestly calls his “Universal Theory of Knowledge” and his “Universal Grammar on Form and Being.” These are dense chapters, stuffed full of vaguely scientific, vaguely philosophical explanations of a) how we know, and b) how things exist. In short, Datig’s got it all figured out.
Near the end of the book, the reader is invited to participate in a thought experiment: the creation of a simple android. Unfortunately, this is where Datig’s big-worded pseudo-science falls on its face. His “android” consists of a computer, a video camera, and an extra monitor whose display is only partly controlled by the computer. The video camera (the “eye”) looks at the extra monitor and “sees” its own actions taking place in The Rest Of The World (the portion of the display that it doesn’t control). Presto-chango! You’ve got a machine that understands mind-body dualism and can think of itself as “I.” For all its dry earnestness, The Age of Androids is still a delight to read. Provided you’re not really hoping to build your own robot.
Are We Unique? also sets out to understand intelligence. But it does so without any of the hallucinatory excess that livens up Datig’s book. Instead, James Trefil takes us on a fascinating tour through the modern-day science of the mind. We read about efforts to communicate with animals, and the ongoing struggle between those who claim human intelligence is unique and those who believe that the only difference between people-smart and animal-smart is a matter of degree. (Trefil falls into the former camp, pointing out that the human ability to manipulate objects with the nose only differs from the elephant’s by a matter of degree, too.)
The second half of the book deals with current efforts to build AI into computers. After introductions to MIT’s robot COG, Doug Lenat’s CYC project, and a number of others, Trefil avoids the obvious conclusion and doesn’t tell us which one he thinks is the best approach. Instead, he dips into several seemingly unrelated fields – mathematics, complexity theory, economics, philosophy – and builds a convincing case that human-style minds won’t ever be duplicated in silicon. He does not, mind you, rule out the possibility that a wholly alien kind of intelligence might someday emerge from current research. Are We Unique? is an absolutely gripping read: a cleverly argued, well-researched guide to the human mind.
After Thought sets out to cover much of the same ground – how computer intelligence differs from human smarts – but its approach isn’t quite as elegant. It traces the history of math, from Greek geometry to Renaissance algebra to modern-day parallel processing, and argues that computers, with their ability to tackle staggering mathematical problems, truly represent the arrival of an alien intelligence.
Genetic algorithms, neural nets, classifier systems: these are, the book argues, kinds of math that would have made no sense in previous centuries, because no one would ever have tried to do them by hand. A machine’s ability to crunch through mountains of data, and spot patterns that would elude the human observer, is something radically new. So far, so good.
But the book runs into trouble after this, when it starts puffing up this new “intelligence” as a Really Great Thing. The triumph of AI, we read, is that it gives us bold new insights into the way things work: predictive, intuitive, pattern-spotting technology that will, and should, transform our society.
In truth, much of this has already arrived, in the form of software that builds crime-fighting profiles of credit-card users or serial killers. And, beneficial as those technologies sound, they herald the arrival of much more sinister tools. Those, for example, that examine the medical records of children and predict who’ll develop heart disease in adulthood. (This sort of thing is already done in the U.S. in an effort to trim costs.) Author James Bailey ignores these implications entirely. The result is a book that, while pleasant and interesting to read, fails to ask the tough questions.