Human uniqueness isn’t in jeopardy

Forty-five years ago, Steve, one of my best university friends, distinguished himself on the NBC-TV game show the G.E. College Bowl when he missed a question and became the only person in the program’s history to use the F-word on national network television.

Until a computer shows similar emotion at a blow to its ego, the uniqueness of human consciousness is safe.

IBM’s three-day commercial, otherwise known as Jeopardy!, has shown clearly that a computer with a large enough database and efficient deep-analysis algorithms — plus a literally lightning-fast buzzer response with a built-in safeguard against the split-second-early buzz that can lock out a human contestant — can use its computational muscle to defeat merely mortal champions in a trivia game.

While I’m impressed, I’m not feeling threatened. I’m impressed by the advances in AI that let Watson (the IBM computer array) win the game. But I’m not at all worried that human consciousness will gain a machine companion, or competitor, anytime soon.

That doesn’t mean that Watson-like machines don’t pose a different kind of challenge. Watson’s ability to parse ordinary English sentences points to a near future in which human jobs that rely on conversational interaction will be threatened by computer programs. As others have noted, the editorial functions of the Yahoo! news page, which a human editor performs by interpreting computer data streams, have already been fully automated by Google, which relies entirely on algorithms to place items on its news pages. Some suggest that tasks such as booking hotels and airline tickets or obtaining computer support by telephone will soon be within a Watson descendant’s grasp — or would be, if it had a grasp.

But this sort of computer function, like the skills Watson showed on Jeopardy!, doesn’t come anywhere close to duplicating the way the human brain integrates sensory information with self-aware processing. Indeed, even in the comparatively raw data crunching at which Watson is proficient, there is evidence of very unhuman “thinking.”

Besides responding “Toronto” to the first night’s Final Jeopardy answer in the category “U.S. Cities,” Watson sometimes displayed remarkably strange “reasoning” in the choices revealed in the on-screen data flashed (too briefly) for viewers. On the first night, given an answer that tied “smart fashion” to “people who graduate in the same year” (I may not have the exact phrasing, but the sense is there), Watson responded “What is chic?” with 97% confidence. In fact, Watson’s third-highest-rated question was “Who is Vera Wang?”, which also has a whole lot to do with smart fashion and nothing at all to do with graduation years. No human contestant would ever so misconstrue the category as to consider Vera Wang even remotely correct, much less the third-best available answer in 200 million pages of data. (The correct question: “What is class?”) Indeed, had Watson not fortuitously “landed on” both Daily Double answers in Wednesday’s Double Jeopardy round, it might have lost the contest altogether.

Still, Watson’s “skills” are evident. As reported in the Vancouver Sun by columnist Peter McKnight:

Unlike chess, Jeopardy! uses “natural” language, which is effectively infinite, and filled with nuance, ambiguities and multiple meanings. And to up the ante, Jeopardy! clues typically involve puns, word plays and riddles, things computers have never been good at deciphering.

Consider this example: In an untelevised exhibition round with Watson, a Jeopardy! clue in the category “All-Eddie Before & After” asked “A Green Acres star goes existential (& French) as the author of The Fall.” This isn’t something one would expect a computer to solve, but according to technology writer Clive Thompson, who witnessed the event, Watson got it right: “Who is Eddie Albert Camus?”

But the important difference between us and Watson isn’t how well it can parse questions and answers; it’s the obvious fact that although Watson knows, it doesn’t know that it knows — and, unlike my friend Steve, it doesn’t care that it doesn’t know that it knows. According to McKnight, when chess champion Garry Kasparov lost to IBM’s Deep Blue some years ago, Kasparov said, “Well, at least it didn’t enjoy beating me.”

With impeccable timing, the March 2011 issue of The Atlantic features “Mind vs. Machine,” by Brian Christian. Christian participated in the 2009 Loebner Prize competition, an annual staging of the Turing Test in which computer programs vie for the Most Human Computer award by holding “conversations” with human judges. The computer that fools the most judges wins the prize, and any computer that fools more than 30% of them is considered to have “passed” the Turing Test — named after computer pioneer and mathematician Alan Turing, who proposed that computers would have human-like intelligence when there was no way (other than peeking around the curtain) to tell them apart from human responders:

The test is named for the British mathematician Alan Turing, one of the founders of computer science, who in 1950 attempted to answer one of the field’s earliest questions: can machines think? That is, would it ever be possible to construct a computer so sophisticated that it could actually be said to be thinking, to be intelligent, to have a mind? And if indeed there were, someday, such a machine: how would we know?

Instead of debating this question on purely theoretical grounds, Turing proposed an experiment. Several judges each pose questions, via computer terminal, to several pairs of unseen correspondents, one a human “confederate,” the other a computer program, and attempt to discern which is which. The dialogue can range from small talk to trivia questions, from celebrity gossip to heavy-duty philosophy — the whole gamut of human conversation. Turing predicted that by the year 2000, computers would be able to fool 30 percent of human judges after five minutes of conversation, and that as a result, one would “be able to speak of machines thinking without expecting to be contradicted.”

The bar seems rather low at 30%, but even at that level no computer has yet passed the test (although one did come very close in 2008). So it seems that being able to decipher a human sentence and being able to converse like a human are quite different skills, and so far Watson and his metal mates have mastered only the first of them.

As computer scientist John Seely Brown quipped, in a remark reported by John Markoff in the New York Times, “The essence of being human involves asking questions, not answering them.” And Peter McKnight rightly emphasized that “Deep QA software exercises no judgment and possesses no wisdom.”