Give me that good old-fashioned AI

It is said that to explain is to explain away. This maxim is nowhere so well fulfilled as in the area of computer programming, especially in what is called heuristic programming and artificial intelligence. For in those realms machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible. The observer says to himself “I could have written that”. With that thought he moves the program in question from the shelf marked “intelligent”, to that reserved for curios, fit to be discussed only with people less enlightened than he.

These words were written in 1966 by Joseph Weizenbaum, in a paper that explained—and explained away—his famous program ELIZA, the mock psychotherapist. ELIZA’s conversations could occasionally suggest a glimmer of understanding:

I need some help, that much seems certain.

WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP?

Perhaps I could learn to get along with my mother.

TELL ME MORE ABOUT YOUR FAMILY.

But Weizenbaum revealed that ELIZA was merely picking out a few keywords from the text and applying simple syntactic transformations, along with a dose of randomness. The program was manipulating symbols, but the symbols had no meaning attached to them.
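
For readers who have never seen how little machinery is needed, here is a minimal sketch of the ELIZA idea in Python: a keyword pattern, a pronoun swap, and a canned reassembly template. It is a drastic reduction of Weizenbaum's actual script, which had many ranked keywords and decomposition rules, but the flavor is the same.

import random
import re

# A tiny ELIZA-style responder: match a keyword pattern, "reflect" the
# pronouns in the captured fragment, and drop it into a reassembly
# template.  This is an illustrative reduction, not Weizenbaum's script.

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["what would it mean to you if you got {0}?",
      "why do you need {0}?"]),
    (re.compile(r".*\bmother\b.*", re.I),
     ["tell me more about your family."]),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text):
    for pattern, templates in RULES:
        m = pattern.match(text)
        if m:
            groups = [reflect(g.strip(" .!?,")) for g in m.groups()]
            return random.choice(templates).format(*groups).upper()
    return "PLEASE GO ON."  # default when no keyword matches

print(respond("I need some help, that much seems certain."))
print(respond("Perhaps I could learn to get along with my mother."))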

What about Watson, the new Jeopardy champion? Watson gave a dazzling performance this past week, decisively winning a two-game match against the best human players in the history of the quiz show. But will the magic crumble if we look closely at how it works? Does the program really understand those quirky Jeopardy clues, or is it just pushing symbols around, in the manner of ELIZA?

The most detailed account of Watson’s innards that I’ve been able to find is an article published in AI Magazine last fall by David Ferrucci of IBM, the project’s lead engineer, and a dozen colleagues from IBM and Carnegie Mellon. (“Building Watson: An overview of the DeepQA project,” AI Magazine 31(3):59–79. The article is behind a paywall at the AI Magazine web site, but resourceful internauts may find it elsewhere.)

Here’s the overview:

The system we have built and are continuing to develop, called DeepQA, is a massively parallel probabilistic evidence-based architecture. For the Jeopardy Challenge, we use more than 100 different techniques for analyzing natural language, identifying sources, finding and generating hypotheses, finding and scoring evidence, and merging and ranking hypotheses.

Thus we learn that behind Watson’s calm, metallic voice is a clamor of 100+ agents doing their massively parallel probabilistic evidence-based thing. This is not one big brain but a society of mind. (By the way, I think “probabilistic” means simply that potential answers are scored by assigning them probabilities; as far as I can tell there is no randomness or indeterminacy in the algorithms, but I could be wrong about that.)
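
The article doesn't spell out the scoring model, but to give a feel for what "evidence-based" ranking might amount to, here is a toy sketch (my own invention, not IBM's code) in which each candidate answer accumulates scores from several analyses and a weighted combination yields a confidence used for ranking:

import math

# Toy "merge and rank" step.  Each candidate answer carries a vector of
# evidence scores from different analyses; a weighted sum squashed through
# a logistic function gives a confidence in (0, 1).  The features and
# weights here are invented; DeepQA reportedly learned its combination
# from a large set of prior Jeopardy questions.

WEIGHTS = {"type_match": 2.0, "passage_support": 1.5, "popularity": 0.3}
BIAS = -2.0

def confidence(evidence):
    z = BIAS + sum(WEIGHTS[f] * score for f, score in evidence.items())
    return 1.0 / (1.0 + math.exp(-z))

candidates = {
    "answer A": {"type_match": 1.0, "passage_support": 0.9, "popularity": 0.4},
    "answer B": {"type_match": 1.0, "passage_support": 0.3, "popularity": 0.2},
    "answer C": {"type_match": 0.0, "passage_support": 0.8, "popularity": 0.5},
}

for name in sorted(candidates, key=lambda c: confidence(candidates[c]), reverse=True):
    print(f"{name}: {confidence(candidates[name]):.2f}")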

The first step in the run-time question-answering process is question analysis. During question analysis the system attempts to understand what the question is asking and performs the initial analyses that determine how the question will be processed by the rest of the system.

This description implies that the system is indeed making an effort to dig down into the semantics of natural language. But how does it attempt to understand the clue? The rest of the paragraph is more of a shopping list than an explanation:

The DeepQA approach encourages a mixture of experts at this stage, and in the Watson system we produce shallow parses, deep parses (McCord 1990), logical forms, semantic role labels, coreference, relations, named entities, and so on, as well as specific kinds of analysis for question answering.

The reference to (McCord 1990) is perhaps the most illuminating item in this list. The author is Michael C. McCord, who is at IBM’s Yorktown Heights lab where Watson was built. The phrase “deep parses” apparently refers to McCord’s idea of a slot grammar, which provides a single framework for combining the syntactic analysis of sentences (subject, predicate, object, etc.) with semantic features (word senses, logical relations, predicates). Unfortunately, the AI Magazine article gives no further hints about how slot grammars are used in the analysis of Jeopardy clues. (For some useful recent accounts of slot grammars, see the links in McCord’s publications list.)

The part of the question-analysis phase that Ferrucci et al. discuss at greatest length is a process they call “LAT detection.” LAT is “lexical answer type”: a word or phrase in the clue that specifies what kind of response is wanted—a person, a city, a book, a substance, and so on. Consider this clue, in a category titled “Oooh…. chess”:

Invented in the 1500s to speed up the game, this maneuver involves two pieces of the same color.

The LAT in the clue is “maneuver”: Whatever the answer is, it must be something that can plausibly be described as a maneuver. If you were to fixate on the wrong LAT—say “the game” or “two pieces”—you’d have no hope of coming up with the correct answer. Naming the two pieces “king and rook” would not score any points, even though that particular choice of pieces suggests you have the right idea in mind; to get credit for the answer, you need to give the name of the maneuver: “castling.”

Identifying the correct LAT is clearly important. It’s also clearly difficult. What’s not so clear is how Watson does it. In the chess example, does “maneuver” stand out from the rest of the words in the clue for grammatical reasons (it’s the subject of the main clause), or because it’s pointed to by the demonstrative adjective “this,” or for some other reason? How would you write a program to identify the LAT of an arbitrary Jeopardy clue?
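
I don't know Watson's answer to that question, but here is a naive first cut, just to show how far a simple heuristic gets you: grab the noun that follows a demonstrative like "this" or "these," since so many clues read "...this maneuver..." or "...this city...." Watson's real LAT detection is surely far more elaborate (parsing, learned patterns, the category title itself).

import re

# Naive LAT guesser: return the word following a demonstrative
# ("this", "these").  A toy heuristic only; it says nothing about
# how Watson actually detects lexical answer types.

STOPWORDS = {"kind", "type", "sort"}   # "this kind of X" -- we'd want X, punt here

def guess_lat(clue):
    m = re.search(r"\b(?:this|these)\s+([a-z]+)", clue.lower())
    if not m:
        return None
    word = m.group(1)
    return None if word in STOPWORDS else word

print(guess_lat("Invented in the 1500s to speed up the game, "
                "this maneuver involves two pieces of the same color."))
# prints: maneuver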

As we move on from the analysis of questions to the finding of answers, the algorithmic details remain a little fuzzy.

Watson has access to various sources of “structured” knowledge: relational databases, taxonomies, ontologies. With such resources, retrieval is straightforward. Yet it turns out that few clues can be reformulated as database queries. Ferrucci writes: “Watson’s current ability to effectively use curated databases to simply ‘look up’ the answers is limited to fewer than 2 percent of the clues.” I suppose this is not really surprising. If the game could be reduced to database lookup, it wouldn’t be much fun.

For the other 98 percent of the queries, I gather that the retrieval process is more like Googling for the answer. The machine has no live internet connection during the Jeopardy contest, so it can’t actually search the web. But lots of free-form textual data was loaded into the Watson servers ahead of time, including all of Wikipedia and many other reference works. Using these documents as seeds, the system then trawled the web for other sources that might be useful, and cached copies of them for use offline. About four terabytes of material was available for query answering.

As for the search methods applied to this archive, the article by Ferrucci et al. offers another shopping list:

A variety of search techniques are used, including the use of multiple text search engines with different underlying approaches (for example, Indri and Lucene), document search as well as passage search, knowledge base search using SPARQL on triple stores, the generation of multiple search queries for a single question, and backfilling hit lists to satisfy key constraints identified in the question.
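
To make the "SPARQL on triple stores" item a little more concrete, the flavor of a curated-knowledge lookup can be sketched in a few lines with the rdflib library. The vocabulary below is invented for the example and says nothing about Watson's actual sources or schema.

from rdflib import Graph, Namespace

# A two-triple "knowledge base" and a SPARQL query against it.
# The ex: vocabulary is made up for this sketch.
EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Springfield, EX.capitalOf, EX.Illinois))
g.add((EX.Madison, EX.capitalOf, EX.Wisconsin))

# A clue that reduces to "What is the capital of Illinois?" becomes a graph pattern:
query = "SELECT ?city WHERE { ?city ex:capitalOf ex:Illinois . }"
for row in g.query(query, initNs={"ex": EX}):
    print(row.city)   # http://example.org/Springfield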

In a long series of training runs the system was tuned to balance the competing demands of coverage, accuracy and speed.

The operative goal for primary search eventually stabilized at about 85 percent binary recall for the top 250 candidates; that is, the system generates the correct answer as a candidate answer for 85 percent of the questions somewhere within the top 250 ranked candidates.
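
"Binary recall" here is just the fraction of clues for which the correct answer appears anywhere in the candidate list, regardless of its rank. A minimal way to compute it (with made-up data):

def binary_recall_at_k(results, k=250):
    """Fraction of questions whose correct answer appears among the top-k
    candidates.  `results` is a list of (correct_answer, ranked_candidates)."""
    hits = sum(1 for answer, candidates in results if answer in candidates[:k])
    return hits / len(results)

sample = [
    ("castling", ["en passant", "castling", "fork"]),   # hit: answer is a candidate
    ("chicago",  ["toronto", "new york", "dallas"]),    # miss: answer never generated
]
print(binary_recall_at_k(sample))   # 0.5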

The trouble with free-form textual search is that you may very well identify relevant snippets of text but still have a hard time extracting the correct answer. Indeed, the same kind of analysis that goes into figuring out the question also has to be applied to candidate answers. For example, Ferrucci et al. discuss this clue: “He was presidentially pardoned on September 8, 1974.” Among the materials retrieved by the search algorithm was the text fragment: “Ford pardoned Nixon on Sept. 8, 1974.” For a human player with a little knowledge of U.S. history, this result would be more than enough to settle the matter, but a computer program still has some work to do. Suppose the program has correctly identified the LAT of the clue as “He,” and suppose further that it knows that both “Ford” and “Nixon” refer to male persons, perhaps even that they were presidents. Which of the two names is the right choice? Several of the tests that Watson applies are essentially string-matching algorithms, similar to those that search DNA sequences for genetic patterns. Those algorithms might count how often each name occurs in association with the given date, but that result will not resolve the ambiguity in this case. The correct answer comes from a program module that undertakes a deeper logical analysis and recognizes the difference between subject and object in the two statements.
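
Here is a toy version of that last step, assuming we already have a subject-verb-object analysis of the retrieved sentence (Watson gets such structures from its parsers; the parse below is written by hand):

# Role-based answer selection for the clue
# "He was presidentially pardoned on September 8, 1974."
# Counting name-date co-occurrences can't separate "Ford" from "Nixon";
# knowing which name is the *object* of "pardoned" settles it.

passage_parse = {
    "text":    "Ford pardoned Nixon on Sept. 8, 1974.",
    "verb":    "pardon",
    "subject": "Ford",
    "object":  "Nixon",
}

def pick_answer(parse, clue_verb, answer_role):
    """Return the argument of the passage's verb that fills the role the
    clue asks about ("subject" or "object")."""
    if parse["verb"] == clue_verb:
        return parse[answer_role]
    return None

# The clue is passive ("was pardoned"), so the answer fills the object
# slot of the active-voice passage.
print(pick_answer(passage_parse, "pardon", "object"))   # Nixon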

•     •     •

Given this glimpse into how Watson works, do we deem its intelligence to be explained, or explained away? Personally, I have mixed feelings.

I admit to a sentimental fondness for what John Haugeland called Good Old-Fashioned AI, or GOFAI—the ambitious kind of artificial intelligence that aspired to build a true thinking machine, a system with some deep internal representation (a mental model) of the world in which it functions. The outstanding example of this style is Terry Winograd’s SHRDLU program, written in 1970, which conversed about objects in a world of toy blocks on a tabletop. At the time, Winograd firmly asserted that the program was able to “understand discourse,” and he meant by this that the program understood not only the words but also the objects and relations the words referred to.

The promise of SHRDLU was that we could extend the same methods to broader domains of discourse, steadily building toward a general-purpose, human-like intelligence, with the same kind of carefully planned knowledge representation. But that never happened. Later in the 1970s, AI entered a time of troubles. When it came back in the 80s, the emphasis had shifted, and the technology had diversified. The new AI focused on expert systems, on data mining, on statistical rather than deductive methods; another branch of AI turned away from the human cerebral cortex in favor of the motor neurons of the cockroach. Overall, the field took a more pragmatic turn, with less concern for understanding the ultimate nature of intelligence and more energy invested in getting useful results, whatever the methodology.

Watson is in this latter-day pragmatic tradition, with its 100+ agents and its massively parallel probabilistic evidence-based architecture. Compared with SHRDLU, it’s all so messy, so ad hoc, so opaque. But it works, doesn’t it.

And I suppose my own mind is not quite as tidy as I would like to believe.

•     •     •

Although Watson won its Jeopardy match by a wide margin and made very few mistakes along the way, the moment everyone will remember is the program’s spectacular flub of a Final Jeopardy question on the second night. The category was “U.S. Cities,” and the clue was:

Its largest airport is named for a WWII hero, its second largest for a WWII battle.

Watson replied “Toronto.” As it happens, I got that question right; just seconds after the clue was revealed, I called out “Chicago.” Later, though, when I thought about the mental process that led to my answer, I realized that this was not at all a product of well-focused deductive reasoning. I was doing the same kind of scattershot, parallel, probabilistic groping in the dark that I frown on in a machine.

My “reasoning” went something like this: If it has two airports, it must be a pretty big city…. New York has three airports…. There’s Dallas, with DFW and Love—but no heroes or battles there. Chicago has two. Oh! Midway—that must be the battle of Midway.

That’s when I pressed the buzzer.

Note how sketchy my thinking was. I had no idea O’Hare was named for a war hero. As a matter of fact, I had no idea that Midway was named for the naval battle. If I had been asked in a more straightforward way, “Why is Chicago’s second airport named ‘Midway’?”, I would have guessed that it lies halfway between Point A and Point B. The Pacific island would not have entered my consciousness. And I never bothered to dig any deeper into the catalogue of multi-airport cities—Washington, San Francisco, Houston (isn’t G. H. W. Bush a WWII hero?).

So messy and ad hoc.

6 Responses to Give me that good old-fashioned AI

  1. John Cowan says:

    And yet SHRDLU was the very model of a scruffy GOFAI piece of code (to parody a different lyric). Almost all of its knowledge is procedurally expressed, with almost nothing resembling a database at all except the information about the blocks themselves. By the standards of “neat” GOFAI, with its systematic tables and rules, SHRDLU is much closer to Watson than you’d think.

    Anyhow, the difference between explaining things and explaining them away is far from clear. When we explain heat in terms of energy, which kind of explanation is that? Clearly, heat as a separate factor drops out of our physics, but we don’t usually employ the complaining tone of “explaining it away” in such situations; on the contrary, we rejoice in the unification as a great scientific advance. Does electroweak unification “explain away” QED, or does QED “explain away” electricity and light?

    I’m committed to the claim that Watson does not understand natural language in the sense of the Turing test, and to the claim that the Turing test, which probes the ability to converse in an adult fashion on any culturally relevant subject clearly and forcibly, is what defines intelligence. (When actual adult humans cannot do this, it is indeed their intelligence we question, with obvious exceptions like stutterers and people with locked-in syndrome).

  2. Russ Abbott says:

    I’m always surprised when someone points to passing the Turing test as the ultimate indicator of intelligence. We will build a computer that can pass the Turing test in much the same way that we built a computer that won at Jeopardy. But that still won’t feel satisfying.

    The question, it seems to me, isn’t intelligence; it’s what we refer to as consciousness. Will we ever build a conscious computer? Of course, I’m not able to say how one should test for consciousness. That’s part of the problem.

    Consciousness is not the same as intelligence. Most of us would grant consciousness to animals that are far less intelligent than we are. My cat is clearly conscious.

    This question raises many other questions such as the nature and possibility of zombies—entities that have all the attributes of humans except consciousness.

    I don’t have a pithy ending for this post. I simply want to point out that what most people think of when they talk about the possibility of intelligent computers is really conscious computers.

  3. Cody says:

    Regarding Watson’s 100+ agents, I am under the impression that Daniel Dennett endorses a similar approach to the philosophy of mind (or consciousness?), though I am not very familiar with it. I believe his model of mind lacks any central intelligence, but rather consists of a large number of (competing?) agents. I’ve only read little bits and pieces here and there.

    Anyone know of any attempts to apply genetic programming to AI?

    Concerning explaining vs. explaining away, that seems to me to be the nature of “knowing.” It reminds me of an alleged quote of Bertrand Russell’s concerning electricity:

    [Electricity] is not a thing, like St. Paul’s Cathedral; it is a way in which things behave. When we have told how things behave when they are electrified, and under what circumstances they are electrified, we have told all there is to tell.

    I’d conjecture that we will engineer intelligent machines, without finding any magical component buried in our brains.

  4. This sounds similar to P. Langley’s comments in the latest issue of Machine Learning [1], where he says:

    “The early movement in machine learning was also characterized by an emphasis on ‘symbolic’ representations of learned knowledge,”
    “A related trend involved a shift away from the field’s original concern with learning in the context of intelligent systems.”
    and
    “During the 1990s, a number of factors led to decreased interest in the role of knowledge. One was the growing use of statistical and pattern-recognition approaches, which improved performance but which did not produce knowledge in any generally recognized form.”

    [1] P. Langley, “The changing science of machine learning,” editorial, Machine Learning.
    http://www.springerlink.com/content/j067h855n8223338/

  5. John Cowan says:

    Russ Abbott: It will certainly feel satisfying to me, in the event it’s achieved during my lifetime.

    On zombies:

    There are no (philosophical) zombies, unless you want to say that we are all zombies. No one who isn’t (self-)conscious could pass the Turing test; after all, the judge can ask questions that demand intelligent introspection. When we want to tell if human beings are conscious, the first test we try is to talk to them. If they don’t respond, they probably aren’t conscious (with the exceptions noted above). If they respond but don’t make sense or aren’t coherent, we may also tentatively conclude a lack of consciousness (some people in epileptic seizures are like this, and most people when just awakened are very similar).

    After all, what would happen if we talked to a (philosophical) zombie? The conversation would go like this:

    “Are you conscious?”

    “Of course I am. Aren’t you?”

    “How can you be sure?”

    “Well, I’m able to introspect about myself and report what I find. For example, at the moment I’m really annoyed that you could possibly think I’m a zombie.”

    No zombie can consistently deny being conscious, because the whole idea of zombies is that they are behaviorally identical to humans. Perhaps Daniel Dennett is a zombie; perhaps everyone is a zombie except Hilary Putnam. How would you ever find out? And don’t say “introspection”, because a zombie can report on his own introspection as well as you can; otherwise he wouldn’t be a zombie.

    The whole idea of zombies is really (per Dennett) just a modern projection of the idea that some humans “don’t feel pain [or whatever] the way we do.” Of course they do, and conscious is as conscious does.

    Off-topic remark:

    “What do we want?” “Braaains!” “When do we want it?” “Braaains!”

    Sign seen at the Rally for Rationality:

    “What do we want?” “Evidence-based change.” “When do we want it?” “After peer review.”
