Crossed conversations

David G. Hays, a pioneer of computational linguistics, describes an experiment he performed in 1956 at the RAND Corporation:

The experiment strips conversation down to its barest essentials by depriving the subject of all language except for two pushbuttons and two lights, and by suggesting to him that he is attempting to reach an accord with a mere machine. We brought two students into our building through different doors and led them separately to adjoining rooms. We told each that he was working with a machine, and showed him lights and pushbuttons. Over and over again, at a signal, he would press one or the other of two buttons, and then one of two lights would come on. If the light that appeared corresponded to the button he pressed, he was right; otherwise, wrong. The students faced identical displays, but their feedback was reversed: if student A pressed the red button, then a moment later student B would see the red light go on, and if student B pressed the red button, then student A would see the red light. On any trial, therefore, if the two students pressed matching buttons they would both be correct, and if they chose opposite buttons they would both be wrong.

We used a few pairs of RAND mathematicians; but they would quickly settle on one color, say red, and choose it every time. Always correct, they soon grew bored. The students began with difficulty, but after enough experience they would generally hit on something. Some, like the mathematicians, chose one color and stuck with it. Some chose simple alternations (red-green-red-green). Some chose double alternations (red-red-green-green). Some adopted more complex patterns (four red, four green, four red, four green, sixteen mixed and mostly incorrect, then repeat). The students, although they were sometimes wrong, were rarely bored. They were busy figuring out the complex patterns of the machine.

(The quotation is from “Language and Interpersonal Relationships,” Daedalus, Vol. 102, No. 3, Summer 1973, pp. 203–216.)
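
The wiring of the apparatus is easy to state in code. Below is a minimal Python sketch of the crossed-feedback rule; the two strategies are among those Hays reports, but the function names and the trial count are my own inventions for illustration.

    from itertools import cycle

    def crossed_trials(strategy_a, strategy_b, trials=20):
        """Run trials of the crossed-feedback experiment. Each player's
        light shows the other player's button, so on every trial the two
        players are either both right or both wrong."""
        score_a = score_b = 0
        for _ in range(trials):
            a = next(strategy_a)      # button pressed by player A
            b = next(strategy_b)      # button pressed by player B
            score_a += (a == b)       # A's light shows B's button
            score_b += (b == a)       # B's light shows A's button
        return score_a, score_b

    # Two of the strategies Hays reports: a constant chooser
    # and a simple alternator.
    print(crossed_trials(cycle(["red"]), cycle(["red", "green"])))  # (10, 10)

The symmetry of the scoring is the whole point: the feedback rule makes it impossible for one player to be right while the other is wrong.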

I too have met a few mathematicians who might be described as always correct but soon bored. In this case, however, I don’t think the mathematicians are to blame. The game they were asked to play seems like the dullest possible use of this experimental apparatus, which actually has rich possibilities.

Wouldn’t it have been more fun to put the players at crossed purposes? Let one try to match the color of the button to the color of the light, while the other player strives to create a mismatch. This game, more commonly played by matching pennies, has no strategy that allows both competitors to be boringly correct. In 1956—the same year as the RAND experiments—D. W. Hagelbarger of Bell Labs described a relay machine he had built to play the penny-matching game. In 9,795 games against human opponents (some of them probably mathematicians) the machine scored 5,218 wins and 4,577 losses, well above chance level.
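
Hagelbarger’s machine was built from relays, and I won’t try to reproduce its exact logic here. But the core idea, predicting the opponent’s next choice from what followed similar situations in the past, fits in a few lines. Here is a minimal sketch; the two-bit history window and the random fallback are simplifications of my own, not features of the original machine.

    import random
    from collections import defaultdict

    class PatternMatcher:
        """Predict the opponent's next bit from what followed the last
        two bits on earlier occasions; guess at random until a pattern
        emerges. A much-simplified cousin of Hagelbarger's machine."""

        def __init__(self):
            self.history = []                             # opponent's past plays
            self.followers = defaultdict(lambda: [0, 0])  # context -> next-bit counts

        def predict(self):
            if len(self.history) < 2:
                return random.randint(0, 1)
            zeros, ones = self.followers[tuple(self.history[-2:])]
            if zeros == ones:
                return random.randint(0, 1)
            return int(ones > zeros)

        def observe(self, bit):
            if len(self.history) >= 2:
                self.followers[tuple(self.history[-2:])][bit] += 1
            self.history.append(bit)

    # The matcher wins a round when its prediction equals the opponent's play.
    matcher = PatternMatcher()
    opponent = [0, 1, 0, 1] * 25           # a simple alternator, 100 rounds
    wins = 0
    for play in opponent:
        wins += (matcher.predict() == play)
        matcher.observe(play)
    print(wins)                            # well above 50 once the alternation is learned

Against a truly random opponent no predictor can beat 50 percent; the machine’s winning record says more about human nonrandomness than about the machine.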

The RAND experiment, with its deliberate confusion of human and computational agents, also summons up thoughts of the Turing test. Turing’s paper on the “imitation game” was published in 1950, but Hays does not allude to it. And, after all, in his experiment the computer was a mere fiction, introduced to distract the participants from the true nature of the proceedings. But I’m curious about the outcome of a Turing test conducted over this minimalist channel of communication. If you were sitting at that console, with nothing but two buttons and two light bulbs to express yourself, how would you persuade an interrogator that you are a person rather than a machine? What conversational strategy could you adopt that a computer program could not master just as well, at least over a limited number of exchanges?

If the conversation goes on long enough, of course, the thinness of the channel hardly matters. With two buttons to choose between, you can communicate one bit of information in each exchange, and in principle that stream of bits can encode any message. Nevertheless, getting the conversation started seems like an intriguing problem. Without prompting or prearrangement, how would the two parties—human or otherwise—bootstrap a language? Would they reinvent Morse code, or some ad hoc equivalent? Maybe they’d start like this:


00000010101010000000000
00101000001010000000100
10001000100010010110010
10101010101010100100100

(but I doubt it).
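
To make the channel-capacity point concrete: a stream of single-bit exchanges really can carry any message, provided the parties have already agreed on a framing. Here is the most naive possible encoding in Python, one ASCII character per eight button presses; the red = 0, green = 1 convention and the eight-bit framing are assumptions of mine, not anything the apparatus dictates.

    def to_presses(message):
        """Encode a message as button presses, one bit per exchange
        (red = 0, green = 1), eight presses per ASCII character."""
        return [int(b) for ch in message.encode("ascii")
                       for b in format(ch, "08b")]

    def from_presses(bits):
        """Decode eight presses at a time back into characters."""
        chars = []
        for i in range(0, len(bits), 8):
            byte = "".join(str(b) for b in bits[i:i + 8])
            chars.append(chr(int(byte, 2)))
        return "".join(chars)

    presses = to_presses("hello?")
    print(len(presses))              # 48 exchanges for a six-character opener
    print(from_presses(presses))     # hello?

Of course this sketch presupposes exactly what the bootstrapping problem withholds: an agreed framing, an agreed alphabet, even an agreement about which light means 1.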

3 Responses to Crossed conversations

  1. Paul Beame says:

    The question of how to formalize and analyze this problem of bootstrapping a language was considered by Brendan Juba and Madhu Sudan in a pair of 2008 papers on “Universal Semantic Communication”; their ideas are developed further and discussed at length in later papers and in a 2011 book of the same title by Juba. One conclusion is that bootstrapping is actually possible given certain properties of the goals of the parties.

  2. brian says:

    Thanks for the pointer to the work of Brendan Juba et al., which turns my idle question into serious computer science!

  3. saurabh says:

    I think the Hays experiment says more about the character of a certain type of left-brained person who’d much rather be perfect than messy. Such people will usually have a pretty high threshold for boredom, so calling them ‘bored’ is probably the biggest leap in Hays’ analysis.
