My next “Computing Science” column in American Scientist (due out in a few days) is an essay on reversible computing—on machines that can go through a calculation either forward or backward. The notion of building such a Janus-like device is more than just a cute intellectual exercise. In principle, a reversible computing engine could operate without dissipating energy as heat—a possibility that is beginning to attract the interest of the chipmaking industry.
There’s an aspect of the story that didn’t make it into the column, which I want to expand on here. In conventional (irreversible) computers, some of the most important operations are those that move, copy and store data; in higher-level programming languages these instructions are represented by assignment statements. For example, the statement x := 4 assigns the value 4 to a variable or memory location named x. The assignment is an irreversible operation, because it wipes out any previous value of x; there’s no way to turn around and unassign x because the earlier value is gone. It is this loss of information—or increase in entropy—that encumbers irreversible computing with an unavoidable energy cost.
Assignment statements are forbidden in a reversible computing system. Most often, their place is taken by swap operations, which exchange the values of two variables. Swapping is inherently reversible. To undo the effect of swapping a and b, you just swap them again, thereby restoring the status quo ante. What could be simpler? But there are some subtleties lurking in the details.
To see the nature of the problem, there’s no need to delve into the arcana of computer logic and microelectronics; it’s apparent even in swapping everyday objects. Suppose you have a blue glass full of wine in your left hand and a red glass full of water in your right hand. Now please exchange the liquids: Move the wine into the red glass and the water into the blue glass. Unless you’re a better juggler than I am, you’ll want a third glass—an empty vessel—to serve as an intermediary in the exchange.
In computer programming, the standard trope for swapping the values of two variables also relies on an empty vessel. It looks like this:
procedure swap(a, b)
    variable temp
    temp := a
    a := b
    b := temp
The “third glass” in this procedure is the local variable temp, introduced to hold the value of a while a itself is given the value of b, so that finally b is ready to receive the old value of a that we stashed for safekeeping in temp. This three-card-monte shuffle works well enough, but the need for that extra variable is irksome. I’d prefer to live in a world where I can do something as simple as swapping two items of data without having to clutter up the program with an extra variable. It also seems like the process shouldn’t need three sequential operations. And, finally, there’s bad news here if we want the swap operation to serve as the foundation of a reversible computation. The whole point of choosing swap was to avoid assignment statements and other instructions that erase information. The problem, of course, is that the swap procedure has three assignment statements inside it.
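The three-assignment shuffle can be written as a short runnable sketch in Python (the function name and the two-element-list representation are my own; Python passes values, so the list makes the exchange visible from outside the function):

```python
def swap(pair):
    """Exchange pair[0] and pair[1] via a temporary variable."""
    temp = pair[0]      # stash the old value of a for safekeeping
    pair[0] = pair[1]   # a := b (the old a would be lost without temp)
    pair[1] = temp      # b := old a, recovered from temp

glasses = ["wine", "water"]
swap(glasses)
print(glasses)  # ['water', 'wine']
```

Note that each of the three assignments, taken alone, destroys whatever was previously in its target slot; only the temporary keeps the information alive.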
There’s a famous mathematical magic trick, often called “the xor hack,” for avoiding the use of a temporary variable when swapping two values. It looks like this:
procedure xwap(a, b)
    a := a xor b
    b := a xor b
    a := a xor b
The symbol xor is an abbreviation of exclusive or, which means, roughly, “one or the other but not both or neither.” If a and b are individual bits, the xor operation has the truth table shown at right, producing a value of 1 whenever exactly one of its inputs is a 1, and 0 otherwise. If a and b are multibit patterns, such as numbers, the xor operation is applied to all the pairs of corresponding bits. I call the xwap procedure a mathematical magic trick because it really is a wonder to watch as the patterns of bits sort themselves out, so that after three repetitions of exactly the same operation, the values have somehow traded places. (Try it for yourself!) The triply repeated xor turns wine into water and water into wine. It’s almost too clever for its own good.
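For those who want to try it, here is the trick in runnable Python, where the `^` operator is bitwise xor. The comments trace why the magic works: xor is its own inverse, so xor-ing the same value twice cancels it out.

```python
def xwap(a, b):
    a = a ^ b   # a now holds the xor of both original values
    b = a ^ b   # (a ^ b) ^ b == a, so b receives the old a
    a = a ^ b   # (a ^ b) ^ a == b, so a receives the old b
    return a, b

a, b = 0b1100, 0b1010   # 12 and 10
print(xwap(a, b))       # (10, 12) -- the values have traded places
```

Working through the bit patterns by hand (12 = 1100, 10 = 1010, their xor is 0110) is the best way to watch the values sort themselves out.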
The xor hack dispenses with that noisome temp variable and manages to complete an exchange of values without needing any auxiliary memory. For the purposes of reversible computing, however, the hack gets us no closer to our goal. It is still a sequence of three assignment statements, and individually each of those statements is still irreversible. The action

    a := a xor b

erases the previous value of a.
If we view a swap or xwap instruction from the outside—as a black box whose inner workings are hidden and inaccessible—then its overall effect is clearly reversible and information-conserving. So one approach to this conundrum is just to declare that the swap or xwap commands are atomic operations that we promise never to look inside or to interrupt. We’ll pretend we don’t know about those internal assignment statements, and then we’ll hope that what we pretend we don’t know won’t hurt us.
I’m reminded of a rhetorical device sometimes used to explain the role of virtual particles in quantum field theories. In those theories, two electrons can exchange a virtual photon whose energy is greater than the combined rest energy of the two electrons. Such an event would seem to violate the most fundamental conservation laws. One way of “excusing” the violation is to say that the photon can be allowed to have surplus energy only if the particle’s existence is so fleeting that it can’t possibly be detected. That’s why it’s called “virtual.” This rule of undetectability is enforced by the Heisenberg uncertainty principle, which says that the photon’s allowed lifetime is inversely proportional to its surplus energy.
Maybe we could formulate an analogous rule for the swap or xwap instruction, suspending the second law of thermodynamics provided that we keep our vow never to peek inside the black box. But if that kind of reasoning will hold up to scrutiny, then I know how to build a perpetual-motion machine. All I need is a good lock to keep prying eyes out of the gearbox.
None of the arguments above proves that a swap operation can’t be made fully reversible; they just suggest the awkwardness of trying to build a reversible swap from irreversible assignment statements. The problems should all go away if swap is truly a primitive operation, built into the hardware of a computer rather than emulated in software.
But swapping in hardware—doing it with physics rather than with logic—also raises some brow-furrowing questions. One issue has to do with the dimensionality of space. In one dimension, you can’t swap anything; I guess that’s why the real number line never gets out of order. On a two-dimensional plane, you can swap pointlike objects but not linelike features. To swap lines, you need access to a third dimension; you need to rise above the plane or dive under it, as in a highway overpass/underpass. (Time can also serve as the extra dimension allowing lines to cross without intersecting; this is the principle of the traffic light.) Is there a consistent d+2 law at work here? If we wanted to swap planes, would we need a four-dimensional space? I think so, but I’ve never found a place where I can actually test that proposition.
For building a reversible computer, there should be no need to swap anything more elaborate than lines, which represent wires or other kinds of signal paths. Thus three-dimensional space should provide plenty of room. One of the basic building blocks of reversible computing, called the Fredkin gate, works just this way. The Fredkin gate is a logic device with three inputs and three outputs. The signal on one input determines whether or not the other two signals are swapped. Although the diagram gives only an abstract, logical view of this process, there’s no doubt such a device can be built in our three-dimensional world.
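The gate’s logical behavior is easy to model. Here is a minimal sketch in Python (the function name is mine): a control bit decides whether the other two signals pass straight through or trade places. Because the gate only permutes its inputs, it conserves the number of 1 bits, and applying it twice restores the original inputs, which is the hallmark of reversibility.

```python
def fredkin(c, a, b):
    """Controlled swap: if the control bit c is 1, exchange a and b."""
    return (c, b, a) if c == 1 else (c, a, b)

print(fredkin(0, 1, 0))  # (0, 1, 0) -- control off, signals pass through
print(fredkin(1, 1, 0))  # (1, 0, 1) -- control on, signals swapped

# The gate is its own inverse:
print(fredkin(*fredkin(1, 0, 1)))  # (1, 0, 1)
```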
Looking deeper, though, the problem of having insufficient room to maneuver, and even the need for temporary storage, come back to haunt us again. The Fredkin gate is named for Ed Fredkin, who was one of the pioneers in thinking about reversible computing when he ran the Information Mechanics group at MIT in the 1970s. Over a period of many years Fredkin has been refining and elaborating a theory that envisions the entire universe as a vast computer—and necessarily a reversible one. This idea is well outside the mainstream of modern physics, but it’s surely no harder to swallow than string theory—and it’s no more remote from experimental verification.
Fredkin’s universal computer is a kind of cellular automaton, where three-dimensional space is packed full of tiny cubicles called cells. The cells are very simple objects, which have only a few discrete states, much as bits in the memory of a computer can have only the states 0 or 1. The cells of the cosmic computer interact only with their immediate neighbors, and the only allowed form of interaction is for two adjacent cells to swap states. As a personal matter, I should say that I find Fredkin’s idea deeply intriguing. If I were ever asked to create a universe, a design along these lines is what I would try to come up with. But I have a problem with the swapping. Fredkin’s cellular-automaton universe is what Aristotle would have called a plenum—a space packed solid with stuff, in this case the cells of the automaton. There are no voids between the cells, and nothing exists other than the cells. Which leaves me wondering how the cells perform the little dos-à-dos required to swap states. It’s like trying to switch two bricks deep inside a wall.
One could argue that the cells don’t change places; it’s only their states that are swapped. But at this depth of the ontological abyss, I’m not sure such a distinction makes sense. It’s like arguing about whether or not two electrons might actually be “the same” particle. In any case, whether or not swapping entails the actual movement of something physical, I still have trouble imagining the details of the process. Signals of some kind have to be passed back and forth for the swap to be accomplished. How are the two signals synchronized? How do we ensure that a signal from a to b is always paired with a signal from b to a? Don’t we need an auxiliary variable (or else some version of the xor hack) to make sure no information is lost?
For the record, I have asked questions like these of Fredkin. He is quite unworried by my worries.
Notes and references. Many computers have a built-in swap or exchange instruction. I don’t know how this operation is actually carried out at the hardware level, but I would be surprised if it is anything other than a version of the three-assignment swap procedure defined above, making use of some temporary, scratchpad storage register.
The xor hack has a good Wikipedia entry, including a discussion of a subtle trap: xwap(a, b) fails spectacularly if a and b happen to refer to the same location in memory. As far as I know, the earliest mention of the xor hack is in a 1972 MIT compendium of computer lore known as HAKMEM. The xor hack is item 161 in this list, attributed to Bill Gosper.
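The failure is easy to demonstrate. In this sketch (the helper name is mine), the swap is applied to slots of a list; when the two indices are distinct the exchange succeeds, but when they name the same slot the very first xor computes a ^ a = 0, and the value is destroyed:

```python
def xwap_in_place(arr, i, j):
    """xor-swap arr[i] and arr[j] -- unsafe when i == j."""
    arr[i] ^= arr[j]
    arr[j] ^= arr[i]
    arr[i] ^= arr[j]

data = [5, 7]
xwap_in_place(data, 0, 1)
print(data)   # [7, 5] -- distinct slots swap correctly

data = [5, 7]
xwap_in_place(data, 0, 0)
print(data)   # [0, 7] -- same slot: 5 ^ 5 == 0, and the value is gone
```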
Fredkin describes his cellular-automaton universe—and much else—at the Digital Philosophy web site. I have mentioned some related ideas in an earlier American Scientist column. I also brought up my reservations about swapping as the fundament of physics in an after-dinner talk at a meeting organized by Fredkin (“Debugging the Universe,” International Journal of Theoretical Physics, Vol. 42, No. 2, February 2003, pages 277–295).