Reversing History

by Brian Hayes

Published 9 April 2006

Several posts here (1, 2, 3, 4) as well as my March–April column in American Scientist have looked at technologies for building reversible computing machines. Most of the discussion focuses on “combinational” logic—networks of devices that have no feedback loops or memory elements. In a purely combinational circuit, the state of the outputs is determined entirely by the current state of the inputs. But real computers are not made up of purely combinational circuits. They have memory, and so the state of the outputs depends not just on the current inputs but also on earlier inputs. What happens to such history-dependent computational machinery when you try to throw it into reverse?
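
To see the distinction in miniature, here is a small Python sketch (my own toy example, not anything from the posts or the column): an XOR gate stands in for combinational logic, and a one-bit parity machine for sequential logic.

```python
# Combinational: the output is a pure function of the current inputs.
def xor_gate(a: int, b: int) -> int:
    return a ^ b

# Sequential: an internal memory element makes the output depend on
# every input the machine has seen, not just the current one.
class ParityMachine:
    def __init__(self) -> None:
        self.state = 0  # the memory element

    def step(self, bit: int) -> int:
        self.state ^= bit  # flip whenever a 1 arrives
        return self.state

m = ParityMachine()
print([m.step(b) for b in (1, 0, 1, 1)])  # -> [1, 1, 0, 1]
# The same current input (a 1, say) yields a 0 or a 1 depending on the
# history of earlier inputs, a behavior no combinational network can
# reproduce.
```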

Peter G. Neumann of SRI International points out that this question was addressed by David A. Huffman of MIT almost 50 years ago, long before the better-known work on reversibility that I mentioned in my column. Huffman’s analysis was formulated in terms of finite-state machines, which are also called sequential machines because they pass through a sequence of states in response to a series of inputs. For example, one simple sequential logic machine has two states called odd and even, and it alternates between them each time an input arrives. Huffman considered a subset of sequential machines that are “information lossless,” never discarding information as signals pass through the logic devices. For combinational circuits, “lossless” and “reversible” are equivalent concepts; if no information is thrown away, then the outputs determine the inputs just as much as the inputs dictate the outputs. For sequential devices, however, the situation is more complicated. Knowing the outputs of the machine is not always enough to recover the inputs, because information may also be sequestered in memory elements or the internal states of the machine. Hence there are lossless sequential circuits for which we cannot build an inverse circuit, because it would need to fetch information from the future in order to reconstruct the past. Nevertheless, Huffman showed there is always a procedure for reversing a lossless sequential machine, given enough knowledge of the internal states.
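
A toy model makes the difficulty concrete. Here is a minimal Python sketch (my own construction, not Huffman’s notation) of the simplest troublesome case, a unit-delay machine: it discards nothing, yet each output is the previous input, so recovering the input at step t requires the output at step t+1.

```python
# A unit-delay machine: output = previous input; the current input is
# parked in the internal state until the next step. (A toy example of
# my own, not taken from Huffman's paper.)
class UnitDelay:
    def __init__(self, initial: int = 0):
        self.state = initial

    def step(self, bit: int) -> int:
        out, self.state = self.state, bit
        return out

m = UnitDelay()
outputs = [m.step(b) for b in (1, 0, 1, 1)]
print(outputs)   # -> [0, 1, 0, 1]: each input reappears one step late
print(m.state)   # -> 1: the final input, still held in memory

# Nothing is lost: the output sequence plus the final state give the
# inputs back exactly.
recovered = outputs[1:] + [m.state]
print(recovered)  # -> [1, 0, 1, 1]

# But a strictly causal inverse circuit is impossible: to emit the
# input for step t it would need the output of step t+1. An inverse
# that runs one step behind works fine, which is the flavor of the
# bounded-delay inverses Huffman constructs for machines that are
# lossless of finite order.
```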

Huffman’s account of these ideas is worth revisiting after all these years. Here’s the reference: Huffman, David A. 1959. Canonical forms for information-lossless finite-state logical machines. IRE Transactions on Circuit Theory CT-6 (special supplement) and IRE Transactions on Information Theory IT-5 (special supplement): 41–59. The paper was reprinted in a volume that may be easier to track down, and that includes a number of other notable contributions to automata theory: Sequential Machines: Selected Papers, edited by Edward F. Moore, Reading, Mass.: Addison-Wesley, 1964.

One followup to Huffman’s work was a paper by Neumann himself, adapting the lossless sequential machine to the needs of error-correcting coding: Neumann, P. G. 1964. Error-limiting coding using information-lossless sequential machines. IEEE Transactions on Information Theory IT-10:108–115.

