It’s time for my bimonthly self-promotional horn toot. The new issue of *American Scientist* is now available online, and paper copies should soon be stuffing mailboxes everywhere. My “Computing Science” column is on a new class of “holographic” algorithms invented a few years ago by Leslie G. Valiant of Harvard. The ideas have been further developed by Jin-Yi Cai of the University of Wisconsin (who is currently visiting Harvard as a Radcliffe Fellow) and several students and colleagues both at Wisconsin and in China.

If you want to know what holographic algorithms are all about, you’ll have to go read the column. I’m not going to rehash it here. Instead, I’m going to recycle a few paragraphs of a version of the column that might have been but never will be. This was a false start, an attempt at a lead that wound up leading me in the wrong direction:

It was the final day of a workshop, and I joined a group for dinner at a farmhouse restaurant in the village of Muggia, around the corner from Trieste, where Italy meets Istria. As the evening was winding down, we sat drinking the last of the wine, watching the sun sink into the Adriatic, and debating the P = NP question. The debate was somewhat lopsided: No one at the table was willing to defend the proposition that P is indeed equal to NP. But there was plenty of room for argument between hard-liners who insisted that P is most certainly *not* equal to NP and those who took a more agnostic view.

What are P and NP? They are classes of computational tasks or problems. Roughly speaking (I’ll try to speak less roughly in a moment), the class P includes all the things we know how to compute quickly. NP is a larger class, encompassing all the problems known to be in P and many others besides; for these additional tasks we don’t have efficient algorithms, and yet no one has proved that such algorithms don’t exist. If it should turn out that P = NP, then thousands of seemingly hard problems actually have some quick shortcut solution, if only we could find the secret.
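The asymmetry at the heart of the P vs. NP question can be made concrete with a small illustration (my own sketch, not from the column): for SUBSET SUM, an NP-complete problem, *verifying* a proposed answer takes polynomial time, while the only general method we know for *finding* one examines exponentially many candidate subsets.

```python
from itertools import combinations

def verify(numbers, target, subset):
    """Polynomial-time check: is the proposed subset drawn from the
    input, and does it sum to the target?"""
    return all(x in numbers for x in subset) and sum(subset) == target

def search(numbers, target):
    """Brute force: try all 2^n subsets. No essentially faster
    general method is known -- and if P = NP, one would exist."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

sol = search([3, 34, 4, 12, 5, 2], 9)
print(sol)  # → [4, 5]
```

The `verify` function is the sense in which NP problems are "easy to check"; the exponential loop in `search` is what an efficient algorithm would have to avoid.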

A mathematician at my end of the table took an inflexible position: P = NP is not only wrong but unthinkable—a notion akin to time travel or perpetual motion. I had been lurking silently through most of this conversation, but at this point I spoke up. Time travel seems implausible, I said, because it would undermine causality, and perpetual motion would violate the conservation of energy. What bedrock principle would be overturned by the discovery that P = NP? The mathematician replied: “Do you believe in miracles? No? Well, wouldn’t it be miraculous if we lived in a universe where every last one of those NP problems has an easy solution?”

This was good enough for me on that mild summer night in Muggia. The fact is, however, the miracle argument cuts both ways. There’s a subclass of NP problems, designated NP-complete, with a special property: If there’s an efficient algorithm for any one of these tasks, the same method can be used to solve *all* NP problems efficiently. Thus for P to be different from NP requires the “miracle” that not even one out of thousands of NP-complete problems has a quick solution….
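The reductions that link NP-complete problems to one another can be seen in miniature (again my own illustrative sketch, not material from the column). Below is the textbook translation of a 3-SAT formula into an independent-set question: one vertex per literal occurrence, a triangle inside each clause, and an edge between complementary literals. The formula is satisfiable exactly when the graph has an independent set with one vertex per clause, so a fast independent-set algorithm would immediately give a fast 3-SAT algorithm.

```python
from itertools import combinations

def sat_to_independent_set(clauses):
    """Reduce a 3-CNF formula to an independent-set instance.
    A literal is an int: v means the variable, -v its negation."""
    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    edges = set()
    for u, v in combinations(range(len(vertices)), 2):
        (ci, li), (cj, lj) = vertices[u], vertices[v]
        if ci == cj or li == -lj:   # same clause, or complementary literals
            edges.add((u, v))
    return vertices, edges

def has_independent_set(n_vertices, edges, k):
    """Brute-force check for an independent set of size k."""
    for cand in combinations(range(n_vertices), k):
        chosen = set(cand)
        if not any(u in chosen and v in chosen for u, v in edges):
            return True
    return False

# (x1 or x2 or x3) and (not-x1 or x2 or x3) is satisfiable,
# so the graph has an independent set of size 2 (one per clause).
V, E = sat_to_independent_set([[1, 2, 3], [-1, 2, 3]])
print(has_independent_set(len(V), E, 2))  # → True
```

This is the precise sense in which NP-complete problems are "equivalent": a solver for either side of the reduction can be turned into a solver for the other with only polynomial overhead.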

One more note on the column. This is difficult subject matter, and my grasp of it is tenuous. I could not have written the piece without a lot of help and hand-holding from Valiant and Cai, who were reading drafts and answering phone calls up until the very moment the issue went to press. I have a reason for thanking them here rather than in the column itself: If I’ve goofed in spite of their efforts, I don’t want them implicated.

I really like the philosophical discussions that arise when one speculates that P ≠ NP might be tied to physical constraints on computing in our universe. I strongly recommend an article by quantum-computing researcher Scott Aaronson on this very subject: “NP-complete Problems and Physical Reality.” He presents several proposals for polynomial-time computing of NP-hard problems, and ends by proposing that yes, P ≠ NP should be taken as a physical law akin to the laws of thermodynamics. It’s an entertaining read.

The P vs NP question has long been settled – in both ways! See

http://www.win.tue.nl/~gwoegi/P-versus-NP.htm

A world (like ours?) in which P is not equal to NP is interesting and challenging, because in this world finding solutions (i.e., doing science!) requires ingenuity, inspiration, and a bit of luck. Verifying a given solution is easy and can be done by a machine or a PhD student.

If P = NP, both tasks could be done by a computer, at least if the proof of P = NP comes with an efficient algorithm for an NP-complete problem. This had already been noted by Kurt Gödel in a letter to John von Neumann in 1956; see

http://www.andrew.cmu.edu/user/hardt/godel.html

The worst case, however, would be a non-constructive proof of P = NP…

“If there’s an efficient algorithm for any one of these tasks, the same method can be used to solve all NP problems efficiently. Thus for P to be different from NP requires the ‘miracle’ that not even one out of thousands of NP-complete problems has a quick solution…”

This is a terrible argument. The whole point is that all these problems are (polynomially) equivalent. So you should really say, “Thus for P to be different from NP requires the ‘miracle’ that not even one out of thousands of *equivalent* NP-complete problems has a quick solution…” If you like, I can give you billions more equivalent problems—would that make P=NP more likely to you?

The fact that all these problems are equivalent is *not* any sort of evidence, even intuitive, that P = NP. Rather, they are evidence that the notion of poly-time reducibility and the class NP are useful elements in studying complexity.

In reply to Anonymous: As I see it, the main argument against P=NP is that we know lots and lots of NP-complete problems, and we’ve worked hard looking for polynomial-time solutions to many of them, and we’ve come up empty. It’s the very multiplicity and diversity of these problems – and the fact that they’ve *all* resisted our best efforts – that gives people confidence they really are intractable problems. (The introductory essay in Garey and Johnson expresses this idea vividly.)

Yes, all NP-complete problems are linked by reductions, and so in some deep sense they are “equivalent.” But the “equivalent” problems are not all “the same” problem; they are still separate and highly diverse, and that’s what’s crucially important. In attacking some NP-complete problems, we can use specialized tools from graph theory; for other problems there are methods in linear algebra; for still others we can draw on number theory, or even physics. When I say that the “miracle” argument cuts both ways, what I mean is that the thousands of NP-complete problems can be tackled in thousands of different ways; only one of those approaches has to succeed, but so far not one has.

Of course if you already know beyond all doubt that P is not equal to NP, then it’s no miracle or even a surprise that all those techniques fail simultaneously. But that’s putting the cart before the horse.

This whole discussion is very much a repetition of that evening in Muggia, though without the wine and the sunset. Who knows – maybe “Anonymous” is the same mathematician I was sparring with there.

I read your article on holographic algorithms; you are to be commended for taking on such a technical and deep topic!

As for P v NP, I am of the agnostic group: that we do not yet understand how fast problems can really be solved. To give an example, there is an area straddling complexity and algorithms that is often called “exact algorithms” or “exponential algorithms” (I call them “accelerated algorithms”). These algorithms improve on the runtime of brute-force search in a substantial way.

Suppose the number of possible solutions for your favorite NP problem is N, so that brute force search takes O(N*n^c) time for some c > 1. An “accelerated algorithm” for the problem would run in O(N^e*n^c) time, for some constant e<1.

Many NP problems exhibit surprising accelerated algorithms. For example, there is an algorithm for 3-SAT that runs in O(n^c*(4/3)^n) time, where n is the number of variables (this is due to Schöning). New accelerated algorithms for problems are being found every year. While they remain exponential, it is still intellectually interesting that many NP problems can avoid brute-force enumeration, and they make one wonder how far such algorithms can be pushed. We believe that most of the current ideas behind these algorithms (e.g. backtracking, local search, dynamic programming) can only be pushed so far, but it is not clear how to measure this in a general sense.
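For concreteness, Schöning’s random-walk idea can be sketched in a few lines (an illustrative implementation of the scheme, not a tuned solver): pick a random assignment, and while some clause is unsatisfied, flip a random variable from an unsatisfied clause; restart after about 3n flips. Repeating this gives the (4/3)^n expected running time.

```python
import random

def schoening_3sat(clauses, n_vars, tries=200):
    """Randomized local search in the spirit of Schoening's algorithm.
    Variables are 1..n_vars; a literal -v means "not v"."""
    def satisfied(clause, assign):
        return any((lit > 0) == assign[abs(lit)] for lit in clause)

    for _ in range(tries):
        # Fresh uniformly random assignment for each restart.
        assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(3 * n_vars):
            unsat = [c for c in clauses if not satisfied(c, assign)]
            if not unsat:
                return assign   # satisfying assignment found
            # Flip a random variable from a random unsatisfied clause.
            lit = random.choice(random.choice(unsat))
            assign[abs(lit)] = not assign[abs(lit)]
    return None                 # gave up; formula may be unsatisfiable

a = schoening_3sat([[1, 2, 3], [-1, 2, 3], [1, -2, 3]], 3)
```

The point of the analysis is that each random walk has success probability roughly (3/4)^n, so about (4/3)^n restarts suffice in expectation – better than the 2^n of brute-force enumeration, though still exponential.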

To show that P=NP, take the Hamiltonian circuit problem mentioned in “Accidental Algorithms”. Code it up using Cook’s Boolean scheme. Then switch vertex and edge to get the Eulerian circuit problem, and use Euler’s method to solve all the NP-complete problems.