Conquering divide

This morning I am enjoying the benefits of jet lag. My watch says it’s 7:30, but the hotel-room clock reads 4:30, so I have a few hours free to lie awake and solve the world’s problems. As a warmup exercise I’m doing mental arithmetic. I’m dividing integers, and trying to figure out how I do it.

I know what got me started on this particular line of predawn predaydreaming. On the flight out here I was reading a paper on division by Eric Allender of Rutgers. Then yesterday I was reminded again of division’s divisiveness when I ran into a friend, Robert L. Constable of Cornell (another refugee from the Eastern time zone, who’s probably awake now too). Constable said something I found provocative—which is a little odd, because he was repeating my own words to me, from an article I wrote a while ago. Paraphrasing Constable paraphrasing me: We should count ourselves lucky that multiplication is a polynomial-time operation.

In the article I went on to elaborate:

If you are given two prime numbers, it’s easy to multiply them, yielding a bigger number as the product. But trying to undo this process—to take the product and recover the two unknown factors—seems to be much more difficult. We have fast algorithms for multiplying but not for factoring. Why is that?

The contrast between multiplying and factoring is indeed fascinating, but maybe not quite as dramatic as that passage suggests. In the first place, we don’t really know that factoring is difficult. At the moment I don’t have an efficient, polynomial-time factoring program installed on my computer, but I can’t prove that you don’t have one. It’s not even true that we have no fast algorithms for factoring. We do know of one such algorithm; it’s just that you need a quantum computer to run it.

In any case, factoring isn’t how I’m passing my time this morning; I’m dividing. The two things are related, of course; either of them might claim to be the inverse of multiplication. (If you stop people on the street and ask, “What’s the opposite of multiplying?” few of them would reply “factoring.”) We can think of division as lazy factoring, or factoring with a head start. Dividing 40321 by 29 looks like an easier task than factoring 40321. But just how easy is it to divide? Specifically, what is the complexity class of division? Is it one of those tough exponential problems, or can we count on doing it in polynomial time, even in the worst case?

Here’s the most elementary division algorithm: Find x ÷ y by counting how many times you can subtract y from x. Stop counting when x is less than y; the count at that point becomes the quotient, and whatever is left of x is the remainder. In other words:

function divide(x,y)
   q = 0
   while x ≥ y
      x = x - y
      q = q + 1
   return (q,x)   //quotient, remainder

The procedure requires exactly q subtraction steps, and so its time complexity is proportional to x / y. This sounds pretty good—a linear-time algorithm. Unfortunately, though, that’s not the proper way to measure the algorithm’s running time. The correct complexity is not a function of x or y or q but a function of the number of bits or digits in these quantities. And that means the repeated-subtraction algorithm is exponential; for an n-bit number, the running time is O(2^n).
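Rendered as runnable Python, with the count made explicit:

```python
def divide(x, y):
    """Divide by repeated subtraction: returns (quotient, remainder).

    The loop runs exactly q = x // y times, so the running time grows
    with the *value* of x -- exponentially in the number of its bits.
    """
    q = 0
    while x >= y:
        x = x - y
        q = q + 1
    return (q, x)

print(divide(40321, 29))  # (1390, 11)
```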

Of course no one really divides by repeated subtraction, even at five o’clock in the morning.

Like most of my contemporaries, I was exposed at a tender age to the algorithm known as long division, or guzzinta. I understand that this technique has now been dropped from the curriculum in many schools, to the disgust of educators who believe it’s immoral for youth to be spared any trauma their parents endured. Far be it from me to shirk my patriotic duty to torture children, but I’m actually glad that long division is being retired from the canon of skills that every educated person is expected to master. Part of my reason is that I never learned the trick very well myself, and I’m hoping to avoid embarrassment. But there’s also the fact that this is the first nontrivial algorithm most of us encounter in life, and it’s a really weird one. It makes for a rough and ragged introduction to the charms of algorithmic reasoning.

Here’s part of a worked example of long division from the Ask Dr. Math section of the Math Forum at Drexel University:

Let's divide 175 by 3:
    3 ) 175
We take the dividend (175) one digit at a time starting 
at the left. If we tried to divide 1 by 3, we would get zero, 
so let's skip that and use the first two digits, 17:
    3 ) 17
What I did was to look through the multiplication 
table for 3 (in my mind), looking for the biggest multiple 
of 3 that is less than 17: 3, 6, 9, 12, 15, 18 - since 18 is 
too big, we use 3 x 5 = 15. 

It’s the bit about looking through the multiplication table in your mind that I find unsettling. Note that what’s involved here is not just a matter of using the familiar 10-by-10 multiplication table “in reverse.” When you multiply decimal numbers, you take each digit of the multiplier and the multiplicand, find the corresponding column and row in the table, and, presto, there’s the partial product at the intersection. Reversing this process would mean using the divisor to select a row in the table, finding the dividend in that row, then reading off the index of the column to determine a digit of the quotient. But, as Dr. Math notes, the dividend is generally not present in the table; you have to search for “the biggest multiple less than….” No such searching (or guessing) step is needed with multiplication.

I have another gripe with the long-division algorithm: It mixes up operations on numbers with operations on digits—on the written representation of numbers. The various multiplication and subtraction steps that enter into the long-division procedure are well-defined operations in arithmetic, but when you come to that business about “bringing down the next digit,” you’re manipulating arbitrary symbols rather than acting on numerical values. How would you express the “bringing down” step in terms of +, – and × operations? If you think this is a frivolous quibble, try writing a computer program to implement the long-division algorithm. At some points in the program you’ll want numbers to be numbers—things you can add, subtract and multiply—but at other points it’s more convenient to represent numbers as lists of digits, which are easy to select and move around but not so easy to combine numerically.
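If you doubt the awkwardness, here is a rough Python rendering of the schoolbook procedure. Note the split personality: the dividend has to be handled as a string of digits, while the running remainder is an ordinary number.

```python
def long_division(dividend, divisor):
    """Schoolbook long division. The dividend is processed digit by
    digit (as a string); the running remainder is a number. The mix
    of the two representations is exactly the gripe above."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):            # "bring down the next digit"
        remainder = remainder * 10 + int(digit)
        # search the multiplication table: largest q with q*divisor <= remainder
        q = 0
        while (q + 1) * divisor <= remainder:
            q += 1
        quotient_digits.append(str(q))
        remainder -= q * divisor
    return int("".join(quotient_digits)), remainder

print(long_division(175, 3))     # (58, 1)
print(long_division(40321, 29))  # (1390, 11)
```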

* * *

Okay. So now it’s six o’clock local time, the horizon is brightening, and I seem to have persuaded myself that division of integers is so awkward as to be all but impossible. And yet…. I have just divided 40321 by 29, getting a quotient of 1390 and a remainder of 11. My algorithm—though I hesitate to glorify it with that term—seems to be a crude mishmash of long division and Newton’s method. I make guesses, do some trial multiplication, and use the sign and magnitude of the error to suggest a better guess. It works, but it’s not pretty, and I wouldn’t want to be teaching it to impressionable youth. There’s a faint whiff of nondeterminism about it, as if I were guessing the answer rather than calculating it.
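That guess-and-correct scramble has a respectable cousin: Newton’s method applied to the reciprocal, iterating r ← r(2 − y·r) until r approximates 1/y, then multiplying by x. Here is a rough Python toy along those lines (an illustration of the technique, not a transcript of anyone’s mental arithmetic):

```python
def newton_divide(x, y, iterations=30):
    """Estimate x / y via Newton's iteration for the reciprocal of y.
    Each step roughly doubles the number of correct digits."""
    r = 1.0 / (2 ** y.bit_length())    # crude initial guess for 1/y
    for _ in range(iterations):
        r = r * (2.0 - y * r)          # Newton step: converges toward 1/y
    q = int(x * r)                     # tentative integer quotient
    while (q + 1) * y <= x:            # correct any rounding slop, just
        q += 1                         # as in the mental version: check
    while q * y > x:                   # the sign of the error and adjust
        q -= 1
    return q, x - q * y

print(newton_divide(40321, 29))  # (1390, 11)
```

The initial guess 1/2^(bits of y) is always below 1/y, which keeps the iteration in its basin of convergence; real floating-point dividers use the same idea with a table-driven first guess.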

Why is division so hard?

At times I feel this is a dumb question, and that the universe doesn’t owe us an answer to such questions. Some things are just harder than others; that’s the way the world works. We can count ourselves lucky that multiplication has a polynomial-time algorithm; we have no right to expect that our luck will hold when we turn to division.

But then I get to thinking about physical systems that multiply and divide, where the two operations seem perfectly symmetrical. An example is an electrical circuit governed by Ohm’s law.

[Circuit diagram illustrating Ohm’s law. Image scarfed from Cornell University, Center for Nanoscale Systems, Institute for Physics Teachers.]

If we attach the terminals of a voltmeter to opposite ends of a resistor, we observe Ohm’s law in the form E = IR: Voltage is equal to the product of current and resistance. If we install an ammeter in series with the resistor, we see the effects of the law I = E/R: Current is equal to voltage divided by resistance. If you prefer pipes to wires, you can do the hydraulic version of the experiment, and there are also lots of mechanical schemes, where the same principle is illustrated with levers, gears, pulleys and such. In all of these systems, nature seems to divide just as easily as it multiplies. Why can’t we do the same with pencil and paper, or transistors, or neurons?

Much of theoretical computer science is given over to questions just like these. Given a certain model of computation, how many operations are needed to solve a problem? If we were given some extra resource (randomness, perhaps, or quantum superposition), would that make the job any easier? Can we reduce one problem to another and thereby solve both of them with the same method?

In the case of division, the answers are in. Specifically, they’re in the paper by Eric Allender that got me started on this strange morning’s meditation. The conclusions are simply stated. Forget about those whiffs of nondeterminism; division, along with the other three arithmetic operations, is a member in good standing of the familiar and friendly complexity class P, made up of deterministic polynomial-time problems. Indeed, all four operations belong to a subclass of P called L, which consists of just those problems that can be solved by a deterministic algorithm in polynomial time and with a logarithmic amount of memory (or, again, with memory proportional to the number of digits in the input). [Correction: As Ryan Williams and Jonathan Katz point out in the comments: a logspace solution has to fit in an amount of memory proportional to the logarithm of the number of digits.] Allender narrows down the complexity of division still further: It belongs to a class called DLOGTIME-uniform TC0. If you want to know just what that means, I must send you off to Allender’s paper (PDF).

Putting multiplication and division in the same complexity class doesn’t mean they’re exactly equal in difficulty. Complexity theory adopts the amusing practice of ignoring all constant factors and all terms other than the fastest-growing one, so that a running time of, say, 106n2 + 1000n is considered the same as n2. But in the case of multiplication and division this approximation is reasonably close to reality.

Most of the results on the complexity of division are quite recent. People have been dividing for millennia, but Allender’s paper is titled “The Division Breakthroughs,” and it describes results published within the past decade. For most of the history of Homo calculensis, we didn’t know how good we had it.

Should I be surprised to wake up in the morning and find out that division isn’t as hard as it seems? On the one hand, it’s common sense that division can’t be really, really hard, because we do it all the time, getting answers promptly and reliably. On the other hand, division algorithms are so ugly we have to hide them from the children. I don’t know what to think. My mind is divided.

Update 2007-10-11. In the comments, Jonathan Katz asks a very good question: How is division done on real computers? I wish I could give a detailed answer. Here’s what little I know.

Modern general-purpose processors come with built-in hardware for floating-point division. The most popular method seems to be an algorithm called SRT division, after D. Sweeney of IBM, J. E. Robertson of the University of Illinois and K. D. Tocher of Imperial College London, who independently came up with variations on the same theme in the 1950s. The underlying idea is not fundamentally different from long division, but there are a couple of tricks and tweaks to speed the process. Although the numbers are represented in binary, the calculation is usually done in radix 4, so that the quotient is calculated two bits at a time. Moreover, the quotient digits are not 0, 1, 2, 3 but –2, –1, 0, 1, 2; this choice of digits has the advantage that the multiplication steps in the algorithm can be implemented as simple shifts and negations. But with five possible quotient digits in radix 4, there is some ambiguity or redundancy: Some numbers can be represented in more than one way. This has to be resolved, and the answer converted to conventional notation, at the end of the operation.

What I find particularly intriguing is that the “whiff of nondeterminism” I thought I smelled in schoolbook division also turns up here. David L. Harris, Stuart F. Oberman and Mark A. Horowitz explain:

The key idea of SRT division is to guess the quotient digit based on a few of the most significant divisor and partial remainder bits, rather than computing it exactly. As long as the guess is close enough, the algorithm can correct on subsequent steps using redundant quotient digits.
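To get a feel for how redundant digits forgive a bad guess, here is a toy Python version of radix-4 division with the digit set {–2, –1, 0, 1, 2}. It is emphatically not real SRT (it computes each digit by rounding rather than guessing from a few leading bits), but it shows the signed-remainder bookkeeping and the final conversion to conventional form:

```python
def redundant_digit_divide(x, y):
    """Toy radix-4 division with redundant quotient digits {-2..2}.
    Because digits may be negative, the partial remainder can swing
    negative too, and a later digit (or the final fix-up) repairs it."""
    steps = x.bit_length() // 2 + 1    # enough radix-4 digit positions
    remainder, quotient = x, 0
    for i in range(steps - 1, -1, -1):
        scale = y * 4**i
        q = (2 * remainder + scale) // (2 * scale)  # round(remainder/scale)
        q = max(-2, min(2, q))                      # clamp to the digit set
        remainder -= q * scale
        quotient += q * 4**i
    if remainder < 0:          # convert the redundant result to standard form
        quotient -= 1
        remainder += y
    return quotient, remainder

print(redundant_digit_divide(40321, 29))  # (1390, 11)
```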

This entry was posted in computing, mathematics.

17 Responses to Conquering divide

  1. Jonathan Katz says:

    Just thought you’d be interested to know: the best paper award at the last STOC (one of the top conferences for theoretical computer science research) showed an improvement in the time complexity of multiplication: from O(n log n log log n) to something slightly larger than O(n log n).

    That paper is available here

  2. Derek says:

    AFAIK, complexity classes don’t say how hard an algorithm is, only how much it slows as the inputs get bigger. In other words, division may inherently be slower than multiplication.

  3. Jonathan Katz says:

    This post leads me to wonder: how is division done on real computers? (I.e., how is division handled by a C compiler or, better yet, how is division handled by, e.g., the 80x86 instruction set?)

  4. Barry Cipra says:

    I must be missing something in the definitions, because it seems to me there’s a fairly simple logspace algorithm for division. Here’s my proposal. First, multiply the divisor by 10 — i.e., put zeros at its end — as long as the result stays less than (or equal to) the dividend. Then subtract, increment the appropriate digit of the quotient by 1, and repeat. Each step is clearly polynomial (the “hardest” part is the subtraction), and the number of repetitions is clearly no more than 9 times the number of digits in the dividend (the worst case occurs when you divide 9999…99 by 1), so everything overall is polynomial. What makes it logspace — if I understand what logspace means (which is probably the sticking point) — is that after you’ve done the subtraction, you can forget the original number.

    For example, dividing 29 into 40321 (your example) proceeds as follows, with each step written in a separate line:

    40321-29000=11321, Q=(1000)
    11321-2900=8421, Q=(1100)
    8421-2900=5521, Q=(1200)
    5521-2900=2621, Q=(1300)
    2621-290=2331, Q=(1310)
    2331-290=2041, Q=(1320)
    2041-290=1751, Q=(1330)
    1751-290=1461, Q=(1340)
    1461-290=1171, Q=(1350)
    1171-290=881, Q=(1360)
    881-290=591, Q=(1370)
    591-290=301, Q=(1380)
    301-290=11, Q=(1390), R=11

    The logspaciousness resides in the fact that once you’ve completed a step, you can erase everything in the step above it — think of it as alternating between two blackboards.
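    Here’s the procedure as a Python sketch, keeping only the current remainder and the partial quotient between steps:

```python
def shift_subtract_divide(dividend, divisor):
    """Divide by repeatedly subtracting the largest power-of-ten
    multiple of the divisor that still fits, as described above."""
    quotient = 0
    while dividend >= divisor:
        shifted, step = divisor, 1
        while shifted * 10 <= dividend:   # append zeros while it fits
            shifted *= 10
            step *= 10
        dividend -= shifted               # subtract
        quotient += step                  # bump the appropriate digit
    return quotient, dividend

print(shift_subtract_divide(40321, 29))  # (1390, 11)
```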

    So what am I missing?

  5. brian says:

    Barry: I’m sure you’re not missing anything. As noted above, division is indeed in P (the polynomial-time complexity class) and in L (the logspace complexity class). What am *I* missing?

  6. Mitch says:

    Re: the ‘real’-world symmetry of multiplication/division vs the asymmetry in numbers…real life is like the continuous reals, where multiplication and division -are- symmetric. But in discrete integers, all pairs of numbers multiply to give a single integer, but not so with division.

  7. Suresh says:

    I think the point of Allender’s article is that we want to find the “natural” complexity of division. In the absence of true lower bounds, the next best thing is to find the complexity class that division is “complete” for, which basically means that division is an example of the hardest problem in this class. His paper describes the final nailing of this, with the result that division is complete for a circuit class called TC^0 with a uniformity condition (see the complexity zoo entry on TC^0 for more information).

  8. Barry: Your algorithm runs in polynomial space and exponential time, not logspace and polynomial time. The input to the division problem is two n-bit integers. In the worst case, the integers you keep around in your algorithm (especially at the beginning of its execution) have at least Omega(n) bits, not O(log n) bits.

  9. Whoops, ignore the “exponential time” statement in my post; I conflated your algorithm with Brian’s. Your algorithm DOES run in polynomial time, but it uses polynomial space as well.

  10. Barry Cipra says:

    It seems to me what’s lacking is a clear definition of “logspace.” I was going by Brian’s parenthetical description, “memory proportional to the number of digits in the input.” The algorithm I outlined certainly fits that bill.

  11. A logspace algorithm would use “memory proportional to the logarithm of the number of digits in the input”. (We assume the input is on a read-only storage and the final output is written on a write-only storage, so those digits are not counted as part of the working memory.)

    For example, if you want to compute the digit that occurs the most often in a given input number (to break ties, let’s say you want the smallest digit with this property), this can be done in logspace by keeping 10 binary counters (one for each digit) and incrementing the appropriate counter as you read the input from left to right. Each counter requires O(log n) bits to store, where n is the number of digits.
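    In Python, that digit-counting sketch might look like this:

```python
def most_frequent_digit(number_string):
    """Return the smallest digit that occurs most often. Only the ten
    counters are kept as working memory -- O(log n) bits each for an
    n-digit input read left to right."""
    counts = [0] * 10
    for ch in number_string:       # one pass over the (read-only) input
        counts[int(ch)] += 1
    best = max(counts)
    return min(d for d in range(10) if counts[d] == best)

print(most_frequent_digit("40321"))  # each of 0..4 appears once -> 0
```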

  12. Jonathan Katz says:

    Yes, a logspace algorithm uses space that is logarithmic in its input length, which in this case is the number of digits n.

  13. Carlos says:

    I never had a problem with the long division algorithm once I understood it exactly as the algorithm in Barry Cipra’s comment above: add a bunch of zeros to the divisor and the quotient and start subtracting away. When you can’t any longer, shift right and repeat.

    (I also like the connection with CRC algorithms, GF(2) and XOR gates. CRC hardware really does perform long division in the end, if I remember things right.)

  14. Barry Cipra says:

    Ryan, thanks for the clarification.

  15. Toby says:

    IANAM, but:

    “No such searching (or guessing) step is needed with multiplication.”

    Is it not the case that both algorithms benefit *equally* from a lookup table - specifically the so-called “times tables” that we learned by rote as children? Or are you suggesting (absent a table) that children do the sub-multiplies by repeated addition? :-)

  16. Barry Cipra says:

    “this is the first nontrivial algorithm most of us encounter in life, and it’s a really weird one.”

    Speaking of weird algorithms, does anyone else (in the appropriate, geriatric age bracket) recall learning a long-hand algorithm for taking square roots? I remember doing so back in the early 1960s. The mechanics looked a lot like long division, except you had to guess successive digits of the divisor along with the quotient (which of course are supposed to be equal). As in long division, the algorithm could keep churning out decimal digits.

    Is computing the (integer part of the) square root also in Logspace?

  17. brian says:

    Toby: IANAM either, and perhaps that’s why I don’t understand your question. I assert there’s an asymmetry: A standard “times table” supplies a product for every combination of multiplier and multiplicand in a certain range, but the same table does not provide (by a simple lookup) the quotient for every combination of divisor and dividend. No?