Number Factoids

The web sites numbersaplenty.com and numberworld.info dish up a smorgasbord of facts about every natural number from 1 to 999,999,999,999,999. Type in your favorite positive integer (provided it’s less than \(10^{15}\)) and you’ll get a list of prime factors, a list of divisors, the number’s representation in various bases, its square root, and lots more.

I first stumbled upon these sites (and several others like them) about a year ago. I revisited them recently while putting together the Carnival of Mathematics. I was looking for something cute to say about the new calendar year and was rewarded with the discovery that 2016 is a triangular number: 2016 bowling pins can be arranged in an equilateral triangle with 63 pins per side.

This incident set me to thinking: What does it take to build a web site like these? Clearly, the sites do not have 999,999,999,999,999 HTML files sitting on a disk drive waiting to be served up when a visitor arrives. Everything must be computed on the fly, in response to a query. And it all has to be done in milliseconds. The question that particularly intrigued me was how the programs recognize that a given number has certain properties or is a member of a certain class—a triangular number, a square, a Fibonacci, a factorial, and so on.

I thought the best way to satisfy my curiosity would be to build a toy number site of my own. Here it is:

[Interactive Number Factoids calculator: enter a positive integer less than \(10^{15}\) to see its prime factors and whether it is a prime, square-free, square-root-smooth, square, triangular, factorial, Fibonacci, Catalan, or Somos-4 number, along with the elapsed time for the computation.]

This one works a little differently from the number sites I’ve found on the web. The computation is done not on my server but on your computer. When you type a number into the input field above, a JavaScript program running in your web browser computes the prime factors of the number and checks off various other properties. (The source code for this program is available on GitHub, and there’s also a standalone version of the Number Factoids calculator.)

Because the computation is being done by your computer, the performance depends on what hardware and software you bring to the task. Especially important is the JavaScript engine in your browser. As a benchmark, you might try entering the number 999,999,999,999,989, which is the largest prime less than \(10^{15}\). The elapsed time for the computation will be shown at the bottom of the panel. On my laptop, current versions of Chrome, Firefox, Safari, and Opera give running times in the range of 150 to 200 milliseconds. (But an antique iPad takes almost 8 seconds.)

Most of that time is spent in factoring the integer (or attempting to factor it in the case of a prime). Factoring is reputed to be a hard problem, and so you might suppose it would make this whole project infeasible. But the factoring computation bogs down only with really big numbers—and a quadrillion just isn’t that big anymore. Even a crude trial-division algorithm can do the job. In the worst case we need to try dividing by all the odd numbers less than \(\sqrt{10^{15}}\). That means the inner loop runs about 16 million times—a mere blink of the eye.
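Here is roughly what such a routine might look like in JavaScript (a minimal sketch of trial division, not necessarily the code that runs in the Factoids calculator):

function factorize(N) {
  // list the prime factors of N (with repetition), smallest first
  var factors = [];
  var n = N;
  while (n % 2 === 0) {
    factors.push(2);
    n = n / 2;
  }
  var d = 3;
  while (d * d <= n) {
    while (n % d === 0) {
      factors.push(d);
      n = n / d;
    }
    d += 2;
  }
  if (n > 1) factors.push(n);   // whatever remains after the loop is prime
  return factors;
}

For example, factorize(2016) returns [2, 2, 2, 2, 2, 3, 3, 7].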

Once we have the list of prime factors for a number N, other properties come along almost for free. Primality: We can tell whether or not N is prime just by looking at the length of the factor list. Square-freeness: N is square-free if no prime appears more than once in the list. Smoothness: N is said to be square-root smooth if the largest prime factor is no greater than \(\sqrt{N}\). (For example, \(12 = 2 \times 2 \times 3\) is square-root smooth, but \(20 = 2 \times 2 \times 5\) is not.)
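In code, the predicates might look something like this (again a sketch of my own; each function expects the factor list produced by a routine like the one above):

function isPrime(factors) {
  return factors.length === 1;             // the only prime factor is N itself
}

function isSquareFree(factors) {
  for (var i = 1; i < factors.length; i++) {
    if (factors[i] === factors[i - 1]) return false;   // a repeated prime
  }
  return true;
}

function isSqrtSmooth(N, factors) {
  if (factors.length === 0) return false;  // N = 1 has no prime factors; count it as not smooth here
  var largest = factors[factors.length - 1];
  return largest * largest <= N;           // largest prime factor no greater than sqrt(N)
}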

The factor list could also be used to detect square numbers. N is a perfect square if every prime factor appears in the list an even number of times. But there are lots of other ways to detect squares that don’t require factorization. Indeed, running on a machine that has a built-in square-rooter, the JavaScript code for recognizing perfect squares can be as simple as this:

function isSquare(N) {
  var root = Math.floor(Math.sqrt(N));
  return root * root === N;
}

If you want to test this code in the Number Factoids calculator, you might start with 999,999,961,946,176, which is the largest perfect square less than \(10^{15}\).

Note that the isSquare function is a predicate: The return statement in the last line yields a boolean value, either true or false. The program might well be more useful if it could report not only that 121 is a square but also what it’s the square of. But the Number Factoids program is just a proof of concept, so I have stuck to yes-or-no questions.

Metafactoids: The Factoids calculator tests nine boolean properties. No number can possess all of these properties, but 1 gets seven green checkmarks. Can any other number equal this score? The sequence of numbers that exhibit none of the nine properties begins 20, 44, 52, 68, 76, 88, 92,… Are they all even numbers? (Hover for answer.)


What about detecting triangular numbers? N is triangular if it is the sum of all the integers from 1 through k for some integer k. For example, \(2016 = 1 + 2 + 3 + \dots + 63\). Given k, it’s easy enough to find the kth triangular number, but we want to work in the opposite direction: Given N, we want to find out if there is a corresponding k such that \(1 + 2 + 3 + \cdots + k = N\).

Young Carl Friedrich Gauss knew a shortcut for calculating the sums of consecutive integers: \(1 + 2 + 3 + \cdots + k = N = k\,(k+1)\,/\,2\). We need to invert this formula, solving for the value of k that yields a specified N (if there is one). Rearranging the equation gives \(k^2 + k - 2N = 0\), and then we can crank up the trusty old quadratic formula to get this solution:

\[k = \frac{-1 \pm \sqrt{1 + 8N}}{2}.\]

Thus k is an integer—and N is triangular—if and only if \(8N + 1\) is an odd perfect square. (Let’s ignore the negative root, and note that if \(8N + 1\) is a square at all, it will be an odd one.) Detecting perfect squares is a problem we’ve already solved, so the predicate for detecting triangular numbers takes this simple form:

function isTriangular(N) {
  return isSquare(8 * N + 1);
}

Try testing it with 999,999,997,764,120, the largest triangular number less than \(10^{15}\).


Factorials are the multiplicative analogues of triangular numbers: If N is the kth factorial, then \(N = k! = 1 \times 2 \times 3 \times \cdots \times k\). Is there a multiplicative trick that generates factorials in the same way that Gauss’s shortcut generates triangulars? Well, there’s Stirling’s approximation:

\[k! \approx \sqrt{2 \pi k} \left( \frac{k}{e} \right)^k.\]

We might try to invert this formula to get a function of \(k!\) whose value is \(k\), but I don’t believe this is a promising avenue to explore. The reason is that Stirling’s formula is only an approximation. It predicts, for example, that 5! is equal to 118.02, whereas the true value is 120. Thus taking the output of the inverse function and rounding to the nearest integer would produce wrong answers. We could add correction terms to get a closer approximation—but surely there’s a better way.

One approach is to work with the gamma (\(\Gamma\)) function, which extends the concept of a factorial from the integers to the real and complex numbers; if \(n\) is an integer, then \(\Gamma(n+1) = n!\), but the \(\Gamma\) function also interpolates between the factorial values. A recent paper by Mitsuru Uchiyama gives an explicit, analytic, inverse of the gamma function, but I understand only fragments of the mathematics, and I don’t know how to implement it algorithmically.

Fifteen years ago David W. Cantrell came up with another inverse of the gamma function, although this one is only approximate. Cantrell’s version is much less intimidating, and it is based on one of my favorite mathematical gadgets, the Lambert W function. A Mathematica implementation of Cantrell’s idea works as advertised—when it is given the kth factorial number as input, it returns a real number very close to \(k+1\). However, the approximation is not good enough to distinguish true factorials from nearby numbers. Besides, JavaScript doesn’t come with a built-in Lambert W function, and I am loath to try writing my own.

On the whole, it seems better to retreat from all this higher mathematics and go back to the definition of the factorial as a product of successive integers. Then we can reliably detect factorials with a simple linear search, expressed in the following JavaScript function:

function isFactorial(N) {
  var d = 2, q = N, r = 0;
  while (q > 1 && r === 0) {
    r = q % d;
    q = q / d;
    d += 1;
  }
  return (q === 1 && r === 0);
}

A factorial is built by repeated multiplication, so this algorithm takes it apart by repeated division. Initially, we set \(q = N\) and \(d = 2\). Then we replace \(q\) by \(q / d\), and \(d\) by \(d + 1\), while keeping track of the remainder \(r = q \bmod d\). If we can continue dividing until \(q\) is equal to 1, and the remainder of every division is 0, then N is a factorial. This is not a closed-form solution; it requires a loop. On the other hand, the largest factorial less than \(10^{15}\) is 17! = 355,687,428,096,000, so the program won’t be going around the loop more than 17 times.


The Fibonacci numbers are dear to the hearts of number nuts everywhere (including me). The sequence is defined by the recursion \(F_0 = 0, F_1 = 1, F_k = F_{k-1} + F_{k-2}\). How best to recognize these numbers? There is a remarkable closed-form formula, named for the French mathematician J. P. M. Binet:

\[F_k = \frac{1}{\sqrt{5}} \left[ \left(\frac{1 + \sqrt{5}}{2}\right)^k - \left(\frac{1 - \sqrt{5}}{2}\right)^k\right]\]

I call it remarkable because, unlike Stirling’s approximation for factorials, this is an exact formula; if you give it an integer k and an exact value of \(\sqrt{5}\), it returns the kth Fibonacci number as an integer.

One afternoon last week I engaged in a strenuous wrestling match with Binet’s formula, trying to turn it inside out and thereby create a function of N that returns k if and only if \(N\) is the kth Fibonacci number. With some help from Mathematica I got as far as the following expression, which gives the right answer some of the time:

\[k(N) = \frac{\log \frac{1}{2} \left( \sqrt{5 N^2 + 4} - \sqrt{5} N \right)}{\log \frac{1}{2} \left( \sqrt{5} - 1 \right)}\]
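In JavaScript the expression can be transcribed directly (the function name is my own; note that the subexpression 5 * N * N already strains double precision for very large N, a problem that will come back to haunt us below):

function kBinet(N) {
  var s5 = Math.sqrt(5);
  var numerator = Math.log(0.5 * (Math.sqrt(5 * N * N + 4) - s5 * N));
  var denominator = Math.log(0.5 * (s5 - 1));
  return numerator / denominator;
}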

Plugging in a few values of N yields the following table of values for \(k(N)\):

N k(N)
1 2.000000000000001
2 3.209573979673092
3 4.0000000000000036
4 4.578618254581733
5 5.03325648737724
6 5.407157747499656
7 5.724476891770392
8 6.000000000000018
9 6.243411773788614
10 6.4613916654135615
11 6.658737112471047
12 6.8390081849422675
13 7.00491857188792
14 7.158583717787527
15 7.3016843734535035
16 7.435577905992959
17 7.561376165404197
18 7.680001269357004
19 7.792226410280063
20 7.8987062604216005
21 7.999999999999939

In each of the green rows, the function correctly recognizes a Fibonacci number \(F_k\), returning the value of k as an integer. (Or almost an integer; the value would be exact if we could calculate exact square roots and logarithms.) Specifically, 1 is the second Fibonacci number (though also the first), 3 is the fourth, 8 is the sixth, and 21 is the eighth Fibonacci number. So far so good. But there’s something weird going on with the other Fibonacci numbers in the table, namely those with odd-numbered indices (red rows). For N = 2, 5, and 13, the inverse Binet function returns numbers that are close to the correct k values (3, 5, 7), but not quite close enough. What’s that about?

If I had persisted in my wrestling match, would I have ultimately prevailed? I’ll never know, because in this era of Google and MathOverflow and StackExchange, a spoiler lurks around every cybercorner. Before I could make any further progress, I stumbled upon pointers to the work of Ira Gessel of Brandeis, who neatly settled the matter of recognizing Fibonacci numbers more than 40 years ago, when he was an undergraduate at Harvard. Gessel showed that N is a Fibonacci number iff either \(5N^2 + 4\) or \(5N^2 - 4\) is a perfect square; he introduced this short and sweet criterion and proved its correctness in a problem published in The Fibonacci Quarterly (1972, Vol. 10, No. 6, pp. 417–419). Phillip James, in a 2009 paper, presents the proof in a way I find somewhat easier to follow.

It is not a coincidence that the expression \(5N^2 + 4\) appears in both Gessel’s formula and in my attempt to construct an inverse Binet function. Furthermore, substituting Gessel’s \(5N^2 - 4\) into the inverse function (with a few other sign adjustments) yields correct results for the odd-indexed Fibonacci numbers. Implementing the Gessel test in JavaScript is a cinch:

function gessel(N) {
  var s = 5 * N * N;
  return isSquare(s + 4) || isSquare(s - 4);
}

So that takes care of the Fibonacci numbers, right? Alas, no. Although Gessel’s criterion is mathematically unassailable, it fails computationally. The problem arises from the squaring of \(N\). If \(N\) is in the neighborhood of \(10^{15}\), then \(N^2\) is near \(10^{30}\), which is roughly \(2^{100}\). JavaScript does all of its arithmetic with 64-bit double-precision floating-point numbers, which allow 53 bits for representing the mantissa, or significand. With values above \(2^{53}\), not all integers can be represented exactly—there are gaps between them. In this range the mapping between \(N\) and \(N^2\) is no longer a bijection (one-to-one in both directions), and the gessel procedure returns many errors.
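A quick console experiment shows what goes wrong (a demonstration of the rounding, not part of the Factoids code):

var N = 999999999999989;                        // the largest prime below 10^15
console.log(Number.isSafeInteger(N));           // true:  N fits within 53 bits
console.log(Number.isSafeInteger(5 * N * N));   // false: 5 * N * N is about 5e30
console.log(5 * N * N + 4 === 5 * N * N);       // true:  the added 4 is lost in rounding

With the +4 and -4 washed out, and with the product itself rounded, isSquare is being asked about the wrong numbers entirely.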

I had one more hope of coming up with a closed-form Fibonacci recognizer. In the Binet formula, the term \(((1 - \sqrt{5})\,/\,2)^k\) becomes very small in magnitude as k grows large. By neglecting that term we get a simpler formula that still yields a good approximation to Fibonacci numbers:

\[F_k \approx \frac{1}{\sqrt{5}} \left(\frac{1 + \sqrt{5}}{2}\right)^k.\]

For any integer k, the value returned by that expression is within 0.5 of the Fibonacci number \(F_k\), and so simple rounding is guaranteed to yield the correct answer. But the inverse function is not so well-behaved. Although it has no \(N^2\) term that would overflow the 64-bit format, it relies on square-root and logarithm operations whose limited precision can still introduce errors.
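For the record, here is roughly what that inverse looks like (a sketch, with a function name of my own invention). Rounding produces a plausible candidate index k, but the limited precision of the square root and the logarithm means the result cannot be trusted, by itself, to certify that N is exactly \(F_k\):

function fiboIndexApprox(N) {
  // invert F_k ~ phi^k / sqrt(5), so k ~ log_phi(N * sqrt(5))
  var phi = (1 + Math.sqrt(5)) / 2;
  return Math.round(Math.log(N * Math.sqrt(5)) / Math.log(phi));
}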

So how does the Factoids calculator detect Fibonacci numbers? The old-fashioned way. It starts with 0 and 1 and iterates through the sequence of additions, stopping as soon as N is reached or exceeded:

function isFibo(N) {
  var a = 0, b = 1, tmp;
  while (a < N) {
    tmp = a;
    a = b;
    b = tmp + b;
  }
  return a === N;
}

As with the factorials, this is not a closed-form solution, and the running time is not constant: it grows in proportion to the index k, which is only logarithmic in N. There are tricks for doing even better, computing \(F_k\) with about \(\log k\) arithmetic operations; Edsger Dijkstra described one such approach. But optimization hardly seems worth the bother. For N < \(10^{15}\), the while loop cannot be executed more than 72 times.
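For completeness, here is one such logarithmic-time scheme, based on the well-known “fast doubling” identities (in the spirit of what Dijkstra described, though not necessarily his exact formulation). It is exact only as long as every intermediate value stays below \(2^{53}\):

function fibPair(k) {
  // returns [F(k), F(k+1)], using F(2m) = F(m) * (2F(m+1) - F(m))
  // and F(2m+1) = F(m)^2 + F(m+1)^2
  if (k === 0) return [0, 1];
  var half = fibPair(Math.floor(k / 2));
  var a = half[0], b = half[1];
  var c = a * (2 * b - a);       // F(2m)
  var d = a * a + b * b;         // F(2m+1)
  return (k % 2 === 0) ? [c, d] : [d, c + d];
}

For instance, fibPair(10) returns [55, 89].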


I’ve included two more sequences in the Factoids calculator, just because I’m especially fond of them. The Catalan numbers (1, 1, 2, 5, 14, 42, 132, 429…) are famously useful for counting all sorts of things—ways of triangulating a polygon, paths through the Manhattan street grid, sequences of properly nested parentheses. The usual definition is in terms of binomial coefficients or factorials:

\[C_k = \frac{1}{k+1} \binom{2k}{k} = \frac{(2k)!}{(k+1)! k!}\]

But there is also a recurrence relation:

\[C_0 = 1,\qquad C_k = \frac{4k-2}{k+1} C_{k-1}\]

The recognizer function in the Factoids Calculator does a bottom-up iteration based on the recurrence relation:

function isCatalan(N) {
  var c = 1, k = 0;
  while (c < N) {
    k += 1;
    c = c * (4 * k - 2) / (k + 1);
  }
  return c === N;
}

The final sequence in the calculator is one of several discovered by Michael Somos around 1980. It is defined by this recurrence:

\[S_0 = S_1 = S_2 = S_3 = 1,\qquad S_k = \frac{S_{k-1} S_{k-3} + S_{k-2}^2}{S_{k-4}}\]

The surprise here is that the elements of the sequence are all integers, beginning 1, 1, 1, 1, 2, 3, 7, 23, 59, 314, 1529, 8209. In writing a recognizer for these numbers I have made no attempt to be clever; I simply generate the sequence from the beginning and check for equality with the given N:

function isSomos(N) {
  var next = 1, S = [1, 1, 1, 1];
  while (next < N) {
    next = (S[3] * S[1] + S[2] * S[2]) / S[0];
    S.shift();
    S.push(next);
  }
  return next === N;
}

But there’s a problem with this code. Do you see it? As N approaches the \(10^{15}\) barrier, the subexpression S[3] * S[1] + S[2] * S[2] will surely break through that barrier. In fact, the version of the procedure shown above fails for 32,606,721,084,786, the largest Somos-4 number below \(10^{15}\). For the version of the program that’s actually running in the Factoids calculator I have repaired this flaw by rearranging the sequence of operations. (For details see the GitHub repository.)


The factorial, Fibonacci, Catalan, and Somos sequences all exhibit exponential growth, which means they are sprinkled very sparsely along the number line. That’s why a simple linear search algorithm—which just keeps going until it reaches or exceeds the target—can be so effective. For the same reason, it would be easy to precompute all of these numbers up to \(10^{15}\) and have the JavaScript program do a table lookup. I have ruled out this strategy for a simple reason: It’s no fun. It’s not sporting. I want to do real computing, not just consult a table.

Other number series, such as the square and triangular numbers, are more densely distributed. There are more than 30 million square and triangular numbers up to \(10^{15}\); downloading a table of that size would take longer than recomputing quite a few squares and triangulars. And then there are the primes—all 29,844,570,422,669 of them.

What would happen if we broke out of the 64-bit sandbox and offered to supply factoids about larger numbers? A next step might be a Megafactoids calculator that doubles the digit count, accepting integers up to \(10^{30}\). Computations in this system would require multiple-precision arithmetic, capable of handling numbers with at least 128 bits. Some programming languages offer built-in support for numbers of arbitrary size, and libraries can add that capability to other languages, including JavaScript. Although there is a substantial speed penalty for extended precision, most of the algorithms running in the Factoids program would still give correct results in acceptable time. In particular, there would be no problem recognizing squares and triangulars, factorials, Fibonaccis, Catalans, and Somos-4 numbers.

The one real problem area in a 30-digit factoid calculator is factoring. Trial division would be useless; instead of milliseconds, the worst-case running time would be months or years. However, much stronger factoring algorithms have been devised in the past 40 years. The algorithm that would be most suitable for this purpose is called the elliptic curve method, invented by Hendrik Lenstra in the 1980s. An implementation of this method built into PARI/GP, which in turn is built into Sage, can factor 30-digit numbers in about 20 milliseconds. A JavaScript implementation of the elliptic curve method seems quite doable. Whether it’s worth doing is another question. The world is not exactly clamoring for more and better factoids.

Addendum 2016-02-05: I’ve just learned (via Hacker News) that I may need to add a few more recognition predicates: detectors for “artisanal integers” in flavors hand-crafted in the Mission District, Brooklyn, and London.



Carnival of Mathematics #130

Welcome to the 130th monthly Carnival of Mathematics, a roving blog organized by The Aperiodical. The previous edition was hosted by GanitCharcha. This month’s collection includes material appearing on the Web between December 10 and January 9 (with a bit of spillover on both ends).

Tradition obliges me to say something interesting about the number 130. I could mention that 130 = 5 × 5 × 5 + 5. Is that interesting? Meh, eh? The Wikipedia page for 130 reports the following curious observation:

\(130\) is the only integer that is the sum of the squares of its first four divisors, including \(1\): \(1^2 + 2^2 + 5^2 + 10^2 = 130\).

What’s interesting here is not the fact that 130 has this property. The provocative part is the statement that 130 is the only such integer. How can one know this? It would be easy enough to check that no smaller number qualifies, but how do you rule out the possibility that somewhere far out on the number line there lurks another example? Clearly, what’s needed is a proof, but Wikipedia offers none. I spent some time trying to come up with a proof myself but failed to put the pieces together. Maybe you’ll do better. If not, Robert Munafo explains all, and traces the question to the Russian website dxdy.ru.
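The easy half of the claim, that no other number in some finite range has the property, is a small computation. Here is a sketch of my own (in JavaScript, to match the code elsewhere on this blog):

// does n equal the sum of the squares of its four smallest divisors?
function sumOfFirstFourDivisorSquares(n) {
  var divs = [];
  for (var d = 1; d * d <= n; d++) {
    if (n % d === 0) {
      divs.push(d);
      if (d !== n / d) divs.push(n / d);
    }
  }
  divs.sort(function (a, b) { return a - b; });
  if (divs.length < 4) return -1;            // fewer than four divisors
  return divs[0] * divs[0] + divs[1] * divs[1] +
         divs[2] * divs[2] + divs[3] * divs[3];
}

for (var n = 1; n <= 1000000; n++) {
  if (sumOfFirstFourDivisorSquares(n) === n) console.log(n);  // prints 130 and (if the claim holds) nothing else
}

Of course a search like this settles nothing beyond its upper limit; that is what the proof is for.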


Enough of 130. I’d like to celebrate another number of the season: 2016. As midnight approached on December 31, Twitter brought this news:

11111011111 + 1 = 11111100000. Can't wait for 11111111111 (2047)

In other words, the year we are now beginning is equal to \(2^{11} - 2^{5}\), and in another 32 years we’ll be celebrating the turn of a binary millennium, marking the passage of two kibiyears. (Year-zero deniers are welcome to wait an extra year.)

Fernando Juan, with an animated GIF, pointed out another nonobvious fact: \(2016 = 3^3 + 4^3 + 5^3 + 6^3 + 7^3 + 8^3 + 9^3\).

Even more noteworthy, May-Li Khoe and Federico Ardila gave graphic proof that 2016 is a triangular number:

2016 dots arranged to form an equilateral triangle

Specifically, 2016 is the sum of the integers \(1 + 2 + 3 + \cdots + 63\), making it the \(63\)rd element of the sequence \(1, 3, 6, 10, 15, \ldots\). You could go bowling with \(2016\) pins!

According to a well-known anecdote, Carl Friedrich Gauss was a schoolboy when he discovered that the nth triangular number is equal to \(n(n + 1) / 2\), which is also the value of the binomial coefficient \(\binom{n + 1}{2}\). Hence 2016 is the number of ways of choosing two items at a time from a collection of 64 objects. A tweet by John D. Cook offers this interpretation: “2016 = The number of ways to place two pawns on a chessboard.”

Patrick Honner expressed the same idea another way: 2016 is the number of edges in a complete graph with 64 vertices.

The complete graph on 64 vertices, with 2016 edges.

If you attended a New Year’s Eve party with 64 people present, and at midnight every guest clinked glasses with every other guest, there were 2016 clinks in all. (And probably a lot of broken glassware.)

In a New Year’s story of another kind, Tim Harford warns of the cost of overconfidence when it comes to making resolutions, as when you buy a year’s gym membership and stop going after six weeks. “Some companies base their business models on our tendency to overestimate our willpower.”


Triangular numbers turn up in another holiday item as well. A video by Tipping Point Math offers a quantitative analysis of the gift-giving extravaganza in “The Twelve Days of Christmas.” On the \(n\)th day of Christmas you receive \(1 + 2 + 3 + \cdots + n\) gifts, which is clearly the triangular number \(T_n\). But what’s the total number of gifts when you add up the largesse over \(n\) days? These sums of triangular numbers are the tetrahedral numbers: \(1, 4, 10, 20, 35 \ldots\); the twelfth tetrahedral number is \(364\). The video shows where to find all these numbers in Pascal’s triangle.

Vince Knight, in Un Peu de Math, approaches the Christmas gift exchange as a problem in game theory. Two friends agree not to exchange gifts, but then they are both tempted to renege on the agreement and buy a present after all. This situation is a variant of the game-theory puzzle known as prisoner’s dilemma—which sounds like a rather grim view of holiday tradition. But in fact the conclusion is cheery: “People enjoy giving gifts a lot more than receiving them.”

The December holiday season also brings Don Knuth’s annual Christmas lecture, which has been posted on YouTube. In previous years, this talk was the Christmas Tree lecture, because it touched on some aspect of trees as a data structure or a concept in graph theory. Now Knuth has branched out from the world of trees. His subject is comma-free codes—sets of words that can be jammed together without commas or spaces to mark the boundaries between words, and yet still be read without ambiguity. A set that lacks the comma-free property is {tea, ate, eat}, because a concatenation such as teateatea has within it not just tea,tea,tea but also eat,eat and ate,ate. But the words {sat, set, tea} do qualify as comma-free, because no sequence of the words can be partitioned in more than one way to yield words in the set.
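For a block code of three-letter words, the comma-free condition is easy to check mechanically: slide a three-letter window across every two-word concatenation and make sure no codeword shows up straddling the boundary. A brute-force sketch of my own (not Eastman's construction or Knuth's implementation):

function isCommaFree(words) {
  // no codeword may appear straddling the boundary of any concatenation
  // of two codewords (a word may be paired with itself)
  for (var i = 0; i < words.length; i++) {
    for (var j = 0; j < words.length; j++) {
      var pair = words[i] + words[j];
      if (words.indexOf(pair.substring(1, 4)) !== -1) return false;
      if (words.indexOf(pair.substring(2, 5)) !== -1) return false;
    }
  }
  return true;
}

console.log(isCommaFree(["tea", "ate", "eat"]));   // false
console.log(isCommaFree(["sat", "set", "tea"]));   // true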

The idea of comma-free codes arose in the late 1950s, when it was thought that the genetic code might have a comma-free structure. Experiments soon showed otherwise, and interest in comma-free codes waned. But Knuth has rediscovered a paper by Willard Eastman, published in 1965, that constructs an algorithm for generating a comma-free code (if possible) from a given set of symbols. Knuth’s lecture demonstrates the algorithm and gives a computer implementation.


Stephen Wolfram with a replica of Babbage’s difference engine and a portrait of Ada Lovelace. Image courtesy of Stephen Wolfram.

December 10, 2015, was the 200th birthday of Ada, Countess of Lovelace, who collaborated with Charles Babbage on the proposed computing machine called the analytical engine. The anniversary was commemorated in several ways, including an exhibition at the Science Museum in London, but here I want to call attention to a 12,000-word biographical essay by Stephen Wolfram, the creator of Mathematica.

The stories of Lovelace and Babbage have been told many times, but Wolfram’s account is worth reading even if you already know how the plot comes out. The principles of the machines and the algorithm that Lovelace devised as a demo (a computation of Bernoulli numbers) are described in detail, but the heart of the story is the human drama. Today we see these two figures as pioneers and heroes, but their lives were tinged with frustration and disappointment. Lovelace, who died at age 36, never got a proper opportunity to show off her ideas and abilities; in her one published work, all of her original contributions were relegated to footnotes. Babbage in his later years felt aggrieved with the world for failing to support his vision. Wolfram makes an interesting biographer of this pair, never hesitating to see his subjects reflected in his own experience, and vice versa.


Back in the summer of 2012 Shinichi Mochizuki of Kyoto University released four long papers that claim to resolve an important problem in number theory called the abc conjecture. (I’m not going to try to explain the conjecture here; I did so in an earlier post.) More than three years later, no one in the mathematical community has been able to understand Mochizuki’s work well enough to verify that it is indeed a proof. In December more than 50 experts gathered at the University of Oxford to try to make progress on breaking the impasse. Brian Conrad of Stanford University, one of the workshop participants, wrote up his notes as a guest post in Cathy O’Neil’s Mathbabe blog. This is an insider’s account, and parts of the discussion will not make much sense unless you have fairly deep background in modern number theory. (I don’t.) But that inaccessibility illustrates the point, in a way. Number theorists themselves are in the same situation with regard to the Mochizuki papers, which are clotted with idiosyncratic concepts such as Inter-universal Teichmüller Theory (IUT). Conrad writes:

After [Kiran] Kedlaya’s lectures, the remaining ones devoted to the IUT papers were impossible to follow without already knowing the material: there was a heavy amount of rapid-fire new notation and language and terminology, and everyone not already somewhat experienced with IUT got totally lost…. Persistent questions from the audience didn’t help to remove the cloud of fog that overcame many lectures in the final two days. The audience kept asking for examples (in some instructive sense, even if entirely about mathematical structures), but nothing satisfactory to much of the audience along such lines was provided.

It’s still not clear when or how the status of the proof will be resolved. O’Neil herself has taken a stern position on the issue of impenetrable purported proofs. When the Mochizuki papers were first released, she wrote: “If I claim to have proved something, it is my responsibility to convince others I’ve done so; it’s not their responsibility to try to understand it (although it would be very nice of them to try).”

Anthony Bonato, the Intrepid Mathematician, offers a friendlier introduction to the abc conjecture and its consequences. Also see comments on the conjecture and the workshop by Evelyn Lamb in the Scientific American blog Roots of Unity and by Anna Haensch in an American Mathematical Society blog.


Another number-theory event that attracted wide notice in recent weeks was the successful defense and publication of a doctoral thesis at Princeton University by Piper Harron. The title is “The Equidistribution of Lattice Shapes of Rings of Integers of Cubic, Quartic, and Quintic Number Fields: an Artist’s Rendering,” and it’s a great read (PDF). Here is the abstract:

A fascinating tale of mayhem, mystery, and mathematics. Attached to each degree \(n\) number field is a rank \(n\, - 1\) lattice called its shape. This thesis shows that the shapes of \(S_n\)-number fields (of degree \(n = 3, 4,\) or \(5)\) become equidistributed as the absolute discriminant of the number field goes to infinity. The result for \(n = 3\) is due to David Terr. Here, we provide a unified proof for \(n = 3, 4,\) and \(5\) based on the parametrizations of low rank rings due to Bhargava and Delone–Faddeev. We do not assume any of those words make any kind of sense, though we do make certain assumptions about how much time the reader has on her hands and what kind of sense of humor she has.

This is not your grandmother’s doctoral thesis! Harron interleaves sections of “laysplaining” and “mathsplaining,” and illustrates her work with an abundance of metaphors, jokes, asides to the reader, cartoons, and commentary on the course of her life during the 10 years she spent writing the thesis (e.g., a brief interruption to give birth). In terms of expository style, Mochizuki and Harron stand at opposite poles—a fact noted by at least two bloggers. Both of the posts by Evelyn Lamb and by Anna Haensch mentioned above in connection with Mochizuki also discuss Harron’s work.

Harron has a blog of her own with a post titled “Why I Do Not Talk About Math,” and she has written a guest post for Mathbabe with this description of her thesis:

My thesis is this thing that was initially going to be a grenade launched at my ex-prison, for better or for worse, and instead turned into some kind of positive seed bomb where flowers have sprouted beside the foundations I thought I wanted to crumble.

Reading her accounts of life as “an escaped graduate student,” I am in equal measures amused and horrified (not a comfortable combination). If nothing else, Harron’s “thesis grenade” promises to broaden—to diversify—the discussion of diversity in mathematical culture. It’s not just about race and gender (although Harron cares passionately about those issues). Human diversity also ranges across many other dimensions: modes of reasoning, approaches to learning, cultural contexts, styles of explaining, and ways of living. Harron’s thesis is a declaration that one can do original research-level mathematics without adopting the vocabulary, the styles, the attitudes, and the mental apparatus of one established academic community.


If I show you a cube, you can easily place it in a three-dimensional cartesian coordinate system in such a way that all the vertices have rational \(x,y,z\) coordinates. By scaling the cube, you can make all the coordinates integers. The same is true of the regular tetrahedron and the regular octahedron, but for these objects the scaling factor includes an irrational number, \(\sqrt{2}\). For the dodecahedron and the icosahedron, some of the vertex coordinates are themselves irrational no matter how the figure is scaled; the irrational number \(\varphi = (1 + \sqrt{5})\, /\, 2\) plays an essential role in the geometry.

This intrusion of irrationality into geometry troubled the ancients, but we seem to have gotten used to it by now. However, David Eppstein, a.k.a. 0xDE, writes about a class of polyhedra I still find deeply disconcerting: “Polyhedra whose vertex coordinates have no closed form formula.” In this context a closed-form formula is a mathematical expression that can be evaluated in a finite number of operations. There’s no universal agreement on exactly what operations are allowed; Eppstein works with computational models that allow the familiar operations \(+, -, \times, \div\) as well as finding roots of polynomials. He constructs a polyhedron—it looks something like the teepee his children used to play with—whose vertex coordinates cannot all be calculated by a finite sequence of these operations.

With this development we land in territory even stranger than that of the irrational numbers. I cannot draw an exact equilateral triangle on a computer screen because at least one of the coordinates is irrational; nevertheless, I can tell you that the vertex ought to have the coordinate \(y = 1 / \sqrt{3}\). In Eppstein’s polyhedron I can’t give you any such compact, digestible description of where the vertex belongs; the best anyone can offer is a program for approximating it.


Brent Yorgey in The Math Less Traveled offers an unlikely question: What’s the best way to read a manuscript printed on three-sided paper? If you have a sheaf of unbound pages printed on just one side, the obvious procedure is to read the top sheet, then shuffle it to the bottom of the heap, and continue until you come back to page 1. If the pages are printed on two sides, life gets a little more complicated. You can read the top page, flip it over and read the back, then flip it again and move it to the bottom of the sheaf. An alternative (attributed to John Horton Conway) is to read the front page, move it to the bottom of the heap, then flip over the entire stack to read the back side; then flip the stack again to read the front of the following page. In either case, you must alternate two distinct operations, which means you must somehow keep track of which move comes next.

The unsolved problem is to find the best algorithm when the pages are printed on three sides. “You may not be familiar with triple-sided sheets of paper,” Yorgey writes, “so here’s how they work: they stack nicely, just like regular sheets of paper, but you have to flip one over three times before you get back to the original side.”

Yorgey gives some criteria that a successful algorithm ought to satisfy, but the question remains open.


Mr. Honner takes up a problem encountered at a Math for America banquet:

Suppose you are standing several miles from the Pentagon. What is the probability you can see three sides of the building?

He discovers that it’s one of those cases where interpreting the problem is as much of a challenge as finding the answer. “In particular, it’s a reminder of how the different ways we model random selection can make for big differences in our solutions!”


Long-tailed data distributions—where extreme values are more common than they would be in, say, a normal distribution—are notoriously tricky. John D. Cook points out that long tails become even more treacherous when the variables are discrete (e.g., integers) rather than continuous values.

Suppose \(n\) data points are drawn at random from a distribution where each possible value \(x\) appears with frequency proportional to \(x^{-\alpha}\). Working from the data sample, we want to estimate the value of the exponent \(\alpha\). If \(x\) is a continuous variable, there’s a maximum-likelihood formula that generally works well: \(\hat{\alpha} = 1 + n\, / \sum \log x\). But with discrete variables, the same method leads to disastrous errors. In a test case with \(\alpha = 3\), the formula yields \(\hat{\alpha} = 6.87\). But we needn’t lose hope. Cook presents another approach that gives quite reliable results.
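The continuous-case estimator is nearly a one-liner. Here is a sketch of it (mine, not Cook's code), together with a quick simulation that draws a continuous power-law sample with \(\alpha = 3\) by inverse-transform sampling:

function estimateAlpha(sample) {
  // maximum-likelihood estimate for a continuous power law with x_min = 1:
  // alpha-hat = 1 + n / sum(log x)
  var sumLog = 0;
  for (var i = 0; i < sample.length; i++) {
    sumLog += Math.log(sample[i]);
  }
  return 1 + sample.length / sumLog;
}

var data = [];
for (var i = 0; i < 100000; i++) {
  // for alpha = 3, x = u^(-1/(alpha - 1)) = u^(-1/2), with u uniform in (0, 1]
  data.push(Math.pow(1 - Math.random(), -0.5));
}
console.log(estimateAlpha(data));   // should land close to 3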


A post on DRHagen.com tackles a mystery found in an xkcd cartoon:

Calendar of meaningful dates, by Randall Munroe (xkcd). License: Creative Commons Attribution-NonCommercial 2.5.

The size of each date is proportional to its frequency of occurrence in the Google Books ngram database for English-language books published since 2000. September 11 is clearly an outlier. That’s not the mystery. The mystery is that the 11th of most other months is noticeably less common than other dates. The cartoonist, Randall Munroe, was puzzled by this; hover over the image to see his message. Hagen convincingly solves the mystery. I’m not going to give away the solution, and I urge you to try to come up with a hypothesis of your own before following the link to DRHagen.com. And if you get that one right, you might work on another calendrical anomaly—that the 2nd, 3rd, 22nd, and 23rd are underrepresented in books printed before the 20th century—which Hagen solves in a followup post.


A shiny new blog called Off the Convex Path, with contributions from Sanjeev Arora, Moritz Hardt, and Nisheeth Vishnoi, “is dedicated to the idea that optimization methods—whether created by humans or nature, whether convex or nonconvex—are exciting objects of study and often lead to useful algorithms and insights into nature.” A December 21 post by Vishnoi looks at optimization methods in the guise of dynamical systems—specifically, systems that converge to some set of fixed points. Vishnoi gives two examples, both from the life sciences: a version of Darwinian evolution, and a biological computer in which slime molds solve linear programming problems.


In the Comfortably Numbered blog, hardmath123 writes engagingly and entertainingly about numerical coincidences, like this one:

\(1337^{47168026} \approx \pi \cdot 10^{147453447}\) to within \(0.00000001\%\). It begins with the digits \(31415926 \ldots\).

The existence of such coincidences should not be a big surprise. As hardmath123 writes, “Since the rationals are dense in the reals, we can find a rational number arbitrarily close to any real number.” The question is how to find them. The answer involves continued fractions and the Dirichlet approximation theorem.


For the final acts of this carnival, we have a few items on mathematics in the arts, crafts, games, and other recreations.

In the Huffington Post arts and culture department, Dan Rockmore takes a mathematical gallery walk in New York. The first stop is the Whitney Museum of American Art, showing a retrospective of the paintings of Frank Stella, some of whose canvases are nonrectilinear, and even nonconvex. In another exhibit mathematics itself becomes art, with 10 equations and other expressions calligraphically rendered by 10 noted mathematicians and scientists.

Katherine, writing on the blog Will Knit for Math, tells how “Being a mathematician improves my knitting.” It’s not just a matter of counting stitches (although a scheme for counting modulo 18 does enter into the story). I hope we’ll someday get a followup post explaining how “Being a knitter improves my mathematics.”

Chelsea VanderZwaag, a student majoring in mathematics and elementary education, visits an arts-focused school, where a lesson on paper-folding and fractions blends seamlessly into the curriculum. (An earlier post on the problem of creating a shape by folding paper and making a single cut with scissors provides a little more background.)

On Medium, A Woman in Technology writes about Dobble, a combinatorial card game. “My children got this game for Christmas. We haven’t played it yet. We got as far as me opening the tin and reading the rules, at which point I got distracted by the maths and forgot about the game.”

There are 55 cards, each bearing eight symbols, and each card shares exactly one symbol with each of the other cards. How many distinct symbols do you need to construct such a deck of cards? The game instructions say there are 50, but the author determines through hard work, scribbling, and spreadsheets that the instructions are in error. It all comes to a happy end with the formula \(n^2 - n + 1\) and a suggested improvement to the game that the manufacturer should heed.


That’s it for Carnival of Math 130. My apologies to a few contributors whose interesting work I just couldn’t squeeze in here.

Next month the carnival makes a hometown stop with Katie at The Aperiodical.


Ten years of bit-playing

Scribble, scribble, scribble. As if the world didn’t get enough of my writing already, with a bimonthly column in American Scientist, now I’m equipped to publish my every thought at a moment’s notice.

That’s how it all began here on bit-player.org: The first post (with the first typo) appeared on January 9, 2006. I’ve published another 340 posts since then (including this one)—and doubtless many more typos and other errors. Many thanks to my readers, especially those who have contributed some 1,800 thoughtful comments.


Deep Dreaming with Every Card I Write

My closest friends and family must make do with an old-fashioned paper-and-postage greeting card, but for bit-player readers I can send some thoroughly modern pixels. Happy holidays to everyone.

In recent months I’ve been having fun with “deep dreaming,” the remarkable toy/tool for seeing what’s going on deep inside deep neural networks. Those networks have gotten quite good at identifying the subject matter of images. If you train the network on a large sample of images (a million or more) and then show it a picture of the family pet, it will tell you not just whether your best friend is a cat or a dog but whether it’s a Shih Tzu or a Bichon Frise.

What visual features of an image does the network seize upon to make these distinctions? Deep dreaming tries to answer this question. It probes a layer of the network, determines which neural units are most strongly stimulated by the image, and then translates that pattern of activation back into an array of pixels. The result is a strange new image embellished with all the objects and patterns and geometric motifs that the selected layer thinks it might be seeing. Some of these machine dreams are artful abstractions; some are reminiscent of drug-induced hallucinations; some are just bizarre or even grotesque, populated by two-headed birds and sea creatures swimming through the sky.

For this year’s holiday card I chose a scene appropriate to the season and ran it through the deep-dreaming program. You can see some of the output below, starting with the original image and progressing through fantasies extracted from deeper and deeper layers of the network. (Navigate with the icons below the image, or use the left and right arrow keys. Shorthand labels identifying the network layers appear at lower right.)

A few notes and observations:

  • The embellishments begin with fairly abstract motifs, then become more elaborate and figurative. But this evolution reaches a peak near the middle of the sequence; after that, things calm down a little. By the end, the trees look like trees again, and the sky has more snowflakes and fewer diaphanous monsters. Perhaps this turn toward realism is to be expected, since the later layers are where the neural network settles on a final interpretation of the image.
  • Information flows through the layers in sequence, with any hypothesis formed in one layer passed along to the later ones. You might think this would tend to stabilize interpretations. If layer 4a sees a puppy face in a certain region, then layers 4b and 4c would be influenced by this verdict. And indeed there are places in the image where certain wild fantasies do persist from one stage to the next; for example, a brightly colored vehicle first appears in the lower right quadrant in layer 4c, and it reappears with variations in the next three layers—4d, 4e, and pool4. On the whole, however, there is less continuity from layer to layer than I would have expected.
  • The scene in layer 4c fascinates me. For one thing, it has a cast of characters quite unlike the surrounding layers—human figures (sort of) rather than animals, and buildings that look like gazebos or onion-domed spires. But what intrigues me most is the geometric transformation of the landscape. The world has been flattened; in the left half of the image, the buildings all have their foundations resting on a plane that doesn’t actually exist, and the people have their feet on the ground. The network is constructing a perspective view.

And some links:

  • Deep Dreaming was invented by three young engineers and interns at Google, Alexander Mordvintsev, Michael Tyka, and Christopher Olah. As far as I know, their only publications on the subject are a pair of blog posts. The first post announced the discovery and showed some sample images; the second provided links to the open-source code. The code itself is available on GitHub.
  • The neural network used in these experiments, called GoogLeNet, was devised by Christian Szegedy and several colleagues at Google Research. They describe it in arXiv:1409.4842.
  • For the experiments described here, the GoogLeNet program was trained on more than a million pre-labeled images retrieved from a database called ImageNET. The subjects of these images, which had been downloaded from the Internet, were what the neural network learned to recognize.
  • “Computer Vision and Computer Hallucinations” is my American Scientist article on the subject (Vol. 103, No. 6, November–December 2015, pages 380–383).
  • If you would like to try playing with these toys yourself, all the software is open source, but getting it installed and running can be an adventure. I’ve written a memo on my own experiences, which includes links to other useful resources.
  • I would like to credit the photographer who created the original image, but I have not been able to track down the source. I found it at the Latvian website www.lejins.lv, but there’s no information there about its provenance. Thus I am using it here without permission. Mea culpa.

Update 2015-12-31: In the comments, Ed Jones asks, “If the original image is changed slightly, how much do the deep dreaming images change?” It’s a very good question, but I don’t have a very good answer.

The deep dreaming procedure has some stochastic stages, and so the outcome is not deterministic. Even when the input image is unchanged, the output image is somewhat different every time. Below are enlargements cropped from three runs probing layer 4c, all with exactly the same input:

They are all different in detail, and yet at a higher level of abstraction they are all the same: They are recognizably products of the same process. That statement remains true when small changes—and even some not-so-small ones—are introduced into the input image. The figure below has undergone a radical shift in color balance (I have swapped the red and blue channels), but the deep dreaming algorithm produces similar embellishments, with an altered color palette:

In the pair of images below I have cloned a couple of trees from the background and replanted them in the foreground. They are promptly assimilated into the deep dream fantasy, but again the overall look and feel of the scene is totally familiar.

Based on the evidence of these few experiments, it seems the deep dreaming images are indeed quite robust, but there’s another side to the story. When these neural networks are used to recognize or classify images (the original design goal), it’s actually quite easy to fool them. Christian Szegedy and his colleagues have shown that certain imperceptible changes to an image can cause the network to misclassify it; to the human eye, the picture still looks like a school bus, but the network sees it as something else. And Anh Nguyen et al. have tricked networks into confidently identifying images that look like nothing but noise. These results suggest that the classification methods are rather brittle or fragile, but that’s not quite right either. Such errors arise only with carefully crafted images, called “adversarial examples.” There is almost no chance that a random change to an image would trigger such a response.


Geotargeted by the NY Times

Reading the Times this morning, I got to the middle of a story about the cost of medical care when I was stopped by this passage:

Consider Boston, our best guess for where you might be reading this article. It’s very expensive for spending on the average Medicare patient. But, when it comes to private health insurance, it’s about average.

Their best guess about my whereabouts was pretty good. I am indeed in the Boston area. I’m not surprised that they know that, but I was nonplussed to discover that they had altered the story according to my location. Here’s the markup for that paragraph:

<div class="g-insert">
<p class="g-body">
Consider <span class="g-custom-place g-selected-hrr-name">
Boston</span>
<span class="g-geotarget-success">, our best 
guess for where you might be reading this article</span>. 
<span class="g-new-york-city-addition g-custom-insert g-hidden"> 
(Here, the New York City region includes all boroughs but the 
Bronx, which is listed separately.)</span>

It’s <span class="g-medicare-adjective g-custom-place">
very expensive</span> 
for spending on the average Medicare patient. 
<span class="g-local-insert g-very-different">
But, w</span><span class="g-close g-same g-hidden g-local-insert">
W</span>hen it comes to private health insurance, it’s 
<span class="g-hidden g-same g-local-insert">also</span> 
<span class="g-private-adjective g-custom-place">
about average</span>. 

<span class="g-close g-hidden g-local-insert">
The study finds that the levels of spending for the two programs 
are unrelated. That means that, for about half of communities, 
spending is somewhat similar, like it is in 
<span class="g-custom-place g-selected-hrr-name g-hrr-only-no-state">
Boston</span>
</span>

<span class="g-same g-hidden g-local-insert g-same-sentence">
<span class="g-custom-place g-hrr-only-no-state">Boston</span> 
is one of the few places where spending for both programs 
is very similar – in most, there is some degree of mismatch.
</span>

<span class="g-new-york-addition g-local-insert g-hidden">
Several parts of the New York metropolitan area are outliers 
in the data – among the most expensive for both health 
insurance systems.</span>

<!-- <span class="g-atlanta-addition g-local-insert g-hidden">
(Atlanta is one of the few places in the country where spending 
for both programs is very similar. In most, there is some 
degree of mismatch.)</p> -->
</p>
</div>

It seems I live in a g-custom-place (“Boston”), which is associated with a g-medicare-adjective (“very expensive”) and a g-private-adjective (“about average”). Because the Times has been able to track me down, I get a g-geotarget-success message. But for the same reason I don’t get to see certain other text, such as a remark about New York as an outlier; those text spans are g-hidden.

Presumably, a program running on the server has located me by checking my IP number against a geographic database, then added various class names to the span tags. Some of the class names are processed by a CSS stylesheet; for example, g-hidden triggers the style directive display: none. The other class names are apparently processed by a JavaScript program that inserts or removes text, such as those custom adjectives. Generating text in this way looks like a pretty tedious and precarious business, a little like writing poetry with refrigerator magnets. For example, an extra set of class names is needed to make sure that if a place is first mentioned as a city and state (e.g., “Springfield, Massachusetts”), the state won’t be repeated on subsequent references.

I suppose there’s no great harm in this bit of localizing embellishment. After all, they’re not tailoring the article based on whether I’m black or white, male or female, Democrat or Republican, rich or poor. It’s just a geographic split. But it makes me queasy all the same. When I cite an article here on bit-player, I want to think that everyone who follows the link will see the same article. It looks like I can’t count on that.

Update: Turns out this is not the Times’s first adventure in geotargeting. A story last May on “paths out of poverty” used the same technique, as reported by Nieman Lab. (Thanks to Andrew Silver (@asilver360) for the tip via Twitter.)

Update 2015-12-17: Margaret Sullivan, the Public Editor of the Times, writes today on the mixed response to the geotargeted story, concluding:

The Times could have quite easily provided readers with an opt-in: “Want to see results for your area? Click here.”

As the paper continues down this path, it’s important to do so with awareness and caution. For one thing, some readers won’t like any personalization and will regard it as intrusive. For another, personalization could deprive readers of a shared, and expertly curated, news experience, which is what many come to The Times for. Losing that would be a big mistake.


The Carnival Is Coming to Town

Bit-player will be hosting the 130th Carnival of Mathematics, the monthly celebration of math(s) writing on the Web organized by The Aperiodical. Submissions are open now through January 9.

Please contribute! Anything that might engage or delight the mathematical mind is welcome: theorems, problems, games and recreations, notes on math education. We’ll take pure and applied, discrete and continuous, geometrical and arithmetical, differential and integral, polynomial and exponential…. You get the idea. And don’t be shy about proposing your own work!

Submit here.


Mad Max Economics

Writing in The New York Times, the business columnist Eduardo Porter quotes Paul Ehrlich quoting Kenneth Boulding: “Anyone who believes exponential growth can go on forever in a finite world is either a madman or an economist.” Then Porter, siding with the economists if not the madmen, proclaims that economic growth must continue if “civilization as we know it” is to survive. (He doesn’t explicitly say the growth needs to be exponential.)

Porter links continuing growth not just to prosperity but also to a host of civic virtues. “Economic development was indispensable to end slavery,” he declares. “It was a critical precondition for the empowerment of women…. Indeed, democracy would not have survived without it.” As for what might happen to us without ongoing growth, he offers dystopian visions from history and Hollywood. Before the industrial revolution, “Zero growth gave us Genghis Khan and the Middle Ages, conquest and subjugation,” he says. A zero-growth future looks just as grim: “Imagine ‘Blade Runner,’ ‘Mad Max,’ and ‘The Hunger Games’ brought to real life.”

Let me draw you a picture of this vision of economic growth through the ages, as I understand it:

Economic growth modeled as a logistic curve
For hundreds or thousands of years before the modern era, average wealth and economic output were low, and they grew only very slowly. Life was solitary, poor, nasty, brutish, and short. Today we have vigorous economic growth, and the world is full of wonders. Life is sweet, for now. If growth comes to an end, however, civilization collapses and we are at the mercy of new barbarian hordes (equipped with a different kind of horsepower).

Something about this scenario puzzles me. In that frightful Mad Max future, even though economic growth has tapered off, the society is in fact quite wealthy; according to the graph, per capita gross domestic product is twice what it is today. So why the descent into brutality and plunder?

Porter has an answer at the ready. The appropriate measure of economic vitality, he implies, is not GDP itself but the rate of growth in GDP, or in other words the first derivative of GDP as a function of time:

First derivative of the logistic growth curve

If the world follows the trajectory of the blue curve, we have already reached our peak of wellbeing. It’s all downhill from here.
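
In symbols: if the growth curve is a standard logistic with ceiling \(K\), rate \(r\), and midpoint \(t_0\) (the parameter names here are just the conventional ones), the two curves are

\[ G(t) = \frac{K}{1 + e^{-r(t - t_0)}}, \qquad G'(t) = r\, G(t)\left(1 - \frac{G(t)}{K}\right). \]

The derivative is the bell-shaped blue curve: it peaks at the midpoint \(t_0\), where \(G = K/2\), and falls back toward zero as \(G\) approaches its ceiling.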

Some economists go even further, urging us to keep an eye on the second derivative of economic activity. Twenty-five years ago I was hired to edit the final report of an MIT commission on industrial productivity. Among the authors were two prominent economists, Robert Solow and Lester Thurow. I argued with them at some length about the following paragraph (they won):

In view of all the turmoil over the apparently declining stature of American industry, it may come as a surprise that the United States still leads the world in productivity. Averaged over the economy as a whole, for each unit of input the United States produces more output than any other nation. With this evidence of economic efficiency, is there any reason for concern? There are at least two reasons. First, American productivity is not growing as fast as it used to, and productivity in the United States is not growing as fast as it is elsewhere, most notably in Japan….

The phrase I have highlighted (“not growing as fast as it used to”) warns us that even though productivity is high and growing higher, we need to worry about the rate of change in the rate of growth:

Second derivative of the logistic growth curve

Taking the second derivative (the green curve) as the metric of economic health, it appears we have already fallen back to the medieval baseline, and life is about to get even worse than it was at the time of the Mongol conquests; the Mad Max world will be an improvement over what lies in store in the near future.
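
In the same notation, the green curve is the second derivative,

\[ G''(t) = r^2\, G(t)\left(1 - \frac{G(t)}{K}\right)\left(1 - \frac{2\,G(t)}{K}\right), \]

which is positive during the takeoff, crosses zero at the logistic’s inflection point (where \(G = K/2\)), and then turns negative, dipping below the preindustrial baseline before creeping back up toward zero.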


Why should human happiness and the fate of civilization depend on the time derivative of GDP, rather than on GDP itself? Why do we need not just wealth, but more and more wealth, growing faster and faster? Again, Porter has an answer. Without growth, he says, economic life becomes a zero-sum game. “As Martin Wolf, the Financial Times commentator has noted, the option for everybody to become better off—where one person’s gain needn’t require another’s loss—was critical for the development and spread of the consensual politics that underpin democratic rule.” In other words, the function of economic growth is to blunt the force of envy in a world with highly skewed distributions of income and wealth. I’m not persuaded that growth per se is either necessary or sufficient to deal with this issue.

Porter’s essay on zero growth was prompted by the climate-change negotiations now under way in Paris. He worries (along with many others) that curtailing consumption of fossil fuels will lead to lower overall production and consumption of goods and services. That’s surely a genuine risk, but what’s the alternative? If burning more and more carbon is the only way we can keep our civilization afloat, then somebody had better send for Mad Max. The age of fossil fuels is going to end, sooner or later, if not because of the climatic effects then because the supply is finite.

Economic growth is not necessarily tied to the carbon budget, but it can’t be cut loose entirely from physical resources. Even the ethereal goods that are now so prominent in commerce—code and data—require some sort of material infrastructure. Ultimately, whether growth continues is not a question of social and economic policy or moral philosophy; it’s a matter of physics and mathematics. I’m with Kenneth Boulding. I don’t see Mad Max in our future, but I’m not counting on perpetual growth, either.

Posted in mathematics, social science | 12 Comments

Ramsey Theory in the Dining Room

Friends were coming for dinner, and we would be eight at the table. When I pulled the wine glasses out of the cupboard, I found quite a miscellaneous array of stemware—an equilibrium distribution reached after many years of occasional additions and steady attrition.

19 stemmed glasses with no set of 8 matching

When I looked over the collection, I quickly realized that we could not form a set of eight matching glasses. The closest we could come was 6 + 2. But then I saw that we could form a set of eight glasses with no two alike. As I placed them on the table, I thought “Aha, Ramsey theory!”

8 glasses with no two alike

At the root of Ramsey theory lies this curious assertion: If a collection of objects is large enough, it cannot be entirely without structure or regularity. Dinner parties offer the canonical example: If you invite six people to dinner, then either at least three guests will already be mutual acquaintances (each knows all the others) or at least three guests will be strangers (none has met any of the others). This result has nothing to do with the nature of social networks; it is a matter of pure mathematics, first proved by the Cambridge philosopher, mathematician, and economist Frank Plumpton Ramsey (1903–1930).

Ramsey problems become a little easier to reason about when you transpose them into the language of graph theory. Consider a complete graph on six vertices (where every vertex has an edge connecting it with every other vertex, for a total of 15 edges):

complete graph on six vertices, with uncolored edges

The aim is to color all the edges of the graph red or blue in such a way that no three vertices are pairwise joined by edges of the same color (forming a “monochromatic clique”). The red edges might signify “mutually acquainted” and the blue ones “strangers.” As the diagrams below show, it’s easy to find a successful red-and-blue coloring of a complete graph on five vertices: In the pentagon at left, each vertex is connected to two other vertices by red edges, but those vertices are connected to each other by a blue edge. Thus there are no red triangles, and a similar analysis shows there are no blue ones either. The same scheme doesn’t work for a six-vertex graph, however. The attempt shown at right fails with two blue triangles. In fact, any two-coloring of this graph has monochromatic triangles. Ramsey’s 1928 proof of this assertion is based on the pigeonhole principle. These days, we also have the option of just checking all \(2^{15}\) possible colorings; a brute-force sketch of that check follows the diagrams below.

red and blue edge colorings of complete graphs on 5 and 6 vertices
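
Here is a brute-force sketch of that check (my own quick script, not optimized): encode each of the \(2^{15}\) colorings as the bits of an integer and test every triple of vertices for a one-color triangle.

```javascript
// The 15 edges of the complete graph K6 on vertices 0..5.
const edges = [];
for (let i = 0; i < 6; i++) {
  for (let j = i + 1; j < 6; j++) {
    edges.push([i, j]);
  }
}

// Does a coloring (an array of 0s and 1s, indexed like `edges`)
// contain three vertices whose three connecting edges share a color?
function hasMonoTriangle(color) {
  const edgeColor = {};
  edges.forEach(([i, j], k) => { edgeColor[i + "," + j] = color[k]; });
  for (let a = 0; a < 6; a++) {
    for (let b = a + 1; b < 6; b++) {
      for (let c = b + 1; c < 6; c++) {
        const ab = edgeColor[a + "," + b];
        if (ab === edgeColor[a + "," + c] && ab === edgeColor[b + "," + c]) {
          return true;
        }
      }
    }
  }
  return false;
}

// Count the colorings with no monochromatic triangle.
let triangleFree = 0;
for (let bits = 0; bits < (1 << 15); bits++) {
  const color = edges.map((_, k) => (bits >> k) & 1);
  if (!hasMonoTriangle(color)) triangleFree++;
}
console.log(triangleFree); // prints 0: every two-coloring of K6 has a monochromatic triangle
```

Running the same loop on the five-vertex graph (10 edges, \(2^{10}\) colorings) does find colorings with no monochromatic triangle; the pentagon at left is one of them.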

More formally, the Ramsey number \(\mathcal{R}(m, n)\) is the number of vertices in the smallest complete graph for which a two-coloring of the edges is certain to yield a red clique of \(m\) vertices or a blue clique of \(n\) vertices (or both). In applying this notion to the wine glass problem, I was asking: How many glasses do I need to have in my cupboard to ensure there are either eight all alike or eight all different?


At dinner that night we cheerfully clinked our eight dissimilar glasses. Maybe we even completed the full round of \((8 \times 7) / 2 = 28\) clinks. Later on, after everyone had gone home and all the glasses were washed, my thoughts returned to Ramsey theory. I was wondering, “What is the value of \(\mathcal{R}(8, 8)\), the smallest complete graph that is sure to have a monochromatic clique on at least eight vertices?” Lying awake in the middle of the night, I worked out a solution in terms of wine glasses.

Suppose you start with an empty cupboard and add glasses one at a time, aiming to assemble a collection in which no eight glasses are all alike and no eight glasses are all different. You could start by choosing seven different glasses—but no more than seven, lest you create an all-different set of eight. Every glass you subsequently add to the set must be the same as one of the original seven. You can keep going in this way until you have seven sets of seven identical glasses. When you add the next glass, however, you can’t avoid creating a set that either has eight glasses all alike or eight all different. Thus it appears that \(\mathcal{R}(8, 8) = 7^2 + 1 = 50\).
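
The counting argument fits in a line of code. Here is a sketch (the function name is mine): to avoid both \(k\) glasses all alike and \(k\) glasses all different, you can hold at most \(k - 1\) kinds of glass with \(k - 1\) copies of each.

```javascript
// Largest glass collection with no k alike and no k all different:
// (k - 1) kinds of glass, (k - 1) copies of each.
function maxGlassesAvoiding(k) {
  return (k - 1) * (k - 1);
}

console.log(maxGlassesAvoiding(8));     // 49: seven kinds, seven of each
console.log(maxGlassesAvoiding(8) + 1); // 50: one more glass forces eight alike or eight all different
```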

The moment I reached this conclusion, I knew something was dreadfully wrong. Computing Ramsey numbers is hard. After decades of mathematical and computational labor, exact \(\mathcal{R}(m, n)\) values are known for only nine cases, all with very small values of \(m\) and \(n\). Lying in the dark, without Google at my fingertips, I couldn’t remember the exact boundary between known and unknown, but I was pretty sure that \(\mathcal{R}(8, 8)\) lay on the wrong side. The idea that I might have just calculated this long-sought constant in my head was preposterous. And so, in a state of drowsy perplexity, I fell asleep.

Next morning, the mystery evaporated. Where did my reasoning go wrong? You might want to think a moment before revealing the answer.


Posted in mathematics, modern life | 9 Comments