Where’s My Petabyte Disk Drive?

Fourteen years ago I noted that disk drives were growing so fast I couldn’t fill them up. Between 1997 and 2002, storage capacity doubled every year, allowing me to replace a 3 gigabyte drive with a new 120 gigabyte model. I wrote:

Extrapolating the steep trend line of the past five years predicts a thousandfold increase in capacity by about 2012; in other words, today’s 120-gigabyte drive becomes a 120-terabyte unit.

Extending that same growth curve into 2016 would allow for another four doublings, putting us on the threshold of the petabyte disk drive (i.e., \(10^{15}\) bytes).

None of that has happened. The biggest drives in the consumer marketplace hold 2, 4, or 6 terabytes. A few 8- and 10-terabyte drives were recently introduced, but they are not yet widely available. In any case, 10 terabytes is only 1 percent of a petabyte. We have fallen way behind the growth curve.

The graph below extends an illustration that appeared in my 2002 article, recording growth in the areal density of disk storage, measured in bits per square inch:

Graph of hard disk areal density in bits per square inch from 1956 to 2016

The blue line shows historical data up to 2002 (courtesy of Edward Grochowski of the IBM Almaden Research Center). The bright green line represents what might have been, if the 1997–2002 trend had continued. The orange line shows the real status quo: We are three orders of magnitude short of the optimistic extrapolation. The growth rate has returned to the more sedate levels of the 1970s and 80s.


What caused the recent slowdown? I think it makes more sense to ask what caused the sudden surge in the 1990s and early 2000s, since that’s the kink in the long-term trend. The answers lie in the details of disk technology. More sensitive read heads developed in the 90s allowed information to be extracted reliably from smaller magnetic domains. Then there was a change in the geometry of the domains: the magnetic axis was oriented perpendicular to the surface of the disk rather than parallel to it, allowing more domains to be packed into the same surface area. As far as I know, there have been no comparable innovations since then, although a new writing technology is on the horizon. (It uses a laser to heat the domain, making it easier to change the direction of magnetization.)

As the pace of magnetic disk development slackens, an alternative storage medium is coming on strong. Flash memory, a semiconductor technology, has recently surpassed magnetic disk in areal density; Micron Technology reports a laboratory demonstration of 2.7 terabits per square inch. And Samsung has announced a flash-based solid-state drive (SSD) with 15 terabytes of capacity, larger than any mechanical disk drive now on the market. SSDs are still much more expensive than mechanical disks—by a factor of 5 or 10—but they offer higher speed and lower power consumption. They also offer the virtue of total silence, which I find truly golden.

Flash storage has replaced spinning disks in about a quarter of new laptops, as well as in all phones and tablets. It is also increasingly popular in servers (including the machine that hosts bit-player.org). Do disks have a future?

open-case photo of a 15-year-old 2.5-inch disk drive

In my sentimental moments, I’ll be sorry to see spinning disks go away. They are such jewel-like marvels of engineering and manufacturing prowess. And they are the last link in a long chain of mechanical contrivances connecting us with the early history of computing—through Turing’s bombe and Babbage’s brass gears all the way back to the Antikythera mechanism two millennia ago. From here on out, I suspect, most computers will have no moving parts.

Maybe in a decade or two the spinning disk will make a comeback, the way vinyl LPs and vacuum tube amplifiers have. “Data that comes off a mechanical disk has a subtle warmth and presence that no solid-state drive can match,” the cognoscenti will tell us.


“You can never be too rich or too thin,” someone said. And a computer can never be too fast. But the demand for data storage is not infinitely elastic. If a file cabinet holds everything in the world you might ever want to keep, with room to spare, there’s not much added utility in having 100 or 1,000 times as much space.

In 2002 I questioned whether ordinary computer users would ever fill a 1-terabyte drive. Specifically, I expressed doubts that my own files would ever reach the million megabyte mark. Several readers reassured me that data will always expand to fill the space available. I could only respond “We’ll see.” Fourteen years later, I now have the terabyte drive of my dreams, and it holds all the words, pictures, music, video, code, and whatnot I’ve accumulated in a lifetime of obsessive digital hoarding. The drive is about half full. Or half empty. So I guess the outcome is still murky. I can probably fill up the rest of that drive, if I live long enough. But I’m not clamoring for more space.

One factor that has surely slowed demand for data storage is the emergence of cloud computing and streaming services for music and movies. I didn’t see that coming back in 2002. If you choose to keep some of your documents on Amazon or Azure, you obviously reduce the need for local storage. Moreover, offloading data and software to the cloud can also reduce the overall demand for storage, and thus the global market for disks or SSDs. A typical movie might take up 3 gigabytes of disk space. If a million people load a copy of the same movie onto their own disks, that’s 3 petabytes. If instead they stream it from Netflix, then in principle a single copy of the file could serve everyone.

In practice, Netflix does not store just one copy of each movie in some giant central archive. They distribute rack-mounted storage units to hundreds of internet exchange points and internet service providers, bringing the data closer to the viewer; this is a strategy for balancing the cost of storage against the cost of communications bandwidth. The current generation of the Netflix Open Connect Appliance has 36 disk drives of 8 terabytes each, plus 6 SSDs that hold 1 terabyte each, for a total capacity of just under 300 terabytes. (Even larger units are coming soon.) In the Netflix distribution network, files are replicated hundreds or thousands of times, but the total demand for storage space is still far smaller than it would be with millions of copies of every movie.

A recent blog post by Eric Brewer, Google’s vice president for infrastructure, points out:

The rise of cloud-based storage means that most (spinning) hard disks will be deployed primarily as part of large storage services housed in data centers. Such services are already the fastest growing market for disks and will be the majority market in the near future. For example, for YouTube alone, users upload over 400 hours of video every minute, which at one gigabyte per hour requires more than one petabyte (1M GB) of new storage every day or about 100x the Library of Congress.

Thus Google will not have any trouble filling up petabyte drives. An accompanying white paper argues that as disks become a data center specialty item, they ought to be redesigned for this environment. There’s no compelling reason to stick with the present physical dimensions of 2½ or 3½ inches. Moreover, data-center disks have different engineering priorities and constraints. Google would like to see disks that maximize both storage capacity and input-output bandwidth, while minimizing cost; reliability of individual drives is less critical because data are distributed redundantly across thousands of disks.

The white paper continues:

An obvious question is why are we talking about spinning disks at all, rather than SSDs, which have higher [input-output operations per second] and are the “future” of storage. The root reason is that the cost per GB remains too high, and more importantly that the growth rates in capacity/$ between disks and SSDs are relatively close . . . , so that cost will not change enough in the coming decade.


If the spinning disk is remodeled to suit the needs and the economics of the data center, perhaps flash storage can become better adapted to the laptop and desktop environment. Most SSDs today are plug-compatible replacements for mechanical disk drives. They have the same physical form, they expect the same electrical connections, and they communicate with the host computer via the same protocols. They pretend to have a spinning disk inside, organized into tracks and sectors. The hardware might be used more efficiently if we were to do away with this charade.

Or maybe we’d be better off with a different charade: Instead of dressing up flash memory chips in the disguise of a disk drive, we could have them emulate random access memory. Why, after all, do we still distinguish between “memory” and “storage” in computer systems? Why do we have to open and save files, launch and shut down applications? Why can’t all of our documents and programs just be ever-present and always at the ready?

In the 1950s the distinction between memory and storage was obvious. Memory was the few kilobytes of magnetic cores wired directly to the CPU; storage was the rack full of magnetic tapes lined up along the wall on the far side of the room. Loading a program or a data file meant finding the right reel, mounting it on a drive, and threading the tape through the reader and onto the take-up reel. In the 1970s and 80s the memory/storage distinction began to blur a little. Disk storage made data and programs instantly available, and virtual memory offered the illusion that files larger than physical memory could be loaded all in one go. But it still wasn’t possible to treat an entire disk as if all the data were all present in memory. The processor’s address space wasn’t large enough. Early Intel chips, for example, used 20-bit addresses, and therefore could not deal with code or data segments larger than \(2^{20} \approx 10^6\) bytes.

We live in a different world now. A 64-bit processor can potentially address \(2^{64}\) bytes of memory, or 16 exabytes (i.e., 16,000 petabytes). Most existing processor chips are limited to 48-bit addresses, but this still gives direct access to 281 terabytes. Thus it would be technically feasible to map the entire content of even the largest disk drive onto the address space of main memory.

In current practice, reading from or writing to a location in main memory takes a single machine instruction. Say you have a spreadsheet open; the program can get the value of any cell with a load instruction, or change the value with a store instruction. If the spreadsheet file is stored on disk rather than loaded into memory, the process is quite different, involving not single instructions but calls to input-output routines in the operating system. First you have to open the file and read it as a one-dimensional stream of bytes, then parse that stream to recreate the two-dimensional structure of the spreadsheet; only then can you access the cell you care about. Saving the file reverses these steps: The two-dimensional array is serialized to form a linear stream of bytes, then written back to the disk. Some of this overhead is unavoidable, but the complex conversions between serialized files on disk and more versatile data structures in memory could be eliminated. A modern processor could address every byte of data—whether in memory or storage—as if it were all one flat array. Disk storage would no longer be a separate entity but just another level in the memory hierarchy, turning what we now call main memory into a new form of cache. From the user’s point of view, all programs would be running all the time, and all documents would always be open.
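To make the contrast concrete, here is a rough sketch in Node.js terms (my own illustration; the file name sheet.json and the JSON encoding are invented for the example, and nothing here is a real memory-mapping API):

var fs = require("fs");

// "Memory": the spreadsheet is a live two-dimensional array; reading a cell
// amounts to a single indexed load.
var sheet = [[1, 2, 3], [4, 5, 6]];
var cell = sheet[1][2];

// "Storage": the spreadsheet lives in a file. We must read a one-dimensional
// stream of bytes and parse it to rebuild the two-dimensional structure
// before we can touch the cell we care about.
var bytes = fs.readFileSync("sheet.json", "utf8");
var parsed = JSON.parse(bytes);
var sameCell = parsed[1][2];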

Is this notion of merging memory and storage an attractive prospect or a nightmare? I’m not sure. There are some huge potential problems. For safety and sanity we generally want to limit which programs can alter which documents. Those rules are enforced by the file system, and they would have to be re-engineered to work in the memory-mapped environment.

Perhaps more troubling is the cognitive readjustment required by such a change in architecture. Do we really want everything at our fingertips all the time? I find it comforting to think of stored files as static objects, lying dormant on a disk drive, out of harm’s way; open documents, subject to change at any instant, require a higher level of alertness. I’m not sure I’m ready for a more fluid and frenetic world where documents are laid aside but never put away. But I probably said the same thing 30 years ago when I first confronted a machine capable of running multiple programs at once (anyone remember MultiFinder?).

The dichotomy between temporary memory and permanent storage is certainly not something built into the human psyche. I’m reminded of this whenever I help a neophyte computer user. There’s always an incident like this:

“I was writing a letter last night, and this morning I can’t find it. It’s gone.”

“Did you save the file?”

“Save it? From what? It was right there on the screen when I turned the machine off.”


Finally the big questions: Will we ever get our petabyte drives? How long will it take? What sorts of stuff will we keep on them when the day finally comes?

The last time I tried to predict the future of mass storage, extrapolating from recent trends led me far astray. I don’t want to repeat that mistake, but the best I can suggest is a longer-term baseline. Over the past 50 years, the areal density of mass-storage media has increased by seven orders of magnitude, from about \(10^5\) bits per square inch to about \(10^{12}\). That works out to about seven years for a tenfold increase, on average. If that rate is an accurate predictor of future growth, we can expect to go from the present 10 terabytes to 1 petabyte in about 15 years. But I would put big error bars around that number.

I’m even less sure about how those storage units will be used, if in fact they do materialize. In 2002 my skepticism about filling up a terabyte of personal storage was based on the limited bandwidth of the human sensory system. If the documents stored on your disk are ultimately intended for your own consumption, there’s no point in keeping more text than you can possibly read in a lifetime, or more music than you can listen to, or more pictures than you can look at. I’m now willing to concede that a terabyte of information may not be beyond human capacity to absorb. But a petabyte? Surely no one can read a billion books or watch a million hours of movies.

This argument still seems sound to me, in the sense that the conclusion follows if the premise is correct. But I’m no longer so sure about the premise. Just because it’s my computer doesn’t mean that all the information stored there has to be meant for my eyes and ears. Maybe the computer wants to collect some data for its own purposes. Maybe it’s studying my habits or learning to recognize my voice. Maybe it’s gathering statistics from the refrigerator and washing machine. Maybe it’s playing go, or gossiping over some secret channel with the Debian machine across the alley.

We’ll see.

Posted in computing | 33 Comments

The Bug That Ate Thursday

As I was finishing up the previous post here at bit-player.org, I noticed something off-kilter about the appearance of a few mathematical expressions. Here’s an enlarged example:

Typeset expression a + b- c, with too little space between the b and the minus sign.

Notice the spacing around the minus sign. It’s too close to the argument on its left, whereas the plus sign lies right in the middle. The proper rendering of this expression looks like this:

Typeset expression a + b - c, with correct, symmetrical spacing around both plus and minus signs.

Closely comparing the two images, I realized that spacing isn’t the only issue. In the malformed version the minus sign is also a little too long, too low, and too skinny.

For typesetting mathematics, I rely on MathJax, an amazing JavaScript program created by Davide Cervone of Union College. It works like magic: I write in standard TeX (math mode only), and the typeset output appears beautifully formatted in your web browser, with no need to bother about installing fonts or downloading plugins. For the past few years MathJax has been totally reliable, so this spacing glitch came as an annoying surprise.

The notes that follow record both what I did and what I thought as I tried to track down the cause of this problem. If anyone else ever bumps into the bug, the existence of this document might save them some angst and agita. Besides, everybody likes a detective story—even if the detective turns out to be more bumbling than brilliant. (If you just want to know how it comes out, skip to the end.)


Hypothesis: My first thought on seeing the wayward minus sign was that I must have typed something wrong. The TeX source code for the expression shown above is so simple (just a + b - c) that there’s not much room for error, but accidents happen. Maybe one of those space characters is not an ordinary word space (ASCII 0x20) but a non-breaking space (HTML &nbsp;). Or maybe the hyphen that represents a minus sign is not really a hyphen (ASCII 0x2D) but an en-dash (HTML &ndash; or &#8211;) or a discretionary hyphen (HTML &shy; or &#173;). Experiment 1: Try typing the expression again, very carefully. Result: No change. Experiment 2: Copy the original source text into an editor that shows raw hexadecimal byte values. Result: Nothing exotic. Experiment 3: Copy the source text into a different TeX system (Pierre-Yves Chatelier’s LaTeXiT). Result: Typesets correctly. Conclusion: Probably not a typo.

Question: Could it be a browser bug? Tests: Try it in Chrome, Firefox, Safari, Opera. Results: Same appearance in all of them. Conclusion: It’s not the browser.

Internet interlude: The most important debugging tools today are Google and Stack Overflow. Most likely the answer is already out there. But searches for “minus sign spacing MathJax” and “minus sign spacing TeX” turn up nothing useful. The most promising leads take me to discussions of the binary subtraction operator \(a - b\) vs. the unary negation operator \(-b\). That’s not the issue here, so I am thrown back on my own resources.

Question: Is it just my machine? Test: Try opening the same page on another laptop. Result: Same appearance. However, these two computers are very similar. In particular, they have the same fonts installed. Test: Try a third machine, with different fonts. Result: No change.

Question: Is the problem confined to the one article I’m currently writing, or does it show up in earlier blog posts as well? Research: Page back through the bit-player archives. I find several more instances of the bug. Followup question: Was the minus-sign spacing in those earlier articles already botched when I wrote and published them? Or were they correct then, and the bug was introduced by some later change in the software environment?

Clue: In the course of rummaging through old blog posts, I discover that the spacing anomaly appears only in “inline” math expressions (those that appear within the flow of a paragraph), not in “display” equations (which are set off on a line of their own). The two rendering modes are invoked by surrounding an expression with different sets of delimiters: \( ... \) for inline and \[ ... \] for display. By merely toggling between round and square brackets, I find I can turn the bug on and off. This discovery leads me to suppose there really might be something awry within MathJax. If it formats an expression correctly in one mode, why does it fail on the same input text in another mode?

Investigation: Using browser developer tools, I examine the HTML markup that MathJax writes into the document. In display mode (where the spacing is correct), here’s the coding for the minus sign:

<span class="mo" id="MathJax-Span-15"
style="font-family: STIXGeneral-Regular;
padding-left: 0.228em;">-</span>

The padding-left declaration is the crucial bit of styling that sets the spacing on the left side of the minus operator. Here’s the corresponding markup for the minus sign in the inline version of the same expression:

<span class="mo" id="MathJax-Span-22"
style="font-family: STIXGeneral-Regular;">–</span>

The padding-left statement is absent. This is the proximate cause of the incorrect spacing. But why does MathJax supply the appropriate spacing in display mode but omit it in inline mode? That’s the puzzle.

Inquiry: I turn to the MathJax source-code repository on GitHub, and browse the issues database. Nothing relevant turns up. Likewise the MathJax user group forum. Baffling. If the problem really is a MathJax bug, someone would surely have reported it, unless it’s quite new. I consider opening a new issue, but decide to wait until I know more.

Question: The bug seems to be everywhere on bit-player.org, but what about the rest of the web? On MathOverflow (which I know uses MathJax) it doesn’t take long to find an inline equation that includes a minus sign. It is formatted perfectly. David Mumford’s blog is another MathJax site; I poke around there and find another inline equation with a correctly spaced minus sign. Uh oh. The finger of blame is pointing back toward me and away from MathJax.

Question: Am I using the same version of MathJax as those other sites, and the same configuration file? Not exactly, but when I try several other versions (including older ones, in case this is a recently introduced bug), there’s no change.

Pause for reflection: MathJax seems to be behaving differently on bit-player than it does on other sites. What could account for that difference? There are dozens of possible factors, but I have a leading candidate: bit-player is built on the WordPress blogging platform, and the other sites I’m looking at are not. I have no idea how the interaction of WordPress and MathJax could lead to this particular outcome, but they are both complicated software systems, with lots going on behind the curtains.

Experiment: I can test the WordPress hypothesis by setting up a web page that has everything in common with the bit-player site—the same server hardware and software, and the same MathJax processor—but that lives outside the WordPress system. I do exactly that, and find that minus signs are correctly formatted in both display and inline equations. Conclusion: It sure looks like WordPress is messing with my TeX!

Revelation: Throughout this diagnostic adventure, I’ve been relying heavily on the developer tools in the Chrome and Firefox browsers. These tools provide a peek into a page’s HTML encoding as it is displayed by the browser, after MathJax and any other JavaScript programs have worked their transformations on the source text. Now, for sheer lack of any better ideas, I decide to try the View Source command, which shows the HTML as received from the server, before any JavaScript programs run, and in particular before MathJax has converted TeX source code into typeset mathematical output. Instantly, the root of the problem is staring me in the face. The display-mode TeX is exactly as I wrote it: \[a + b - c\]. But the inline-mode markup is this: \(a + b &#8211; c\). The HTML entity &#8211; specifies an en-dash. Where did that come from? Actually, I’m pretty sure I know where; what I don’t know is why. WordPress has built-in functions to “prettify” text, converting typewriter quote marks ('', "") to typographer’s quotes (‘ ’, “ ”). More to the point, the program also replaces a double hyphen (--) with an en-dash (–) and a triple hyphen (---) with an em-dash (—). Although I haven’t been typing double hyphens in the math expressions, I still suspect that the WordPress character substitution process has something to do with those troublesome en-dashes.

Confirmation: Before investing more effort in this hypothesis, I try to make sure I’m on the right track. Typing my test expression with an en-dash instead of a hyphen produces output identical to the buggy version, in display mode as well as inline mode. Performing the same experiment in LaTeXiT yields a very similar result.

The culprit exposed: Searching for #8211 in the WordPress source code takes me to the file formatting.php, where I find a function called wptexturize. PHP is not my favorite programming language, but it’s easy enough to guess what these lines are about (I have simplified and abbreviated the statements for clarity):

$static_characters = array( '---', ' -- ', '--', ' - ' );
$static_replacements = array( $em_dash, ' ' . $em_dash . ' ',
    $en_dash, ' ' . $en_dash . ' ' );

Note the fourth element of the $static_characters array: a hyphen surrounded by spaces. The corresponding element of $static_replacements is an en-dash surrounded by spaces. I call that a smoking gun. MathJax, like other TeX processors, expects an ASCII hyphen as a minus sign; if you feed it an en-dash, it’s not going to recognize it as a mathematical operator. (When Knuth was developing TeX, circa 1980, no standard character encoding existed beyond the 96 codes of plain ASCII.)
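Here is a toy JavaScript imitation of that substitution (the real logic lives in WordPress’s PHP, and wptexturize handles many more cases than these four); it shows why a + b - c acquires an en-dash while a+b-c comes through unscathed:

function texturizeDashes(text) {
  return text
    .replace(/---/g, "\u2014")        // triple hyphen becomes an em-dash
    .replace(/ -- /g, " \u2014 ")     // spaced double hyphen becomes an em-dash
    .replace(/--/g, "\u2013")         // double hyphen becomes an en-dash
    .replace(/ - /g, " \u2013 ");     // spaced single hyphen becomes an en-dash
}

console.log(texturizeDashes("\\(a + b - c\\)"));   // prints \(a + b – c\), broken TeX
console.log(texturizeDashes("\\(a+b-c\\)"));       // prints \(a+b-c\), unchanged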

The fix: It could be as simple as writing a+b-c instead of a + b - c! When I make that minor change to the text, it works like a charm. Why didn’t I think of trying that sooner? I guess because TeX in math mode promises to ignore whitespace in the source code, and it never occurred to me that WordPress doesn’t have to honor that promise. Thus I can solve the immediate problem just by removing spaces around minus signs. As a permanent remedy, however, changing my writing habits is not appealing. Nor is sifting through all my earlier posts to remove those spaces. The fact is, I don’t want hyphens to magically become en-dashes while I’m not looking. It may be a feature for some people, but for me it’s a bug.

What I did. The first commandment of WordPress development is “Thou shalt not modify the core files.” But in that respect I’m already a sinner, and unrepentant. Yeah, I edited those two arrays in the formatting.php file, and it felt good.

Lessons learned. In hindsight, I see that I missed several opportunities to root out the problem more quickly. Next time I’ll remember View Source. And if I had done a better job of early-stage analysis, I would have been able to find help more efficiently. I am not the only one to confront this glitch, but I needed better search terms to follow the breadcrumbs of those who went before. Also, along the way I misinterpreted some important clues. When I discovered that the bug affects only inline mode and not display mode, I was quite sure that fact implicated MathJax, but I was wrong. (As it happens, I still don’t really understand why display mode is immune to the bug. Why is the hyphen converted to an en-dash when I enclose it in slashed round brackets, but not when it appears in slashed square brackets? Evidently the wptexturizing treatment is skipped in the latter case, but I lack the stamina to slog through all that PHP to figure out why.)

The big picture: I’m not mad at WordPress. I still believe it is a wonder of the age, making millions of people into instant, pushbutton publishers. According to some reports, it powers a quarter of all web sites. In this respect it may well be the most important application-layer software for fulfilling the original promise of the World Wide Web: allowing all of us to be contributors and creators rather than merely consumers of mass media. But there’s a cost: Keeping WordPress easy on the outside seems to require a dense thicket of thorns and briers on the inside. As the years go by I find I spend too much time fighting against its automation, which is a joyless task. I would prefer something simpler. I have Jekyll envy.

Yet my main takeaway after this episode is gratitude for open-source software. If MathJax and WordPress had been sealed, blackbox applications, I would have been helpless to help myself, unable to do anything about the problem beyond whining and pleading.

Posted in computing | 5 Comments

Number Factoids

The web sites numbersaplenty.com and numberworld.info dish up a smorgasbord of facts about every natural number from 1 to 999,999,999,999,999. Type in your favorite positive integer (provided it’s less than \(10^{15}\)) and you’ll get a list of prime factors, a list of divisors, the number’s representation in various bases, its square root, and lots more.

I first stumbled upon these sites (and several others like them) about a year ago. I revisited them recently while putting together the Carnival of Mathematics. I was looking for something cute to say about the new calendar year and was rewarded with the discovery that 2016 is a triangular number: 2016 bowling pins can be arranged in an equilateral triangle with 63 pins per side.

This incident set me to thinking: What does it take to build a web site like these? Clearly, the sites do not have 999,999,999,999,999 HTML files sitting on a disk drive waiting to be served up when a visitor arrives. Everything must be computed on the fly, in response to a query. And it all has to be done in milliseconds. The question that particularly intrigued me was how the programs recognize that a given number has certain properties or is a member of a certain class—a triangular number, a square, a Fibonacci, a factorial, and so on.

I thought the best way to satisfy my curiosity would be to build a toy number site of my own. Here it is:

Number Factoids (interactive calculator): enter a positive integer less than \(10^{15}\), and the panel lists its prime factors, answers yes or no for nine properties (prime, square-free, square-root smooth, square, triangular, factorial, Fibonacci, Catalan, Somos-4), and reports the elapsed time for the computation.

This one works a little differently from the number sites I’ve found on the web. The computation is done not on my server but on your computer. When you type a number into the input field above, a JavaScript program running in your web browser computes the prime factors of the number and checks off various other properties. (The source code for this program is available on GitHub, and there’s also a standalone version of the Number Factoids calculator.)

Because the computation is being done by your computer, the performance depends on what hardware and software you bring to the task. Especially important is the JavaScript engine in your browser. As a benchmark, you might try entering the number 999,999,999,999,989, which is the largest prime less than \(10^{15}\). The elapsed time for the computation will be shown at the bottom of the panel. On my laptop, current versions of Chrome, Firefox, Safari, and Opera give running times in the range of 150 to 200 milliseconds. (But an antique iPad takes almost 8 seconds.)

Most of that time is spent in factoring the integer (or attempting to factor it in the case of a prime). Factoring is reputed to be a hard problem, and so you might suppose it would make this whole project infeasible. But the factoring computation bogs down only with really big numbers—and a quadrillion just isn’t that big anymore. Even a crude trial-division algorithm can do the job. In the worst case we need to try dividing by all the odd numbers less than \(\sqrt{10^{15}}\). That means the inner loop runs about 16 million times—a mere blink of the eye.
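Here is a minimal trial-division factorizer of the sort described (my own sketch; the code in the GitHub repository may differ in its details). It strips out factors of 2 and then tries odd divisors up to the square root of whatever remains:

function factor(N) {
  var factors = [], d = 3;
  while (N % 2 === 0) {          // pull out all the factors of 2
    factors.push(2);
    N = N / 2;
  }
  while (d * d <= N) {           // odd trial divisors up to sqrt(N)
    if (N % d === 0) {
      factors.push(d);
      N = N / d;
    } else {
      d += 2;
    }
  }
  if (N > 1) {
    factors.push(N);             // whatever is left is itself prime
  }
  return factors;
}

console.log(factor(2016));       // [2, 2, 2, 2, 2, 3, 3, 7]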

Once we have the list of prime factors for a number N, other properties come along almost for free. Primality: We can tell whether or not N is prime just by looking at the length of the factor list. Square-freeness: N is square-free if no prime appears more than once in the list. Smoothness: N is said to be square-root smooth if the largest prime factor is no greater than \(\sqrt{N}\). (For example, \(12 = 2 \times 2 \times 3\) is square-root smooth, but \(20 = 2 \times 2 \times 5\) is not.)
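Assuming the factor list is sorted in ascending order (as a trial-division routine naturally produces it), the three tests might look something like this; this is my paraphrase, not necessarily the code running in the calculator:

function isPrime(factors) {
  return factors.length === 1;                  // a prime is its own sole prime factor
}

function isSquareFree(factors) {
  for (var i = 1; i < factors.length; i++) {
    if (factors[i] === factors[i - 1]) {        // a repeated prime factor
      return false;
    }
  }
  return true;
}

function isSqrtSmooth(N, factors) {
  var largest = factors[factors.length - 1];    // last entry is the largest prime factor
  return largest * largest <= N;
}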

The factor list could also be used to detect square numbers. N is a perfect square if every prime factor appears in the list an even number of times. But there are lots of other ways to detect squares that don’t require factorization. Indeed, running on a machine that has a built-in square-rooter, the JavaScript code for recognizing perfect squares can be as simple as this:

function isSquare(N) {
  var root = Math.floor(Math.sqrt(N));
  return root * root === N;
}

If you want to test this code in the Number Factoids calculator, you might start with 999,999,961,946,176, which is the largest perfect square less than \(10^{15}\).

Note that the isSquare function is a predicate: The return statement in the last line yields a boolean value, either true or false. The program might well be more useful if it could report not only that 121 is a square but also what it’s the square of. But the Number Factoids program is just a proof of concept, so I have stuck to yes-or-no questions.

Metafactoids: The Factoids calculator tests nine boolean properties. No number can possess all of these properties, but 1 gets seven green checkmarks. Can any other number equal this score? The sequence of numbers that exhibit none of the nine properties begins 20, 44, 52, 68, 76, 88, 92,… Are they all even numbers? (Hover for answer.)


What about detecting triangular numbers? N is triangular if it is the sum of all the integers from 1 through k for some integer k. For example, \(2016 = 1 + 2 + 3 + \dots + 63\). Given k, it’s easy enough to find the kth triangular number, but we want to work in the opposite direction: Given N, we want to find out if there is a corresponding k such that \(1 + 2 + 3 + \cdots + k = N\).

Young Carl Friedrich Gauss knew a shortcut for calculating the sums of consecutive integers: \(1 + 2 + 3 + \cdots + k = N = k\,(k+1)\,/\,2\). We need to invert this formula, solving for the value of k that yields a specified N (if there is one). Rearranging the equation gives \(k^2 + k - 2N = 0\), and then we can crank up the trusty old quadratic formula to get this solution:

\[k = \frac{-1 \pm \sqrt{1 + 8N}}{2}.\]

Thus k is an integer—and N is triangular—if and only if \(8N + 1\) is an odd perfect square. (Let’s ignore the negative root, and note that if \(8N + 1\) is a square at all, it will be an odd one.) Detecting perfect squares is a problem we’ve already solved, so the predicate for detecting triangular numbers takes this simple form:

function isTriangular(N) {
  return isSquare(8 * N + 1);
}

Try testing it with 999,999,997,764,120, the largest triangular number less than \(10^{15}\).


Factorials are the multiplicative analogues of triangular numbers: If N is the kth factorial, then \(N = k! = 1 \times 2 \times 3 \times \cdots \times k\). Is there a multiplicative trick that generates factorials in the same way that Gauss’s shortcut generates triangulars? Well, there’s Stirling’s approximation:

\[k! \approx \sqrt{2 \pi k} \left( \frac{k}{e} \right)^k.\]

We might try to invert this formula to get a function of \(k!\) whose value is \(k\), but I don’t believe this is a promising avenue to explore. The reason is that Stirling’s formula is only an approximation. It predicts, for example, that 5! is equal to 118.02, whereas the true value is 120. Thus taking the output of the inverse function and rounding to the nearest integer would produce wrong answers. We could add correction terms to get a closer approximation—but surely there’s a better way.

One approach is to work with the gamma (\(\Gamma\)) function, which extends the concept of a factorial from the integers to the real and complex numbers; if \(n\) is an integer, then \(\Gamma(n+1) = n!\), but the \(\Gamma\) function also interpolates between the factorial values. A recent paper by Mitsuru Uchiyama gives an explicit, analytic, inverse of the gamma function, but I understand only fragments of the mathematics, and I don’t know how to implement it algorithmically.

Fifteen years ago David W. Cantrell came up with another inverse of the gamma function, although this one is only approximate. Cantrell’s version is much less intimidating, and it is based on one of my favorite mathematical gadgets, the Lambert W function. A Mathematica implementation of Cantrell’s idea works as advertised—when it is given the kth factorial number as input, it returns a real number very close to \(k+1\). However, the approximation is not good enough to distinguish true factorials from nearby numbers. Besides, JavaScript doesn’t come with a built-in Lambert W function, and I am loath to try writing my own.

On the whole, it seems better to retreat from all this higher mathematics and go back to the definition of the factorial as a product of successive integers. Then we can reliably detect factorials with a simple linear search, expressed in the following JavaScript function:

function isFactorial(N) {
  var d = 2, q = N, r = 0;
  while (q > 1 && r === 0) {
    r = q % d;
    q = q / d;
    d += 1;
  }
  return (q === 1 && r === 0);
}

A factorial is built by repeated multiplication, so this algorithm takes it apart by repeated division. Initially, we set \(q = N\) and \(d = 2\). Then we replace \(q\) by \(q / d\), and \(d\) by \(d + 1\), while keeping track of the remainder \(r = q \bmod d\). If we can continue dividing until \(q\) is equal to 1, and the remainder of every division is 0, then N is a factorial. This is not a closed-form solution; it requires a loop. On the other hand, the largest factorial less than \(10^{15}\) is 17! = 355,687,428,096,000, so the program won’t be going around the loop more than 17 times.
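A quick check, using the value just mentioned (this snippet is mine, not part of the Factoids program):

console.log(isFactorial(355687428096000));   // 17!, returns true
console.log(isFactorial(355687428096001));   // 17! + 1, returns false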


The Fibonacci numbers are dear to the hearts of number nuts everywhere (including me). The sequence is defined by the recursion \(F_0 = 0, F_1 = 1, F_k = F_{k-1} + F_{k-2}\). How best to recognize these numbers? There is a remarkable closed-form formula, named for the French mathematician J. P. M. Binet:

\[F_k = \frac{1}{\sqrt{5}} \left[ \left(\frac{1 + \sqrt{5}}{2}\right)^k - \left(\frac{1 - \sqrt{5}}{2}\right)^k\right]\]

I call it remarkable because, unlike Stirling’s approximation for factorials, this is an exact formula; if you give it an integer k and an exact value of \(\sqrt{5}\), it returns the kth Fibonacci number as an integer.

One afternoon last week I engaged in a strenuous wrestling match with Binet’s formula, trying to turn it inside out and thereby create a function of N that returns k if and only if \(N\) is the kth Fibonacci number. With some help from Mathematica I got as far as the following expression, which gives the right answer some of the time:

\[k(N) = \frac{\log \frac{1}{2} \left( \sqrt{5 N^2 + 4} - \sqrt{5} N \right)}{\log \frac{1}{2} \left( \sqrt{5} - 1 \right)}\]

Plugging in a few values of N yields the following table of values for \(k(N)\):

N k(N)
1 2.000000000000001
2 3.209573979673092
3 4.0000000000000036
4 4.578618254581733
5 5.03325648737724
6 5.407157747499656
7 5.724476891770392
8 6.000000000000018
9 6.243411773788614
10 6.4613916654135615
11 6.658737112471047
12 6.8390081849422675
13 7.00491857188792
14 7.158583717787527
15 7.3016843734535035
16 7.435577905992959
17 7.561376165404197
18 7.680001269357004
19 7.792226410280063
20 7.8987062604216005
21 7.999999999999939

In each of the green rows, the function correctly recognizes a Fibonacci number \(F_k\), returning the value of k as an integer. (Or almost an integer; the value would be exact if we could calculate exact square roots and logarithms.) Specifically, 1 is the second Fibonacci number (though also the first), 3 is the fourth, 8 is the sixth, and 21 is the eighth Fibonacci number. So far so good. But there’s something weird going on with the other Fibonacci numbers in the table, namely those with odd-numbered indices (red rows). For N = 2, 5, and 13, the inverse Binet function returns numbers that are close to the correct k values (3, 5, 7), but not quite close enough. What’s that about?

If I had persisted in my wrestling match, would I have ultimately prevailed? I’ll never know, because in this era of Google and MathOverflow and StackExchange, a spoiler lurks around every cybercorner. Before I could make any further progress, I stumbled upon pointers to the work of Ira Gessel of Brandeis, who neatly settled the matter of recognizing Fibonacci numbers more than 40 years ago, when he was an undergraduate at Harvard. Gessel showed that N is a Fibonacci number iff either \(5N^2 + 4\) or \(5N^2 - 4\) is a perfect square. Gessel introduced this short and sweet criterion and proved its correctness in a problem published in The Fibonacci Quarterly (1972, Vol. 10, No. 6, pp. 417–419). Phillip James, in a 2009 paper, presents the proof in a way I find somewhat easier to follow.

It is not a coincidence that the expression \(5N^2 + 4\) appears in both Gessel’s formula and in my attempt to construct an inverse Binet function. Furthermore, substituting Gessel’s \(5N^2 - 4\) into the inverse function (with a few other sign adjustments) yields correct results for the odd-indexed Fibonacci numbers. Implementing the Gessel test in JavaScript is a cinch:

function gessel(N) {
  var s = 5 * N * N;
  return isSquare(s + 4) || isSquare(s - 4);
}

So that takes care of the Fibonacci numbers, right? Alas, no. Although Gessel’s criterion is mathematically unassailable, it fails computationally. The problem arises from the squaring of \(N\). If \(N\) is in the neighborhood of \(10^{15}\), then \(N^2\) is near \(10^{30}\), which is roughly \(2^{100}\). JavaScript does all of its arithmetic with 64-bit double-precision floating-point numbers, which allow 53 bits for representing the mantissa, or significand. With values above \(2^{53}\), not all integers can be represented exactly—there are gaps between them. In this range the mapping between \(N\) and \(N^2\) is no longer a bijection (one-to-one in both directions), and the gessel procedure returns many errors.
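A quick way to see the problem in a JavaScript console (using the prime mentioned earlier as a benchmark value):

var N = 999999999999989;                  // the largest prime below 10^15
console.log(Number.MAX_SAFE_INTEGER);     // 9007199254740991, i.e., 2^53 - 1
console.log(N * N);                       // a rounded value near 10^30; the exact
                                          // square ends in ...000121, but those
                                          // low-order digits are lost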

I had one more hope of coming up with a closed-form Fibonacci recognizer. In the Binet formula, the term \(((1 - \sqrt{5})\,/\,2)^k\) becomes very small in magnitude as k grows large. By neglecting that term we get a simpler formula that still yields a good approximation to Fibonacci numbers:

\[F_k \approx \frac{1}{\sqrt{5}} \left(\frac{1 + \sqrt{5}}{2}\right)^k.\]

For any integer k, the value returned by that expression is within 0.5 of the Fibonacci number \(F_k\), and so simple rounding is guaranteed to yield the correct answer. But the inverse function is not so well-behaved. Although it has no \(N^2\) term that would overflow the 64-bit format, it relies on square-root and logarithm operations whose limited precision can still introduce errors.

So how does the Factoids calculator detect Fibonacci numbers? The old-fashioned way. It starts with 0 and 1 and iterates through the sequence of additions, stopping as soon as N is reached or exceeded:

function isFibo(N) {
  var a = 0, b = 1, tmp;
  while (a < N) {
    tmp = a;
    a = b;
    b = tmp + b;
  }
  return a === N;
}

As with the factorials, this is not a closed-form solution; the amount of computation grows with N, though only in proportion to \(\log N\), because the Fibonacci numbers grow exponentially. There are tricks for speeding it up still further; Edsger Dijkstra described one such approach. But optimization hardly seems worth the bother. For N < \(10^{15}\), the while loop cannot be executed more than 72 times.


I’ve included two more sequences in the Factoids calculator, just because I’m especially fond of them. The Catalan numbers (1, 1, 2, 5, 14, 42, 132, 429…) are famously useful for counting all sorts of things—ways of triangulating a polygon, paths through the Manhattan street grid, sequences of properly nested parentheses. The usual definition is in terms of binomial coefficients or factorials:

\[C_k = \frac{1}{k+1} \binom{2k}{k} = \frac{(2k)!}{(k+1)! k!}\]

But there is also a recurrence relation:

\[C_0 = 1,\qquad C_k = \frac{4k-2}{k+1} C_{k-1}\]

The recognizer function in the Factoids Calculator does a bottom-up iteration based on the recurrence relation:

function isCatalan(N) {
  var c = 1, k = 0;
  while (c < N) {
    k += 1;
    c = c * (4 * k - 2) / (k + 1);
  }
  return c === N;
}

The final sequence in the calculator is one of several discovered by Michael Somos around 1980. It is defined by this recurrence:

\[S_0 = S_1 = S_2 = S_3 = 1,\qquad S_k = \frac{S_{k-1} S_{k-3} + S_{k-2}^2}{S_{k-4}}\]

The surprise here is that the elements of the sequence are all integers, beginning 1, 1, 1, 1, 2, 3, 7, 23, 59, 314, 1529, 8209. In writing a recognizer for these numbers I have made no attempt to be clever; I simply generate the sequence from the beginning and check for equality with the given N:

function isSomos(N) {
  var next = 1, S = [1, 1, 1, 1];
  while (next < N) {
    next = (S[3] * S[1] + S[2] * S[2]) / S[0];
    S.shift();
    S.push(next);
  }
  return next === N;
}

But there’s a problem with this code. Do you see it? As N approaches the \(10^{15}\) barrier, the subexpression S[3] * S[1] + S[2] * S[2] will surely break through that barrier. In fact, the version of the procedure shown above fails for 32,606,721,084,786, the largest Somos-4 number below \(10^{15}\). For the version of the program that’s actually running in the Factoids calculator I have repaired this flaw by rearranging the sequence of operations. (For details see the GitHub repository.)


The factorial, Fibonacci, Catalan, and Somos sequences all exhibit exponential growth, which means they are sprinkled very sparsely along the number line. That’s why a simple linear search algorithm—which just keeps going until it reaches or exceeds the target—can be so effective. For the same reason, it would be easy to precompute all of these numbers up to \(10^{15}\) and have the JavaScript program do a table lookup. I have ruled out this strategy for a simple reason: It’s no fun. It’s not sporting. I want to do real computing, not just consult a table.

Other number series, such as the square and triangular numbers, are more densely distributed. There are more than 30 million square and triangular numbers up to \(10^{15}\); downloading a table of that size would take longer than recomputing quite a few squares and triangulars. And then there are the primes—all 29,844,570,422,669 of them.

What would happen if we broke out of the 64-bit sandbox and offered to supply factoids about larger numbers? A next step might be a Megafactoids calculator that doubles the digit count, accepting integers up to \(10^{30}\). Computations in this system would require multiple-precision arithmetic, capable of handling numbers with at least 128 bits. Some programming languages offer built-in support for numbers of arbitrary size, and libraries can add that capability to other languages, including JavaScript. Although there is a substantial speed penalty for extended precision, most of the algorithms running in the Factoids program would still give correct results in acceptable time. In particular, there would be no problem recognizing squares and triangulars, factorials, Fibonaccis, Catalans and Somos-4 numbers.

The one real problem area in a 30-digit factoid calculator is factoring. Trial division would be useless; instead of milliseconds, the worst-case running time would be months or years. However, much stronger factoring algorithms have been devised in the past 40 years. The algorithm that would be most suitable for this purpose is called the elliptic curve method, invented by Hendrik Lenstra in the 1980s. An implementation of this method built into PARI/GP, which in turn is built into Sage, can factor 30-digit numbers in about 20 milliseconds. A JavaScript implementation of the elliptic curve method seems quite doable. Whether it’s worth doing is another question. The world is not exactly clamoring for more and better factoids.

Addendum 2016-02-05: I’ve just learned (via Hacker News) that I may need to add a few more recognition predicates: detectors for “artisanal integers” in flavors hand-crafted in the Mission District, Brooklyn, and London.


Posted in computing, mathematics | 5 Comments

Carnival of Mathematics #130

Welcome to the 130th monthly Carnival of Mathematics, a roving blog organized by The Aperiodical. The previous edition was hosted by GanitCharcha. This month’s collection includes material appearing on the Web between December 10 and January 9 (with a bit of spillover on both ends).

Tradition obliges me to say something interesting about the number 130. I could mention that 130 = 5 × 5 × 5 + 5. Is that interesting? Meh, eh? The Wikipedia page for 130 reports the following curious observation:

\(130\) is the only integer that is the sum of the squares of its first four divisors, including \(1\): \(1^2 + 2^2 + 5^2 + 10^2 = 130\).

What’s interesting here is not the fact that 130 has this property. The provocative part is the statement that 130 is the only such integer. How can one know this? It would be easy enough to check that no smaller number qualifies, but how do you rule out the possibility that somewhere far out on the number line there lurks another example? Clearly, what’s needed is a proof, but Wikipedia offers none. I spent some time trying to come up with a proof myself but failed to put the pieces together. Maybe you’ll do better. If not, Robert Munafo explains all, and traces the question to the Russian website dxdy.ru.
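Checking that no small number qualifies is indeed easy. Here is a little brute-force search of my own; it relies on the observation that if any of a number’s four smallest divisors exceeded \(\sqrt{n}\), its square alone would exceed \(n\), so only divisors up to \(\sqrt{n}\) need be examined:

// Look for numbers equal to the sum of the squares of their four smallest divisors.
for (var n = 1; n <= 100000; n++) {
  var divs = [];
  for (var d = 1; d * d <= n && divs.length < 4; d++) {
    if (n % d === 0) {
      divs.push(d);
    }
  }
  if (divs.length === 4 &&
      divs[0] * divs[0] + divs[1] * divs[1] +
      divs[2] * divs[2] + divs[3] * divs[3] === n) {
    console.log(n);   // if the Wikipedia claim is right, 130 is the only number printed
  }
}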


Enough of 130. I’d like to celebrate another number of the season: 2016. As midnight approached on December 31, Twitter brought this news:

11111011111 + 1 = 11111100000. Can't wait for 11111111111 (2047)

In other words, the year we are now beginning is equal to \(2^{11} - 2^{5}\), and in another 32 years we’ll be celebrating the turn of a binary millennium, marking the passage of two kibiyears. (Year-zero deniers are welcome to wait an extra year.)

Fernando Juan, with an animated GIF, pointed out another nonobvious fact: \(2016 = 3^3 + 4^3 + 5^3 + 6^3 + 7^3 + 8^3 + 9^3\).

Even more noteworthy, May-Li Khoe and Federico Ardila gave graphic proof that 2016 is a triangular number:

2016 dots arranged to form an equilateral triangle

Specifically, 2016 is the sum of the integers \(1 + 2 + 3 + \cdots + 63\), making it the \(63\)rd element of the sequence \(1, 3, 6, 10, 15, \ldots\). You could go bowling with \(2016\) pins!

According to a well-known anecdote, Carl Friedrich Gauss was a schoolboy when he discovered that the nth triangular number is equal to \(n(n + 1) / 2\), which is also the value of the binomial coefficient \(\binom{n + 1}{2}\). Hence 2016 is the number of ways of choosing two items at a time from a collection of 64 objects. A tweet by John D. Cook offers this interpretation: “2016 = The number of ways to place two pawns on a chessboard.”

Patrick Honner expressed the same idea another way: 2016 is the number of edges in a complete graph with 64 vertices.

The complete graph on 64 vertices, with 2016 edges.

If you attended a New Year’s Eve party with 64 people present, and at midnight every guest clinked glasses with every other guest, there were 2016 clinks in all. (And probably a lot of broken glassware.)

In a New Year’s story of another kind, Tim Harford warns of the cost of overconfidence when it comes to making resolutions, as when you buy a year’s gym membership and stop going after six weeks. “Some companies base their business models on our tendency to overestimate our willpower.”


Triangular numbers turn up in another holiday item as well. A video by Tipping Point Math offers a quantitative analysis of the gift-giving extravaganza in “The Twelve Days of Christmas.” On the \(n\)th day of Christmas you receive \(1 + 2 + 3 + \cdots + n\) gifts, which is clearly the triangular number \(T_n\). But what’s the total number of gifts when you add up the largesse over \(n\) days? These sums of triangular numbers are the tetrahedral numbers: \(1, 4, 10, 20, 35 \ldots\); the twelfth tetrahedral number is \(364\). The video shows where to find all these numbers in Pascal’s triangle.
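For the record, the sums of triangular numbers have a closed form of their own (a standard identity, not something taken from the video):

\[\sum_{j=1}^{n} T_j = \frac{n(n+1)(n+2)}{6}, \qquad \text{so that} \quad \sum_{j=1}^{12} T_j = \frac{12 \cdot 13 \cdot 14}{6} = 364.\]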

Vince Knight, in Un Peu de Math, approaches the Christmas gift exchange as a problem in game theory. Two friends agree not to exchange gifts, but then they are both tempted to renege on the agreement and buy a present after all. This situation is a variant of the game-theory puzzle known as prisoner’s dilemma—which sounds like a rather grim view of holiday tradition. But in fact the conclusion is cheery: “People enjoy giving gifts a lot more than receiving them.”

The December holiday season also brings Don Knuth’s annual Christmas lecture, which has been posted on YouTube. In previous years, this talk was the Christmas Tree lecture, because it touched on some aspect of trees as a data structure or a concept in graph theory. Now Knuth has branched out from the world of trees. His subject is comma-free codes—sets of words that can be jammed together without commas or spaces to mark the boundaries between words, and yet still be read without ambiguity. A set that lacks the comma-free property is {tea, ate, eat}, because a concatenation such as teateatea has within it not just tea,tea,tea but also eat,eat and ate,ate. But the words {sat, set, tea} do qualify as comma-free, because no sequence of the words can be partitioned in more than one way to yield words in the set.
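The defining property is easy to test mechanically. Here is a small checker of my own devising (it has nothing to do with Eastman’s construction, which is the subject of the lecture): a set of equal-length words is comma-free if no word of the set straddles the boundary when any two words are concatenated.

function isCommaFree(words) {
  var n = words[0].length;                 // all words assumed to have length n
  for (var i = 0; i < words.length; i++) {
    for (var j = 0; j < words.length; j++) {
      var pair = words[i] + words[j];
      for (var offset = 1; offset < n; offset++) {
        if (words.indexOf(pair.slice(offset, offset + n)) !== -1) {
          return false;                    // a codeword straddles the boundary
        }
      }
    }
  }
  return true;
}

console.log(isCommaFree(["tea", "ate", "eat"]));   // false
console.log(isCommaFree(["sat", "set", "tea"]));   // true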

The idea of comma-free codes arose in the late 1950s, when it was thought that the genetic code might have a comma-free structure. Experiments soon showed otherwise, and interest in comma-free codes waned. But Knuth has rediscovered a paper by Willard Eastman, published in 1965, that constructs an algorithm for generating a comma-free code (if possible) from a given set of symbols. Knuth’s lecture demonstrates the algorithm and gives a computer implementation.


Stephen Wolfram with a replica of Babbage’s difference engine and a portrait of Ada Lovelace. Image courtesy of Stephen Wolfram.

December 10, 2015, was the 200th birthday of Ada, Countess of Lovelace, who collaborated with Charles Babbage on the proposed computing machine called the Analytical Engine. The anniversary was commemorated in several ways, including an exhibition at the Science Museum in London, but here I want to call attention to a 12,000-word biographical essay by Stephen Wolfram, the creator of Mathematica.

The stories of Lovelace and Babbage have been told many times, but Wolfram’s account is worth reading even if you already know how the plot comes out. The principles of the machines and the algorithm that Lovelace devised as a demo (a computation of Bernoulli numbers) are described in detail, but the heart of the story is the human drama. Today we see these two figures as pioneers and heroes, but their lives were tinged with frustration and disappointment. Lovelace, who died at age 36, never got a proper opportunity to show off her ideas and abilities; in her one published work, all of her original contributions were relegated to footnotes. Babbage in his later years felt aggrieved with the world for failing to support his vision. Wolfram makes an interesting biographer of this pair, never hesitating to see his subjects reflected in his own experience, and vice versa.


Back in the summer of 2012 Shinichi Mochizuki of Kyoto University released four long papers that claim to resolve an important problem in number theory called the abc conjecture. (I’m not going to try to explain the conjecture here; I did so in an earlier post.) More than three years later, no one in the mathematical community has been able to understand Mochizuki’s work well enough to verify that it is indeed a proof. In December more than 50 experts gathered at the University of Oxford to try to make progress on breaking the impasse. Brian Conrad of Stanford University, one of the workshop participants, wrote up his notes as a guest post in Cathy O’Neil’s Mathbabe blog. This is an insider’s account, and parts of the discussion will not make much sense unless you have fairly deep background in modern number theory. (I don’t.) But that inaccessibility illustrates the point, in a way. Number theorists themselves are in the same situation with regard to the Mochizuki papers, which are clotted with idiosyncratic concepts such as Inter-universal Teichmüller Theory (IUT). Conrad writes:

After [Kiran] Kedlaya’s lectures, the remaining ones devoted to the IUT papers were impossible to follow without already knowing the material: there was a heavy amount of rapid-fire new notation and language and terminology, and everyone not already somewhat experienced with IUT got totally lost…. Persistent questions from the audience didn’t help to remove the cloud of fog that overcame many lectures in the final two days. The audience kept asking for examples (in some instructive sense, even if entirely about mathematical structures), but nothing satisfactory to much of the audience along such lines was provided.

It’s still not clear when or how the status of the proof will be resolved. O’Neil herself has taken a stern position on the issue of impenetrable purported proofs. When the Mochizuki papers were first released, she wrote: “If I claim to have proved something, it is my responsibility to convince others I’ve done so; it’s not their responsibility to try to understand it (although it would be very nice of them to try).”

Anthony Bonato, the Intrepid Mathematician, offers a friendlier introduction to the abc conjecture and its consequences. Also see comments on the conjecture and the workshop by Evelyn Lamb in the Scientific American blog Roots of Unity and by Anna Haensch in an American Mathematical Society blog.


Another number-theory event that attracted wide notice in recent weeks was the successful defense and publication of a doctoral thesis at Princeton University by Piper Harron. The title is “The Equidistribution of Lattice Shapes of Rings of Integers of Cubic, Quartic, and Quintic Number Fields: an Artist’s Rendering,” and it’s a great read (PDF). Here is the abstract:

A fascinating tale of mayhem, mystery, and mathematics. Attached to each degree \(n\) number field is a rank \(n\, - 1\) lattice called its shape. This thesis shows that the shapes of \(S_n\)-number fields (of degree \(n = 3, 4,\) or \(5\)) become equidistributed as the absolute discriminant of the number field goes to infinity. The result for \(n = 3\) is due to David Terr. Here, we provide a unified proof for \(n = 3, 4,\) and \(5\) based on the parametrizations of low rank rings due to Bhargava and Delone–Faddeev. We do not assume any of those words make any kind of sense, though we do make certain assumptions about how much time the reader has on her hands and what kind of sense of humor she has.

This is not your grandmother’s doctoral thesis! Harron interleaves sections of “laysplaining” and “mathsplaining,” and illustrates her work with an abundance of metaphors, jokes, asides to the reader, cartoons, and commentary on the course of her life during the 10 years she spent writing the thesis (e.g., a brief interruption to give birth). In terms of expository style, Mochizuki and Harron stand at opposite poles—a fact noted by at least two bloggers. Both of the posts by Evelyn Lamb and by Anna Haensch mentioned above in connection with Mochizuki also discuss Harron’s work.

Harron has a blog of her own with a post titled “Why I Do Not Talk About Math,” and she has written a guest post for Mathbabe with this description of her thesis:

My thesis is this thing that was initially going to be a grenade launched at my ex-prison, for better or for worse, and instead turned into some kind of positive seed bomb where flowers have sprouted beside the foundations I thought I wanted to crumble.

Reading her accounts of life as “an escaped graduate student,” I am in equal measures amused and horrified (not a comfortable combination). If nothing else, Harron’s “thesis grenade” promises to broaden—to diversify—the discussion of diversity in mathematical culture. It’s not just about race and gender (although Harron cares passionately about those issues). Human diversity also ranges across many other dimensions: modes of reasoning, approaches to learning, cultural contexts, styles of explaining, and ways of living. Harron’s thesis is a declaration that one can do original research-level mathematics without adopting the vocabulary, the styles, the attitudes, and the mental apparatus of one established academic community.


If I show you a cube, you can easily place it in a three-dimensional cartesian coordinate system in such a way that all the vertices have rational \(x,y,z\) coordinates. By scaling the cube, you can make all the coordinates integers. The same is true of the regular tetrahedron and the regular octahedron, but for these objects the scaling factor includes an irrational number, \(\sqrt{2}\). For the dodecahedron and the icosahedron, some of the vertex coordinates are themselves irrational no matter how the figure is scaled; the irrational number \(\varphi = (1 + \sqrt{5})\, /\, 2\) plays an essential role in the geometry.
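One concrete way to see the tetrahedron case: take alternate corners of the unit cube. The coordinates are integers, but the \(\sqrt{2}\) shows up in the edge length. Here is a three-line check—my own illustration, not drawn from any of the posts mentioned here:

import itertools, math

# Alternate corners of the unit cube form a regular tetrahedron:
# integer coordinates, but every edge has length sqrt(2).
verts = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
dists = {math.dist(a, b) for a, b in itertools.combinations(verts, 2)}
print(dists)   # {1.4142135623730951}: all six edges have length sqrt(2)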

This intrusion of irrationality into geometry troubled the ancients, but we seem to have gotten used to it by now. However, David Eppstein, a.k.a. 0xDE, writes about a class of polyhedra I still find deeply disconcerting: “Polyhedra whose vertex coordinates have no closed form formula.” In this context a closed-form formula is a mathematical expression that can be evaluated in a finite number of operations. There’s no universal agreement on exactly what operations are allowed; Eppstein works with computational models that allow the familiar operations \(+, -, \times, \div\) as well as finding roots of polynomials. He constructs a polyhedron—it looks something like the teepee his children used to play with—whose vertex coordinates cannot all be calculated by a finite sequence of these operations.

With this development we land in territory even stranger than that of the irrational numbers. I cannot draw an exact equilateral triangle on a computer screen because at least one of the coordinates is irrational; nevertheless, I can tell you that the vertex ought to have the coordinate \(y = 1 / \sqrt{3}\). In Eppstein’s polyhedron I can’t give you any such compact, digestible description of where the vertex belongs; the best anyone can offer is a program for approximating it.


Brent Yorgey in The Math Less Traveled offers an unlikely question: What’s the best way to read a manuscript printed on three-sided paper? If you have a sheaf of unbound pages printed on just one side, the obvious procedure is to read the top sheet, then shuffle it to the bottom of the heap, and continue until you come back to page 1. If the pages are printed on two sides, life gets a little more complicated. You can read the top page, flip it over and read the back, then flip it again and move it to the bottom of the sheaf. An alternative (attributed to John Horton Conway) is to read the front page, move it to the bottom of the heap, then flip over the entire stack to read the back side; then flip the stack again to read the front of the following page. In either case, you must alternate two distinct operations, which means you must somehow keep track of which move comes next.

The unsolved problem is to find the best algorithm when the pages are printed on three sides. “You may not be familiar with triple-sided sheets of paper,” Yorgey writes, “so here’s how they work: they stack nicely, just like regular sheets of paper, but you have to flip one over three times before you get back to the original side.”

Yorgey gives some criteria that a successful algorithm ought to satisfy, but the question remains open.


Mr. Honner takes up a problem encountered at a Math for America banquet:

Suppose you are standing several miles from the Pentagon. What is the probability you can see three sides of the building?

He discovers that it’s one of those cases where interpreting the problem is as much of a challenge as finding the answer. “In particular, it’s a reminder of how the different ways we model random selection can make for big differences in our solutions!”
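One of the simpler interpretations—station the observer at a uniformly random point on a circle far from the building—is easy to test numerically. The sketch below is my own quick experiment, not Mr. Honner’s analysis; it counts a side as visible when the observer lies on the outer side of that side’s supporting line, and at large distances the estimate settles near \(1/2\):

import math, random

def pentagon_sides(circumradius=1.0):
    """Midpoints and outward unit normals of the sides of a regular pentagon
    centered at the origin."""
    verts = [(circumradius * math.cos(2 * math.pi * k / 5),
              circumradius * math.sin(2 * math.pi * k / 5)) for k in range(5)]
    sides = []
    for k in range(5):
        ax, ay = verts[k]
        bx, by = verts[(k + 1) % 5]
        mx, my = (ax + bx) / 2, (ay + by) / 2         # midpoint of the side
        r = math.hypot(mx, my)
        sides.append(((mx, my), (mx / r, my / r)))    # outward normal points away from the center
    return sides

def sides_visible(p, sides):
    """A side is visible from p iff p lies on the outer side of its supporting line."""
    return sum((p[0] - mx) * nx + (p[1] - my) * ny > 0
               for (mx, my), (nx, ny) in sides)

def estimate(distance=1000.0, trials=200_000):
    """Observer chosen uniformly on a circle of the given radius around the pentagon."""
    sides = pentagon_sides()
    hits = sum(sides_visible((distance * math.cos(t), distance * math.sin(t)), sides) == 3
               for t in (random.uniform(0, 2 * math.pi) for _ in range(trials)))
    return hits / trials

print(estimate())   # tends toward 0.5 as the distance grows

Other models of “standing several miles away”—a random point in an annulus, say—give different answers, which is exactly the point of the exercise.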


Long-tailed data distributions—where extreme values are more common than they would be in, say, a normal distribution—are notoriously tricky. John D. Cook points out that long tails become even more treacherous when the variables are discrete (e.g., integers) rather than continuous values.

Suppose \(n\) data points are drawn at random from a distribution where each possible value \(x\) appears with frequency proportional to \(x^{-\alpha}\). Working from the data sample, we want to estimate the value of the exponent \(\alpha\). If \(x\) is a continuous variable, there’s a maximum-likelihood formula that generally works well: \(\hat{\alpha} = 1 + n\, / \sum \log x\). But with discrete variables, the same method leads to disastrous errors. In a test case with \(\alpha = 3\), the formula yields \(\hat{\alpha} = 6.87\). But we needn’t lose hope. Cook presents another approach that gives quite reliable results.
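To see the effect, one can draw samples from the zeta (discrete power-law) distribution and compare the continuous-variable formula with a direct numerical fit of the discrete likelihood. This is only a sketch of the general idea—see Cook’s post for his preferred estimator—and the sample size and search bounds are arbitrary choices of mine:

import numpy as np
from scipy.stats import zipf
from scipy.optimize import minimize_scalar
from scipy.special import zeta

alpha_true = 3.0
x = zipf.rvs(alpha_true, size=10_000, random_state=1)   # discrete power-law sample

# Continuous-variable MLE (with x_min = 1): badly biased for discrete data
alpha_cont = 1 + len(x) / np.sum(np.log(x))

# Fit the discrete (zeta) model directly by minimizing its negative log-likelihood
def neg_loglik(a):
    return len(x) * np.log(zeta(a)) + a * np.sum(np.log(x))

alpha_disc = minimize_scalar(neg_loglik, bounds=(1.5, 10.0), method="bounded").x

print(f"continuous formula: {alpha_cont:.2f}")   # far above 3
print(f"discrete fit:       {alpha_disc:.2f}")   # close to 3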


A post on DRHagen.com tackles a mystery found in an xkcd cartoon:

Calendar of meaningful dates, by Randall Munroe (xkcd). License: Creative Commons Attribution-NonCommercial 2.5.

The size of each date is proportional to its frequency of occurrence in the Google Books ngram database for English-language books published since 2000. September 11 is clearly an outlier. That’s not the mystery. The mystery is that the 11th of most other months is noticeably less common than other dates. The cartoonist, Randall Munroe, was puzzled by this; hover over the image to see his message. Hagen convincingly solves the mystery. I’m not going to give away the solution, and I urge you to try to come up with a hypothesis of your own before following the link to DRHagen.com. And if you get that one right, you might work on another calendrical anomaly—that the 2nd, 3rd, 22nd, and 23rd are underrepresented in books printed before the 20th century—which Hagen solves in a followup post.


A shiny new blog called Off the Convex Path, with contributions from Sanjeev Arora, Moritz Hardt, and Nisheeth Vishnoi, “is dedicated to the idea that optimization methods—whether created by humans or nature, whether convex or nonconvex—are exciting objects of study and often lead to useful algorithms and insights into nature.” A December 21 post by Vishnoi looks at optimization methods in the guise of dynamical systems—specifically, systems that converge to some set of fixed points. Vishnoi gives two examples, both from the life sciences: a version of Darwinian evolution, and a biological computer in which slime molds solve linear programming problems.


In the Comfortably Numbered blog, hardmath123 writes engagingly and entertainingly about numerical coincidences, like this one:

\(1337^{47168026} \approx \pi \cdot 10^{147453447}\) to within \(0.00000001\%\). It begins with the digits \(31415926 \ldots\).

The existence of such coincidences should not be a big surprise. As hardmath123 writes, “Since the rationals are dense in the reals, we can find a rational number arbitrarily close to any real number.” The question is how to find them. The answer involves continued fractions and the Dirichlet approximation theorem.
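The continued-fraction machinery is easy to play with. The sketch below is my own illustration, not hardmath123’s search for the \(1337\) coincidence: it lists the first few convergents of \(\pi\) and checks that each one beats the \(1/q^2\) accuracy promised by Dirichlet’s theorem.

from fractions import Fraction
import math

def convergents(x, n):
    """First n continued-fraction convergents of x."""
    partial = []
    y = x
    for _ in range(n):
        a = math.floor(y)
        partial.append(a)
        if y == a:
            break
        y = 1 / (y - a)
    # standard recurrence: p_k = a_k * p_{k-1} + p_{k-2}, and likewise for q_k
    p0, p1 = 1, partial[0]
    q0, q1 = 0, 1
    result = [Fraction(p1, q1)]
    for a in partial[1:]:
        p0, p1 = p1, a * p1 + p0
        q0, q1 = q1, a * q1 + q0
        result.append(Fraction(p1, q1))
    return result

for c in convergents(math.pi, 6):
    q = c.denominator
    print(f"{c}  error {abs(math.pi - c):.2e}  vs 1/q^2 = {1 / q**2:.2e}")

The output marches through 3, 22/7, 333/106, 355/113, and so on, each with an error smaller than \(1/q^2\).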


For the final acts of this carnival, we have a few items on mathematics in the arts, crafts, games, and other recreations.

In the Huffington Post arts and culture department, Dan Rockmore takes a mathematical gallery walk in New York. The first stop is the Whitney Museum of American Art, showing a retrospective of the paintings of Frank Stella, some of whose canvases are nonrectilinear, and even nonconvex. In another exhibit mathematics itself becomes art, with 10 equations and other expressions calligraphically rendered by 10 noted mathematicians and scientists.

Katherine, writing on the blog Will Knit for Math, tells how “Being a mathematician improves my knitting.” It’s not just a matter of counting stitches (although a scheme for counting modulo 18 does enter into the story). I hope we’ll someday get a followup post explaining how “Being a knitter improves my mathematics.”

Chelsea VanderZwaag, a student majoring in mathematics and elementary education, visits an arts-focused school, where a lesson on paper-folding and fractions blends seamlessly into the curriculum. (An earlier post on the problem of creating a shape by folding paper and making a single cut with scissors provides a little more background.)

On Medium, A Woman in Technology writes about Dobble, a combinatorial card game. “My children got this game for Christmas. We haven’t played it yet. We got as far as me opening the tin and reading the rules, at which point I got distracted by the maths and forgot about the game.”

There are 55 cards, each bearing eight symbols, and each card shares exactly one symbol with each of the other cards. How many distinct symbols do you need to construct such a deck of cards? The game instructions say there are 50, but the author determines through hard work, scribbling, and spreadsheets that the instructions are in error. It all comes to a happy end with the formula \(n^2 - n + 1\) and a suggested improvement to the game that the manufacturer should heed.
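The structure behind that formula is a finite projective plane: with \(n = 8\) symbols per card you want a plane of order \(q = 7\), which has \(q^2 + q + 1 = 57\) points and 57 lines. Here is a sketch of the classic construction—my own illustration; the commercial deck described above has 55 cards, two short of the full 57 the construction provides:

from itertools import combinations

def dobble_deck(q=7):
    """Lines of the projective plane of order q (q prime): q*q + q + 1 cards,
    q + 1 symbols per card, and any two cards share exactly one symbol."""
    point = lambda x, y: x * q + y            # symbols 0 .. q*q-1: affine points
    slope = lambda m: q * q + m               # symbols q*q .. q*q+q-1: directions
    infinity = q * q + q                      # one extra symbol for the vertical direction
    cards = []
    for m in range(q):                        # lines y = m*x + b, tagged with their slope
        for b in range(q):
            cards.append({point(x, (m * x + b) % q) for x in range(q)} | {slope(m)})
    for x in range(q):                        # vertical lines, tagged with "infinity"
        cards.append({point(x, y) for y in range(q)} | {infinity})
    cards.append({slope(m) for m in range(q)} | {infinity})   # the line at infinity
    return cards

deck = dobble_deck(7)
print(len(deck), len(set().union(*deck)))     # 57 cards, 57 symbols
assert all(len(card) == 8 for card in deck)
assert all(len(a & b) == 1 for a, b in combinations(deck, 2))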


That’s it for Carnival of Math 130. My apologies to a few contributors whose interesting work I just couldn’t squeeze in here.

Next month the carnival makes a hometown stop with Katie at The Aperiodical.

Posted in mathematics | 4 Comments

Ten years of bit-playing

Scribble, scribble, scribble. As if the world didn’t get enough of my writing already, with a bimonthly column in American Scientist, now I’m equipped to publish my every thought at a moment’s notice.

That’s how it all began here on bit-player.org: The first post (with the first typo) appeared on January 9, 2006. I’ve published another 340 posts since then (including this one)—and doubtless many more typos and other errors. Many thanks to my readers, especially those who have contributed some 1,800 thoughtful comments.

Posted in meta | 5 Comments

Deep Dreaming with Every Card I Write

My closest friends and family must make do with an old-fashioned paper-and-postage greeting card, but for bit-player readers I can send some thoroughly modern pixels. Happy holidays to everyone.

In recent months I’ve been having fun with “deep dreaming,” the remarkable toy/tool for seeing what’s going on deep inside deep neural networks. Those networks have gotten quite good at identifying the subject matter of images. If you train the network on a large sample of images (a million or more) and then show it a picture of the family pet, it will tell you not just whether your best friend is a cat or a dog but whether it’s a Shih Tzu or a Bichon Frise.

What visual features of an image does the network seize upon to make these distinctions? Deep dreaming tries to answer this question. It probes a layer of the network, determines which neural units are most strongly stimulated by the image, and then translates that pattern of activation back into an array of pixels. The result is a strange new image embellished with all the objects and patterns and geometric motifs that the selected layer thinks it might be seeing. Some of these machine dreams are artful abstractions; some are reminiscent of drug-induced hallucinations; some are just bizarre or even grotesque, populated by two-headed birds and sea creatures swimming through the sky.
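For the curious, the core of the trick fits in a few dozen lines. The sketch below is not the Caffe-plus-GoogLeNet pipeline I actually used for the card; it is a rough equivalent sketched in PyTorch, with the layer name, file names, step size, and iteration count all chosen for illustration. The essential move is gradient ascent on the pixels, amplifying whatever the chosen layer already responds to:

import torch
from torchvision import models, transforms
from PIL import Image

model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)

activations = {}
layer = model.inception4c                      # an arbitrary mid-level layer
layer.register_forward_hook(lambda mod, inp, out: activations.update(act=out))

img = Image.open("scene.jpg").convert("RGB")   # hypothetical input file
x = transforms.Compose([transforms.Resize(512), transforms.ToTensor()])(img)
x = x.unsqueeze(0).requires_grad_(True)

for step in range(20):
    model(x)
    loss = activations["act"].norm()           # how strongly is this layer firing?
    loss.backward()
    with torch.no_grad():
        x += 0.01 * x.grad / (x.grad.abs().mean() + 1e-8)   # normalized ascent step
        x.grad.zero_()
        x.clamp_(0, 1)                         # keep the pixels displayable

transforms.ToPILImage()(x.squeeze(0).detach()).save("dream.jpg")

Real deep-dreaming code adds refinements—running the ascent at several image scales, jittering the image between steps—but they are elaborations of this same loop.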

For this year’s holiday card I chose a scene appropriate to the season and ran it through the deep-dreaming program. You can see some of the output below, starting with the original image and progressing through fantasies extracted from deeper and deeper layers of the network. (Navigate with the icons below the image, or use the left and right arrow keys. Shorthand labels identifying the network layers appear at lower right.)

A few notes and observations:

  • The embellishments begin with fairly abstract motifs, then become more elaborate and figurative. But this evolution reaches a peak near the middle of the sequence; after that, things calm down a little. By the end, the trees look like trees again, and the sky has more snowflakes and fewer diaphanous monsters. Perhaps this turn toward realism is to be expected, since the later layers are where the neural network settles on a final interpretation of the image.
  • Information flows through the layers in sequence, with any hypothesis formed in one layer passed along to the later ones. You might think this would tend to stabilize interpretations. If layer 4a sees a puppy face in a certain region, then layers 4b and 4c would be influenced by this verdict. And indeed there are places in the image where certain wild fantasies do persist from one stage to the next; for example, a brightly colored vehicle first appears in the lower right quadrant in layer 4c, and it reappears with variations in the next three layers—4d, 4e, and pool4. On the whole, however, there is less continuity from layer to layer than I would have expected.
  • The scene in layer 4c fascinates me. For one thing, it has a cast of characters quite unlike the surrounding layers—human figures (sort of) rather than animals, and buildings that look like gazebos or onion-domed spires. But what intrigues me most is the geometric transformation of the landscape. The world has been flattened; in the left half of the image, the buildings all have their foundations resting on a plane that doesn’t actually exist, and the people have their feet on the ground. The network is constructing a perspective view.

And some links:

  • Deep Dreaming was invented by three young engineers and interns at Google, Alexander Mordvintsev, Michael Tyka, and Christopher Olah. As far as I know, their only publications on the subject are a pair of blog posts. The first post announced the discovery and showed some sample images; the second provided links to the open-source code. The code itself is available on GitHub.
  • The neural network used in these experiments, called GoogLeNet, was devised by Christian Szegedy and several colleagues at Google Research. They describe it in arXiv:1409.4842.
  • For the experiments described here, the GoogLeNet program was trained on more than a million pre-labeled images retrieved from a database called ImageNET. The subjects of these images, which had been downloaded from the Internet, were what the neural network learned to recognize.
  • “Computer Vision and Computer Hallucinations” is my American Scientist article on the subject (Vol. 103, No. 6, November–December 2015, pages 380–383).
  • If you would like to try playing with these toys yourself, all the software is open source, but getting it installed and running can be an adventure. I’ve written a memo on my own experiences, which includes links to other useful resources.
  • I would like to credit the photographer who created the original image, but I have not been able to track down the source. I found it at the Latvian website www.lejins.lv, but there’s no information there about its provenance. Thus I am using it here without permission. Mea culpa.

Update 2015-12-31: In the comments, Ed Jones asks, “If the original image is changed slightly, how much do the deep dreaming images change?” It’s a very good question, but I don’t have a very good answer.

The deep dreaming procedure has some stochastic stages, and so the outcome is not deterministic. Even when the input image is unchanged, the output image is somewhat different every time. Below are enlargements cropped from three runs probing layer 4c, all with exactly the same input:

They are all different in detail, and yet at a higher level of abstraction they are all the same: They are recognizably products of the same process. That statement remains true when small changes—and even some not-so-small ones—are introduced into the input image. The figure below has undergone a radical shift in color balance (I have swapped the red and blue channels), but the deep dreaming algorithm produces similar embellishments, with an altered color palette:

In the pair of images below I have cloned a couple of trees from the background and replanted them in the foreground. They are promptly assimilated into the deep dream fantasy, but again the overall look and feel of the scene is totally familiar.

Based on the evidence of these few experiments, it seems the deep dreaming images are indeed quite robust, but there’s another side to the story. When these neural networks are used to recognize or classify images (the original design goal), it’s actually quite easy to fool them. Christian Szegedy and his colleagues have shown that certain imperceptible changes to an image can cause the network to misclassify it; to the human eye, the picture still looks like a school bus, but the network sees it as something else. And Anh Nguyen et al. have tricked networks into confidently identifying images that look like nothing but noise. These results suggest that the classification methods are rather brittle or fragile, but that’s not quite right either. Such errors arise only with carefully crafted images, called “adversarial examples.” There is almost no chance that a random change to an image would trigger such a response.

Posted in computing, off-topic | 6 Comments

Geotargeted by the NY Times

Reading the Times this morning, I got to the middle of a story about the cost of medical care when I was stopped by this passage:

Consider Boston, our best guess for where you might be reading this article. It’s very expensive for spending on the average Medicare patient. But, when it comes to private health insurance, it’s about average.

Their best guess about my whereabouts was pretty good. I am indeed in the Boston area. I’m not surprised that they know that, but I was nonplussed to discover that they had altered the story according to my location. Here’s the markup for that paragraph:

<div class="g-insert">
<p class="g-body">
Consider <span class="g-custom-place g-selected-hrr-name">
Boston</span>
<span class="g-geotarget-success">, our best 
guess for where you might be reading this article</span>. 
<span class="g-new-york-city-addition g-custom-insert g-hidden"> 
(Here, the New York City region includes all boroughs but the 
Bronx, which is listed separately.)</span>

It’s <span class="g-medicare-adjective g-custom-place">
very expensive</span> 
for spending on the average Medicare patient. 
<span class="g-local-insert g-very-different">
But, w</span><span class="g-close g-same g-hidden g-local-insert">
W</span>hen it comes to private health insurance, it’s 
<span class="g-hidden g-same g-local-insert">also</span> 
<span class="g-private-adjective g-custom-place">
about average</span>. 

<span class="g-close g-hidden g-local-insert">
The study finds that the levels of spending for the two programs 
are unrelated. That means that, for about half of communities, 
spending is somewhat similar, like it is in 
<span class="g-custom-place g-selected-hrr-name g-hrr-only-no-state">
Boston</span>
</span>

<span class="g-same g-hidden g-local-insert g-same-sentence">
<span class="g-custom-place g-hrr-only-no-state">Boston</span> 
is one of the few places where spending for both programs 
is very similar – in most, there is some degree of mismatch.
</span>

<span class="g-new-york-addition g-local-insert g-hidden">
Several parts of the New York metropolitan area are outliers 
in the data – among the most expensive for both health 
insurance systems.</span>

<!-- <span class="g-atlanta-addition g-local-insert g-hidden">
(Atlanta is one of the few places in the country where spending 
for both programs is very similar. In most, there is some 
degree of mismatch.)</p> -->
</p>
</div>

It seems I live in a g-custom-place (“Boston”), which is associated with a g-medicare-adjective (“very expensive”) and a g-private-adjective (“about average”). Because the Times has been able to track me down, I get a g-geotarget-success message. But for the same reason I don’t get to see certain other text, such as a remark about New York as an outlier; those text spans are g-hidden.

Presumably, a program running on the server has located me by checking my IP number against a geographic database, then added various class names to the span tags. Some of the class names are processed by a CSS stylesheet; for example, g-hidden triggers the style directive display: none. The other class names are apparently processed by a Javascript program that inserts or removes text, such as those custom adjectives. Generating text in this way looks like a pretty tedious and precarious business, a little like writing poetry with refrigerator magnets. For example, an extra set of class names is needed to make sure that if a place is first mentioned as a city and state (e.g., “Springfield, Massachusetts”), the state won’t be repeated on subsequent references.

I suppose there’s no great harm in this bit of localizing embellishment. After all, they’re not tailoring the article based on whether I’m black or white, male or female, Democrat or Republican, rich or poor. It’s just a geographic split. But it makes me queasy all the same. When I cite an article here on bit-player, I want to think that everyone who follows the link will see the same article. It looks like I can’t count on that.

Update: Turns out this is not the Times’s first adventure in geotargeting. A story last May on “paths out of poverty” used the same technique, as reported by Nieman Lab. (Thanks to Andrew Silver (@asilver360) for the tip via Twitter.)

Update 2015-12-17: Margaret Sullivan, the Public Editor of the Times, writes today on the mixed response to the geotargeted story, concluding:

The Times could have quite easily provided readers with an opt-in: “Want to see results for your area? Click here.”

As the paper continues down this path, it’s important to do so with awareness and caution. For one thing, some readers won’t like any personalization and will regard it as intrusive. For another, personalization could deprive readers of a shared, and expertly curated, news experience, which is what many come to The Times for. Losing that would be a big mistake.

Posted in modern life | 3 Comments

The Carnival Is Coming to Town

Bit-player will be hosting the 130th Carnival of Mathematics, the monthly celebration of math(s) writing on the Web organized by The Aperiodical. Submissions are open now through January 9.

Please contribute! Anything that might engage or delight the mathematical mind is welcome: theorems, problems, games and recreations, notes on math education. We’ll take pure and applied, discrete and continuous, geometrical and arithmetical, differential and integral, polynomial and exponential…. You get the idea. And don’t be shy about proposing your own work!

Submit here.

Posted in mathematics | Comments Off