When I first started playing with computers, back in the Pleistocene, writing a few lines of code and watching the machine carry out my instructions was enough to give me a little thrill. “Look! It can count to 10!” Today, learning a new programming language is my way of reviving that sense of wonder.
Lately I’ve been learning Julia, which describes itself as “a high-level, high-performance dynamic programming language for technical computing.” I’ve been dabbling with Julia for a couple of years, but this spring I completed my first serious project—my first attempt to do something useful with the language. I wrote a bunch of code to explore correlations between consecutive prime numbers mod m, inspired by the recent paper of Lemke Oliver and Soundararajan. The code from that project, wrapped up in a Jupyter notebook, is available on GitHub. A bit-player article published last month presented the results of the computation. Here I want to say a few words about my experience with the language.
I also discussed Julia a year ago, on the occasion of JuliaCon 2015, the annual gathering of the Julia community. Parts of this post were written at JuliaCon 2016, held at MIT June 21–25.
This document is itself derived from a Jupyter notebook, although it has been converted to static HTML—meaning that you can only read it, not run the code examples. (I’ve also done some reformatting to save space and improve appearance.) If you have a working installation of Julia and Jupyter, I suggest you download the fully interactive notebook from GitHub, so that you can edit and run the code. If you haven’t installed Julia, download the notebook anyway. You can upload it to JuliaBox.org and run it online.
A 2014 paper by the founders of the Julia project sets an ambitious goal. To paraphrase: There are languages that make programming quick and easy, and there are languages that make computation fast and efficient. The two sets of languages have long been disjoint. Julia aims to fix that. It offers high-level programming, where algorithms are expressed succinctly and without much fussing over data types and memory allocation. But it also strives to match the performance of lower-level languages such as Fortran and C. Achieving these dual goals requires attention both to the design of the language itself and to the implementation.
The Julia project was initiated by a small group at MIT, including Jeff Bezanson, Alan Edelman, Stefan Karpinski, and Viral B. Shah, and has since attracted about 500 contributors. The software is open source, hosted at GitHub.
In my kind of programming, the end product is not the program itself but the answers it computes. As a result, I favor an incremental and interactive style. I want to write and run small chunks of code—typically individual procedures—without having to build a lot of scaffolding to support them. For a long time I worked mainly in Lisp, although in recent years I’ve also written a fair amount of Python and JavaScript. I mention all this because my background inevitably colors my judgment of programming languages and environments. If you’re a software engineer on a large team building large systems, your needs and priorities are surely different from mine.
Enough generalities. Here’s a bit of Julia code, plucked from my recent project:
# return the next prime after x
# define the function...
function next_prime(x)
    x += iseven(x) ? 1 : 2
    while !isprime(x)
        x += 2
    end
    return x
end

# and now invoke it
next_prime(10000000) ⇒ 10000019
Given an integer x, we add 1 if x is even and otherwise add 2, thereby ensuring that the new value of x is odd. Now we repeatedly check the isprime(x) predicate, or rather its negation !isprime(x). As long as x is not a prime number, we continue incrementing by 2; when x finally lands on a prime value (it must do so eventually, according to Euclid), the while loop terminates and the function returns the prime value of x.
In this code snippet, note the absence of punctuation: No semicolons separate the statements, and no curly braces delimit the function body. The syntax owes a lot to MATLAB—most conspicuously the use of the end keyword to mark the end of a block, without any corresponding begin. The resemblance to MATLAB is more than coincidence; some of the key people in the Julia project have a background in numerical analysis and linear algebra, where MATLAB has long been a standard tool. One of those people is Alan Edelman. I remember asking him, 15 years ago, “How do you compute the eigenvalues of a matrix?” He explained: “You start up MATLAB and type eig.” The same incantation works in Julia.
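To make that concrete, here is a small example of my own (in the 0.4-era syntax used throughout this post; later Julia versions replace eig with eigvals and eigen):

```julia
# a small symmetric matrix...
A = [2.0 1.0;
     1.0 2.0]

# ...and the MATLAB-style incantation. In Julia 0.4, eig returns
# a tuple of eigenvalues and eigenvectors; the eigenvalues of
# this particular matrix are 1.0 and 3.0.
eig(A)
```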
The sparse punctuation and the style of indentation also make Julia code look a little like Python, but in this case the similarity is only superficial. In Python, indentation determines the scope of statements, but in Julia indentation is merely an aid to the human reader; the program would compute the same result even if all the lines were flush left.
The next_prime function above shows a while loop. Here’s a for loop:
# for-loop factorial
function factorial_for(n)
    fact = 1
    for i in 1:n       # i takes each value in 1, 2, ..., n
        fact *= i      # equivalent to fact = fact * i
    end
    return fact
end

factorial_for(10) ⇒ 3628800
The same idea can be expressed more succinctly and idiomatically:
# factorial via range product
function factorial_range(n)
    return prod(1:n)
end

factorial_range(11) ⇒ 39916800
Here 1:n denotes the numeric range \(1, 2, 3, …, n\), and the prod function returns the product of all the numbers in the range. Still too verbose? Using an abbreviated style of function definition, it becomes a true one-liner:
fact_range(n) = prod(1:n)
fact_range(12) ⇒ 479001600
As an immigrant from the land of Lisp, I can’t in good conscience discuss the factorial function without also presenting a recursive version:
# recursive definition of factorial
function factorial_rec(n)
    if n < 2
        return 1
    else
        return n * factorial_rec(n - 1)
    end
end

factorial_rec(13) ⇒ 6227020800
You can save some keystrokes by replacing the if... else... end statement with a ternary operator. The syntax is

predicate ? consequent : alternative

If predicate evaluates to true, the expression returns the value of consequent; otherwise it returns the value of alternative. With this device, the recursive factorial function looks like this:
# recursive factorial with ternary operator
function factorial_rec_ternary(n)
    n < 2 ? 1 : n * factorial_rec_ternary(n - 1)
end

factorial_rec_ternary(14) ⇒ 87178291200
One sour note on the subject of recursion: The Julia compiler does not perform tail-call conversion, which endows recursive procedure definitions with the same space and time efficiency as iterative structures such as while loops. Consider this pair of daisy-plucking procedures:
function she_loves_me(petals)
    petals == 0 ? true : she_loves_me_not(petals - 1)
end

function she_loves_me_not(petals)
    petals == 0 ? false : she_loves_me(petals - 1)
end

she_loves_me(1001) ⇒ false
These are mutually recursive definitions: Control passes back and forth between them until one or the other terminates when the petals are exhausted. The function works correctly on small values of petals. But let’s try a more challenging exercise:
she_loves_me(1000000)
LoadError: StackOverflowError:
while loading In[8], in expression starting on line 1
in she_loves_me at In[7]:2 (repeats 80000 times)
The stack overflows because Julia is pushing a procedure-call record onto the stack for every invocation of either she_loves_me or she_loves_me_not. These records will never be needed; when the answer (true or false) is finally determined, it can be returned directly to the top level, without having to percolate up through the cascade of stack frames. The technique for eliminating such unneeded stack frames has been well known for more than 30 years. Implementing tail-call optimization has been discussed by the Julia developers but is “not a priority.” For me this is not a catastrophic defect, but it’s a disappointment. It rules out a certain style of reasoning that in some cases is the most direct way of expressing mathematical ideas.
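As a workaround, you can perform the tail-call conversion by hand, writing the loop the compiler declines to produce. This sketch (my own, not from the original notebook) collapses the two procedures into a single while loop, so the stack stays flat no matter how many petals there are:

```julia
function she_loves_me_loop(petals)
    loves = true           # zero petals means "she loves me"
    while petals > 0
        petals -= 1
        loves = !loves     # each pluck flips the verdict
    end
    return loves
end

she_loves_me_loop(1001)     # ⇒ false, agreeing with the recursive pair
she_loves_me_loop(1000000)  # ⇒ true, and no StackOverflowError
```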
Given Julia’s heritage in linear algebra, it’s no surprise that there are rich facilities for describing matrices and other array-like objects. One handy tool is the array comprehension, which is written as a pair of square brackets surrounding a description of what the compiler should compute to build the array.
A = [x^2 for x in 1:5] ⇒ [1, 4, 9, 16, 25]
The idea of comprehensions comes from the set-builder notation of mathematics. It was adopted in the programming language SETL in the 1960s, and it has since wormed its way into several other languages, including Haskell and Python. But Julia’s comprehensions are unusual: Whereas Python comprehensions are limited to one-dimensional vectors or lists, Julia comprehensions can specify multidimensional arrays.
B = [x^y for x in 1:3, y in 1:3] ⇒
3x3 Array{Int64,2}:
1 1 1
2 4 8
3 9 27
Multidimensionality comes at a price. A Python comprehension can have a filter clause that selects some of the array elements and excludes others. If Julia had such comprehension filters, you could generate an array of prime numbers with an expression like this: [x for x in 1:100 if isprime(x)]. But adding filters to multidimensional arrays is problematic; you can’t just pluck values out of a matrix and keep the rest of the structure intact. Nevertheless, it appears a solution is in hand. After three years of discussion, a fix has recently been added to the code base. (I have not yet tried it out.)
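In the meantime, the one-dimensional case is easy to get by other means. The filter function applies a predicate to a collection and keeps the elements that pass (a sketch of the idiom, not code from my project):

```julia
# isprime is built into Base in the Julia 0.4 used here;
# in later versions it lives in the Primes package.
small_primes = filter(isprime, collect(1:100))

length(small_primes)  # ⇒ 25, the number of primes up to 100
small_primes[1:5]     # ⇒ [2, 3, 5, 7, 11]
```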
“Multiple dispatch” sounds like something the 911 operator might do in response to a five-alarm fire, but in fact it names a distinctive core idea in Julia. Understanding what it is and why you might care about it requires a bit of context.
In almost all programming languages, an expression along the lines of x * y multiplies x by y, whether x and y are integers, floating-point numbers, or maybe even complex numbers or matrices. All of these operations qualify as multiplication, but the underlying algorithms—the ways the bits are twiddled to compute the product—are quite different. Thus the * operator is polymorphic: It’s a single symbol that evokes many different actions. One way to handle this situation is to write a single big function—call it mul(x, y)—that has a chain of if... then... else statements to select the right multiplication algorithm for each x, y pair. If you want to multiply all possible combinations of \(n\) kinds of numbers, the if statement needs \(n^2\) branches. Maintaining that monolithic mul procedure can become a headache. Suppose you want to add a new type of number, such as quaternions. You would have to patch in new routines throughout the existing mul code.
When object-oriented programming (OOP) swept through the world of computing in the 1980s, it offered an alternative. In an object-oriented setting, each class of numbers has its own set of multiplication procedures, or methods. Instead of one universal mul function, there’s a mul method for integers, another mul for floats, and so on. To multiply x by y, you don’t write mul(x, y) but rather something like x.mul(y). The object-oriented programmer thinks of this protocol as sending a message to the x object, asking it to apply its mul method to the y argument. You can also describe the scheme as a single-dispatch mechanism. The dispatcher is a behind-the-scenes component that keeps track of all methods with the same name, such as mul, and chooses which of those methods to invoke based on the data type of the single argument x. (The x object still needs some internal mechanism to examine the type of y to know which of its own methods to apply.)
Multiple dispatch is a generalization of single dispatch. The dispatcher looks at all the arguments and their types, and chooses the method accordingly. Once that decision is made, no further choices are needed. There might be separate mul methods for combining two integers, for an integer and a float, for a float and a complex, etc. Julia has 138 methods linked to the multiplication operator *, including a few mildly surprising combinations. (Quick, what’s the value of false * 42?)
# Typing "methods(f)" for any function f, or for an operator
# such as "*", produces a list of all the methods and their
# type signatures. The links take you to the source code.
methods(*) ⇒
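As for that teaser: Bool is a numeric type in Julia, a subtype of Integer, so booleans can participate in arithmetic, with false promoting to 0 and true to 1:

```julia
false * 42           # ⇒ 0
true * 42            # ⇒ 42
false + true + true  # ⇒ 2
```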
Before Julia came along, the best-known example of multiple dispatch was the Common Lisp Object System, or CLOS. As a Lisp guy, I’m familiar with CLOS, but I’ve seldom found it useful; it’s too heavy a hammer for most of my little nails. But whereas CLOS is an optional add-on to Lisp, multiple dispatch is deeply embedded in the infrastructure of Julia. You can’t really ignore it.
Multiple dispatch encourages a style of programming in which you write an abundance of short, single-purpose methods that handle specific combinations of argument types. This is surely an improvement over writing one huge function with an internal tangle of spaghetti logic to handle dozens of special cases. The Julia documentation offers this example:
# roll-your-own method dispatch
function norm(A)
    if isa(A, Vector)
        return sqrt(real(dot(A,A)))
    elseif isa(A, Matrix)
        return max(svd(A)[2])
    else
        error("norm: invalid argument")
    end
end

# let the compiler do method dispatch
norm(x::Vector) = sqrt(real(dot(x,x)))
norm(A::Matrix) = max(svd(A)[2])

# Note: The x::y syntax declares that variable x will have
# a value of type y.
Splitting the procedure into two methods makes the definition clearer and more concise, and it also promises better performance. There’s no need to crawl through the if... else... chain at runtime; the appropriate method is chosen at compile time.
That example seems pretty compelling. Edelman gives another illuminating demo, defining tropical arithmetic in a few lines of code. (In tropical arithmetic, plus is min and times is plus.)
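My own reconstruction of that demo (not Edelman’s actual code) takes only a wrapper type and two method definitions, in the 0.4-era syntax used throughout this post:

```julia
import Base: +, *

# a number participating in tropical arithmetic
type Tropical
    val::Float64
end

+(a::Tropical, b::Tropical) = Tropical(min(a.val, b.val))  # plus is min
*(a::Tropical, b::Tropical) = Tropical(a.val + b.val)      # times is plus

Tropical(3.0) + Tropical(5.0)  # ⇒ Tropical(3.0)
Tropical(3.0) * Tropical(5.0)  # ⇒ Tropical(8.0)
```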
Unfortunately, my own attempts to structure code in this way have not been so successful. Take the next_prime(x) function shown at the beginning of this notebook. The argument x might be either an ordinary integer (some subtype of Int, such as Int64) or a BigInt, a numeric type accommodating integers of unbounded size. So I tried writing separate methods next_prime(x::Int) and next_prime(x::BigInt). The result, however, was annoying code duplication: The bodies of those two methods were identical. Furthermore, splitting the two methods yielded no performance gain; the compiler was able to produce efficient code without any type annotations at all.
I remain somewhat befuddled about how best to exploit multiple dispatch. Suppose you are creating a graphics program that can plot either data from an array or a mathematical function. You could define a generic plot procedure with two methods:
function plot(data::Array)
    # ... code for array plotting
end

function plot(fn::Function)
    # ... code for function plotting
end
With these definitions, calls to plot([2, 4, 6, 8]) and plot(sin) would automatically be routed to the appropriate method. However, making the two methods totally independent is not quite right either. They both need to deal with scales, grids, tickmarks, labels, and other graphics accoutrements. I’m still learning how to structure such code. No doubt it will become clearer with experience; in the meantime, advice is welcome.
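One arrangement I have been experimenting with (a sketch only; render and the sampling range are my own inventions, not part of any plotting library): let each method handle only what differs, namely turning its argument into a list of points, and delegate the shared accoutrements to a common back end:

```julia
# shared back end: scales, grids, tickmarks, labels would live here
function render(points)
    println("plotting ", length(points), " points")
end

# method for arrays: pair each element with its index
function plot(data::Array)
    render([(i, data[i]) for i in 1:length(data)])
end

# method for functions: sample over some interval
function plot(fn::Function)
    render([(x, fn(x)) for x in -3.0:0.1:3.0])
end

plot([2, 4, 6, 8])  # prints "plotting 4 points"
plot(sin)           # prints "plotting 61 points"
```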
Multiple dispatch is not a general pattern-matching mechanism. It discriminates only on the basis of the type signature of a function’s arguments, not on the values of those arguments. In Haskell—a language that does have pattern matching—you can write a function definition like this:
factorial 0 = 1
factorial n = n * factorial (n - 1)
That won’t work in Julia. (Actually, there may be a baroque way to do it, but it’s not recommended.) (And I’ve just learned there’s a pattern-matching package.) (And another.)
In CLOS, multiple dispatch is an adjunct to an object-oriented programming system; in Julia, multiple dispatch is an alternative to OOP, and perhaps a repudiation of it. OOP is a noun-centered protocol: Objects are things that own both data and procedures that act on the data. Julia is verb-centered: Procedures (or functions) are the active elements of a program, and they can be applied to any data, not just their own private variables. The Julia documentation notes:
Multiple dispatch is particularly useful for mathematical code, where it makes little sense to artificially deem the operations to “belong” to one argument more than any of the others: does the addition operation in x + y belong to x any more than it does to y? The implementation of a mathematical operator generally depends on the types of all of its arguments. Even beyond mathematical operations, however, multiple dispatch ends up being a powerful and convenient paradigm for structuring and organizing programs.
I agree about the peculiar asymmetry of asking x to add y to itself. And yet that’s only one small aspect of what object-oriented programming is all about. There are many occasions when it’s really handy to bundle up related code and data in an object, if only for the sake of tidiness. If I had been writing my primes program in an object-oriented language, I would have created a class of objects to hold various properties of a sequence of consecutive primes, and then instantiated that class for each sequence I generated. Some of the object fields would have been simple data, such as the length of the sequence. Others would have been procedures for calculating more elaborate statistics. Some of the data and procedures would have been private—accessible only from within the object.
No such facility exists in Julia, but there are other ways of addressing some of the same needs. Julia’s user-defined composite types look very much like objects in some respects; they even use the same dot notation for field access that you see in Java, JavaScript, and Python. A type definition looks like this:
type Location
    latitude::Float64
    longitude::Float64
end
Given a variable of this type, such as Paris = Location(48.8566, 2.3522), you can access the two fields as Paris.latitude and Paris.longitude. You can even have a field of type Function inside a composite type, as in this declaration:
type Location
    latitude::Float64
    longitude::Float64
    distance_to_pole::Function
end
Then, if you store an appropriate function in that field, you can invoke it as Paris.distance_to_pole(). However, the function has no special access to the other fields of the type; in particular, it doesn’t know which Location it’s being called from, so it doesn’t work like an OOP method.
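A closure can fake the missing self. In this sketch (my own; the 111-kilometers-per-degree figure is a crude approximation, and the inner-constructor trick relies on Julia allowing partially initialized types), the constructor stores a function that captures the freshly made instance:

```julia
type Location
    latitude::Float64
    longitude::Float64
    distance_to_pole::Function
    function Location(lat, lon)
        loc = new(lat, lon)    # distance_to_pole not yet set
        # the closure captures loc, so it knows "which object" it belongs to
        loc.distance_to_pole = () -> 111.0 * (90.0 - loc.latitude)
        return loc
    end
end

paris = Location(48.8566, 2.3522)
paris.distance_to_pole()   # roughly 4567 kilometers
```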
Julia also has modules, which are essentially private namespaces. Only variables that are explicitly marked as exported can be seen from outside a module. A module can encapsulate both code and data, and so it is probably the closest approach to objects as a mechanism of program organization. But there’s still nothing like the this or self keyword of true object-oriented languages.
Is it brave or foolish, at this point in the history of computer science, to turn your back on the object-oriented paradigm and try something else? It’s surely bold. Two or three generations of programmers have grown up on a steady diet of OOP. On the other hand, we have not yet found the One True Path to software wisdom, so there’s surely room for exploring other corners of the ecosystem.
A language for “technical computing” had better have strong numerical abilities. Julia has the full gamut of numeric types, with integers of various fixed sizes (from 8 bits to 128 bits), both signed and unsigned, and a comparable spectrum of floating-point precisions. There are also BigInts and BigFloats that expand to accommodate numbers of any size (up to the bounds of available memory). And you can do arithmetic with complex numbers and exact rationals.
This is a well-stocked toolkit, suitable for many kinds of computations. However, the emphasis is on fast floating-point arithmetic with real or complex values, approximated at machine precision. If you want to work in number theory, combinatorics, or cryptography—fields that require exact integers and rational numbers of unbounded size—the tools are provided, but using them requires some extra contortions.
To my taste, the most elegant implementation of numeric types is found in the Scheme programming language, a dialect of Lisp. The Scheme numeric “tower” begins with integers at its base. The integers are a subset of the rational numbers, which in turn are a subset of the reals, which are a subset of the complex numbers. Alongside this hierarchy, and orthogonal to it, Scheme also distinguishes between exact and inexact numbers. Integers and rationals can be represented exactly in a computer system, but many real and complex values can only be approximated, usually by floating-point numbers. In doing basic arithmetic, the Scheme interpreter or compiler preserves exactness. This is easy when you’re adding, subtracting, or multiplying integers, since the result is also invariably an integer. With division of integers, Scheme returns an integer when possible (e.g., \(15 \div 3 = 5\)) and otherwise an exact rational (\(4 \div 6 = 2/3\)). Some Schemes go a step further and return exact results for operations such as square root, when it’s possible to do so (\(\sqrt{4/9} = 2/3\); but \(\sqrt{5} = 2.236068\); the latter is inexact). These are numbers you can count on; they mostly obey well-known principles of mathematics, such as \(\frac{m}{n} \times \frac{n}{m} = 1\).
Julia has the same tower of numeric types, but it is not so scrupulous about exact and inexact values. Consider this series of results:
15 / 3 ⇒ 5.0
4 / 6 ⇒ 0.6666666666666666
(15 / 17) * (17 / 15) ⇒ 0.9999999999999999
Even when the result of dividing two integers is another integer, Julia converts the quotient to a floating-point value (signaled by the presence of a decimal point). We also get floats instead of exact rationals for noninteger quotients. Because exactness is lost, mathematical identities become unreliable.
I think I know why the Julia designers chose this path. It’s all about type stability. The compiler can produce faster code if the output type of a method is always the same. If division of integers could yield either an integer or a float, the result type would not be known until runtime. Using exact rationals rather than floats would impose a double penalty: Rational arithmetic is much slower than (hardware-assisted) floating point.
Given the priorities of the Julia project, consistent floating-point quotients were probably the right choice. I’ll concede that point, but it still leaves me grumpy. I’m tempted to ask, “Why not just make all numbers floating point, the way JavaScript does?” Then all arithmetic operations would be type stable by default. (This is not a serious proposal. I deplore JavaScript’s all-float policy.)
Julia does offer the tools to build your own procedures for exact arithmetic. Here’s one way to conquer divide:
function xdiv(x::Int, y::Int)
    q = x // y                  # yields rational quotient
    den(q) == 1 ? num(q) : q    # return int if possible
end

xdiv(15, 5) ⇒ 3
xdiv(15, 9) ⇒ 5//3
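The // operator also restores the mathematical identity that the floating-point quotients above give up:

```julia
(15 // 17) * (17 // 15)  # ⇒ 1//1, exactly
4 // 6                   # ⇒ 2//3, automatically reduced to lowest terms
15 // 3                  # ⇒ 5//1, an integer-valued rational
```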
Another numerical quirk gives me the heebie-jeebies. Let’s look again at one of those factorial functions defined above (it doesn’t matter which one).
# run the factorial function f on integers from 1 to limit
function test_factorial(f::Function, limit::Int)
    for n in 1:limit
        @printf("n = %d, f(n) = %d\n", n, f(n))
    end
end
test_factorial(factorial_range, 25) ⇒
n = 1, f(n) = 1
n = 2, f(n) = 2
n = 3, f(n) = 6
n = 4, f(n) = 24
n = 5, f(n) = 120
n = 6, f(n) = 720
n = 7, f(n) = 5040
n = 8, f(n) = 40320
n = 9, f(n) = 362880
n = 10, f(n) = 3628800
n = 11, f(n) = 39916800
n = 12, f(n) = 479001600
n = 13, f(n) = 6227020800
n = 14, f(n) = 87178291200
n = 15, f(n) = 1307674368000
n = 16, f(n) = 20922789888000
n = 17, f(n) = 355687428096000
n = 18, f(n) = 6402373705728000
n = 19, f(n) = 121645100408832000
n = 20, f(n) = 2432902008176640000
n = 21, f(n) = -4249290049419214848
n = 22, f(n) = -1250660718674968576
n = 23, f(n) = 8128291617894825984
n = 24, f(n) = -7835185981329244160
n = 25, f(n) = 7034535277573963776
Everything is copacetic through \(n = 20\), and then suddenly we enter an alternative universe where multiplying a bunch of positive integers gives a negative result. You can probably guess what’s happening here. The computation is being done with 64-bit, twos-complement signed integers, with the most-significant bit representing the sign. When the magnitude of the number exceeds \(2^{63} - 1\), the bit pattern is interpreted as a negative value; then, at \(2^{64}\), it crosses zero and becomes positive again. Essentially, we’re doing arithmetic modulo \(2^{64}\), with an offset.
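You can watch the wraparound happen at the boundary itself:

```julia
typemax(Int64)      # ⇒  9223372036854775807, which is 2^63 - 1
typemax(Int64) + 1  # ⇒ -9223372036854775808, which is -2^63
```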
When a number grows beyond the space allotted for it, something has to give. In some languages the overflowing integer is gracefully converted to a bignum format, so that it can keep growing without constraint. Most Lisps are in this family; so is Python. JavaScript, with its all-floating-point arithmetic, gives approximate answers beyond \(20!\), and beyond \(170!\) all results are deemed equal to Infinity. In several other languages, programs halt with an error message on integer overflow. C is one language that has the same wraparound behavior as Julia. (Actually, the C standard says that signed-integer overflow is “undefined,” which means the compiler can do anything it pleases; but the C compiler I just tested does a wraparound, and I think that’s the common policy.)
The Julia documentation includes a thorough and thoughtful discussion of integer overflow, defending the wraparound strategy by showing that all the alternatives are unacceptable for one reason or another. But even if you go along with that conclusion, you might still feel that wraparound is also unacceptable.
Casual disregard of integer overflow has produced some notably stinky bugs, causing everything from video-game glitches in Pac Man and Donkey Kong to the failure of the first Ariane 5 rocket launch. Dietz, Li, Regehr, and Adve have surveyed bugs of this kind in C programs, finding them even in a module called SafeInt that was specifically designed to protect against such errors. In my little prime-counting project with Julia, I stumbled over overflow problems several times, even after I understood exactly where the risks lay.
In certain other contexts Julia doesn’t play quite so fast and loose. It does runtime bounds checking on array references, throwing an error if you ask for element five of a four-element vector. But it also provides a macro (@inbounds) that allows self-confident daredevils to turn off this safeguard for the sake of speed. Perhaps someday there will be a similar option for integer overflows.
Until then, it’s up to us to write in any overflow checks we think might be prudent. The Julia developers themselves have done so in many places, including their built-in factorial function. See error message below.
test_factorial(factorial, 25) ⇒
n = 1, f(n) = 1
n = 2, f(n) = 2
n = 3, f(n) = 6
n = 4, f(n) = 24
n = 5, f(n) = 120
n = 6, f(n) = 720
n = 7, f(n) = 5040
n = 8, f(n) = 40320
n = 9, f(n) = 362880
n = 10, f(n) = 3628800
n = 11, f(n) = 39916800
n = 12, f(n) = 479001600
n = 13, f(n) = 6227020800
n = 14, f(n) = 87178291200
n = 15, f(n) = 1307674368000
n = 16, f(n) = 20922789888000
n = 17, f(n) = 355687428096000
n = 18, f(n) = 6402373705728000
n = 19, f(n) = 121645100408832000
n = 20, f(n) = 2432902008176640000
LoadError: OverflowError()
while loading In[19], in expression starting on line 1
in factorial_lookup at combinatorics.jl:29
in factorial at combinatorics.jl:37
in test_factorial at In[18]:5
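Writing such a check yourself is not hard. Here is a sketch of my own built on Base.checked_mul, which multiplies two integers but throws an OverflowError instead of silently wrapping:

```julia
# factorial that refuses to wrap around
function factorial_checked(n)
    fact = 1
    for i in 1:n
        fact = Base.checked_mul(fact, i)  # throws OverflowError on overflow
    end
    return fact
end

factorial_checked(20)  # ⇒ 2432902008176640000, the last factorial that fits
factorial_checked(21)  # throws OverflowError instead of going negative
```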
Julia is gaining traction in scientific computing, in finance, and in other fields; it has even been adopted as the specification language for an aircraft collision-avoidance system. I do hope that everyone is being careful.
“Batteries included” is a slogan of the Python community, boasting that everything you need is provided in the box. And indeed Python comes with a large standard library, supplemented by a huge variety of user-contributed modules (the PyPI index lists about 85,000 of them). On the other hand, although batteries of many kinds are included, they are not installed by default. You can’t accomplish much in Python without first importing a few modules. Just to take a square root, you have to import math. If you want complex numbers, they’re in a different module, and rationals are in a third.
In contrast, Julia is refreshingly self-contained. The standard library has a wide range of functions, and they’re all immediately available, without tracking down and loading external files. There are also about a thousand contributed packages, but you can do quite a lot of useful computing without ever glancing at them. In this respect Julia reminds me of Mathematica, which also tries to put all the commonly needed tools at your fingertips.
A Julia sampler: you get sqrt(x) of course. Also cbrt(x) and hypot(x, y) and log(x) and exp(x). There’s a full deck of trig functions, both circular and hyperbolic, with a variant subset that take their arguments in degrees rather than radians. For combinatorialists we have the factorial function mentioned above, as well as binomial, and various kinds of permutations and combinations. Number theorists can test numbers for primality, and factor those that are composite, calculate gcds and lcms, and so on. Farther afield we have beta, gamma, digamma, eta, and zeta functions, Airy functions, and a dozen kinds of Bessel functions.
For me Julia is like a well-equipped playground, where I can scamper from the swings to the teeter-totter to the monkey bars. At the recent JuliaCon Stefan Karpinski mentioned a plan to move some of these functions out of the automatically loaded Base module into external libraries. My request: Please don’t overdo it.
Julia works splendidly as a platform for incremental, interactive, exploratory programming. There’s no need to write and compile a program as a monolithic structure, with a main function as entry point. You can write a single procedure, immediately compile it and run it, test it, modify it, and examine the assembly code generated by the compiler. It’s ad hack programming at its best.
To support this style of work and play, Julia comes with a built-in REPL, or read-eval-print loop, that operates from a command-line interface. Type an expression at the prompt, press enter, and the system executes the code (compiling if necessary); the value is printed, followed by a new prompt for the next command:
The REPL is pretty good, but I prefer working in a Jupyter notebook, which runs in a web browser. There’s also a development environment called Juno, but I haven’t yet done much with it.
An incremental and iterative style of development can be a challenge for the compiler. When you process an entire program in one piece, the compiler starts from a blank slate. When you compile individual functions, which need to interact with functions compiled earlier, it’s all too easy for the various parts of the system to get out of sync. Gears may fail to mesh.
Problems of this kind can come up with any language, but multiple dispatch introduces some unique quirks. Suppose you’re writing Julia code in a Jupyter notebook. You write and compile a function f(x), which the system accepts as a “generic function with one method.” Later you realize that f should have been a function of two arguments, and so you go back to edit the definition. The modified source code now reads f(x, y). In most languages, this new definition of f would replace the old one, which would thereafter cease to exist. But in Julia you haven’t replaced anything; instead you now have a “generic function with 2 methods.” The first definition is still present and active in the internal workspace, and so you can call both f(x) and f(x, y). However, the source code for f(x) is nowhere to be found in the notebook. Thus if you end the Jupyter session and later reload the file, any calls to f(x) will produce an error message.
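A minimal sketch of the situation (the function and its definitions are hypothetical); after the second definition is evaluated, both methods coexist:

```julia
f(x) = x + 1        # first version, one argument
f(x, y) = x + y     # the "replacement" is really a second method

f(3)                # the old one-argument method is still callable
f(3, 4)
length(methods(f))  # 2: a generic function with two methods
```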
A similar but subtler problem with redefinitions has been a subject of active discussion among the Julia developers for almost five years. There’s hope for a fix in the next major release.
If redefining a function during an interactive session can lead to confusion, trying to redefine a custom type runs into an impassable barrier. Types are immutable. Once defined, they can’t be modified in any way. If you create a type T with two fields a and b, then later decide you also need a field c, you’re stuck. The only way to redefine the type is to shut down the session and restart, losing all computations completed so far. How annoying. There’s a workaround: If you put the type definition inside a module, you can simply reload the module.
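A sketch of the module workaround, with hypothetical names (recent Julia spells the keyword struct; in the 0.4-era syntax of this article it was type):

```julia
module Geometry
struct T          # the type we may want to change later
    a::Int
    b::Int
end
end

t = Geometry.T(1, 2)
# To add a field c, edit the definition and re-evaluate the whole module;
# Julia replaces Geometry (with a warning) instead of raising an error.
```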
Sometimes it’s the little things that arouse the most heated controversy.
In counting things, Julia generally starts with 1. A vector with n elements is indexed from 1 through n. This is also the default convention in Fortran, MATLAB, Mathematica, and a few other languages; the C family, Python, and JavaScript count from 0 through n-1. The wisdom of 1-based indexing has been a subject of long and spirited debate on the Julia issues forum, on the developer mailing list, and on StackOverflow.
Programs where the choice of indexing origin makes any difference are probably rare, but as it happens my primes study was one of them. I was looking at primes reduced modulo \(m\), and counting the number of primes in each of the \(m\) possible residue classes. Since the residues range from \(0\) to \(m - 1\), it would have been convenient to store the counts in an array with those indices. However, I did not find it terribly onerous to add \(1\) to each index.
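The bookkeeping amounts to a single +1 at the point of indexing; a sketch:

```julia
m = 7
counts = zeros(Int, m)            # counts[r + 1] holds residue class r
for p in (11, 13, 17, 19, 23, 29)
    counts[p % m + 1] += 1        # residues run 0 ... m-1, storage 1 ... m
end
counts                            # class 0 is empty; classes 1-6 hold one prime each
```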
Blessedly, the indexing war may soon be over. Tim Holy, a biologist and intrepid Julia contributor, discovered a way to give users free choice of indexing with only a few lines of code and little cost in efficiency. An experimental version of this new mechanism is in the 0.5 developer release.
After the indexing truce, we can still fight over the storage order for arrays and matrices. Julia again follows the precedent of Fortran and MATLAB, reading arrays in column-major order. Given the matrix
$$\begin{matrix}
x_{11} & x_{12} & x_{13} \\
x_{21} & x_{22} & x_{23} \\
\end{matrix}$$
Julia will serialize the elements as \(x_{11}, x_{21}, x_{12}, x_{22}, x_{13}, x_{23}\). C and many other recent languages read across the rows rather than down the columns. This is another largely arbitrary convention, although it may be of some practical importance that most graphic formats are row-major.
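To see the column-major order directly, flatten a small matrix:

```julia
A = [11 12 13;
     21 22 23]
vec(A)    # walks down the columns: [11, 21, 12, 22, 13, 23]
```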
Finally there’s the pressing question of what symbol to choose for the string-concatenation operator. Where Python has "abc" + "def" ⇒ "abcdef", Julia does "abc" * "def" ⇒ "abcdef". The reason cited is pretty doctrinaire: In abstract algebras, ‘+’ generally denotes a commutative operation, and string concatenation is not commutative. If you’d like to know more, see the extensive, long-running discussion.
The official online documentation is reasonably complete, consistent, and clearly written, but it gives too few examples. Moreover, the organization is sometimes hard to fathom. For example, there’s a chapter on “Multidimensional Arrays” and another titled “Arrays”; I’m often unsure where to look first.
Some of the contributed packages have quite sparse documentation. I made heavy use of a graphics package called Gadfly, which is a Julia version of the R package ggplot2. I had a hard time getting started with the supplied documentation, until I realized that Gadfly is close enough to ggplot2 that I could rely on the manual for the latter program.
The simple advice when you’re stumped is: Read the source, Luke. It’s all readily available and easily searched. And almost everything is written in Julia, so you’ll not only find the answers to your questions but also see instructive examples of idiomatic usage.
A few other online resources:
Julia by Example (Samuel Colvin)
Learn X in Y Minutes, Where X = Julia (Leah Hanson and others)
Introducing Julia (a Wikibook)
The Julia Express (Bogumił Kamiński)
Videos of the talks from JuliaCon 2016 are on YouTube.
As far as I know, Julia has no standard or specification document, which is an interesting absence. For a long time, programming languages had rather careful, formal descriptions of syntax and semantics, laying out exactly which expressions were legal and meaningful in a program. Sometimes the standard came first, before anyone tried to write a compiler or interpreter; Algol 60 was the premier example of this discipline. Sometimes the standard came after, as a device to reconcile diverging implementations; this was the situation with Lisp and C. For now, at least, Julia is apparently defined only by its evolving implementation. Where’s the BNF?
The rise and fall of programming languages seems to be governed by a highly nonlinear dynamic. Why is Python so popular? Because it’s so popular! With millions of users, there are more add-on packages, more books and articles, more conferences, more showcase applications, more jobs for Python programmers, more classes where Python is the language of instruction. Success breeds success. It’s a bandwagon effect.
If this rich-get-richer economy were the whole story, the world would have only one viable programming language. But there’s a price to be paid for popularity: The overcrowded bandwagon gets trapped in its own wheel ruts and becomes impossible to steer. Think of those 85,000 Python packages. If an improvement in the core language is incompatible with too much of that existing code, the change can be made only with great effort. For example, persuading Python users to migrate from version 2 to version 3 has taken eight years, and the transition is not complete yet.
Julia is approaching the moment when these contrary imperatives—to gain momentum but to stay maneuverable—both become powerful. At the recent JuliaCon, Stefan Karpinski gave a talk on the path to version 1.0 and beyond. The audience listened with the kind of studious attention that Janet Yellen gets when she announces Fed policy on inflation and interest rates. Toward the end, Karpinski unveiled the plan: to release version 1.0 within a year, by the time of the 2017 JuliaCon. Cheers and applause.
The 1.0 label connotes maturity—or at least adulthood. It’s a signal that the language is out of school and ready for work in the real world. And thus begins the struggle to gain momentum while continuing to innovate. For members of Julia’s developer group, there might be a parallel conflict between making a living and continuing to have fun. I’m fervently hoping all these tensions can be resolved.
Will Julia attract enough interest to grow and thrive? I feel I can best address that question by turning it back on myself. Will I continue to write code in Julia?
I’ve said that I do exploratory programming, but I also do expository programming. Computational experiments are an essential part of my work as a science writer. They help me understand what I’m writing about, and they also aid in the storytelling process, through simulations, computer-generated illustrations, and interactive gadgets. Although only a small subset of my readers ever glance at the code I publish, I want to maximize their number and do all I can to deepen their engagement—enticing them to read the code, run it, modify it, and ideally build on it. At this moment, Julia is not the most obvious choice for reaching a broad audience of computational dabblers, but I think it has considerable promise. In some respects, getting started with Julia is easier than it is with Python, if only because there’s less prior art—less to install and configure. I expect to continue working in other languages as well, but I do want to try some more Julia experiments.
Yet that’s not the end of the story. The distribution of the primes looks random, with irregular gaps and clusters that seem quite haphazard. If there’s a pattern, it’s inscrutable. As a matter of fact, the primes look random enough that you could play dice with them. Make a list of consecutive prime numbers (perhaps starting with 11, 13, 17, 19, . . . ) and reduce them modulo 7. In other words, divide each prime by 7 and keep only the remainder. The result is a sequence of integers drawn from the set {1, 2, 3, 4, 5, 6} that looks much like the outcome of repeatedly rolling a fair die.
$$\begin{align*}
11 \bmod 7 & \rightarrow 4 \qquad 47 \bmod 7 \rightarrow 5 \\
13 \bmod 7 & \rightarrow 6 \qquad 53 \bmod 7 \rightarrow 4 \\
17 \bmod 7 & \rightarrow 3 \qquad 59 \bmod 7 \rightarrow 3 \\
19 \bmod 7 & \rightarrow 5 \qquad 61 \bmod 7 \rightarrow 5 \\
23 \bmod 7 & \rightarrow 2 \qquad 67 \bmod 7 \rightarrow 4 \\
29 \bmod 7 & \rightarrow 1 \qquad 71 \bmod 7 \rightarrow 1 \\
31 \bmod 7 & \rightarrow 3 \qquad 73 \bmod 7 \rightarrow 3 \\
37 \bmod 7 & \rightarrow 2 \qquad 79 \bmod 7 \rightarrow 2 \\
41 \bmod 7 & \rightarrow 6 \qquad 83 \bmod 7 \rightarrow 6 \\
43 \bmod 7 & \rightarrow 1 \qquad 89 \bmod 7 \rightarrow 5 \\
\end{align*}$$
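The reduction itself is a one-liner in Julia; a sketch using the primes listed above:

```julia
primes_list = [11, 13, 17, 19, 23, 29, 31, 37, 41, 43]
residues = [p % 7 for p in primes_list]
# matches the left column of the table: 4, 6, 3, 5, 2, 1, 3, 2, 6, 1
```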
Working with a larger sample (the first million primes greater than \(10^7\)), I have tallied up the number of primes with each of the six possible remainders mod 7 (otherwise known as the six possible congruence classes mod 7). I have also simulated a million rolls of a six-sided die. Looking at the results of these two exercises, can you tell which is which?
       1        2        3        4        5        6
 166,787  166,569  166,714  166,573  166,665  166,692
     120      -98       47      -94       -2       25

       1        2        3        4        5        6
 166,768  166,290  166,412  166,638  167,282  166,610
     101     -377     -255      -29      615      -57
In each table the first line counts the number of outcomes, \(x\), in each of the six classes; the second line shows the difference \(x - \bar{x}\), where \(\bar{x}\) is the mean value 1,000,000 / 6 = 166,667. In both cases the numbers seem to be spread pretty evenly, without any obvious biases. The first table represents the prime residues mod 7. They have a somewhat flatter distribution than the simulated die, with smaller departures from the mean; the standard deviations of the two samples are 84 and 346 respectively. On the evidence of these tables it looks like either process could supply the randomness needed for a casual game of dice.
There’s more to randomness, however, than just ensuring that the results are evenly distributed across the allowed range. Individual events in the series must also be independent of one another. One roll of a die should have no effect on the outcome of the next roll. As a test of independence, we can look at pairs of successive events. How many times is a 1 followed by another 1, by a 2, by a 3, and so on? A 6 × 6 matrix serves to record the counts of the 36 possible pairs. If the process is truly random, all 36 pairs should be equally frequent, apart from small statistical fluctuations. We can turn the matrix into a color-coded “heatmap,” where cells with higher-than-average counts are shown in warm shades of pink and red, while those below the mean are filled with cooler shades of blue. (The quantity plotted is not the actual count \(x\) but a normalized variable \(w = (x_{i,\,j} - \bar{x})\, /\, \bar{x}\), where \(\bar{x}\) is again the mean value—in this case 1,000,000 / 36 = 27,778.) Here is the heatmap for the simulated fair die:
Figure 1.
Not much going on there. Almost all the counts are so close to the mean value that the matrix cells appear as a neutral gray; a few are very pale pink or blue. It’s just what you would expect if consecutive rolls of a die are uncorrelated, and all possible pairs are equally likely.
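Counting the successor pairs is straightforward; here’s a sketch of the tally (the residues below are those of 11, 13, …, 43 mod 7):

```julia
function pair_counts(residues, m)
    M = zeros(Int, m, m)
    for k in 1:length(residues) - 1
        # shift classes 0 ... m-1 to 1-based array indices
        M[residues[k] + 1, residues[k + 1] + 1] += 1
    end
    return M
end

M = pair_counts([4, 6, 3, 5, 2, 1, 3, 2, 6, 1], 7)
sum(M)    # 9 consecutive pairs from 10 residues
```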
Now for the corresponding matrix of consecutive primes mod 7:
Figure 2.
Well! I guess we’re not in Randomland anymore; this is where the old gray movie turns into Technicolor. The heatmap has a blue streak along the main diagonal (upper left to lower right), indicating that consecutive pairs of primes that have the same value mod 7 are strongly suppressed. In other words, the pairs \((1, 1), (2, 2), \ldots (6, 6)\) appear less often than they would in a truly random sequence. The superdiagonal (just above the main diagonal) is a lighter blue, meaning that \((i, j)\) pairs with \(j=i+1\) are seen at a little less than average frequency; for example, \((2, 3)\) and \((5, 6)\) have slightly negative values of normalized frequency. On the other hand, the subdiagonal (below the main diagonal) is all pink and red; pairs such as \((3, 2)\) or \((5, 4)\), with \(j=i-1\), occur with higher than average frequency. Away from the diagonal, in the upper right and lower left corners, we see a pastel checkerboard pattern.
If you’d prefer to squint at numbers rather than colored squares, here’s the underlying matrix:
pairs of consecutive primes mod 7

         1      2      3      4      5      6
  1  15656  24376  33891  29964  33984  28916
  2  37360  15506  22004  32645  25095  33959
  3  25307  41107  14823  22747  32721  30009
  4  32936  26183  37129  14681  21852  33791
  5  24984  34207  26231  41154  15560  24529
  6  30543  25190  32636  25382  37453  15488
The departures from uniformity are anything but subtle. The third row, for example, shows that if you have just seen a 3 in the sequence of primes mod 7, the next number is much more likely to be a 2 than another 3. If you were wagering on a game played with prime-number dice, this bias could make a huge difference in the outcome. The prime dice are rigged!
These remarkably strong correlations in pairs of consecutive primes were discovered by Robert J. Lemke Oliver and Kannan Soundararajan of Stanford University, who discuss them in a preprint posted to the arXiv in March. What I find most surprising about the discovery is that no one noticed these patterns long ago. They are certainly conspicuous enough once you know how to look for them.
I suppose we can’t fault Euclid for missing them; ideas about randomness and probability were not well developed in antiquity. But what about Gauss? He was a connoisseur of prime tables, and he compiled his own lists of thousands of primes. In his youth, he wrote, “one of my first projects was to turn my attention to the decreasing frequency of primes, to which end I counted the primes in several chiliads . . . .” Furthermore, Gauss more or less invented the idea of congruence classes and modular arithmetic. But apparently he never suspected there might be anything odd lurking in the congruences of pairs of consecutive primes.
In the 1850s the Russian mathematician Pafnuty Lvovich Chebyshev pointed out a subtle bias in the primes. Reducing the odd primes modulo 4 splits them into two subsets. All primes in the sequence 5, 13, 17, 29, 37, . . . are congruent to 1 mod 4; those in the sequence 3, 7, 11, 19, 23, 31, . . . are congruent to 3 mod 4. Chebyshev observed that primes in the latter category seem to be more abundant. Among the first 10,000 odd primes, for example, there are 4,943 congruent to 1 and 5,057 congruent to 3. However, the effect is tiny compared with the disparities seen in pairs of consecutive primes.
In modern times a few authors have reported glimpses of the consecutive-primes phenomenon; Lemke Oliver and Soundararajan mention three such sightings. (See references at the end of this article.) In the 1950s and 60s, Stanislaw Knapowski and Paul Turán investigated various aspects of prime residues mod m; in one paper, published posthumously in 1977, they discuss consecutive primes mod 4, with residues of 1 or 3. They “guess” that consecutive pairs with the same residue and those with different residues “are not equally probable.” In 2002 Chung-Ming Ko looked at sequences of consecutive primes (not just pairs of them) and constructed elaborate fractal patterns based on their varying frequencies. Then in 2011 Avner Ash and colleagues published an extended analysis of “Frequencies of Successive Pairs of Prime Residues,” including some matrices in which the diagonal depression is clearly evident.
Given these precedents, are Lemke Oliver and Soundararajan really the discoverers of the consecutive prime correlations? In my view the answer is yes. Although others may have seen the patterns before, they did not describe them in a way that registered on the consciousness of the mathematical community. As a matter of fact, when Lemke Oliver and Soundararajan announced their findings, the response was surprise verging on incredulity. Erica Klarreich, writing in Quanta, cited the reaction of James Maynard, a number theorist at Oxford:
When Soundararajan first told Maynard what the pair had discovered, “I only half believed him,” Maynard said. “As soon as I went back to my office, I ran a numerical experiment to check this myself.”
Evidently that was a common reaction. Evelyn Lamb, writing in Nature, quotes Soundararajan: “Every single person we’ve told this ends up writing their own computer program to check it for themselves.”
Well, me too! For the past few weeks I’ve been noodling away at lots of code to analyze primes mod m. What follows is an account of my attempts to understand where the patterns come from. My methods are computational and visual more than mathematical; I can’t prove a thing. Lemke Oliver and Soundararajan take a more rigorous and analytical approach; I’ll say a little more about their results at the end of this article.
If you would like to launch your own investigation, you’re welcome to use my code as a starting point. It is written in the Julia programming language, packed up in a Jupyter notebook, and available on GitHub. (Incidentally, this program was my first nontrivial experiment with Julia. I’ll have more to say about my experience with the language in a later post.)
All the examples presented above concern primes taken modulo 7, but there’s nothing special about the number 7 here. I chose it merely because the six possible remainders {1, 2, 3, 4, 5, 6} match the faces of an ordinary cubical die. Other moduli give similar results. Lemke Oliver and Soundararajan do much of their analysis with primes modulo 3, where there are only two congruence classes: A prime greater than 3 must leave a remainder of either 1 or 2 when divided by 3. This is the matrix of pair counts for the first million primes above \(10^7\):
          1       2
  1  218578  281453
  2  281454  218514
Figure 3.
The pattern is rather minimalist but still recognizable: The off-diagonal entries for sequences \((1, 2)\) and \((2, 1)\) are larger than the on-diagonal entries for \((1, 1)\) and \((2, 2)\).
Primes modulo 10 have four congruence classes: 1, 3, 7, 9. Working in decimal notation, we don’t even need to do any arithmetic to see this. When numbers are written in base 10, every prime greater than 5 has a trailing digit of 1, 3, 7, or 9. Here are the frequency counts for the 16 pairs of consecutive final digits:
         1      3      7      9
  1  43811  76342  78170  51644
  3  58922  41148  71824  78049
  7  64070  68542  40971  76444
  9  83164  63911  59063  43924
Figure 4.
The blue stripe along the main diagonal is clearly present, although elsewhere in the matrix the pattern is somewhat muted and muddled.
I have found that the correlations between successive primes show through most clearly when the modulus itself is a prime and also is not too small. Take a look at the heatmaps for consecutive primes mod 13 and mod 17:
Figure 5.
Figure 6.
Or how about mod 31?
Figure 7.
These would make great patterns for a quilt or a tiled floor, no? And there are interesting regularities visible in all of them. Diagonal stripes are prominent not just on the main corner-to-corner diagonal but throughout the matrix. Those stripes also generate a checkerboard motif; along any row or column, cells tend to alternate between red and blue. A subtler feature is an approximate bilateral symmetry across the antidiagonal (which runs from lower left to upper right). If you were to fold the square along this line, the cells brought together would be closely matched in color. (This is a fact noticed by Ash and his co-authors.)
As a focus of further analysis I have settled on looking at consecutive primes mod 19, a modulus large enough to yield clearly differentiated stripes but not so large as to make the matrix unwieldy.
Figure 8.
How to make sense of what we’re seeing? A starting point is the observation that all the primes in our sample are odd numbers, and hence all the intervals between those primes are even numbers. For any given prime \(p\), the next candidates for primehood are \(p+2, p+4, p+6, \ldots\). Could this have something to do with the checkerboard pattern? If the steps between primes must be multiples of 2, that could certainly create correlations between every second cell in a given column or row. (Indeed, the every-other-cell correlations would be starkly obvious—all even-numbered entries would be exactly zero—if the modulus were an even number. It is only by “wrapping around” the edge of the matrix at an odd boundary that any of the even-numbered cells can be populated.)
The diagonal stripes in the matrix suggest strong correlations between all pairs of primes separated by a certain numerical interval. For example, the deepest blue diagonal and the brightest red diagonal are formed by cells separated by six places along the j axis. In the first row are cells 1 and 7, then 2 and 8, 3 and 9, and so on. It occurred to me that this relationship would be easier to perceive if I could “twist” the matrix, so that diagonals became columns. The idea is to apply a cyclic shift to each row; all the values in the row slide to the left, and those that fall off the left edge are reinserted on the right. The first row shifts by zero positions, the next row by one position, and so on. (Is there a name for this transformation? I’m just calling it a twist.)
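In code, the twist is a per-row circshift; a sketch (here row 1 shifts by zero places, matching the description above):

```julia
function twist(M)
    T = similar(M)
    for i in 1:size(M, 1)
        T[i, :] = circshift(M[i, :], -(i - 1))   # negative count shifts leftward
    end
    return T
end

twist([1 2 3; 4 5 6; 7 8 9])
```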
When I wrote the code to apply this transformation, the result was not quite what I expected:
Figure 9.
What are those zigzags all along the antidiagonal? I guessed that I must have an off-by-one error. Indeed this was the nature of the problem, though the bug lay in the data, not the algorithm. The matrices I’ve displayed in all the figures above are only partial; they suppress empty congruence classes. In particular, the matrix for pairs of primes modulo 19 ignores all primes congruent to 0 mod 19—on the sensible-sounding grounds that there are no such primes. After all, if \(p > 19\) and \(p \equiv 0 \bmod 19\), then \(p\) cannot be prime because it is divisible by 19. Nevertheless, a row and a column for \(p \equiv 0 \bmod 19\) are properly part of the matrix. When they are included, the color-coded tableau looks like this:
Figure 10.
The presence of the zero row and column makes the definition of the twist transformation somewhat tidier: For each row \(i\), apply a leftward cyclic shift of \(i\) places. The resulting twisted matrix is also tidier:
Figure 11.
What do those vertical stripes tell us? In the original matrix, entry \(i, j\) represents the frequency with which \(i \bmod 19\) is followed by \(j \bmod 19\). Here, the color in cell \(i, j\) indicates the frequency with which \(i \bmod 19\) is followed by \((i + j) \bmod 19\). In other words, each column brings together entries with the same interval mod 19 between two primes. For example, the leftmost column includes all pairs separated by an interval of length \(0 \bmod 19\), and the bright red column at \(j = 6\) counts all the cases where successive primes are separated by \(6 \bmod 19\).
The color coding gives a qualitative impression of which intervals are seen more or less commonly. For a more precise quantitative measure, we can sum along the columns and display the totals in a bar chart:
Figure 12.
Three observations:
I wanted to understand the origin of these patterns. What makes interval 6 such a magnet for pairs of consecutive primes, and why do almost all the primes shun poor interval 0?
For the popularity of 6, I already had an inkling. In the 1990s Andrew Odlyzko, Michael Rubinstein, and Marek Wolf undertook a computational study of prime “jumping champions”:
An integer D is called a jumping champion if it is the most frequently occurring difference between consecutive primes ≤ x for some x.
Among the smallest primes (x less than about 600), the jumping champion is usually 2, but then 6 takes over and dominates for quite a long stretch of the number line. Somewhere around \(x = 10^{35}\), 6 cedes the championship to 30, which eventually gives way to 210. Odlyzko et al. estimate that the latter transition takes place near \(x = 10^{425}\). The numbers in this sequence of jumping champions—2, 6, 30, 210, . . . —are the primorials; the nth primorial is the product of the first n primes.
Why should primorials be the favored intervals between consecutive primes? If \(p\) is a large enough prime, then \(p + 2\) cannot be divisible by 2, \(p + 6\) cannot be divisible by either 2 or 3, \(p + 30\) cannot be divisible by 2, 3, or 5, and in general \(p + P_{n}\), where \(P_{n}\) is the nth primorial, cannot be divisible by any of the first n primes. Of course \(p + P_{n}\) might still be divisible by some larger prime, or there might be another prime between \(p\) and \(p + P_{n}\), so that the interprime interval is certainly not guaranteed to be a primorial. But these intervals have an edge over other contenders.
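A quick numerical check of this argument, using a handful of small primes:

```julia
for p in (11, 13, 17, 19, 23, 29, 31, 37)
    @assert isodd(p + 2)                                # p + 2 dodges the factor 2
    @assert isodd(p + 6) && (p + 6) % 3 != 0            # p + 6 dodges 2 and 3
    @assert all((p + 30) % q != 0 for q in (2, 3, 5))   # p + 30 dodges 2, 3, 5
end
```

The assertions hold because a prime \(p > 5\) is divisible by none of 2, 3, 5, and adding a primorial leaves \(p\) unchanged modulo each of its prime factors.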
We can see this reasoning in action by taking the differences between successive elements in our list of a million eight-digit primes, then plotting their frequencies:
Figure 13.
Again interval 6 is the clear standout, accounting for 13.7 percent of the total; higher multiples of 6 also poke above their immediate neighbors. And note the overall shape of the distribution: a lump at the left (with a peak at 6), followed by a steady decline. The trend looks a little like a Poisson distribution, and indeed this is thought to be the correct description.
The color scheme slices the data set into tranches of 19 values each. The blue tranche, which includes inter-prime intervals of length 0 to 18, accounts for 68 percent of all the intervals present in the sample of a million primes; the gold tranche adds another 23 percent. The remaining 9 percent are spread widely and thinly. Not all of the intervals are shown in the graph; the spectrum extends as far as 210. (A single pair of consecutive primes in the sample has a separation of 210, namely 20,831,323 and 20,831,533.)
Figure 13 seems to reveal a great deal about the patterns of consecutive primes mod 19. I can make the graph even more informative with a simple rearrangement. Slide each 19-element tranche to the left until it aligns with the 0 tranche, stacking up bars that fall in the same column. Thus the second (gold) tranche moves left until bar 19 lines up with bar 0, and the third (rose) tranche brings together bar 38 with bar 0. Physically, this process can be imagined as wrapping the graph around a cylinder of circumference 19; mathematically, it amounts to reducing the inter-prime intervals modulo 19.
Figure 14.
If you ignore the garish colors, Figure 14 is identical to Figure 12: All the bar heights match up. This should not be a surprise. In Figure 12 we reduce the primes modulo 19 and then take the differences between successive reduced primes; in Figure 14 we take the differences between the primes themselves and then reduce those differences modulo 19. The two procedures are equivalent:
\[(p \bmod 19 - q \bmod 19) \bmod 19 = (p - q) \bmod 19.\]
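One caveat when checking this in Julia: the difference of residues can go negative, so use mod (which always returns a value in 0 … 18) rather than the % operator, which has rem semantics:

```julia
p, q = 59, 61                            # residues 2 and 4 mod 19
lhs = mod(mod(p, 19) - mod(q, 19), 19)   # mod(-2, 19) == 17
rhs = mod(p - q, 19)                     # also 17
# by contrast, (p - q) % 19 in Julia yields -2
```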
Looking at the colors now, the pieces of the puzzle fall into place. Why are primes mod 19 so often separated by an interval of 6? Well, “mod 19” has very little to do with it; 6 itself is by far the most common interval between primes in this sample. The only other nonnegligible contribution to \(\delta \equiv 6 \bmod 19\) comes from the third tranche, specifically a few pairs of primes at a distance of 44.
The predominance of the first tranche also explains the disparity between odd and even intervals. All the intervals in the first tranche are necessarily even; odd intervals (mod 19) begin to appear only with the second tranche (intervals 19 to 37) and for that reason alone they are less well populated. For the eight-digit primes in this sample, more than two-thirds of consecutive pairs are closer than 19 units and thus wind up in the first tranche. (The median spacing between the primes is 12. The mean interval is 16.68, in close accord with the theoretical prediction of 16.72.)
Finally, Figure 14 also has something to say about the rarity of 0 intervals mod 19. No two consecutive primes can fall into the same congruence class mod 19 unless they are separated by a distance of 38 or some multiple of 38. Thus such pairs don’t enter the scene until the beginning of the third tranche, and there can’t be very many of them. The million-prime sample has 8,384 consecutive pairs at a distance of 38—less than 1 percent of the total. This is the main reason that a prime-number die so rarely shows the same face twice in a row. It’s the origin of the blue diagonal streak in all the matrices.
I find it interesting that we can explain so much about the pattern of consecutive primes mod m without delving into any of the deep and distinctive properties of prime numbers. In fact, we can replicate much of the pattern without introducing primes at all.
Two hundred years ago, Gauss and Legendre observed that in the neighborhood of a number \(x\), the fraction of all integers that are prime is about \(1 / \log x\). In 1936 the Swedish mathematician Harald Cramér suggested that we interpret this fraction as a probability. The idea is to go through the integers in sequence, accepting each \(x\) with probability \(1 / \log x\). The numbers in the accepted set will not be prime except by coincidence, but they will have the same large-scale distribution as the primes. Here are the first few entries in a list of a million such “Cramér primes,” where the random selection process started with \(x = 10^7\):
10000007 10000022 10000042 10000065 10000068 10000098 10000110 10000116 10000119 10000128 10000166
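A sketch of the selection procedure (the function name is mine; since the process is random, each run yields a different list):

```julia
function cramer_sample(start, n)
    accepted = Int[]
    x = start
    while length(accepted) < n
        rand() < 1 / log(x) && push!(accepted, x)   # accept x with probability 1/log(x)
        x += 1
    end
    return accepted
end

cramer_sample(10_000_000, 11)
```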
Now suppose we put these numbers through the same machinery we applied to the primes. We’ll reduce each Cramér prime mod 19 and then construct the 19 × 19 matrix of successors:
Figure 15.
The prominent diagonal features look familiar, but they are much simpler than those in the corresponding prime diagrams. For any Cramér prime p mod 19, the most likely successor is p + 1 mod 19, and the least likely is p + 19 mod 19. Between these extremes there’s a smooth gradient in frequency or probability, with just a few small fluctuations that can probably be attributed to statistical noise.
One thing that’s missing from this matrix is the checkerboard motif. We can restore some of that structure by generating a new set of numbers I call Cramér semiprimes. They are formed by the same kind of probabilistic sieving of the integers, but this time we consider only the odd numbers as candidates, and adjust the probability to \(2\, / \log x\) to keep the overall density the same:
Figure 16.
That’s more like it! With all even numbers excluded from the sequence, the minimum interval between Cramér semiprimes is 2, and that is also the likeliest spacing.
With one further modification, we get an even closer imitation of the true prime matrix. In addition to excluding all integers divisible by 2, we knock out those divisible by 3, and adjust the probability of selection accordingly. Call the resulting numbers Cramér demisemiprimes.
Figure 17.
Note that 6 mod 19 is the likeliest interval between Cramér demisemiprimes, just as it is between true primes, and there are the same echoes at intervals of 12 and 18. Indeed, the only conspicuous difference between this matrix and Figure 10 (the corresponding diagram for true primes) is in the column and the row for numbers congruent to 0 mod 19. There can be no such numbers among the primes. If we eliminate them also from the Cramér numbers, the two matrices become hard to distinguish. Here they are side by side:
Figure 18.
If you look closely, there are differences to be found—check out the diagonal extending southeast from row 1, column 15—but overall these modified Cramér numbers are shockingly good fake primes. Even the symmetry about the antidiagonal is visible in both diagrams. And keep in mind that the two sets have only about 19 percent of their values in common; the Cramérs include 189,794 genuine primes.
I have one more twist to add to the tale. All the examples above are based on primes (or prime analogues) of eight decimal digits, or in other words numbers in the vicinity of \(10^7\). Do the same conclusions hold up with larger primes? Consider the tableau created by consecutive pairs of a million 40-digit primes, taken mod 19. The pattern is familiar but faded:
Figure 19.
Going on to primes of 400 digits each, again reduced mod 19, we find the colors have been bleached almost to oblivion:
Figure 20.
The blue streak on the main diagonal is barely discernible, and other features amount to mere random mottling.
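Sampling consecutive primes of 40 or 400 digits is practical because probabilistic primality tests run fast even on numbers that large. A self-contained Python sketch using Miller–Rabin (not the project's Julia code; the round count and helper names are illustrative):

```python
import random

def is_probable_prime(n, rounds=20, rng=random.Random(1)):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False          # a is a witness: n is composite
    return True                   # probably prime

def consecutive_primes(start, count):
    """The first `count` (probable) primes >= start."""
    out, n = [], start
    while len(out) < count:
        if is_probable_prime(n):
            out.append(n)
        n += 1
    return out

# ten consecutive 40-digit primes (10^39 is the smallest 40-digit number)
sample = consecutive_primes(10 ** 39, 10)
```

The error probability per test is below \(4^{-20}\), negligible for statistical work at this scale.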
Thus it seems that size matters when it comes to pairs of consecutive primes. For a hint as to why, take a look at the tally of differences between successive primes for the 40-digit sample:
Figure 21.
Compared with the distribution of intervals for eight-digit primes (Figure 13), the spectrum is much broader and flatter. In this representation the graph is truncated at interval 240; the long tail actually stretches far to the right, with the largest gap between consecutive primes at 1,328. Also, as predicted by Odlyzko and his colleagues, the most frequent interval between 40-digit primes is not 6 but 30.
Because of the wider distribution of intervals, the first tranche cannot dominate the behavior of the system in the way it does with eight-digit primes. When we stack up the tranches mod 19 (Figure 22, below), the first six or eight tranches all make substantial contributions. The odd-even alternation is still present, but the amplitude of these oscillations is much reduced. The leftmost bar in the graph, representing intervals congruent to 0 mod 19, is still stunted, but less severely than before.
Figure 22.
The flattening of the spectrum becomes even more pronounced in the sample of a million 400-digit primes:
Figure 23.
Now the gaps between primes extend all the way out to 15,080, creating almost 800 tranches mod 19 (though only 13 are shown). And there’s a lot of intriguing, comblike structure in the array. In general, bars at multiples of 6 stand out at almost double the height of their near neighbors, showing the continuing influence of the smallest prime factors 2 and 3. Multiples of 6 that are also multiples of 30 reach even greater heights. Values in the sequence 42, 84, 126, 168, 210, . . . are also enhanced; these numbers are multiples of 42 = 2 × 3 × 7. And notice that 210, which is a multiple of 6 and 30 and 42, is the new champion interval, again supporting an Odlyzko prediction.
Despite all this intricate structure, when the bars are stacked up mod 19, the mixing of the 800 tranches is so thorough that the heights are almost uniform. All that’s left is a tiny bit of even-odd disparity.
Figure 24.
And the chronically unpopular class of intervals congruent to 0 mod 19 has finally caught up with its peers. Most of the height of the bars comes not from the dozen early tranches but from the hundreds of later ones representing intervals between 228 and 15,080 (all lumped together in the teal green area of the graph).
The experiments with large primes suggest a plausible surmise: As the size of the primes goes to infinity, all traces of correlations will fade to gray, and consecutive pairs of primes will be as random as rolls of an ideal die. But is it so? There are several reasons to be skeptical of this hypothesis. First, if we scale the modulus m along with the size of the primes—making it comparable in magnitude to the median gap between primes—the correlations may still show through. For my 40-digit sample, the median gap between primes is 66, so let’s look at the successive-pairs matrix mod 61. (To limit statistical noise, I did this computation with a sample of 10 million 40-digit primes rather than 1 million.)
Figure 25.
The stripes are back! Indeed, in addition to the familiar bright red stripes at intervals of 6, there’s a more diffuse pink-and-blue undulation with a period of 30. I would love to see a matrix for primes of 400 digits, which might well have even more complex features, with interacting waves at periods of 6, 30, and 210. Regrettably, I can’t show you that figure. The median gap between 400-digit primes is about 640, so we’d need to set m equal to a prime in this range, say 641. Filling that 641 × 641 matrix would require about a billion consecutive 400-digit primes, which is more than I’m prepared to calculate.
There are other reasons to doubt that the correlations disappear entirely as the primes grow larger. The comblike structure seen so clearly in Figures 21 and 23 suggests that rules of divisibility by small primes have a major influence on the distribution of large primes mod m—and this influence does not wane when the primes grow larger still. Furthermore, even when m is much smaller than the median inter-prime interval, the blue streak remains faintly visible. Here is the matrix for pairs of consecutive 400-digit primes mod 3:
        1        2
 1 248291   251128
 2 251127   249453
Differences between on-diagonal and off-diagonal elements are much smaller than with eight-digit primes (compare Figure 3), but the discrepancies still don’t look like random variation.
To get a clearer picture of how the correlations vary as a function of the size of the primes, I set out to sample the sequence of primes over the entire range from 1-digit numbers to 400-digit numbers. In this project I decided to go Gauss one better: He tabulated primes by the chiliad (a group of 1,000), and I’ve been computing them by the myriad (a group of 10,000). To measure the correlations among primes mod m, I calculated the mean value of the diagonal elements of the matrix and the mean of the off-diagonal elements, then took the off/on ratio. If successive primes were totally uncorrelated, the ratio should converge to 1.
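In code, the off/on statistic looks something like this (a Python sketch of the idea; the project itself is in Julia, and this version assumes both diagonal and off-diagonal cells are populated):

```python
def off_on_ratio(seq, m):
    """Mean off-diagonal element divided by mean on-diagonal element of the
    successor matrix for consecutive elements of seq, taken mod m. Residue
    classes that never occur (such as 0 mod m for primes) are excluded.
    A ratio near 1 suggests successive values are uncorrelated."""
    counts = {}
    for p, q in zip(seq, seq[1:]):
        key = (p % m, q % m)
        counts[key] = counts.get(key, 0) + 1
    classes = sorted({a for a, b in counts} | {b for a, b in counts})
    on = [counts.get((a, a), 0) for a in classes]
    off = [counts.get((a, b), 0) for a in classes for b in classes if a != b]
    return (sum(off) / len(off)) / (sum(on) / len(on))
```

Fed a myriad of consecutive primes mod 3, this yields one point on the curve of Figure 26.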
Figure 26 shows the result for 797 myriads of primes mod 3. The curve is concave upward, with a steep initial decline and then a much flatter segment. Starting at about 100 digits, there are samples with off/on ratios of less than 1, meaning that the diagonal is more densely populated than the off-diagonal regions. But even at 400 digits the majority of the ratios are still above 1. What are we looking at here? Does the curve slowly approach a ratio of 1, or is there a limiting value slightly greater than 1? Unfortunately, computational experiments will not give a definitive answer.
Figure 26.
The paper by Lemke Oliver and Soundararajan brings quite different tools to bear on this problem. Although they do some numerical exploration, their focus is on finding an analytic solution. The goal is a mathematical function or formula whose inputs are four positive integers: m is a modulus, a and b are congruence classes of primes mod m, and x is an upper limit on the size of the primes. The formula should tell us how often a is followed by b among all primes smaller than x. If we were in possession of such a formula, we could color in all the squares in the m × m successor matrix without ever having to compute the actual primes up to x.
Describing the behavior of all primes up to x is far more challenging than taking a sample of primes in the neighborhood of x. And the analytic approach is harder in other ways: It requires ideas rather than just cpu cycles. The reward is also potentially greater. The equation \(A = \pi r^2\) yields an exact truth about all circles, something no finite series of computations (with a finite approximation to \(\pi\)) can give us. There’s the promise not just of rigor but of insight.
Sadly, I’ve not yet been able to gain much insight from reading the analysis of Lemke Oliver and Soundararajan. The blame lies mainly with gaping holes in my knowledge of analytic number theory, but I think it’s also fair to say that the math gets pretty hairy at certain points in this discourse. The equation below constitutes the Main Conjecture of Lemke Oliver and Soundararajan. (I have made a minor change of notation and simplified one aspect of the equation: The original applies to sequences of r consecutive primes, but this version describes pairs only, i.e., \(r = 2\).)
\[\pi(x; m, a, b) = \frac{\mathrm{li}(x)}{\phi(m)^2}\left(1 + c_1(m; a, b)\frac{\log \log x}{\log x} + c_2(m; a, b) \frac{1}{\log x} + O\Big( \frac{1}{(\log x)^{7/4}} \Big) \right)\]
I think I understand enough of what’s going on here to at least offer a glossary. To the left of the equal sign, \(\pi(x; m, a, b)\) denotes a counting function; whereas \(\pi(x)\) counts the primes up to \(x\), \(\pi(x; m, a, b)\) is the number of pairs of consecutive primes mod \(m\) that fall into the congruence classes \(a\) and \(b\). To the right of the equal sign, the main coefficient \(\mathrm{li}(x) / \phi(m)^2\) is essentially the mean or expected number of pairs if the primes were distributed uniformly at random, with no correlations between successive primes; \(\mathrm{li}(x)\) is the logarithmic integral of \(x\), an approximation to \(\pi(x)\), and \(\phi(m)\) is the Euler totient function, which counts the number of possible congruence classes mod \(m\), so that \(\phi(m)^2\) is the number of elements in the successor matrix.
The leading term inside the large parentheses is simply \(1\), and so it takes on the value of the main coefficient \(\mathrm{li}(x) / \phi(m)^2\); thus the mean number of pairs \((a, b)\) becomes the first approximation to the counting function. The three following terms act as corrections to this first approximation; for large \(x\) they should get progressively smaller, because \(\log \log x / \log x \gt 1 / \log x \gt 1 / (\log x)^{7/4}\) whenever \(x \gt e^e \approx 15\).
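It's easy to confirm that ordering numerically, and the numbers also hint at why the fading is so slow: even at 400 digits the largest correction factor is still about 0.007. A quick Python check (log x is computed from the digit count, since \(10^{400}\) overflows a machine float):

```python
import math

def correction_terms(digits):
    """The three x-dependent factors in the Main Conjecture, for x = 10**digits."""
    L = digits * math.log(10)          # log x
    return math.log(L) / L, 1 / L, 1 / L ** 1.75

# For x well above e^e, the terms shrink in the stated order:
for d in (8, 40, 400):
    t1, t2, t3 = correction_terms(d)
    assert t1 > t2 > t3
```

At d = 8 the leading correction factor is about 0.16; at d = 400 it has only fallen to about 0.0074, which is consistent with the slow decay of the off/on ratio in Figure 26.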
What about the coefficients of those three correction terms? The notation \(O(\cdot)\) for the smallest term indicates that we’re only going to worry about the term’s order of magnitude—which will be small for large \(x\). The coefficient \(c_1\) takes the following form in the case of \(r = 2\):
\[c_1(m; a, b) = \frac{1}{2} - \frac{\phi(m)}{2} (\#\{a \equiv b \bmod m\})\]
The expression \(\#\{a \equiv b \bmod m\}\) counts the number of cases where \(a\) and \(b\) lie in the same congruence class mod \(m\). Thus the effect of the term (if I understand correctly) is to reduce the overall count along the matrix diagonal, where \(a \equiv b \bmod m\).
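A small computation makes the effect concrete. For m = 19 we have \(\phi(19) = 18\), so \(c_1 = 1/2 - 9 = -8.5\) on the diagonal and \(+1/2\) everywhere else: a depressed diagonal, matching the blue streak in the matrices above. A Python sketch:

```python
from math import gcd

def phi(m):
    """Euler's totient function (naive version, fine for small m)."""
    return sum(1 for r in range(1, m + 1) if gcd(r, m) == 1)

def c1(m, a, b):
    """First-order correction coefficient for r = 2:
    c1 = 1/2 - (phi(m)/2) * [a and b in the same class mod m]."""
    return 0.5 - (phi(m) / 2) * (1 if a % m == b % m else 0)

print(c1(19, 3, 3))   # diagonal entry:     -8.5
print(c1(19, 3, 5))   # off-diagonal entry:  0.5
```

The large negative value on the diagonal is the analytic counterpart of the empty-looking cells where a prime is followed by another in the same residue class.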
As for coefficient \(c_2\), Lemke Oliver and Soundararajan remark that “in general, [it] seems complicated.” Indeed it does. And so this is the place where I should encourage those readers who want to know more to go read the original.
The complexity of the mathematical treatment leaves me feeling frustrated, but it’s hardly unusual for an easily stated problem to require a deep and difficult solution. I hang onto the hope that some of the technicalities will be brushed aside and the main ideas will emerge more clearly with further work. In the meantime, it’s still possible to explore a fascinating and long-hidden corner of number theory with the simplest of computational tools and a bit of graphics.
“God may not play dice with the universe, but something strange is going on with the prime numbers”—so said Paul Erdős and/or Mark Kac, though only with a little help from Carl Pomerance. The strangeness seems to be at its strangest when we play dice with the primes.
Addendum 2016-06-14. I noted above that the distribution of primes mod 7 seems flatter, or more nearly uniform, than the result of rolling a fair die. John D. Cook has applied a chi-squared test to the data and shows that the fit to a uniform distribution is way too good to be the plausible outcome of a random process. His first post deals with the specific case of primes modulo 7; his second post considers other moduli.
References
Ash, Avner, Laura Beltis, Robert Gross, and Warren Sinnott. 2011. Frequencies of successive pairs of prime residues. Experimental Mathematics 20(4):400–411.
Chebyshev, Pafnuty Lvovich. 1853. Lettre de M. le Professeur Tchébychev à M. Fuss sur un nouveau théorème relatif aux nombres premiers contenus dans les formes 4n + 1 et 4n + 3. Bulletin de la Classe Physico-mathématique de l’Académie Impériale des Sciences de Saint-Pétersbourg 11:208. Google Books
Cramér, Harald. 1936. On the order of magnitude of the difference between consecutive prime numbers. Acta Arithmetica 2:23–46. PDF
Derbyshire, John. 2002. Chebyshev’s bias.
Granville, Andrew. 1995. Harald Cramér and the distribution of prime numbers. Harald Cramér Symposium, Stockholm, 1993. Scandinavian Actuarial Journal 1:12–28. PDF
Granville, Andrew, and Greg Martin. 2004. Prime number races. arXiv
Hamza, Kais, and Fima Klebaner. 2012. On the statistical independence of primes. The Mathematical Scientist 37:97–105.
Klarreich, Erica. 2016. Mathematicians discover prime conspiracy. Quanta.
Knapowski, S., and P. Turán. 1977. On prime numbers ≡ 1 resp. 3 mod 4. In Number Theory and Algebra: Collected Papers Dedicated to Henry B. Mann, Arnold E. Ross, and Olga Taussky-Todd, pp. 157–165. Edited by Hans Zassenhaus. New York: Academic Press.
Ko, Chung-Ming. 2002. Distribution of the units digit of primes. Chaos Solitons Fractals 13(6):1295–1302.
Lamb, Evelyn. 2016. Peculiar pattern found in ‘random’ prime numbers. Nature doi:10.1038/nature.2016.19550.
Lemke Oliver, Robert J., and Kannan Soundararajan. 2016 preprint. Unexpected biases in the distribution of consecutive primes. arXiv
Odlyzko, Andrew, Michael Rubinstein, and Marek Wolf. 1999. Jumping champions. Experimental Mathematics 8(2):107–118.
Rubinstein, Michael, and Peter Sarnak. 1994. Chebyshev’s bias. Experimental Mathematics 3:173–197. Project Euclid
Tao, Terence. Structure and randomness in the prime numbers. PDF
Extrapolating the steep trend line of the past five years predicts a thousandfold increase in capacity by about 2012; in other words, today’s 120-gigabyte drive becomes a 120-terabyte unit.
Extending that same growth curve into 2016 would allow for another four doublings, putting us on the threshold of the petabyte disk drive (i.e., \(10^{15}\) bytes).
None of that has happened. The biggest drives in the consumer marketplace hold 2, 4, or 6 terabytes. A few 8- and 10-terabyte drives were recently introduced, but they are not yet widely available. In any case, 10 terabytes is only 1 percent of a petabyte. We have fallen way behind the growth curve.
The graph below extends an illustration that appeared in my 2002 article, recording growth in the areal density of disk storage, measured in bits per square inch:
The blue line shows historical data up to 2002 (courtesy of Edward Grochowski of the IBM Almaden Research Center). The bright green line represents what might have been, if the 1997–2002 trend had continued. The orange line shows the real status quo: We are three orders of magnitude short of the optimistic extrapolation. The growth rate has returned to the more sedate levels of the 1970s and 80s.
What caused the recent slowdown? I think it makes more sense to ask what caused the sudden surge in the 1990s and early 2000s, since that’s the kink in the long-term trend. The answers lie in the details of disk technology. More sensitive read heads developed in the 90s allowed information to be extracted reliably from smaller magnetic domains. Then there was a change in the geometry of the domains: the magnetic axis was oriented perpendicular to the surface of the disk rather than parallel to it, allowing more domains to be packed into the same surface area. As far as I know, there have been no comparable innovations since then, although a new writing technology is on the horizon. (It uses a laser to heat the domain, making it easier to change the direction of magnetization.)
As the pace of magnetic disk development slackens, an alternative storage medium is coming on strong. Flash memory, a semiconductor technology, has recently surpassed magnetic disk in areal density; Micron Technologies reports a laboratory demonstration of 2.7 terabits per square inch. And Samsung has announced a flash-based solid-state drive (SSD) with 15 terabytes of capacity, larger than any mechanical disk drive now on the market. SSDs are still much more expensive than mechanical disks—by a factor of 5 or 10—but they offer higher speed and lower power consumption. They also offer the virtue of total silence, which I find truly golden.
Flash storage has replaced spinning disks in about a quarter of new laptops, as well as in all phones and tablets. It is also increasingly popular in servers (including the machine that hosts bit-player.org). Do disks have a future?
In my sentimental moments, I’ll be sorry to see spinning disks go away. They are such jewel-like marvels of engineering and manufacturing prowess. And they are the last link in a long chain of mechanical contrivances connecting us with the early history of computing—through Turing’s bombe and Babbage’s brass gears all the way back to the Antikythera mechanism two millennia ago. From here on out, I suspect, most computers will have no moving parts.
Maybe in a decade or two the spinning disk will make a comeback, the way vinyl LPs and vacuum tube amplifiers have. “Data that comes off a mechanical disk has a subtle warmth and presence that no solid-state drive can match,” the cognoscenti will tell us.
“You can never be too rich or too thin,” someone said. And a computer can never be too fast. But the demand for data storage is not infinitely elastic. If a file cabinet holds everything in the world you might ever want to keep, with room to spare, there’s not much added utility in having 100 or 1,000 times as much space.
In 2002 I questioned whether ordinary computer users would ever fill a 1-terabyte drive. Specifically, I expressed doubts that my own files would ever reach the million megabyte mark. Several readers reassured me that data will always expand to fill the space available. I could only respond “We’ll see.” Fourteen years later, I now have the terabyte drive of my dreams, and it holds all the words, pictures, music, video, code, and whatnot I’ve accumulated in a lifetime of obsessive digital hoarding. The drive is about half full. Or half empty. So I guess the outcome is still murky. I can probably fill up the rest of that drive, if I live long enough. But I’m not clamoring for more space.
One factor that has surely slowed demand for data storage is the emergence of cloud computing and streaming services for music and movies. I didn’t see that coming back in 2002. If you choose to keep some of your documents on Amazon or Azure, you obviously reduce the need for local storage. Moreover, offloading data and software to the cloud can also reduce the overall demand for storage, and thus the global market for disks or SSDs. A typical movie might take up 3 gigabytes of disk space. If a million people load a copy of the same movie onto their own disks, that’s 3 petabytes. If instead they stream it from Netflix, then in principle a single copy of the file could serve everyone.
In practice, Netflix does not store just one copy of each movie in some giant central archive. They distribute rack-mounted storage units to hundreds of internet exchange points and internet service providers, bringing the data closer to the viewer; this is a strategy for balancing the cost of storage against the cost of communications bandwidth. The current generation of the Netflix Open Connect Appliance has 36 disk drives of 8 terabytes each, plus 6 SSDs that hold 1 terabyte each, for a total capacity of just under 300 terabytes. (Even larger units are coming soon.) In the Netflix distribution network, files are replicated hundreds or thousands of times, but the total demand for storage space is still far smaller than it would be with millions of copies of every movie.
A recent blog post by Eric Brewer, Google’s vice president for infrastructure, points out:
The rise of cloud-based storage means that most (spinning) hard disks will be deployed primarily as part of large storage services housed in data centers. Such services are already the fastest growing market for disks and will be the majority market in the near future. For example, for YouTube alone, users upload over 400 hours of video every minute, which at one gigabyte per hour requires more than one petabyte (1M GB) of new storage every day or about 100x the Library of Congress.
Thus Google will not have any trouble filling up petabyte drives. An accompanying white paper argues that as disks become a data center specialty item, they ought to be redesigned for this environment. There’s no compelling reason to stick with the present physical dimensions of 2½ or 3½ inches. Moreover, data-center disks have different engineering priorities and constraints. Google would like to see disks that maximize both storage capacity and input-output bandwidth, while minimizing cost; reliability of individual drives is less critical because data are distributed redundantly across thousands of disks.
The white paper continues:
An obvious question is why are we talking about spinning disks at all, rather than SSDs, which have higher [input-output operations per second] and are the “future” of storage. The root reason is that the cost per GB remains too high, and more importantly that the growth rates in capacity/$ between disks and SSDs are relatively close . . . , so that cost will not change enough in the coming decade.
If the spinning disk is remodeled to suit the needs and the economics of the data center, perhaps flash storage can become better adapted to the laptop and desktop environment. Most SSDs today are plug-compatible replacements for mechanical disk drives. They have the same physical form, they expect the same electrical connections, and they communicate with the host computer via the same protocols. They pretend to have a spinning disk inside, organized into tracks and sectors. The hardware might be used more efficiently if we were to do away with this charade.
Or maybe we’d be better off with a different charade: Instead of dressing up flash memory chips in the disguise of a disk drive, we could have them emulate random access memory. Why, after all, do we still distinguish between “memory” and “storage” in computer systems? Why do we have to open and save files, launch and shut down applications? Why can’t all of our documents and programs just be everpresent and always at the ready?
In the 1950s the distinction between memory and storage was obvious. Memory was the few kilobytes of magnetic cores wired directly to the CPU; storage was the rack full of magnetic tapes lined up along the wall on the far side of the room. Loading a program or a data file meant finding the right reel, mounting it on a drive, and threading the tape through the reader and onto the take-up reel. In the 1970s and 80s the memory/storage distinction began to blur a little. Disk storage made data and programs instantly available, and virtual memory offered the illusion that files larger than physical memory could be loaded all in one go. But it still wasn’t possible to treat an entire disk as if all the data were all present in memory. The processor’s address space wasn’t large enough. Early Intel chips, for example, used 20-bit addresses, and therefore could not deal with code or data segments larger than \(2^{20} \approx 10^6\) bytes.
We live in a different world now. A 64-bit processor can potentially address \(2^{64}\) bytes of memory, or 16 exabytes (i.e., 16,000 petabytes). Most existing processor chips are limited to 48-bit addresses, but this still gives direct access to 281 terabytes. Thus it would be technically feasible to map the entire content of even the largest disk drive onto the address space of main memory.
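The arithmetic is easy to verify; note that the conventional "16 exabytes" figure uses binary units (exbibytes), while the 281-terabyte figure is decimal:

```python
full = 2 ** 64       # bytes addressable with 64 bits
practical = 2 ** 48  # bytes addressable with the 48 bits most CPUs implement

print(full // 2 ** 60)         # 16   (exbibytes, the "16 exabytes" of lore)
print(practical)               # 281474976710656 bytes
print(practical // 10 ** 12)   # 281  (decimal terabytes)
```

Either way, the address space comfortably exceeds the largest disk drives on the market.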
In current practice, reading from or writing to a location in main memory takes a single machine instruction. Say you have a spreadsheet open; the program can get the value of any cell with a load instruction, or change the value with a store instruction. If the spreadsheet file is stored on disk rather than loaded into memory, the process is quite different, involving not single instructions but calls to input-output routines in the operating system. First you have to open the file and read it as a one-dimensional stream of bytes, then parse that stream to recreate the two-dimensional structure of the spreadsheet; only then can you access the cell you care about. Saving the file reverses these steps: The two-dimensional array is serialized to form a linear stream of bytes, then written back to the disk. Some of this overhead is unavoidable, but the complex conversions between serialized files on disk and more versatile data structures in memory could be eliminated. A modern processor could address every byte of data—whether in memory or storage—as if it were all one flat array. Disk storage would no longer be a separate entity but just another level in the memory hierarchy, turning what we now call main memory into a new form of cache. From the user’s point of view, all programs would be running all the time, and all documents would always be open.
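A limited version of this merger already exists in the form of memory-mapped files: the operating system maps a file's bytes into the process's address space, and plain loads and stores replace explicit reads and writes. A minimal Python sketch (the file name and sizes are arbitrary):

```python
import mmap
import os
import tempfile

# Create a 4 KB file of zeros, then map it into the address space.
path = os.path.join(tempfile.mkdtemp(), "cells.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 0)   # map the whole file
    mem[100] = 42                    # an ordinary store, no write()/seek() calls
    value = mem[100]                 # an ordinary load, no read()/parse step
    mem.flush()                      # push the change back to disk
    mem.close()

# The byte written through the "memory" interface now lives in the file:
with open(path, "rb") as f:
    stored = f.read()[100]
```

Scaling this idiom up from one file to the whole storage system is essentially the architecture imagined above, with main memory demoted to a cache.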
Is this notion of merging memory and storage an attractive prospect or a nightmare? I’m not sure. There are some huge potential problems. For safety and sanity we generally want to limit which programs can alter which documents. Those rules are enforced by the file system, and they would have to be re-engineered to work in the memory-mapped environment.
Perhaps more troubling is the cognitive readjustment required by such a change in architecture. Do we really want everything at our fingertips all the time? I find it comforting to think of stored files as static objects, lying dormant on a disk drive, out of harm’s way; open documents, subject to change at any instant, require a higher level of alertness. I’m not sure I’m ready for a more fluid and frenetic world where documents are laid aside but never put away. But I probably said the same thing 30 years ago when I first confronted a machine capable of running multiple programs at once (anyone remember Multifinder?).
The dichotomy between temporary memory and permanent storage is certainly not something built into the human psyche. I’m reminded of this whenever I help a neophyte computer user. There’s always an incident like this:
“I was writing a letter last night, and this morning I can’t find it. It’s gone.”
“Did you save the file?”
“Save it? From what? It was right there on the screen when I turned the machine off.”
Finally the big questions: Will we ever get our petabyte drives? How long will it take? What sorts of stuff will we keep on them when the day finally comes?
The last time I tried to predict the future of mass storage, extrapolating from recent trends led me far astray. I don’t want to repeat that mistake, but the best I can suggest is a longer-term baseline. Over the past 50 years, the areal density of mass-storage media has increased by seven orders of magnitude, from about \(10^5\) bits per square inch to about \(10^{12}\). That works out to about seven years for a tenfold increase, on average. If that rate is an accurate predictor of future growth, we can expect to go from the present 10 terabytes to 1 petabyte in about 15 years. But I would put big error bars around that number.
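The extrapolation works out like this:

```python
import math

# Areal density rose from about 1e5 to about 1e12 bits per square inch
# over 50 years: seven orders of magnitude.
years_per_tenfold = 50 / math.log10(1e12 / 1e5)   # about 7.1 years per 10x

# Going from 10 terabytes to 1 petabyte is a factor of 100,
# i.e. two more tenfold steps.
years_to_petabyte = years_per_tenfold * math.log10(100)
print(round(years_to_petabyte, 1))   # 14.3
```

Hence the estimate of roughly 15 years, error bars very much included.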
I’m even less sure about how those storage units will be used, if in fact they do materialize. In 2002 my skepticism about filling up a terabyte of personal storage was based on the limited bandwidth of the human sensory system. If the documents stored on your disk are ultimately intended for your own consumption, there’s no point in keeping more text than you can possibly read in a lifetime, or more music than you can listen to, or more pictures than you can look at. I’m now willing to concede that a terabyte of information may not be beyond human capacity to absorb. But a petabyte? Surely no one can read a billion books or watch a million hours of movies.
This argument still seems sound to me, in the sense that the conclusion follows if the premise is correct. But I’m no longer so sure about the premise. Just because it’s my computer doesn’t mean that all the information stored there has to be meant for my eyes and ears. Maybe the computer wants to collect some data for its own purposes. Maybe it’s studying my habits or learning to recognize my voice. Maybe it’s gathering statistics from the refrigerator and washing machine. Maybe it’s playing go, or gossiping over some secret channel with the Debian machine across the alley.
We’ll see.
Notice the spacing around the minus sign. It’s too close to the argument on its left, whereas the plus sign lies right in the middle. The proper rendering of this expression looks like this:
Closely comparing the two images, I realized that spacing isn’t the only issue. In the malformed version the minus sign is also a little too long, too low, and too skinny.
For typesetting mathematics, I rely on MathJax, an amazing JavaScript program created by Davide Cervone of Union College. It works like magic: I write in standard TeX (math mode only), and the typeset output appears beautifully formatted in your web browser, with no need to bother about installing fonts or downloading plugins. For the past few years MathJax has been totally reliable, so this spacing glitch came as an annoying surprise.
The notes that follow record both what I did and what I thought as I tried to track down the cause of this problem. If anyone else ever bumps into the bug, the existence of this document might save them some angst and agita. Besides, everybody likes a detective story—even if the detective turns out to be more bumbling than brilliant. (If you just want to know how it comes out, skip to the end.)
Hypothesis: My first thought on seeing the wayward minus sign was that I must have typed something wrong. The TeX source code for the expression shown above is so simple (just a + b - c) that there’s not much room for error, but accidents happen. Maybe one of those space characters is not an ordinary word space (ASCII 0x20) but a non-breaking space (HTML &nbsp;). Or maybe the hyphen that represents a minus sign is not really a hyphen (ASCII 0x2D) but an en-dash (HTML &ndash; or &#8211;) or a discretionary hyphen (HTML &shy; or &#173;). Experiment 1: Try typing the expression again, very carefully. Result: No change. Experiment 2: Copy the original source text into an editor that shows raw hexadecimal byte values. Result: Nothing exotic. Experiment 3: Copy the source text into a different TeX system (Pierre-Yves Chatelier’s LaTeXiT). Result: Typesets correctly. Conclusion: Probably not a typo.
Question: Could it be a browser bug? Tests: Try it in Chrome, Firefox, Safari, Opera. Results: Same appearance in all of them. Conclusion: It’s not the browser.
Internet interlude: The most important debugging tools today are Google and Stack Overflow. Most likely the answer is already out there. But searches for “minus sign spacing MathJax” and “minus sign spacing TeX” turn up nothing useful. The most promising leads take me to discussions of the binary subtraction operator \(a - b\) vs. the unary negation operator \(-b\). That’s not the issue here, so I am thrown back on my own resources.
Question: Is it just my machine? Test: Try opening the same page on another laptop. Result: Same appearance. However, these two computers are very similar. In particular, they have the same fonts installed. Test: Try a third machine, with different fonts. Result: No change.
Question: Is the problem confined to the one article I’m currently writing, or does it show up in earlier blog posts as well? Research: Page back through the bit-player archives. I find several more instances of the bug. Followup question: Was the minus-sign spacing in those earlier articles already botched when I wrote and published them? Or were they correct then, and the bug was introduced by some later change in the software environment?
Clue: In the course of rummaging through old blog posts, I discover that the spacing anomaly appears only in “inline” math expressions (those that appear within the flow of a paragraph), not in “display” equations (which are set off on a line of their own). The two rendering modes are invoked by surrounding an expression with different sets of delimiters: \( ... \) for inline and \[ ... \] for display. By merely toggling between round and square brackets, I find I can turn the bug on and off. This discovery leads me to suppose there really might be something awry within MathJax. If it formats an expression correctly in one mode, why does it fail on the same input text in another mode?
Investigation: Using browser developer tools, I examine the HTML markup that MathJax writes into the document. In display mode (where the spacing is correct), here’s the coding for the minus sign:
<span class="mo" id="MathJax-Span-15"
style="font-family: STIXGeneral-Regular;
padding-left: 0.228em;">-</span>
The padding-left declaration is the crucial bit of styling that sets the spacing on the left side of the minus operator. Here’s the corresponding markup for the minus sign in the inline version of the same expression:
<span class="mo" id="MathJax-Span-22"
style="font-family: STIXGeneral-Regular;">–</span>
The padding-left declaration is absent. This is the proximate cause of the incorrect spacing. But why does MathJax supply the appropriate spacing in display mode but omit it in inline mode? That’s the puzzle.
Inquiry: I turn to the MathJax source-code repository on GitHub, and browse the issues database. Nothing relevant turns up. Likewise the MathJax user group forum. Baffling. If the problem really is a MathJax bug, someone would surely have reported it, unless it’s quite new. I consider opening a new issue, but decide to wait until I know more.
Question: The bug seems to be everywhere on bit-player.org, but what about the rest of the web? On MathOverflow (which I know uses MathJax) it doesn’t take long to find an inline equation that includes a minus sign. It is formatted perfectly. David Mumford’s blog is another MathJax site; I poke around there and find another inline equation with a correctly spaced minus sign. Uh oh. The finger of blame is pointing back toward me and away from MathJax.
Question: Am I using the same version of MathJax as those other sites, and the same configuration file? Not exactly, but when I try several other versions (including older ones, in case this is a recently introduced bug), there’s no change.
Pause for reflection: MathJax seems to be behaving differently on bit-player than it does on other sites. What could account for that difference? There are dozens of possible factors, but I have a leading candidate: bit-player is built on the WordPress blogging platform, and the other sites I’m looking at are not. I have no idea how the interaction of WordPress and MathJax could lead to this particular outcome, but they are both complicated software systems, with lots going on behind the curtains.
Experiment: I can test the WordPress hypothesis by setting up a web page that has everything in common with the bit-player site—the same server hardware and software, and the same MathJax processor—but that lives outside the WordPress system. I do exactly that, and find that minus signs are correctly formatted in both display and inline equations. Conclusion: It sure looks like WordPress is messing with my TeX!
Revelation: Throughout this diagnostic adventure, I’ve been relying heavily on the developer tools in the Chrome and Firefox browsers. These tools provide a peek into a page’s HTML encoding as it is displayed by the browser, after MathJax and any other JavaScript programs have worked their transformations on the source text. Now, for sheer lack of any better ideas, I decide to try the View Source command, which shows the HTML as received from the server, before any JavaScript programs run, and in particular before MathJax has converted TeX source code into typeset mathematical output. Instantly, the root of the problem is staring me in the face. The display-mode TeX is exactly as I wrote it: \[a + b - c\]
. But the inline-mode markup is this: \(a + b &#8211; c\). The HTML entity &#8211; specifies an en-dash. Where did that come from? Actually, I’m pretty sure I know where; what I don’t know is why. WordPress has built-in functions to “prettify” text, converting typewriter quote marks ('', "") to typographer’s quotes (‘ ’, “ ”). More to the point, the program also replaces a double hyphen (--) with an en-dash (–) and a triple hyphen (---) with an em-dash (—). Although I haven’t been typing double hyphens in the math expressions, I still suspect that the WordPress character substitution process has something to do with those troublesome en-dashes.
Confirmation: Before investing more effort in this hypothesis, I try to make sure I’m on the right track. Typing my test expression with an en-dash instead of a hyphen produces output identical to the buggy version, in display mode as well as inline mode. Performing the same experiment in LaTeXiT yields a very similar result.
The culprit exposed: Searching for #8211 in the WordPress source code takes me to the file formatting.php, where I find a function called wptexturize. PHP is not my favorite programming language, but it’s easy enough to guess what these lines are about (I have simplified and abbreviated the statements for clarity):
$static_characters   = array( '---', ' -- ', '--', ' - ' );
$static_replacements = array( $em_dash, ' ' . $em_dash . ' ', $en_dash, ' ' . $en_dash . ' ' );
Note the fourth element of the $static_characters array: a hyphen surrounded by spaces. The corresponding element of $static_replacements is an en-dash surrounded by spaces. I call that a smoking gun. MathJax, like other TeX processors, expects an ASCII hyphen as a minus sign; if you feed it an en-dash, it’s not going to recognize it as a mathematical operator. (When Knuth was developing TeX, circa 1980, no standard character encoding existed beyond the 96 codes of plain ASCII.)
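The effect of those two arrays is easy to mimic. Here is a sketch of the substitution logic in JavaScript (my own translation, with an invented function name; the real wptexturize handles many more cases):

```javascript
// A sketch (in JavaScript, for illustration) of the hyphen-related
// substitutions performed by WordPress's wptexturize(); the function
// name here is invented, and the real code handles many more cases.
const EN_DASH = "\u2013"; // –
const EM_DASH = "\u2014"; // —

function texturizeDashes(text) {
  return text
    .replace(/---/g, EM_DASH)          // '---'  becomes an em-dash
    .replace(/ -- /g, ` ${EM_DASH} `)  // ' -- ' becomes a spaced em-dash
    .replace(/--/g, EN_DASH)           // '--'   becomes an en-dash
    .replace(/ - /g, ` ${EN_DASH} `);  // ' - '  becomes a spaced en-dash
}

texturizeDashes("\\(a + b - c\\)"); // → "\(a + b – c\)"
```

Run on the inline TeX expression, the last rule is the one that fires: the spaced hyphen comes back as a spaced en-dash.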
The fix: It could be as simple as writing a+b-c instead of a + b - c! When I make that minor change to the text, it works like a charm. Why didn’t I think of trying that sooner? I guess because TeX in math mode promises to ignore whitespace in the source code, and it never occurred to me that WordPress doesn’t have to honor that promise. Thus I can solve the immediate problem just by removing spaces around minus signs. As a permanent remedy, however, changing my writing habits is not appealing. Nor is sifting through all my earlier posts to remove those spaces. The fact is, I don’t want hyphens to magically become en-dashes while I’m not looking. It may be a feature for some people, but for me it’s a bug.
What I did. The first commandment of WordPress development is “Thou shalt not modify the core files.” But in that respect I’m already a sinner, and unrepentant. Yeah, I edited those two arrays in the formatting.php file, and it felt good.
Lessons learned. In hindsight, I see that I missed several opportunities to root out the problem more quickly. Next time I’ll remember View Source. And if I had done a better job of early-stage analysis, I would have been able to find help more efficiently. I am not the only one to confront this glitch, but I needed better search terms to follow the breadcrumbs of those who went before. Also, along the way I misinterpreted some important clues. When I discovered that the bug affects only inline mode and not display mode, I was quite sure that fact implicated MathJax, but I was wrong. (As it happens, I still don’t really understand why display mode is immune to the bug. Why is the hyphen converted to an en-dash when I enclose it in slashed round brackets, but not when it appears in slashed square brackets? Evidently the wptexturizing treatment is skipped in the latter case, but I lack the stamina to slog through all that PHP to figure out why.)
The big picture: I’m not mad at WordPress. I still believe it is a wonder of the age, making millions of people into instant, pushbutton publishers. According to some reports, it powers a quarter of all web sites. In this respect it may well be the most important application-layer software for fulfilling the original promise of the World Wide Web: allowing all of us to be contributors and creators rather than merely consumers of mass media. But there’s a cost: Keeping WordPress easy on the outside seems to require a dense thicket of thorns and briers on the inside. As the years go by I find I spend too much time fighting against its automation, which is a joyless task. I would prefer something simpler. I have Jekyll envy.
Yet my main takeaway after this episode is gratitude for open-source software. If MathJax and WordPress had been sealed, blackbox applications, I would have been helpless to help myself, unable to do anything about the problem beyond whining and pleading.
The web sites numbersaplenty.com and numberworld.info dish up a smorgasbord of facts about every natural number from 1 to 999,999,999,999,999. Type in your favorite positive integer (provided it’s less than \(10^{15}\)) and you’ll get a list of prime factors, a list of divisors, the number’s representation in various bases, its square root, and lots more.
I first stumbled upon these sites (and several others like them) about a year ago. I revisited them recently while putting together the Carnival of Mathematics. I was looking for something cute to say about the new calendar year and was rewarded with the discovery that 2016 is a triangular number: 2016 bowling pins can be arranged in an equilateral triangle with 63 pins per side.
This incident set me to thinking: What does it take to build a web site like these? Clearly, the sites do not have 999,999,999,999,999 HTML files sitting on a disk drive waiting to be served up when a visitor arrives. Everything must be computed on the fly, in response to a query. And it all has to be done in milliseconds. The question that particularly intrigued me was how the programs recognize that a given number has certain properties or is a member of a certain class—a triangular number, a square, a Fibonacci, a factorial, and so on.
I thought the best way to satisfy my curiosity would be to build a toy number site of my own. Here it is:
[Interactive calculator: enter a number to see its prime factors, whether it is a prime, square-free, square-root-smooth, square, triangular, factorial, Fibonacci, Catalan, or Somos-4 number, and the elapsed time for the computation.]
This one works a little differently from the number sites I’ve found on the web. The computation is done not on my server but on your computer. When you type a number into the input field above, a JavaScript program running in your web browser computes the prime factors of the number and checks off various other properties. (The source code for this program is available on GitHub, and there’s also a standalone version of the Number Factoids calculator.)
Because the computation is being done by your computer, the performance depends on what hardware and software you bring to the task. Especially important is the JavaScript engine in your browser. As a benchmark, you might try entering the number 999,999,999,999,989, which is the largest prime less than \(10^{15}\). The elapsed time for the computation will be shown at the bottom of the panel. On my laptop, current versions of Chrome, Firefox, Safari, and Opera give running times in the range of 150 to 200 milliseconds. (But an antique iPad takes almost 8 seconds.)
Most of that time is spent in factoring the integer (or attempting to factor it in the case of a prime). Factoring is reputed to be a hard problem, and so you might suppose it would make this whole project infeasible. But the factoring computation bogs down only with really big numbers—and a quadrillion just isn’t that big anymore. Even a crude trial-division algorithm can do the job. In the worst case we need to try dividing by all the odd numbers less than \(\sqrt{10^{15}}\). That means the inner loop runs about 16 million times—a mere blink of the eye.
Once we have the list of prime factors for a number N, other properties come along almost for free. Primality: We can tell whether or not N is prime just by looking at the length of the factor list. Square-freeness: N is square-free if no prime appears more than once in the list. Smoothness: N is said to be square-root smooth if the largest prime factor is no greater than \(\sqrt{N}\). (For example, \(12 = 2 \times 2 \times 3\) is square-root smooth, but \(20 = 2 \times 2 \times 5\) is not.)
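Given a factorizer, those three predicates are each only a line or two. Here is a sketch (the function names are my own, and the factorizer is the crude trial-division algorithm described above, not necessarily the one in the Factoids program):

```javascript
// Trial division: return the prime factors of N in ascending order.
function factor(N) {
  const factors = [];
  let n = N;
  for (let d = 2; d * d <= n; d += (d === 2 ? 1 : 2)) {
    while (n % d === 0) { factors.push(d); n /= d; }
  }
  if (n > 1) factors.push(n); // whatever is left over is prime
  return factors;
}

// N is prime iff its factor list has exactly one entry.
function isPrime(N) { return factor(N).length === 1; }

// N is square-free iff no prime appears more than once in the list.
function isSquareFree(N) {
  const f = factor(N);
  return new Set(f).size === f.length;
}

// N is square-root smooth iff its largest prime factor is <= sqrt(N).
function isSqrtSmooth(N) {
  const f = factor(N);
  return f.length > 0 && f[f.length - 1] <= Math.sqrt(N);
}
```

For example, factor(12) yields [2, 2, 3], so 12 is not square-free but is square-root smooth; factor(20) yields [2, 2, 5], which fails the smoothness test.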
The factor list could also be used to detect square numbers. N is a perfect square if every prime factor appears in the list an even number of times. But there are lots of other ways to detect squares that don’t require factorization. Indeed, running on a machine that has a built-in square-rooter, the JavaScript code for recognizing perfect squares can be as simple as this:
function isSquare(N) {
    var root = Math.floor(Math.sqrt(N));
    return root * root === N;
}
If you want to test this code in the Number Factoids calculator, you might start with 999,999,961,946,176, which is the largest perfect square less than \(10^{15}\).
Note that the isSquare function is a predicate: The return statement in the last line yields a boolean value, either true or false. The program might well be more useful if it could report not only that 121 is a square but also what it’s the square of. But the Number Factoids program is just a proof of concept, so I have stuck to yes-or-no questions.
Metafactoids: The Factoids calculator tests nine boolean properties. No number can possess all of these properties, but 1 gets seven green checkmarks. Can any other number equal this score? The sequence of numbers that exhibit none of the nine properties begins 20, 44, 52, 68, 76, 88, 92,… Are they all even numbers? (Hover for answer.)
What about detecting triangular numbers? N is triangular if it is the sum of all the integers from 1 through k for some integer k. For example, \(2016 = 1 + 2 + 3 + \dots + 63\). Given k, it’s easy enough to find the kth triangular number, but we want to work in the opposite direction: Given N, we want to find out if there is a corresponding k such that \(1 + 2 + 3 + \cdots + k = N\).
Young Carl Friedrich Gauss knew a shortcut for calculating the sums of consecutive integers: \(1 + 2 + 3 + \cdots + k = N = k\,(k+1)\,/\,2\). We need to invert this formula, solving for the value of k that yields a specified N (if there is one). Rearranging the equation gives \(k^2 + k - 2N = 0\), and then we can crank up the trusty old quadratic formula to get this solution:
\[k = \frac{-1 \pm \sqrt{1 + 8N}}{2}.\]
Thus k is an integer—and N is triangular—if and only if \(8N + 1\) is an odd perfect square. (Let’s ignore the negative root, and note that if \(8N + 1\) is a square at all, it will be an odd one.) Detecting perfect squares is a problem we’ve already solved, so the predicate for detecting triangular numbers takes this simple form:
function isTriangular(N) {
    return isSquare(8 * N + 1);
}
Try testing it with 999,999,997,764,120, the largest triangular number less than \(10^{15}\).
Factorials are the multiplicative analogues of triangular numbers: If N is the kth factorial, then \(N = k! = 1 \times 2 \times 3 \times \cdots \times k\). Is there a multiplicative trick that generates factorials in the same way that Gauss’s shortcut generates triangulars? Well, there’s Stirling’s approximation:
\[k! \approx \sqrt{2 \pi k} \left( \frac{k}{e} \right)^k.\]
We might try to invert this formula to get a function of \(k!\) whose value is \(k\), but I don’t believe this is a promising avenue to explore. The reason is that Stirling’s formula is only an approximation. It predicts, for example, that 5! is equal to 118.02, whereas the true value is 120. Thus taking the output of the inverse function and rounding to the nearest integer would produce wrong answers. We could add correction terms to get a closer approximation—but surely there’s a better way.
One approach is to work with the gamma (\(\Gamma\)) function, which extends the concept of a factorial from the integers to the real and complex numbers; if \(n\) is an integer, then \(\Gamma(n+1) = n!\), but the \(\Gamma\) function also interpolates between the factorial values. A recent paper by Mitsuru Uchiyama gives an explicit, analytic, inverse of the gamma function, but I understand only fragments of the mathematics, and I don’t know how to implement it algorithmically.
Fifteen years ago David W. Cantrell came up with another inverse of the gamma function, although this one is only approximate. Cantrell’s version is much less intimidating, and it is based on one of my favorite mathematical gadgets, the Lambert W function. A Mathematica implementation of Cantrell’s idea works as advertised—when it is given the kth factorial number as input, it returns a real number very close to \(k+1\). However, the approximation is not good enough to distinguish true factorials from nearby numbers. Besides, JavaScript doesn’t come with a built-in Lambert W function, and I am loath to try writing my own.
On the whole, it seems better to retreat from all this higher mathematics and go back to the definition of the factorial as a product of successive integers. Then we can reliably detect factorials with a simple linear search, expressed in the following JavaScript function:
function isFactorial(N) {
    var d = 2, q = N, r = 0;
    while (q > 1 && r === 0) {
        r = q % d;
        q = q / d;
        d += 1;
    }
    return (q === 1 && r === 0);
}
A factorial is built by repeated multiplication, so this algorithm takes it apart by repeated division. Initially, we set \(q = N\) and \(d = 2\). Then we replace \(q\) by \(q / d\), and \(d\) by \(d + 1\), while keeping track of the remainder \(r = q \bmod d\). If we can continue dividing until \(q\) is equal to 1, and the remainder of every division is 0, then N is a factorial. This is not a closed-form solution; it requires a loop. On the other hand, the largest factorial less than \(10^{15}\) is 17! = 355,687,428,096,000, so the program won’t be going around the loop more than 17 times.
The Fibonacci numbers are dear to the hearts of number nuts everywhere (including me). The sequence is defined by the recursion \(F_0 = 0, F_1 = 1, F_k = F_{k-1} + F_{k-2}\). How best to recognize these numbers? There is a remarkable closed-form formula, named for the French mathematician J. P. M. Binet:
\[F_k = \frac{1}{\sqrt{5}} \left[ \left(\frac{1 + \sqrt{5}}{2}\right)^k - \left(\frac{1 - \sqrt{5}}{2}\right)^k\right]\]
I call it remarkable because, unlike Stirling’s approximation for factorials, this is an exact formula; if you give it an integer k and an exact value of \(\sqrt{5}\), it returns the kth Fibonacci number as an integer.
One afternoon last week I engaged in a strenuous wrestling match with Binet’s formula, trying to turn it inside out and thereby create a function of N that returns k if and only if \(N\) is the kth Fibonacci number. With some help from Mathematica I got as far as the following expression, which gives the right answer some of the time:
\[k(N) = \frac{\log \frac{1}{2} \left( \sqrt{5 N^2 + 4} - \sqrt{5} N \right)}{\log \frac{1}{2} \left( \sqrt{5} - 1 \right)}\]
Plugging in a few values of N yields the following table of values for \(k(N)\):
N | k(N) |
---|---|
1 | 2.000000000000001 |
2 | 3.209573979673092 |
3 | 4.0000000000000036 |
4 | 4.578618254581733 |
5 | 5.03325648737724 |
6 | 5.407157747499656 |
7 | 5.724476891770392 |
8 | 6.000000000000018 |
9 | 6.243411773788614 |
10 | 6.4613916654135615 |
11 | 6.658737112471047 |
12 | 6.8390081849422675 |
13 | 7.00491857188792 |
14 | 7.158583717787527 |
15 | 7.3016843734535035 |
16 | 7.435577905992959 |
17 | 7.561376165404197 |
18 | 7.680001269357004 |
19 | 7.792226410280063 |
20 | 7.8987062604216005 |
21 | 7.999999999999939 |
In each of the green rows (N = 1, 3, 8, and 21), the function correctly recognizes a Fibonacci number \(F_k\), returning the value of k as an integer. (Or almost an integer; the value would be exact if we could calculate exact square roots and logarithms.) Specifically, 1 is the second Fibonacci number (though also the first), 3 is the fourth, 8 is the sixth, and 21 is the eighth Fibonacci number. So far so good. But there’s something weird going on with the other Fibonacci numbers in the table, namely those with odd-numbered indices (red rows: N = 2, 5, and 13). For these the inverse Binet function returns numbers that are close to the correct k values (3, 5, 7), but not quite close enough. What’s that about?
If I had persisted in my wrestling match, would I have ultimately prevailed? I’ll never know, because in this era of Google and MathOverflow and StackExchange, a spoiler lurks around every cybercorner. Before I could make any further progress, I stumbled upon pointers to the work of Ira Gessel of Brandeis, who neatly settled the matter of recognizing Fibonacci numbers more than 40 years ago, when he was an undergraduate at Harvard. Gessel showed that N is a Fibonacci number iff either \(5N^2 + 4\) or \(5N^2 - 4\) is a perfect square. Gessel introduced this short and sweet criterion and proved its correctness in a problem published in The Fibonacci Quarterly (1972, Vol. 10, No. 6, pp. 417–419). Phillip James, in a 2009 paper, presents the proof in a way I find somewhat easier to follow.
It is not a coincidence that the expression \(5N^2 + 4\) appears in both Gessel’s formula and in my attempt to construct an inverse Binet function. Furthermore, substituting Gessel’s \(5N^2 - 4\) into the inverse function (with a few other sign adjustments) yields correct results for the odd-indexed Fibonacci numbers. Implementing the Gessel test in JavaScript is a cinch:
function gessel(N) {
    var s = 5 * N * N;
    return isSquare(s + 4) || isSquare(s - 4);
}
So that takes care of the Fibonacci numbers, right? Alas, no. Although Gessel’s criterion is mathematically unassailable, it fails computationally. The problem arises from the squaring of \(N\). If \(N\) is in the neighborhood of \(10^{15}\), then \(N^2\) is near \(10^{30}\), which is roughly \(2^{100}\). JavaScript does all of its arithmetic with 64-bit double-precision floating-point numbers, which allow 53 bits for representing the mantissa, or significand. With values above \(2^{53}\), not all integers can be represented exactly—there are gaps between them. In this range the mapping between \(N\) and \(N^2\) is no longer a bijection (one-to-one in both directions), and the gessel procedure returns many errors.
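The breakdown is easy to demonstrate directly. Beyond \(2^{53}\), JavaScript’s doubles can no longer distinguish adjacent integers, so squaring a 15-digit number silently discards its low-order digits (a small illustration of my own, not part of the Factoids code):

```javascript
// Above 2^53 (Number.MAX_SAFE_INTEGER), adjacent integers collide.
console.log(2 ** 53 === 2 ** 53 + 1);        // true: the gap has opened

// Squaring a 15-digit N lands far beyond that limit, so 5*N*N is
// only an approximation, and perfect-square tests on it misfire.
const N = 999999999999989;                    // the largest prime < 10^15
console.log(Number.isSafeInteger(5 * N * N)); // false
```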
I had one more hope of coming up with a closed-form Fibonacci recognizer. In the Binet formula, the term \(((1 - \sqrt{5})\,/\,2)^k\) becomes very small in magnitude as k grows large. By neglecting that term we get a simpler formula that still yields a good approximation to Fibonacci numbers:
\[F_k \approx \frac{1}{\sqrt{5}} \left(\frac{1 + \sqrt{5}}{2}\right)^k.\]
For any integer k, the value returned by that expression is within 0.5 of the Fibonacci number \(F_k\), and so simple rounding is guaranteed to yield the correct answer. But the inverse function is not so well-behaved. Although it has no \(N^2\) term that would overflow the 64-bit format, it relies on square-root and logarithm operations whose limited precision can still introduce errors.
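For contrast, the forward direction of this simplified formula does hold up in floating point, at least for moderate values of k (a quick sketch of my own, not the Factoids recognizer):

```javascript
// Approximate Binet formula: F_k ≈ φ^k / √5, rounded to the nearest
// integer. Reliable only while the accumulated floating-point error
// stays below 0.5, i.e. for moderate k.
function fib(k) {
  const phi = (1 + Math.sqrt(5)) / 2; // the golden ratio
  return Math.round(Math.pow(phi, k) / Math.sqrt(5));
}

fib(10); // 55
fib(20); // 6765
```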
So how does the Factoids calculator detect Fibonacci numbers? The old-fashioned way. It starts with 0 and 1 and iterates through the sequence of additions, stopping as soon as N is reached or exceeded:
function isFibo(N) {
    var a = 0, b = 1, tmp;
    while (a < N) {
        tmp = a;
        a = b;
        b = tmp + b;
    }
    return a === N;
}
As with the factorials, this is not a closed-form solution; the number of loop iterations grows in proportion to the index k (which itself grows only as the logarithm of N) rather than being constant. There are tricks for cutting the work down to \(\log k\) steps; Edsger Dijkstra described one such approach. But optimization hardly seems worth the bother. For N < \(10^{15}\), the while loop cannot be executed more than 72 times.
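For the record, the logarithmic speedup can be sketched with the “fast doubling” identities \(F_{2k} = F_k(2F_{k+1} - F_k)\) and \(F_{2k+1} = F_k^2 + F_{k+1}^2\), which are in the same spirit as Dijkstra’s scheme, though not necessarily his exact formulation (my own sketch, not what the calculator uses):

```javascript
// Fast doubling: compute the pair [F_k, F_{k+1}] in O(log k) steps,
// using  F_{2m}   = F_m * (2*F_{m+1} - F_m)
//        F_{2m+1} = F_m^2 + F_{m+1}^2
function fibPair(k) {
  if (k === 0) return [0, 1];                // [F_0, F_1]
  const [a, b] = fibPair(Math.floor(k / 2)); // [F_m, F_{m+1}], m = floor(k/2)
  const c = a * (2 * b - a);                 // F_{2m}
  const d = a * a + b * b;                   // F_{2m+1}
  return k % 2 === 0 ? [c, d] : [d, c + d];
}

fibPair(10)[0]; // 55
```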
I’ve included two more sequences in the Factoids calculator, just because I’m especially fond of them. The Catalan numbers (1, 1, 2, 5, 14, 42, 132, 429…) are famously useful for counting all sorts of things—ways of triangulating a polygon, paths through the Manhattan street grid, sequences of properly nested parentheses. The usual definition is in terms of binomial coefficients or factorials:
\[C_k = \frac{1}{k+1} \binom{2k}{k} = \frac{(2k)!}{(k+1)! k!}\]
But there is also a recurrence relation:
\[C_0 = 1,\qquad C_k = \frac{4k-2}{k+1} C_{k-1}\]
The recognizer function in the Factoids Calculator does a bottom-up iteration based on the recurrence relation:
function isCatalan(N) {
    var c = 1, k = 0;
    while (c < N) {
        k += 1;
        c = c * (4 * k - 2) / (k + 1);
    }
    return c === N;
}
The final sequence in the calculator is one of several discovered by Michael Somos around 1980. It is defined by this recurrence:
\[S_0 = S_1 = S_2 = S_3 = 1,\qquad S_k = \frac{S_{k-1} S_{k-3} + S_{k-2}^2}{S_{k-4}}\]
The surprise here is that the elements of the sequence are all integers, beginning 1, 1, 1, 1, 2, 3, 7, 23, 59, 314, 1529, 8209. In writing a recognizer for these numbers I have made no attempt to be clever; I simply generate the sequence from the beginning and check for equality with the given N:
function isSomos(N) {
    var next = 1, S = [1, 1, 1, 1];
    while (next < N) {
        next = (S[3] * S[1] + S[2] * S[2]) / S[0];
        S.shift();
        S.push(next);
    }
    return next === N;
}
But there’s a problem with this code. Do you see it? As N approaches the \(10^{15}\) barrier, the subexpression S[3] * S[1] + S[2] * S[2] will surely break through that barrier. In fact, the version of the procedure shown above fails for 32,606,721,084,786, the largest Somos-4 number below \(10^{15}\). For the version of the program that’s actually running in the Factoids calculator I have repaired this flaw by rearranging the sequence of operations. (For details see the GitHub repository.)
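Another way around the barrier, distinct from the rearrangement the calculator actually uses, is JavaScript’s arbitrary-precision BigInt type, which lets the naive recurrence run without losing digits. A sketch under that substitution:

```javascript
// Somos-4 recognizer using BigInt, so S[3]*S[1] + S[2]*S[2] can
// exceed 2^53 without losing precision. The division by S[0] is
// exact for the Somos-4 sequence (its terms are all integers),
// so BigInt's truncating / operator is safe here.
function isSomosBig(N) {
  const target = BigInt(N);
  let next = 1n;
  const S = [1n, 1n, 1n, 1n];
  while (next < target) {
    next = (S[3] * S[1] + S[2] * S[2]) / S[0];
    S.shift();
    S.push(next);
  }
  return next === target;
}

isSomosBig(32606721084786); // true: the largest Somos-4 number below 10^15
```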
The factorial, Fibonacci, Catalan, and Somos sequences all exhibit exponential growth, which means they are sprinkled very sparsely along the number line. That’s why a simple linear search algorithm—which just keeps going until it reaches or exceeds the target—can be so effective. For the same reason, it would be easy to precompute all of these numbers up to \(10^{15}\) and have the JavaScript program do a table lookup. I have ruled out this strategy for a simple reason: It’s no fun. It’s not sporting. I want to do real computing, not just consult a table.
Other number series, such as the square and triangular numbers, are more densely distributed. There are more than 30 million square and triangular numbers up to \(10^{15}\); downloading a table of that size would take longer than recomputing quite a few squares and triangulars. And then there are the primes—all 29,844,570,422,669 of them.
What would happen if we broke out of the 64-bit sandbox and offered to supply factoids about larger numbers? A next step might be a Megafactoids calculator that doubles the digit count, accepting integers up to \(10^{30}\). Computations in this system would require multiple-precision arithmetic, capable of handling numbers with at least 128 bits. Some programming languages offer built-in support for numbers of arbitrary size, and libraries can add that capability to other languages, including JavaScript. Although there is a substantial speed penalty for extended precision, most of the algorithms running in the Factoids program would still give correct results in acceptable time. In particular, there would be no problem recognizing squares and triangulars, factorials, Fibonaccis, Catalans and Somos-4 numbers.
The one real problem area in a 30-digit factoid calculator is factoring. Trial division would be useless; instead of milliseconds, the worst-case running time would be months or years. However, much stronger factoring algorithms have been devised in the past 40 years. The algorithm that would be most suitable for this purpose is called the elliptic curve method, invented by Hendrik Lenstra in the 1980s. An implementation of this method built into PARI/GP, which in turn is built into Sage, can factor 30-digit numbers in about 20 milliseconds. A JavaScript implementation of the elliptic curve method seems quite doable. Whether it’s worth doing is another question. The world is not exactly clamoring for more and better factoids.
Addendum 2016-02-05: I’ve just learned (via Hacker News) that I may need to add a few more recognition predicates: detectors for “artisanal integers” in flavors hand-crafted in the Mission District, Brooklyn, and London.
Tradition obliges me to say something interesting about the number 130. I could mention that 130 = 5 × 5 × 5 + 5. Is that interesting? Meh, eh? The Wikipedia page for 130 reports the following curious observation:
\(130\) is the only integer that is the sum of the squares of its first four divisors, including \(1\): \(1^2 + 2^2 + 5^2 + 10^2 = 130\).
What’s interesting here is not the fact that 130 has this property. The provocative part is the statement that 130 is the only such integer. How can one know this? It would be easy enough to check that no smaller number qualifies, but how do you rule out the possibility that somewhere far out on the number line there lurks another example? Clearly, what’s needed is a proof, but Wikipedia offers none. I spent some time trying to come up with a proof myself but failed to put the pieces together. Maybe you’ll do better. If not, Robert Munafo explains all, and traces the question to the Russian website dxdy.ru.
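The “easy enough” half of that claim takes only a few lines to verify. This brute-force sweep (my own sketch) finds no other example below 100,000:

```javascript
// Search for integers equal to the sum of the squares of their four
// smallest divisors (including 1). Brute force over n <= limit.
function sumOfFourSmallestDivisorSquares(limit) {
  const hits = [];
  for (let n = 1; n <= limit; n++) {
    // Collect all divisors in pairs (d, n/d) with d <= sqrt(n).
    const divs = [];
    for (let d = 1; d * d <= n; d++) {
      if (n % d === 0) {
        divs.push(d);
        if (d !== n / d) divs.push(n / d);
      }
    }
    if (divs.length < 4) continue;       // need at least four divisors
    divs.sort((a, b) => a - b);
    const s = divs[0] ** 2 + divs[1] ** 2 + divs[2] ** 2 + divs[3] ** 2;
    if (s === n) hits.push(n);
  }
  return hits;
}

sumOfFourSmallestDivisorSquares(100000); // [130]
```

Of course this settles nothing about the numbers beyond the search bound; that part needs the proof.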
Enough of 130. I’d like to celebrate another number of the season: 2016. As midnight approached on December 31, Twitter brought this news:
In other words, the year we are now beginning is equal to \(2^{11} - 2^{5}\), and in another 32 years we’ll be celebrating the turn of a binary millennium, marking the passage of two kibiyears. (Year-zero deniers are welcome to wait an extra year.)
Fernando Juan, with an animated GIF, pointed out another nonobvious fact: \(2016 = 3^3 + 4^3 + 5^3 + 6^3 + 7^3 + 8^3 + 9^3\).
Even more noteworthy, May-Li Khoe and Federico Ardila gave graphic proof that 2016 is a triangular number:
Specifically, 2016 is the sum of the integers \(1 + 2 + 3 + \cdots + 63\), making it the \(63\)rd element of the sequence \(1, 3, 6, 10, 15, \ldots\). You could go bowling with \(2016\) pins!
According to a well-known anecdote, Carl Friedrich Gauss was a schoolboy when he discovered that the nth triangular number is equal to \(n(n + 1) / 2\), which is also the value of the binomial coefficient \(\binom{n + 1}{2}\). Hence 2016 is the number of ways of choosing two items at a time from a collection of 64 objects. A tweet by John D. Cook offers this interpretation: “2016 = The number of ways to place two pawns on a chessboard.”
Patrick Honner expressed the same idea another way: 2016 is the number of edges in a complete graph with 64 vertices.
If you attended a New Years Eve party with 64 people present, and at midnight every guest clinked glasses with every other guest, there were 2016 clinks in all. (And probably a lot of broken glassware.)
In a New Years story of another kind, Tim Harford warns of the cost of overconfidence when it comes to making resolutions, as when you buy a year’s gym membership and stop going after six weeks. “Some companies base their business models on our tendency to overestimate our willpower.”
Triangular numbers turn up in another holiday item as well. A video by Tipping Point Math offers a quantitative analysis of the gift-giving extravaganza in “The Twelve Days of Christmas.” On the \(n\)th day of Christmas you receive \(1 + 2 + 3 + \cdots + n\) gifts, which is clearly the triangular number \(T_n\). But what’s the total number of gifts when you add up the largesse over \(n\) days? These sums of triangular numbers are the tetrahedral numbers: \(1, 4, 10, 20, 35 \ldots\); the twelfth tetrahedral number is \(364\). The video shows where to find all these numbers in Pascal’s triangle.
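The running totals are easy to replay in code. This little sketch (mine, not taken from the video) sums the triangular numbers day by day:

```javascript
// On day n of Christmas you receive T_n = n(n+1)/2 gifts; the running
// total over n days is the nth tetrahedral number, n(n+1)(n+2)/6.
function giftsThroughDay(n) {
  let total = 0;
  for (let day = 1; day <= n; day++) {
    total += day * (day + 1) / 2; // triangular number T_day
  }
  return total;
}

giftsThroughDay(12); // 364
```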
Vince Knight, in Un Peu de Math, approaches the Christmas gift exchange as a problem in game theory. Two friends agree not to exchange gifts, but then they are both tempted to renege on the agreement and buy a present after all. This situation is a variant of the game-theory puzzle known as prisoner’s dilemma—which sounds like a rather grim view of holiday tradition. But in fact the conclusion is cheery: “People enjoy giving gifts a lot more than receiving them.”
The December holiday season also brings Don Knuth’s annual Christmas lecture, which has been posted on YouTube. In previous years, this talk was the Christmas Tree lecture, because it touched on some aspect of trees as a data structure or a concept in graph theory. Now Knuth has branched out from the world of trees. His subject is comma-free codes—sets of words that can be jammed together without commas or spaces to mark the boundaries between words, and yet still be read without ambiguity. A set that lacks the comma-free property is {tea, ate, eat}, because a concatenation such as teateatea has within it not just tea,tea,tea but also eat,eat and ate,ate. But the words {sat, set, tea} do qualify as comma-free, because no sequence of the words can be partitioned in more than one way to yield words in the set.
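For small word sets, the comma-free property can be checked by brute force. Here is a sketch of such a test (my own quick hack, not Eastman’s algorithm), assuming all words in the code have the same length:

```python
from itertools import product

def is_comma_free(code):
    """True if no codeword appears astride the boundary when any two
    codewords (including a word with itself) are concatenated.
    Assumes all words in the code have the same length."""
    k = len(next(iter(code)))
    for u, v in product(code, repeat=2):
        s = u + v
        for i in range(1, k):            # proper overlap positions only
            if s[i:i + k] in code:
                return False
    return True

print(is_comma_free({"tea", "ate", "eat"}))  # False: "teatea" contains "eat"
print(is_comma_free({"sat", "set", "tea"}))  # True
```

A word straddling a boundary can only begin at one of the \(k - 1\) interior offsets of a concatenated pair, so checking every pair of codewords suffices.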
The idea of comma-free codes arose in the late 1950s, when it was thought that the genetic code might have a comma-free structure. Experiments soon showed otherwise, and interest in comma-free codes waned. But Knuth has rediscovered a paper by Willard Eastman, published in 1965, that constructs an algorithm for generating a comma-free code (if possible) from a given set of symbols. Knuth’s lecture demonstrates the algorithm and gives a computer implementation.
December 10, 2015, was the 200th birthday of Ada, Countess of Lovelace, who collaborated with Charles Babbage on the proposed computing machine called the Analytical Engine. The anniversary was commemorated in several ways, including an exhibition at the Science Museum in London, but here I want to call attention to a 12,000-word biographical essay by Stephen Wolfram, the creator of Mathematica.
The stories of Lovelace and Babbage have been told many times, but Wolfram’s account is worth reading even if you already know how the plot comes out. The principles of the machines and the algorithm that Lovelace devised as a demo (a computation of Bernoulli numbers) are described in detail, but the heart of the story is the human drama. Today we see these two figures as pioneers and heroes, but their lives were tinged with frustration and disappointment. Lovelace, who died at age 36, never got a proper opportunity to show off her ideas and abilities; in her one published work, all of her original contributions were relegated to footnotes. Babbage in his later years felt aggrieved with the world for failing to support his vision. Wolfram makes an interesting biographer of this pair, never hesitating to see his subjects reflected in his own experience, and vice versa.
Back in the summer of 2012 Shinichi Mochizuki of Kyoto University released four long papers that claim to resolve an important problem in number theory called the abc conjecture. (I’m not going to try to explain the conjecture here; I did so in an earlier post.) More than three years later, no one in the mathematical community has been able to understand Mochizuki’s work well enough to verify that it is indeed a proof. In December more than 50 experts gathered at the University of Oxford to try to make progress on breaking the impasse. Brian Conrad of Stanford University, one of the workshop participants, wrote up his notes as a guest post in Cathy O’Neil’s Mathbabe blog. This is an insider’s account, and parts of the discussion will not make much sense unless you have fairly deep background in modern number theory. (I don’t.) But that inaccessibility illustrates the point, in a way. Number theorists themselves are in the same situation with regard to the Mochizuki papers, which are clotted with idiosyncratic concepts such as Inter-universal Teichmüller Theory (IUT). Conrad writes:
After [Kiran] Kedlaya’s lectures, the remaining ones devoted to the IUT papers were impossible to follow without already knowing the material: there was a heavy amount of rapid-fire new notation and language and terminology, and everyone not already somewhat experienced with IUT got totally lost…. Persistent questions from the audience didn’t help to remove the cloud of fog that overcame many lectures in the final two days. The audience kept asking for examples (in some instructive sense, even if entirely about mathematical structures), but nothing satisfactory to much of the audience along such lines was provided.
It’s still not clear when or how the status of the proof will be resolved. O’Neil herself has taken a stern position on the issue of impenetrable purported proofs. When the Mochizuki papers were first released, she wrote: “If I claim to have proved something, it is my responsibility to convince others I’ve done so; it’s not their responsibility to try to understand it (although it would be very nice of them to try).”
Anthony Bonato, the Intrepid Mathematician, offers a friendlier introduction to the abc conjecture and its consequences. Also see comments on the conjecture and the workshop by Evelyn Lamb in the Scientific American blog Roots of Unity and by Anna Haensch in an American Mathematical Society blog.
Another number-theory event that attracted wide notice in recent weeks was the successful defense and publication of a doctoral thesis at Princeton University by Piper Harron. The title is “The Equidistribution of Lattice Shapes of Rings of Integers of Cubic, Quartic, and Quintic Number Fields: an Artist’s Rendering,” and it’s a great read (PDF). Here is the abstract:
A fascinating tale of mayhem, mystery, and mathematics. Attached to each degree \(n\) number field is a rank \(n\, - 1\) lattice called its shape. This thesis shows that the shapes of \(S_n\)-number fields (of degree \(n = 3, 4,\) or \(5)\) become equidistributed as the absolute discriminant of the number field goes to infinity. The result for \(n = 3\) is due to David Terr. Here, we provide a unified proof for \(n = 3, 4,\) and \(5\) based on the parametrizations of low rank rings due to Bhargava and Delone–Faddeev. We do not assume any of those words make any kind of sense, though we do make certain assumptions about how much time the reader has on her hands and what kind of sense of humor she has.
This is not your grandmother’s doctoral thesis! Harron interleaves sections of “laysplaining” and “mathsplaining,” and illustrates her work with an abundance of metaphors, jokes, asides to the reader, cartoons, and commentary on the course of her life during the 10 years she spent writing the thesis (e.g., a brief interruption to give birth). In terms of expository style, Mochizuki and Harron stand at opposite poles—a fact noted by at least two bloggers. Both of the posts by Evelyn Lamb and by Anna Haensch mentioned above in connection with Mochizuki also discuss Harron’s work.
Harron has a blog of her own with a post titled “Why I Do Not Talk About Math,” and she has written a guest post for Mathbabe with this description of her thesis:
My thesis is this thing that was initially going to be a grenade launched at my ex-prison, for better or for worse, and instead turned into some kind of positive seed bomb where flowers have sprouted beside the foundations I thought I wanted to crumble.
Reading her accounts of life as “an escaped graduate student,” I am in equal measures amused and horrified (not a comfortable combination). If nothing else, Harron’s “thesis grenade” promises to broaden—to diversify—the discussion of diversity in mathematical culture. It’s not just about race and gender (although Harron cares passionately about those issues). Human diversity also ranges across many other dimensions: modes of reasoning, approaches to learning, cultural contexts, styles of explaining, and ways of living. Harron’s thesis is a declaration that one can do original research-level mathematics without adopting the vocabulary, the styles, the attitudes, and the mental apparatus of one established academic community.
If I show you a cube, you can easily place it in a three-dimensional cartesian coordinate system in such a way that all the vertices have rational \(x,y,z\) coordinates. By scaling the cube, you can make all the coordinates integers. The same is true of the regular tetrahedron and the regular octahedron, but for these objects the scaling factor includes an irrational number, \(\sqrt{2}\). For the dodecahedron and the icosahedron, some of the vertex coordinates are themselves irrational no matter how the figure is scaled; the irrational number \(\varphi = (1 + \sqrt{5})\, /\, 2\) plays an essential role in the geometry.
This intrusion of irrationality into geometry troubled the ancients, but we seem to have gotten used to it by now. However, David Eppstein, a.k.a. \(0\)xDE, writes about a class of polyhedra I still find deeply disconcerting: “Polyhedra whose vertex coordinates have no closed form formula.” In this context a closed-form formula is a mathematical expression that can be evaluated in a finite number of operations. There’s no universal agreement on exactly what operations are allowed; Eppstein works with computational models that allow the familiar operations \(+, -, \times, \div\) as well as finding roots of polynomials. He constructs a polyhedron—it looks something like the teepee his children used to play with—whose vertex coordinates cannot all be calculated by a finite sequence of these operations.
With this development we land in territory even stranger than that of the irrational numbers. I cannot draw an exact equilateral triangle on a computer screen because at least one of the coordinates is irrational; nevertheless, I can tell you that the vertex ought to have the coordinate \(y = 1 / \sqrt{3}\). In Eppstein’s polyhedron I can’t give you any such compact, digestible description of where the vertex belongs; the best anyone can offer is a program for approximating it.
Brent Yorgey in The Math Less Traveled offers an unlikely question: What’s the best way to read a manuscript printed on three-sided paper? If you have a sheaf of unbound pages printed on just one side, the obvious procedure is to read the top sheet, then shuffle it to the bottom of the heap, and continue until you come back to page 1. If the pages are printed on two sides, life gets a little more complicated. You can read the top page, flip it over and read the back, then flip it again and move it to the bottom of the sheaf. An alternative (attributed to John Horton Conway) is to read the front page, move it to the bottom of the heap, then flip over the entire stack to read the back side; then flip the stack again to read the front of the following page. In either case, you must alternate two distinct operations, which means you must somehow keep track of which move comes next.
The unsolved problem is to find the best algorithm when the pages are printed on three sides. “You may not be familiar with triple-sided sheets of paper,” Yorgey writes, “so here’s how they work: they stack nicely, just like regular sheets of paper, but you have to flip one over three times before you get back to the original side.”
Yorgey gives some criteria that a successful algorithm ought to satisfy, but the question remains open.
Mr. Honner takes up a problem encountered at a Math for America banquet:
Suppose you are standing several miles from the Pentagon. What is the probability you can see three sides of the building?
He discovers that it’s one of those cases where interpreting the problem is as much of a challenge as finding the answer. “In particular, it’s a reminder of how the different ways we model random selection can make for big differences in our solutions!”
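Here is one way to make a model concrete, as a Monte Carlo sketch: place a regular pentagon at the origin, put the viewer at a uniformly random bearing far away, and call a side visible when the viewer lies on its outer side. (This is just one of the interpretations at issue, and the radius of 100 standing in for “several miles” is my own arbitrary choice.)

```python
import math
import random

# Regular pentagon with unit circumradius, centered at the origin.
verts = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
         for k in range(5)]
sides = []
for k in range(5):
    (x1, y1), (x2, y2) = verts[k], verts[(k + 1) % 5]
    mid = ((x1 + x2) / 2, (y1 + y2) / 2)
    normal = (y2 - y1, -(x2 - x1))        # outward normal for a CCW polygon
    sides.append((mid, normal))

def visible_sides(px, py):
    """Count the sides whose supporting line has the viewer on its outer side."""
    return sum((px - mx) * nx + (py - my) * ny > 0
               for (mx, my), (nx, ny) in sides)

random.seed(2)
R = 100.0            # "several miles" away, relative to the building's size
trials = 200_000
hits = sum(visible_sides(R * math.cos(t), R * math.sin(t)) == 3
           for t in (random.uniform(0, 2 * math.pi) for _ in range(trials)))
print(hits / trials)
```

Under this model the probability comes out just a shade under \(1/2\); choosing the random position differently changes the answer, which is exactly Honner’s point.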
Long-tailed data distributions—where extreme values are more common than they would be in, say, a normal distribution—are notoriously tricky. John D. Cook points out that long tails become even more treacherous when the variables are discrete (e.g., integers) rather than continuous values.
Suppose \(n\) data points are drawn at random from a distribution where each possible value \(x\) appears with frequency proportional to \(x^{-\alpha}\). Working from the data sample, we want to estimate the value of the exponent \(\alpha\). If \(x\) is a continuous variable, there’s a maximum-likelihood formula that generally works well: \(\hat{\alpha} = 1 + n\, / \sum \log x\). But with discrete variables, the same method leads to disastrous errors. In a test case with \(\alpha = 3\), the formula yields \(\hat{\alpha} = 6.87\). But we needn’t lose hope. Cook presents another approach that gives quite reliable results.
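Cook’s experiment is easy to reenact. The sketch below is my own setup (truncating the infinite distribution at 100,000; the details surely differ from his): draw 10,000 samples from a discrete power law with \(\alpha = 3\), then misapply the continuous formula.

```python
import bisect
import math
import random

alpha = 3.0
N = 100_000          # truncate the infinite discrete distribution here
weights = [k ** -alpha for k in range(1, N + 1)]
total = sum(weights)
cdf = []
acc = 0.0
for w in weights:
    acc += w
    cdf.append(acc / total)

random.seed(1)
n = 10_000
# Inverse-CDF sampling: P(x = k) is proportional to k^(-3) on the integers.
sample = [1 + bisect.bisect_left(cdf, random.random()) for _ in range(n)]

# The continuous maximum-likelihood formula, misapplied to discrete data:
alpha_hat = 1 + n / sum(math.log(x) for x in sample)
print(round(alpha_hat, 2))
```

In this run the estimate lands around 7 rather than 3. Part of the trouble is that most of the sample sits at \(x = 1\), where \(\log x = 0\) contributes nothing to the sum.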
A post on DRHagen.com tackles a mystery found in an xkcd cartoon:
The size of each date is proportional to its frequency of occurrence in the Google Books ngram database for English-language books published since 2000. September 11 is clearly an outlier. That’s not the mystery. The mystery is that the 11th of most other months is noticeably less common than other dates. The cartoonist, Randall Munroe, was puzzled by this; hover over the image to see his message. Hagen convincingly solves the mystery. I’m not going to give away the solution, and I urge you to try to come up with a hypothesis of your own before following the link to DRHagen.com. And if you get that one right, you might work on another calendrical anomaly—that the 2nd, 3rd, 22nd, and 23rd are underrepresented in books printed before the 20th century—which Hagen solves in a followup post.
A shiny new blog called Off the Convex Path, with contributions from Sanjeev Arora, Moritz Hardt, and Nisheeth Vishnoi, “is dedicated to the idea that optimization methods—whether created by humans or nature, whether convex or nonconvex—are exciting objects of study and often lead to useful algorithms and insights into nature.” A December 21 post by Vishnoi looks at optimization methods in the guise of dynamical systems—specifically, systems that converge to some set of fixed points. Vishnoi gives two examples, both from the life sciences: a version of Darwinian evolution, and a biological computer in which slime molds solve linear programming problems.
In the Comfortably Numbered blog, hardmath123 writes engagingly and entertainingly about numerical coincidences, like this one:
\(1337^{47168026} \approx \pi \cdot 10^{147453447}\) to within \(0.00000001\%\). It begins with the digits \(31415926 \ldots\).
The existence of such coincidences should not be a big surprise. As hardmath123 writes, “Since the rationals are dense in the reals, we can find a rational number arbitrarily close to any real number.” The question is how to find them. The answer involves continued fractions and the Dirichlet approximation theorem.
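The machinery of best rational approximations is even built into Python’s standard library, so the flavor of the method is easy to taste (with \(\pi\) itself rather than hardmath123’s \(1337^k\) hunt):

```python
from fractions import Fraction
from math import pi

# Best rational approximations to pi with a bounded denominator.
# limit_denominator works through the continued-fraction expansion.
for bound in (10, 100, 1000):
    f = Fraction(pi).limit_denominator(bound)
    print(f, abs(float(f) - pi))
```

The familiar 22/7 and 355/113 pop out at once; to manufacture the \(1337\) coincidence you apply the same idea to \(\log_{10} 1337\) and \(\log_{10} \pi\).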
For the final acts of this carnival, we have a few items on mathematics in the arts, crafts, games, and other recreations.
In the Huffington Post arts and culture department, Dan Rockmore takes a mathematical gallery walk in New York. The first stop is the Whitney Museum of American Art, showing a retrospective of the paintings of Frank Stella, some of whose canvases are nonrectilinear, and even nonconvex. In another exhibit mathematics itself becomes art, with 10 equations and other expressions calligraphically rendered by 10 noted mathematicians and scientists.
Katherine, writing on the blog Will Knit for Math, tells how “Being a mathematician improves my knitting.” It’s not just a matter of counting stitches (although a scheme for counting modulo 18 does enter into the story). I hope we’ll someday get a followup post explaining how “Being a knitter improves my mathematics.”
Chelsea VanderZwaag, a student majoring in mathematics and elementary education, visits an arts-focused school, where a lesson on paper-folding and fractions blends seamlessly into the curriculum. (An earlier post on the problem of creating a shape by folding paper and making a single cut with scissors provides a little more background.)
On Medium, A Woman in Technology writes about Dobble, a combinatorial card game. “My children got this game for Christmas. We haven’t played it yet. We got as far as me opening the tin and reading the rules, at which point I got distracted by the maths and forgot about the game.”
There are 55 cards, each bearing eight symbols, and each card shares exactly one symbol with each of the other cards. How many distinct symbols do you need to construct such a deck of cards? The game instructions say there are 50, but the author determines through hard work, scribbling, and spreadsheets that the instructions are in error. It all comes to a happy end with the formula \(n^2 - n + 1\) and a suggested improvement to the game that the manufacturer should heed.
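The count \(8^2 - 8 + 1 = 57\) is no accident: a deck with exactly these properties can be built from the finite projective plane of order 7, in which any two lines meet in exactly one point. Here is a sketch of that standard construction (whether the published game was actually generated this way, I can’t say):

```python
from itertools import combinations

q = 7          # a prime; each card will carry q + 1 = 8 symbols
# The symbols: points of the projective plane PG(2, 7), one normalized
# coordinate triple per point.
points = ([(1, 0, 0)]
          + [(x, 1, 0) for x in range(q)]
          + [(x, y, 1) for x in range(q) for y in range(q)])
assert len(points) == q * q + q + 1           # 57 = 8**2 - 8 + 1

# By projective duality each triple also names a line; a "card" collects
# the indices of the points lying on that line.
cards = [frozenset(i for i, (x, y, z) in enumerate(points)
                   if (a * x + b * y + c * z) % q == 0)
         for (a, b, c) in points]

assert all(len(card) == q + 1 for card in cards)               # 8 symbols per card
assert all(len(c1 & c2) == 1 for c1, c2 in combinations(cards, 2))
print(len(cards))
```

The construction yields 57 mutually compatible cards; the boxed game simply leaves two of them out.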
That’s it for Carnival of Math 130. My apologies to a few contributors whose interesting work I just couldn’t squeeze in here.
Next month the carnival makes a hometown stop with Katie at The Aperiodical.
Scribble, scribble, scribble. As if the world didn’t get enough of my writing already, with a bimonthly column in American Scientist, now I’m equipped to publish my every thought at a moment’s notice.
That’s how it all began here on bit-player.org: The first post (with the first typo) appeared on January 9, 2006. I’ve published another 340 posts since then (including this one)—and doubtless many more typos and other errors. Many thanks to my readers, especially those who have contributed some 1,800 thoughtful comments.
My closest friends and family must make do with an old-fashioned paper-and-postage greeting card, but for bit-player readers I can send some thoroughly modern pixels. Happy holidays to everyone.
In recent months I’ve been having fun with “deep dreaming,” the remarkable toy/tool for seeing what’s going on deep inside deep neural networks. Those networks have gotten quite good at identifying the subject matter of images. If you train the network on a large sample of images (a million or more) and then show it a picture of the family pet, it will tell you not just whether your best friend is a cat or a dog but whether it’s a Shih Tzu or a Bichon Frise.
What visual features of an image does the network seize upon to make these distinctions? Deep dreaming tries to answer this question. It probes a layer of the network, determines which neural units are most strongly stimulated by the image, and then translates that pattern of activation back into an array of pixels. The result is a strange new image embellished with all the objects and patterns and geometric motifs that the selected layer thinks it might be seeing. Some of these machine dreams are artful abstractions; some are reminiscent of drug-induced hallucinations; some are just bizarre or even grotesque, populated by two-headed birds and sea creatures swimming through the sky.
For this year’s holiday card I chose a scene appropriate to the season and ran it through the deep-dreaming program. You can see some of the output below, starting with the original image and progressing through fantasies extracted from deeper and deeper layers of the network. (Navigate with the icons below the image, or use the left and right arrow keys. Shorthand labels identifying the network layers appear at lower right.)
A few notes and observations:
And some links:
Update 2015-12-31: In the comments, Ed Jones asks, “If the original image is changed slightly, how much do the deep dreaming images change?” It’s a very good question, but I don’t have a very good answer.
The deep dreaming procedure has some stochastic stages, and so the outcome is not deterministic. Even when the input image is unchanged, the output image is somewhat different every time. Below are enlargements cropped from three runs probing layer 4c, all with exactly the same input:
They are all different in detail, and yet at a higher level of abstraction they are all the same: They are recognizably products of the same process. That statement remains true when small changes—and even some not-so-small ones—are introduced into the input image. The figure below has undergone a radical shift in color balance (I have swapped the red and blue channels), but the deep dreaming algorithm produces similar embellishments, with an altered color palette:
In the pair of images below I have cloned a couple of trees from the background and replanted them in the foreground. They are promptly assimilated into the deep dream fantasy, but again the overall look and feel of the scene is totally familiar.
Based on the evidence of these few experiments, it seems the deep dreaming images are indeed quite robust, but there’s another side to the story. When these neural networks are used to recognize or classify images (the original design goal), it’s actually quite easy to fool them. Christian Szegedy and his colleagues have shown that certain imperceptible changes to an image can cause the network to misclassify it; to the human eye, the picture still looks like a school bus, but the network sees it as something else. And Anh Nguyen et al. have tricked networks into confidently identifying images that look like nothing but noise. These results suggest that the classification methods are rather brittle or fragile, but that’s not quite right either. Such errors arise only with carefully crafted images, called “adversarial examples.” There is almost no chance that a random change to an image would trigger such a response.
Consider Boston, our best guess for where you might be reading this article. It’s very expensive for spending on the average Medicare patient. But, when it comes to private health insurance, it’s about average.
Their best guess about my whereabouts was pretty good. I am indeed in the Boston area. I’m not surprised that they know that, but I was nonplussed to discover that they had altered the story according to my location. Here’s the markup for that paragraph:
<div class="g-insert">
  <p class="g-body">
    Consider <span class="g-custom-place g-selected-hrr-name"> Boston</span>
    <span class="g-geotarget-success">, our best guess for where you might be reading this article</span>.
    <span class="g-new-york-city-addition g-custom-insert g-hidden"> (Here, the New York City region includes all boroughs but the Bronx, which is listed separately.)</span>
    It’s <span class="g-medicare-adjective g-custom-place"> very expensive</span> for spending on the average Medicare patient.
    <span class="g-local-insert g-very-different"> But, w</span><span class="g-close g-same g-hidden g-local-insert"> W</span>hen it comes to private health insurance, it’s
    <span class="g-hidden g-same g-local-insert">also</span>
    <span class="g-private-adjective g-custom-place"> about average</span>.
    <span class="g-close g-hidden g-local-insert"> The study finds that the levels of spending for the two programs are unrelated. That means that, for about half of communities, spending is somewhat similar, like it is in <span class="g-custom-place g-selected-hrr-name g-hrr-only-no-state"> Boston</span> </span>
    <span class="g-same g-hidden g-local-insert g-same-sentence"> <span class="g-custom-place g-hrr-only-no-state">Boston</span> is one of the few places where spending for both programs is very similar – in most, there is some degree of mismatch. </span>
    <span class="g-new-york-addition g-local-insert g-hidden"> Several parts of the New York metropolitan area are outliers in the data – among the most expensive for both health insurance systems.</span>
    <!-- <span class="g-atlanta-addition g-local-insert g-hidden"> (Atlanta is one of the few places in the country where spending for both programs is very similar. In most, there is some degree of mismatch.)</p> -->
  </p>
</div>
It seems I live in a g-custom-place (“Boston”), which is associated with a g-medicare-adjective (“very expensive”) and a g-private-adjective (“about average”). Because the Times has been able to track me down, I get a g-geotarget-success message. But for the same reason I don’t get to see certain other text, such as a remark about New York as an outlier; those text spans are g-hidden.
Presumably, a program running on the server has located me by checking my IP number against a geographic database, then added various class names to the span tags. Some of the class names are processed by a CSS stylesheet; for example, g-hidden triggers the style directive display: none. The other class names are apparently processed by a Javascript program that inserts or removes text, such as those custom adjectives. Generating text in this way looks like a pretty tedious and precarious business, a little like writing poetry with refrigerator magnets. For example, an extra set of class names is needed to make sure that if a place is first mentioned as a city and state (e.g., “Springfield, Massachusetts”), the state won’t be repeated on subsequent references.
I suppose there’s no great harm in this bit of localizing embellishment. After all, they’re not tailoring the article based on whether I’m black or white, male or female, Democrat or Republican, rich or poor. It’s just a geographic split. But it makes me queasy all the same. When I cite an article here on bit-player, I want to think that everyone who follows the link will see the same article. It looks like I can’t count on that.
Update: Turns out this is not the Times’s first adventure in geotargeting. A story last May on “paths out of poverty” used the same technique, as reported by Nieman Lab. (Thanks to Andrew Silver (@asilver360) for the tip via Twitter.)
Update 2015-12-17: Margaret Sullivan, the Public Editor of the Times, writes today on the mixed response to the geotargeted story, concluding:
The Times could have quite easily provided readers with an opt-in: “Want to see results for your area? Click here.”
As the paper continues down this path, it’s important to do so with awareness and caution. For one thing, some readers won’t like any personalization and will regard it as intrusive. For another, personalization could deprive readers of a shared, and expertly curated, news experience, which is what many come to The Times for. Losing that would be a big mistake.
Please contribute! Anything that might engage or delight the mathematical mind is welcome: theorems, problems, games and recreations, notes on math education. We’ll take pure and applied, discrete and continuous, geometrical and arithmetical, differential and integral, polynomial and exponential…. You get the idea. And don’t be shy about proposing your own work!
Porter links continuing growth not just to prosperity but also to a host of civic virtues. “Economic development was indispensable to end slavery,” he declares. “It was a critical precondition for the empowerment of women…. Indeed, democracy would not have survived without it.” As for what might happen to us without ongoing growth, he offers dystopian visions from history and Hollywood. Before the industrial revolution, “Zero growth gave us Genghis Khan and the Middle Ages, conquest and subjugation,” he says. A zero-growth future looks just as grim: “Imagine ‘Blade Runner,’ ‘Mad Max,’ and ‘The Hunger Games’ brought to real life.”
Let me draw you a picture of this vision of economic growth through the ages, as I understand it:
For hundreds or thousands of years before the modern era, average wealth and economic output were low, and they grew only very slowly. Life was solitary, poor, nasty, brutish, and short. Today we have vigorous economic growth, and the world is full of wonders. Life is sweet, for now. If growth comes to an end, however, civilization collapses and we are at the mercy of new barbarian hordes (equipped with a different kind of horsepower).
Something about this scenario puzzles me. In that frightful Mad Max future, even though economic growth has tapered off, the society is in fact quite wealthy; according to the graph, per capita gross domestic product is twice what it is today. So why the descent into brutality and plunder?
Porter has an answer at the ready. The appropriate measure of economic vitality, he implies, is not GDP itself but the rate of growth in GDP, or in other words the first derivative of GDP as a function of time:
If the world follows the trajectory of the blue curve, we have already reached our peak of wellbeing. It’s all downhill from here.
Some economists go even further, urging us to keep an eye on the second derivative of economic activity. Twenty-five years ago I was hired to edit the final report of an MIT commission on industrial productivity. Among the authors were two prominent economists, Robert Solow and Lester Thurow. I argued with them at some length about the following paragraph (they won):
In view of all the turmoil over the apparently declining stature of American industry, it may come as a surprise that the United States still leads the world in productivity. Averaged over the economy as a whole, for each unit of input the United States produces more output than any other nation. With this evidence of economic efficiency, is there any reason for concern? There are at least two reasons. First, American productivity is not growing as fast as it used to, and productivity in the United States is not growing as fast as it is elsewhere, most notably in Japan….
The phrase I have highlighted warns us that even though productivity is high and growing higher, we need to worry about the rate of change in the rate of growth:
Taking the second derivative (the green curve) as the metric of economic health, it appears we have already fallen back to the medieval baseline, and life is about to get even worse than it was at the time of the Mongol conquests; the Mad Max world will be an improvement over what lies in store in the near future.
Why should human happiness and the fate of civilization depend on the time derivative of GDP, rather than on GDP itself? Why do we need not just wealth, but more and more wealth, growing faster and faster? Again, Porter has an answer. Without growth, he says, economic life becomes a zero-sum game. “As Martin Wolf, the Financial Times commentator has noted, the option for everybody to become better off—where one person’s gain needn’t require another’s loss—was critical for the development and spread of the consensual politics that underpin democratic rule.” In other words, the function of economic growth is to blunt the force of envy in a world with highly skewed distributions of income and wealth. I’m not persuaded that growth per se is either necessary or sufficient to deal with this issue.
Porter’s essay on zero growth was prompted by the climate-change negotiations now under way in Paris. He worries (along with many others) that curtailing consumption of fossil fuels will lead to lower overall production and consumption of goods and services. That’s surely a genuine risk, but what’s the alternative? If burning more and more carbon is the only way we can keep our civilization afloat, then somebody had better send for Mad Max. The age of fossil fuels is going to end, sooner or later, if not because of the climatic effects then because the supply is finite.
Economic growth is not necessarily tied to the carbon budget, but it can’t be cut loose entirely from physical resources. Even the ethereal goods that are now so prominent in commerce—code and data—require some sort of material infrastructure. Ultimately, whether growth continues is not a question of social and economic policy or moral philosophy; it’s a matter of physics and mathematics. I’m with Kenneth Boulding. I don’t see Mad Max in our future, but I’m not counting on perpetual growth, either.
When I looked over the collection, I quickly realized that we could not form a set of eight matching glasses. The closest we could come was 6 + 2. But then I saw that we could form a set of eight glasses with no two alike. As I placed them on the table, I thought “Aha, Ramsey theory!”
At the root of Ramsey theory lies this curious assertion: If a collection of objects is large enough, it cannot be entirely without structure or regularity. Dinner parties offer the canonical example: If you invite six people to dinner, then either at least three guests will already be mutual acquaintances (each knows all the others) or at least three guests will be strangers (none has met any of the others). This result has nothing to do with the nature of social networks; it is a matter of pure mathematics, first proved by the Cambridge philosopher, mathematician, and economist Frank Plumpton Ramsey (1903–1930).
Ramsey problems become a little easier to reason about when you transpose them into the language of graph theory. Consider a complete graph on six vertices (where every vertex has an edge connecting it with every other vertex, for a total of 15 edges):
The aim is to color all the edges of the graph red or blue in such a way that no three vertices are connected by edges of the same color (forming a “monochromatic clique”). The red edges might signify “mutually acquainted” and the blue ones “strangers.” As the diagrams below show, it’s easy to find a successful red-and-blue coloring of a complete graph on five vertices: In the pentagon at left, each vertex is connected to two other vertices by red edges, but those vertices are connected to each other by a blue edge. Thus there are no red triangles, and a similar analysis shows there are no blue ones either. The same scheme doesn’t work for a six-vertex graph, however. The attempt shown at right fails with two blue triangles. In fact, any two-coloring of this graph has monochromatic triangles. Ramsey’s 1928 proof of this assertion is based on the pigeonhole principle. These days, we also have the option of just checking all \(2^{15}\) possible colorings.
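The exhaustive check is small enough to do on any laptop. Here is a sketch in Python (my own code, not anything from the original project): it confirms that the pentagon coloring of \(K_5\) has no monochromatic triangle, and that every one of the \(2^{15}\) colorings of \(K_6\) has one.

```python
from itertools import combinations

def has_mono_triangle(n, red_edges):
    # red_edges: a set of frozenset vertex pairs; every other edge is blue
    for tri in combinations(range(n), 3):
        colors = [frozenset(pair) in red_edges for pair in combinations(tri, 2)]
        if all(colors) or not any(colors):
            return True
    return False

# K5: color the pentagon's five cycle edges red, the diagonals blue.
# No three vertices of a 5-cycle are pairwise nonadjacent, and the cycle
# has no triangles, so neither color forms a monochromatic triangle.
pentagon = {frozenset({i, (i + 1) % 5}) for i in range(5)}
print(has_mono_triangle(5, pentagon))   # False

# K6: enumerate all 2^15 red/blue colorings of the 15 edges
edges6 = [frozenset(pair) for pair in combinations(range(6), 2)]
every_coloring_fails = all(
    has_mono_triangle(6, {e for e, bit in zip(edges6, format(mask, "015b")) if bit == "1"})
    for mask in range(2 ** 15)
)
print(every_coloring_fails)             # True
```

In other words, \(\mathcal{R}(3, 3) = 6\): the brute-force search finds a good coloring at five vertices and none at six.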
More formally, the Ramsey number \(\mathcal{R}(m, n)\) is the number of vertices in the smallest complete graph for which a two-coloring of the edges is certain to yield a red clique of \(m\) vertices or a blue clique of \(n\) vertices (or both). In applying this notion to the wine glass problem, I was asking: How many glasses do I need to have in my cupboard to ensure there are either eight all alike or eight all different?
At dinner that night we cheerfully clinked our eight dissimilar glasses. Maybe we even completed the full round of \((8 \times 7) / 2 = 28\) clinks. Later on, after everyone had gone home and all the glasses were washed, my thoughts returned to Ramsey theory. I was wondering, “What is the value of \(\mathcal{R}(8, 8)\), the smallest complete graph that is sure to have a monochromatic subgraph of at least eight vertices?” Lying awake in the middle of the night, I worked out a solution in terms of wine glasses.
Suppose you start with an empty cupboard and add glasses one at a time, aiming to assemble a collection in which no eight glasses are all alike and no eight glasses are all different. You could start by choosing seven different glasses—but no more than seven, lest you create an all-different set of eight. Every glass you subsequently add to the set must be the same as one of the original seven. You can keep going in this way until you have seven sets of seven identical glasses. When you add the next glass, however, you can’t avoid creating a set that either has eight glasses all alike or eight all different. Thus it appears that \(\mathcal{R}(8, 8) = 7^2 + 1 = 50\).
The moment I reached this conclusion, I knew something was dreadfully wrong. Computing Ramsey numbers is hard. After decades of mathematical and computational labor, exact \(\mathcal{R}(m, n)\) values are known for only nine cases, all with very small values of \(m\) and \(n\). Lying in the dark, without Google at my fingertips, I couldn’t remember the exact boundary between known and unknown, but I was pretty sure that \(\mathcal{R}(8, 8)\) lay on the wrong side. The idea that I might have just calculated this long-sought constant in my head was preposterous. And so, in a state of drowsy perplexity, I fell asleep.
Next morning, the mystery evaporated. Where did my reasoning go wrong? You might want to think a moment before revealing the answer.
No, I wasn’t drunk. The blue trace shows me lurching all over the track, straying onto the soccer field, and taking scandalous shortcuts in the turns—but none of that happened, I promise. During the entire run my feet never left the innermost lane of the oval. All of my apparent detours and diversions result from GPS measurement errors or from approximations made in reconstructing the path from a finite set of measured positions.
At the end of the run, the app tells me how far I’ve gone, and how fast. Can I trust those numbers? Looking at the map, the prospects for getting accurate summary statistics seem pretty dim, but you never know. Maybe, somehow, the errors balance out.
Consider the one-dimensional case, with a runner moving steadily to the right along the \(x\) axis. A GPS system records a series of measured positions \(x_0, x_1, \ldots, x_n\) with each \(x_i\) displaced from its true value by a random amount no greater than \(\pm\epsilon\). When we calculate total distance from the successive positions, most of the error terms cancel. If \(x_i\) is shifted to the right, it is farther from \(x_{i-1}\) but closer to \(x_{i+1}\). For the run as a whole, the worst-case error is just \(\pm 2 \epsilon\)—the same as if we had recorded only the endpoints of the trajectory. As the length of the run increases, the percentage error goes to zero.
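The one-dimensional argument is easy to check numerically. This short Python sketch (my own illustration, not code from the Ranacher paper) simulates a monotonic run with uniform position errors; because no measured segment is reversed, the sum of segment lengths telescopes, and the total error stays within \(\pm 2\epsilon\):

```python
import random

# A runner moves right in steps of length d; each recorded position
# carries an independent uniform error in [-eps, eps].
random.seed(1)
n, d, eps = 1000, 1.0, 0.2   # eps < d/2, so no measured segment points backward
measured = [i * d + random.uniform(-eps, eps) for i in range(n + 1)]
gps_total = sum(abs(measured[i + 1] - measured[i]) for i in range(n))

# Since every measured segment is still positive, the sum of absolute
# differences telescopes: interior errors cancel, and only the errors
# at the two endpoints survive.
true_total = n * d
print(abs(gps_total - true_total) <= 2 * eps)   # True
```

Over a 1,000-unit run the worst-case error is 0.4 units, so the percentage error is vanishingly small, just as the argument predicts.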
In two dimensions the situation is more complicated, but one might still hope for a compensating mechanism whereby some errors would lengthen the path and others shorten it, and everything would come out nearly even in the end. Until a few days ago I might have clung to that hope. Then I read a paper by Peter Ranacher of the University of Salzburg and four colleagues. (Take your choice of the journal version, which is open access, or the arXiv preprint. Hat tip to Douglas McCormick in IEEE Spectrum, where I learned about the story.)
Ranacher’s conclusion is slightly dispiriting for the runner. On a two-dimensional surface, GPS position errors introduce a systematic bias, tending to exaggerate the length of a trajectory. Thus I probably don’t run as far or as fast as I had thought. But to make up for that disappointment, I have learned something new and unexpected about the nature of measurement in the presence of uncertainty, and along the way I’ve had a bit of mathematical adventure.
The Runmeter app works by querying the phone’s GPS receiver every few seconds and recording the reported longitude and latitude. Then it constructs a path by drawing straight line segments connecting successive points.
Two kinds of error can creep into the GPS trajectory. Measurement errors arise when the reported position differs from the true position. Interpolation errors come from the connect-the-dots procedure, which can miss wiggles in the path between sampling points. Ranacher et al. consider only the inaccuracies of measurement, on the grounds that interpolation errors can be reduced by more frequent sampling or a more sophisticated curve-fitting method (e.g., cubic splines rather than line segments). Interpolation error is eliminated altogether if the runner’s path is a straight line.
Suppose a runner on the \(x, y\) plane shuttles back and forth repeatedly between the points \(p = (0, 0)\) and \(q = (d, 0)\). In other words, the end points of the path lie \(d\) units apart along the \(x\) axis. After \(n\) trips, the true distance covered is clearly \(nd\). A GPS device records the runner’s position at the start and end of each segment, but introduces errors in both the \(x\) and \(y\) coordinates. Call the perturbed positions \(\hat{p}\) and \(\hat{q}\), and the Euclidean distance between them \(\hat{d}\). Ranacher and his colleagues show that for large \(n\) the total GPS distance \(n \hat{d}\) is strictly greater than \(nd\) unless all the measurement errors are perfectly correlated.
I wanted to see for myself how measured distance grows as a function of GPS error, so I wrote a simple Monte Carlo program. The Ranacher proof makes no assumptions about the statistical distribution of the errors, but in a computer simulation it’s necessary to be more concrete. I chose a model where the GPS positions are drawn uniformly at random from square boxes of edge length \(2 \epsilon\) centered on the points \(p\) and \(q\).
In the sketch above, the black dots, separated by distance \(d\), represent the true endpoints of the runner’s path. The red dots are two GPS coordinates \(\hat{p}\) and \(\hat{q}\), and the red line gives the measured distance between them. We want to know the expected length of the red line averaged over all possible \(\hat{p}\) and \(\hat{q}\).
Getting the answer is quite easy if you’ll accept a numerical approximation based on a finite random sample. Write a few lines of code, pick some reasonable values for \(d\) and \(\epsilon\), crank up the random number generator, and run off 10 million iterations. Some results:
| \(\epsilon\) | \(d\) | \(\langle \hat{d} \rangle\) |
|---|---|---|
| 0.0 | 1.0 | 1.0000 |
| 0.1 | 1.0 | 1.0034 |
| 0.2 | 1.0 | 1.0135 |
| 0.3 | 1.0 | 1.0306 |
| 0.4 | 1.0 | 1.0554 |
| 0.5 | 1.0 | 1.0882 |
For each value of \(\epsilon\) the program generated \(10^7\) \((\hat{p}, \hat{q})\) pairs, calculated the Euclidean distance \(\hat{d}\) between them, and finally took the average \(\langle \hat{d} \rangle\) of all the distances. It’s clear that \(\langle \hat{d} \rangle > d\) when \(\epsilon > 0\). Not so clear is where these particular numbers come from. Can we understand how \(\hat{d}\) is determined by \(d\) and \(\epsilon\)?
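The post doesn’t reproduce the program itself, but a minimal Python version of the model described above (each reported position drawn uniformly from a square box of edge \(2\epsilon\) around the true point) yields numbers close to the table:

```python
import random

def mean_gps_distance(d, eps, trials=10**5, seed=0):
    """Average measured distance between two true points d apart on the
    x axis, each reported position perturbed uniformly within a box of
    half-width eps in both coordinates."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # difference of two uniform errors in each coordinate
        dx = d + rng.uniform(-eps, eps) - rng.uniform(-eps, eps)
        dy = rng.uniform(-eps, eps) - rng.uniform(-eps, eps)
        total += (dx * dx + dy * dy) ** 0.5
    return total / trials

for eps in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5):
    print(eps, round(mean_gps_distance(1.0, eps), 4))
```

With \(10^5\) trials per row the estimates agree with the table to about three decimal places; the bias grows steadily with \(\epsilon\) and is exactly zero when \(\epsilon = 0\).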
For a little while, I thought I had a simple explanation. I reasoned as follows: We already know from the one-dimensional case that the \(x\) component of the measured distance has an expected value of \(d\). The \(y\) component, orthogonal to the direction of motion, is the difference between two randomly chosen points on a line of length \(2 \epsilon\); a symmetry argument gives this length an expected value of \(2 \epsilon / 3\). Hence the expected value of the measured distance is:
\[\hat{d} = \sqrt{d^2 + \left(\frac{2 \epsilon}{3}\right)^2}\, .\]
Ta-dah!
Then I tried plugging some numbers into that formula. With \(d = 1\) and \(\epsilon = 0.3\) I got a distance of 1.0198. The discrepancy between this value and the numerical result 1.0306 is much too large to dismiss.
What was my blunder? Repeat after me: The average of the squares is not the same as the square of the average. I was calculating the squared distance as \({ \langle x \rangle}^2 + {\langle y \rangle}^2 \) when I should have been computing \(\langle {x^2 + y^2}\rangle\). We need to average over all possible distances between a point in one square and a point in the other, not over all \(x\) and \(y\) components of those distances. Trouble is, I don’t know how to calculate the correct distance.
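The gap between the two orders of operation is easy to exhibit numerically. This Python sketch (my own check, using the same uniform-box model with \(d = 1\), \(\epsilon = 0.3\)) computes both quantities from a single stream of samples:

```python
import random

rng = random.Random(42)
d, eps, n = 1.0, 0.3, 10**5
sum_x = sum_absy = sum_dist = 0.0
for _ in range(n):
    # x and y components of the measured separation
    x = d + rng.uniform(-eps, eps) - rng.uniform(-eps, eps)
    y = rng.uniform(-eps, eps) - rng.uniform(-eps, eps)
    sum_x += x
    sum_absy += abs(y)                       # mean |y| tends to 2*eps/3 = 0.2
    sum_dist += (x * x + y * y) ** 0.5

from_averages = ((sum_x / n) ** 2 + (sum_absy / n) ** 2) ** 0.5  # ~1.0198
from_distances = sum_dist / n                                    # ~1.0306
print(from_averages, from_distances)
```

The first number reproduces the too-small value from the faulty formula; the second matches the Monte Carlo table. The concavity of the square root guarantees the two can agree only when the components never vary.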
I thought I’d try to find an easier problem. Suppose the runner stops to tie a shoelace, so that the true distance \(d\) drops to zero; thus any movement detected is a result of GPS errors. As long as the runner remains stopped, the two error boxes exactly overlap, and so the problem reduces to finding the average distance between two randomly selected points in the unit square. Surely that’s not too hard! The answer ought to be some simple and tidy expression—don’t you think?
In fact the problem is not at all easy, and the answer is anything but tidy. We need to evaluate a terrifying quadruple integral:
\[\iiiint_0^1 \sqrt{(x_q - x_p)^2 + (y_q - y_p)^2} \, dx_p \, dx_q \, dy_p \, dy_q\, .\]
Lucky for me, I live in the age of MathOverflow and StackExchange, where powerful wizards have already done my homework for me.
\[\frac{2+\sqrt{2}+5\log(1+\sqrt{2})}{15} \approx 0.52140543316\]
Nothing to it, eh?
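Happily, the monstrous closed form and a brute-force estimate agree. A few lines of Python evaluate the expression and compare it with a direct Monte Carlo sample of point pairs in the unit square:

```python
import random
from math import sqrt, log

# The closed-form mean distance between two uniform random points
# in the unit square (from the quadruple integral above)
exact = (2 + sqrt(2) + 5 * log(1 + sqrt(2))) / 15
print(exact)   # ≈ 0.52140543316

# Monte Carlo estimate of the same quantity
rng = random.Random(7)
n = 10**6
total = sum(
    sqrt((rng.random() - rng.random()) ** 2 + (rng.random() - rng.random()) ** 2)
    for _ in range(n)
)
print(total / n)   # agrees to about three decimal places
```

This is the \(d = 0\) case of the runner’s problem, with the box edge \(2\epsilon\) scaled to 1.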
The corresponding expression for nonzero \(d\) is doubtless even more of a monstrosity, but I’ve made no attempt to derive it. I am left with nothing but the Monte Carlo results. (For what it’s worth, the simulations do agree on the value \(\hat{d} = 0.5214\) for \(d = 0\)).
I tried applying the Monte Carlo program to my 1,600-meter run. In the Runmeter data my position is sampled 100 times, or every six seconds on average, which means that \(d\) (the true average distance between samples) should be about 16 meters. Estimating \(\epsilon\) is not as easy. In the map above there’s one point on the blue path that’s displaced by at least 10 meters, but if we ignore that outlier most of the other points are probably within about 3 meters of the correct lane. Plugging in \(d = 16\) and \(\epsilon = 3\) yields about 1,620 meters as the expected measured distance.
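A rough reconstruction of that estimate in Python (my own sketch; the treatment of shared endpoints between consecutive segments is an assumption, since the post doesn’t show the program):

```python
import random

# 100 segments of d = 16 meters along a line, position errors uniform in a
# box of half-width eps = 3 meters, with consecutive segments sharing their
# measured endpoints, as in the app's recorded path.
rng = random.Random(3)
d, eps, segments, trials = 16.0, 3.0, 100, 1000
grand_total = 0.0
for _ in range(trials):
    pts = [(i * d + rng.uniform(-eps, eps), rng.uniform(-eps, eps))
           for i in range(segments + 1)]
    grand_total += sum(
        ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        for (x0, y0), (x1, y1) in zip(pts, pts[1:])
    )
print(grand_total / trials)   # ~1620 meters for a true 1600-meter run
```

Each 16-meter segment picks up an expected bias of roughly \(\langle \delta y^2 \rangle / 2d \approx 0.19\) meters from the transverse errors, and a hundred such segments add up to the extra 20 meters.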
What does the Runmeter app have to say? It reports a total distance of 1,599.5 meters, which is, I’m inclined to say, way too good to be true. Part of the explanation is that the measurement errors are not uniform random variables; there are strong correlations in both space and time. Also, measurement errors and interpolation errors surely have canceled out to some extent. (It’s even possible that the developers of the app have chosen the sampling interval to optimize the balance between the two error types.) Still, I have to say that I am quite surprised by this uncanny accuracy. I’ll have to run some more laps to see if the performance is repeatable.
Another thought: People have been measuring distances for millennia. How is it that no one noticed the asymmetric impact of measurement errors before the GPS era? Wouldn’t land surveyors have figured it out? Or navigators? Distinguished mathematicians, including Gauss and Legendre, took an interest in the statistical analysis of errors in surveying and geodesy. They even did field work. Apparently, though, they never stumbled on the curious fact that position errors orthogonal to the direction of measurement lead to a systematic bias toward greater lengths.
There’s yet another realm in which such biases may have important consequences: measurement in high-dimensional spaces. Inaccuracies that cause a statistical bias of 2 percent in two-dimensional space give rise to a 19 percent overestimate in 10-dimensional space. The reason is that errors along all the axes orthogonal to the direction of measurement contribute to the Euclidean distance. By the time you get to 1,000 spatial dimensions, the measured distance is more than six times the true distance.
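The dimensional scaling is easy to watch directly. This Python sketch extends the uniform-box error model to \(n\) dimensions; the value \(\epsilon = 0.245\) is my own choice (an assumption, chosen so the two-dimensional bias comes out near 2 percent), and the exact percentages depend on the error model, but the qualitative explosion with dimension is robust:

```python
import random
from math import sqrt

def mean_distance(dim, d=1.0, eps=0.245, trials=2000, seed=0):
    # True points are d apart along the first axis; every coordinate of
    # both endpoints is perturbed uniformly in [-eps, eps].
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        sq = (d + rng.uniform(-eps, eps) - rng.uniform(-eps, eps)) ** 2
        for _ in range(dim - 1):   # errors along every orthogonal axis add in
            sq += (rng.uniform(-eps, eps) - rng.uniform(-eps, eps)) ** 2
        total += sqrt(sq)
    return total / trials

results = {dim: mean_distance(dim) for dim in (2, 10, 100, 1000)}
for dim in sorted(results):
    print(dim, round(results[dim], 3))
```

Each orthogonal axis contributes a fixed amount to the expected squared distance, so the measured length grows roughly like the square root of the dimension once the error terms dominate; by 1,000 dimensions the ratio to the true distance is well past six.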
Even for creatures like us who live their lives stuck in three-space, this observation might be more than just a mathematical curiosity. Lots of algorithms in machine learning, for example, measure distances between vectors in high-dimensional spaces. Some of those vectors may be closer than they appear.
- I am greatly indebted to Prof. Riesz for translating the present paper.
- I am indebted to Prof. Riesz for translating the preceding footnote.
- I am indebted to Prof. Riesz for translating the preceding footnote.
Why stop at three? Littlewood explains: “However little French I know I am capable of copying a French sentence.”
I thought of this incident the other day when I received a letter from Medicare. At the top of the single sheet of paper was the heading “A Message About Medicare Premiums,” followed by a few paragraphs of text, and at the bottom this boldface note:
The information is printed in Spanish on the back
Naturally, I turned the page over. I found the heading “Un mensaje sobre las primas de Medicare,” followed by a few paragraphs of Spanish text, and then this in boldface:
La información en español está impresa al dorso
The line is a faithful translation of the English text from the other side of the sheet. (O el inglés es una traducción fiel del español.) But in this case neither copying nor faithful translation quite suffices. It seems we have fallen into the wrong symmetry group. The statement “This sentence is not in Spanish” is true, but its translation into Spanish, “Esta frase no está en español” is false. Apart from that self-referential tangle, if the two boldface notes in the letter are to be of any use to strictly monolingual readers, shouldn’t they be on opposite sides of the paper?
By the way, I had always thought the Littlewood three-footnote story referred to a real paper. But his account in A Mathematician’s Miscellany suggests it was a prank he never had a chance to carry out. And in browsing the Comptes Rendus on Gallica, I find no evidence that Littlewood ever published there. [Please see comment below by Gerry Myerson.]
A place where thousands of people suffered and died makes an uncomfortable tourist destination, yet looking away from the horror seems even worse than staring. And so, when Ros and I were driving from Prague to Dresden last month, we took a slight detour to visit Terezín, the Czech site that was the Theresienstadt concentration camp from late 1941 to mid 1945. We expected to be disturbed, but we stumbled onto something that was disturbing in an unexpected way.
Terezín was not built as a Nazi concentration camp. It began as a fortress, erected in the 1790s to defend the Austrian empire from Prussian threats. Earthen ramparts and bastions surround buildings that were originally the barracks and stables for a garrison of a few thousand troops. By the 20th century the fortress no longer served any military purpose. The troops withdrew, civilians moved in, and the place became a town with a population of about 7,000.
In 1941 the Gestapo and the SS seized Terezín, expelled the Czech residents, and began the “resettlement” of Jews deported from Prague and elsewhere. In the next three years 150,000 prisoners passed through the camp. All but 18,000 perished before the end of the war.
Now Terezín is again a Czech town, as well as a museum and memorial to the holocaust victims. It seems a lonely place. A few boys kick a ball around on the old parade ground, the café has two or three customers, someone is holding a rummage sale—but the town’s population and economy have not recovered. The museum occupies parts of a dozen buildings, but many of the others appear to be vacant.
We looked at the museum exhibits, then wandered off the route of the self-guided tour. At the edge of town, near a construction site, a tunnel passed under the fortifications. Walking through, we came out into a grassy strip of land between the inner and outer ramparts. When we turned back to the tunnel, we noticed graffiti on the walls of the portal.
At first I assumed it was recent adolescent scribbling, but on looking closer we began to see dates in the 1940s, carved into the sandstone blocks. Could it be true? Could these incised names and drawings really be messages from the concentration-camp era? If so, who left them for us? Did the prisoners have access to this tunnel, or was it an SS guard post?
I was skeptical. Too good to be true, I thought. If the carvings were genuine, they would not have been left out here, exposed to the elements and unprotected against vandalism. They would be behind glass in one of the museum galleries. But if they were not genuine, what were they?
I took pictures. (The originals are on Flickr.)
Back home, some days later, my questions were answered. Googling for a few phrases I could read in the inscriptions turned up the website ghettospuren.de, which offers extensive documentation and interpretation (in Česky, Deutsch, and English). Briefly, the carvings are indeed authentic, as shown by photographs made in 1945 soon after the camp was liberated. The markings were made by members of the Ghettowache, the internal police force selected from the prison population. A dozen of the artists have been identified by name.
The website is the project of Uta Fischer, a city planner in Berlin, with the photographer Roland Wildberg and other German and Czech collaborators. They are working to preserve the carvings and several other artifacts discovered in Terezín in the past few years.
I offer a few notes and speculations on some of the inscriptions, drawing heavily on Fischer’s commentary and translations:
“Brána střežena stráží ghetta L.P. 1944.” Translation from ghettospuren.de: “The gate is being guarded by the ghetto guard, A.D. 1944.” This sign, given a prominent position at the entrance to the tunnel, reads like a territorial declaration. The date is interesting. Are we to infer that the gate was not guarded by the stráží ghetta before 1944?

“Pamatce na pobyt 1941–1944.” Translation from ghettospuren.de: “In remembrance of the stay 1941–1944.” Fischer remarks on the formality of the inscription, suggesting that this part of the south wall was created as “a collective place of remembrance.” The carving has been badly damaged since the first photos were made in 1945.

A floral arrangement is the most elaborate of all the carvings. Fischer identifies the artist as Karel Russ, a shopkeeper in the Bohemian town of Kyšperk (now Letohrad). Fischer writes: “In the top center there is still a recognizable outline of the Star of David that was already removed in a rough manner in 1945.” For what it’s worth, I’m not so sure that’s not another flower. The deep hole in the middle was not present in 1945 and is not explained.

Four caricatures of the same figure are lined up on a single sandstone block on the north wall, with a fifth squeezed into a narrow spot on the block below. Why the repetition? And who was the subject? The Italian legend “Il capitano della guardia” and the double stripe on the hat suggest a high-ranking Ghettowache official. Did he take these cartoonish portrayals with good humor? Or could the drawings possibly be selfies?

The menorah at the bottom left of this panel is the only explicitly Jewish iconography I have spotted in these images. (As noted above, Fischer believes the floral panel included a Star of David.) As far as I can tell, there are no Hebrew inscriptions.

Portraits of a man and a woman? That’s my best guess, but the carving is indistinct. The line above presumably reads “M.C. 1944,” but the “1” has been gouged away.

Not all of the inscriptions come from the Second World War. This one, signed “Alchuz Jan,” is dated August 6, 1911. Another (not shown) claims to be from 1871.

It’s only to be expected that there are also later additions to the graffiti. Toward the bottom of this panel we have B.K. ♥ R.V. 1953. The white scrawl at top left is much more recent. On the other hand, the signature of “Waltuch Wilhelm” at upper right is from the war years. Fischer has identified him as the owner of a cinema in Vienna. Elsewhere he also signed his name in Cyrillic script.
I am curious about the chronology of the Ghettowache inscriptions. Are we seeing an accumulation of work carried out over a period of years, or was all the carving done in a few weeks or months? The preponderance of items dated 1944 argues for the latter view. In particular, the inscription “In remembrance of the stay 1941–1944” could not have been written before 1944, and it suggests some foreknowledge that the stay would soon be over.
A lot was going on at Terezín in 1944. In June, the camp was cleaned up for a stage-managed, sham inspection by the Red Cross; to reduce overcrowding in preparation for this event, part of the population was deported to Auschwitz. Later that summer, the SS produced a propaganda film portraying Theresienstadt as a pleasant retreat and retirement village for Jewish families; the film wasn’t really titled “The Führer Gives a Village to the Jews,” but it might as well have been. As soon as the filming was done, thousands more of the residents were sent to the death camps, including most of those who had acted in the movie. In the fall, with the war going badly for Germany, the SS decided to close the camp and transport everyone to the East. Perhaps that is when some of the inscriptions with a tone of finality were carved—but I’m only guessing about this.
As it happens, the liquidation of the ghetto was never completed, and in the spring of 1945 the flow of prisoners was reversed. Trains brought survivors back from the extermination camps in Poland, which were about to be overrun by the Red Army. When Terezín was liberated by the Soviets in early May, there were several thousand inmates. But the tunnel has no inscriptions dated 1945.
Graffiti is a varied genre. It encompasses scatological scribbling in the toilet stall, romantic declarations carved on tree trunks, the existential yawps of spray-paint taggers, dissident political slogans on city walls, religious ranting, sports fanaticism, and much else. It’s often provocative, sometimes indecent, inflammatory, insulting, or funny. The tunnel carvings at Terezín evoke a quite different set of adjectives: poignant, elegiac, calm, tender. It’s not surprising that we see no overtly political or accusatory statements—no strident “Let my people go,” no outing of torturers or collaborators. After all, these messages were written under the noses of a Nazi administration that wielded absolute and arbitrary power of life and death. Even so—even considering the circumstances—there’s an extraordinary emotional restraint on exhibit here.
What audience were the tunnel elegists addressing? I have to believe it was us, an unknown posterity who might wander by in some unimaginable future.
When Ros and I wandered by, the fact that we had discovered the place by pure chance, as if it were a treasure newly unearthed, made the experience all the more moving. Seeing the stones in a museum exhibit—curated, annotated, preserved—would have had less impact. Nevertheless, that is unquestionably where they belong. Uta Fischer and her colleagues are working to make that happen. I hope they succeed in time.