MPFR does have a cache to avoid recomputing Pi, but Sage doesn’t use it (the cache doesn’t play well with Control-C). And as far as I can tell the cache starts out empty.

Here are some timings recorded in Mathematica 10.0, as the program computed (or not?!) 10^*i* digits of pi for 0 ≤ *i* ≤ 8:

```
Table[{10^i, Timing[N[Pi, 10^i]][[1]]}, {i, 0, 8}]

{{1, 0.000053}, {10, 0.000021}, {100, 0.000033}, {1000, 0.000105},
 {10000, 0.001292}, {100000, 0.026601}, {1000000, 0.430528},
 {10000000, 6.249267}, {100000000, 98.862143}}
```

For fewer than about 1,000 digits, the results look like noise; there’s no clear correlation between precision and time. That outcome would be consistent with your hypothesis that we’re just doing table lookup in this range. But it’s also consistent with the hypothesis that the computation is so fast that the overall time is dominated by OS and UI events, or other overhead. Without access to the source code, it’s hard to know which.

We *do* have the source code for Sage, the open-source mathematics system. Here’s what I get running the equivalent command there:

```
for i in range(9):
    [(10**i, '{:.6f}'.format(timeit('n(pi, digits='+str(10**i)+')',
                                    seconds=True, number=1)))]

[(1, '0.000019')]
[(10, '0.000021')]
[(100, '0.000024')]
[(1000, '0.000105')]
[(10000, '0.004632')]
[(100000, '0.245613')]
[(1000000, '3.692356')]
[(10000000, '55.570486')]
[(100000000, '1012.650426')]
```

The times for large numbers of digits are about an order of magnitude slower, but for fewer than 1,000 digits the results look very similar to those of Mathematica. In Sage I’m pretty sure there’s no precomputed value being looked up. Sage apparently uses the mpmath package for computing constants and elementary functions. The mpmath source code (see libelefun.py) suggests there’s some caching or memoizing of results as they are computed, but not lookup from a pre-baked table.
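To poke at the mpmath layer directly, here's a minimal sketch (assuming only that the mpmath package is installed) that times pi at increasing precision outside of Sage. The digit counts chosen here are just for illustration:

```python
import time
from mpmath import mp

for digits in (10, 100, 1000, 10000):
    mp.dps = digits                   # working precision, in decimal places
    t0 = time.perf_counter()
    p = +mp.pi                        # unary + rounds the lazy constant,
                                      # forcing evaluation at this precision
    print(f"{digits:>6} digits: {time.perf_counter() - t0:.6f} s")
```

Timing the same precision twice in a row should also reveal the memoization mentioned above: the second call returns almost instantly.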

As a point of interest, here’s the mpmath docstring on pi computations:

```
For computation of pi, we use the Chudnovsky series:

             oo
             ___         k
      1      \       (-1)  (6 k)! (A + B k)
    ----- =   )     -----------------------
    12 pi    /___               3   3k+3/2
             k = 0   (3 k)! (k!)   C

where A, B, and C are certain integer constants. This series adds
roughly 14 digits per term. Note that C^(3/2) can be extracted so
that the series contains only rational terms. This makes binary
splitting very efficient. The recurrence formulas for the binary
splitting were taken from ftp://ftp.gmplib.org/pub/src/gmp-chudnovsky.c

Previously, Machin's formula was used at low precision and the AGM
iteration was used at high precision. However, the Chudnovsky series
is essentially as fast as the Machin formula at low precision and in
practice about 3x faster than the AGM at high precision (despite
theoretically having a worse asymptotic complexity), so there is no
reason not to use it in all cases.
```
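To make the series concrete, here is a rough, unoptimized sketch in plain Python that sums the Chudnovsky series with fixed-point integer arithmetic. This is direct summation, not the binary splitting mpmath actually uses; A = 13591409, B = 545140134, and C = 640320 are the standard Chudnovsky constants:

```python
from math import factorial, isqrt

def chudnovsky_pi(digits):
    """Return pi * 10**(digits + 10) as an integer (direct Chudnovsky sum)."""
    prec = digits + 10                      # guard digits against rounding
    scale = 10 ** prec                      # fixed-point scale factor
    A, B, C = 13591409, 545140134, 640320   # standard Chudnovsky constants
    total = 0                               # fixed-point value of the sum S
    for k in range(digits // 14 + 2):       # ~14 digits per term
        num = (-1) ** k * factorial(6 * k) * (A + B * k)
        den = factorial(3 * k) * factorial(k) ** 3 * C ** (3 * k)
        total += num * scale // den
    # The series gives 1/(12 pi) = S / C**(3/2), so pi = C**(3/2) / (12 S).
    sqrt_C = isqrt(C * scale * scale)       # sqrt(C) in fixed point
    return C * sqrt_C * scale // (12 * total)

print(str(chudnovsky_pi(30))[:31])  # 3141592653589793238462643383279
```

Recomputing each factorial from scratch makes this quadratic in the number of terms; the binary-splitting recurrences in the docstring exist precisely to avoid that.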

(Disclaimer: I don’t know my way around the Sage or mpmath source files, and I may have misunderstood what’s going on.)

There are lots of sites with info about this. Just do a search for “computing pi”.

Or from below, but when looking straight up from street level, you get a rather odd distortion of the straight edges of the buildings, presumably from the GoogleCam (the building edges appear straight to me when I am standing at 42nd & Lex). ;-)

See for yourself:

https://www.google.com/maps/@40.7516203,-73.975335,177m/data=!3m1!1e3

Looking forward to the new edition of Infrastructure. Two questions from the UK: when’s it out here, and have you managed to squeeze in any more European/global content?

Not that I’m uninterested in other kinds of landmarks, and your comment inspired me to go have a look. I’m sad to report that the Taj Mahal remains flat as a pancake in both of the mapping programs, but the Chrysler Building is pretty spectacular. In Google Maps you can fly right through it.

Presumably it’s not a coincidence that your illustrations here are not the Taj Mahal or the Chrysler Building, but the less glamorous supporting constructions that you explained and described in the wonderful Infrastructure book?
