## A Glitch in the Maptrix

We must always be on the lookout for glitches in the Matrix—anomalies that give us a fleeting glimpse into the algorithms and data structures of the computer simulation that we call Reality. But there’s also the Maptrix, the alternative reality supplied by online mapping services. I spend a fair amount of time exploring that digital terrain, and lately I’ve noticed a few glitches. Exhibit 1 is a tank farm in Bayonne, New Jersey, near where the Kill Van Kull enters New York Bay:

I’ve seen a lot of oil tanks over the years, but never before have I encountered such ragged, faceted approximations to cylindrical form. These lumpy, polyhedral tanks suggest that in this little corner of industrial New Jersey, π has a value somewhat smaller than 3.14.

But the π peculiarity is ephemeral. The image above was captured from a laptop screen at the instant the landscape was first rendered in the Apple Maps program. Moments later most of the defects were magically healed, and the illusion of solid, circular reality reasserted itself:

Here’s another example, from a tank farm in Carteret, New Jersey, just across the Arthur Kill from Staten Island. This time we’re looking down on the tanks from directly overhead. The image at left was captured as soon as the scene appeared on screen; at right is the rounded-out version that emerged a few seconds later. The software, again, is Apple Maps.

It’s not just Apple’s version of reality that has such anomalies. Here’s a sample from another source:

The mangled petroleum tanks in this image are in Kearny, New Jersey, a few miles north of Bayonne along the Passaic River. In this case the picture was taken by Google’s eye in the sky, not by Apple’s. The distortion is different, but no less disturbing. Now it’s not just the geometry of the cylinders that has gone goofy but also the topology. Some of those tanks won’t hold oil (or any other fluid); they have holes in them. And notice how the spill-containment walls surrounding the tanks also look moth-eaten.

Finally, returning to Apple Maps, and scrolling just half a mile northwest from the Carteret tanks, we cross the Rahway River into Linden, New Jersey, where we come upon this alarming scene:

Toward the right side of the image we see more cylindrical tanks, some with faint, remnant traces of polyhedral approximation. But when your glance wanders to the upper left, you find that the world suddenly loses all depth. The tank farm over there, and the water treatment plant at the top of the frame, are merely painted on the landscape—trompe l’oeil structures that don’t trompe anyone.

This image offers another view of the same Linden landscape, looking obliquely to the west or northwest. The road that runs through the scene from foreground to background, crossing the New Jersey Turnpike at the very top of the frame, is Tremley Point Road. Suppose you were driving west along that road. Just beyond the row of lumpy trees that extends from the left edge of the image toward the road, you would cross a mysterious boundary, leaving behind the pop-up 3D world and entering flatland. What would happen to you there? Would you be pancaked like those tanks, reduced to a two-dimensional object painted on the pavement, with a painted shadow to accompany you?

Leaving behind tank farms but still poking around in the same general neighborhood of northern New Jersey, I was able to record four stages in the “construction” of the Bayonne Bridge, which crosses the Kill Van Kull between Bayonne and Staten Island. These are images from Google Maps, the first three captured at intervals of about a second, the last after a delay of a few seconds more:

In calling attention to these oddities in online map imagery, my aim is not to mock or belittle. To me, these maps are one of the marvels of the age. A century ago, it was a huge novelty and liberation to soar above the earth’s surface for the first time, and see the landscape spread out below as if it were a map. It’s no less remarkable that we have now transformed the experience of looking at a map into something like flying an airplane.

The first “satellite view” maps were just that: montages of images made from hundreds of miles above the earth’s surface, looking straight down. They portrayed the territory as an array of pixels, assigning an RGB value to every (lat, lon) pair on the surface of a sphere.

The next step was to add a digital elevation model, giving each point on the surface a z value as well as a color. This scheme allows us to gaze obliquely across the landscape and see a realistic rendering of mountains, river valleys, and other natural landforms. It works well as long as you don’t try to get too close: the model is well-suited to forests, but not to trees. And it doesn’t work well at all for manmade artifacts.

In representing engineered structures, one weakness of elevation maps is that any reasonable scheme for interpolating between sample points will tend to round over corners and sharp edges, so that buildings become lumpish mounds. Symmetries are also lost: the sampling fails to preserve the rectilinearity or circularity of the various objects we strew around the inhabited patches of the world. And the biggest problem with an elevation map is that it’s a mapping in the mathematical as well as the cartographic sense. The surface of the planet is defined by a smooth, single-valued function, assigning a unique elevation z to every (lat, lon) point on the sphere. Any line radiating from the center of the earth must cross that surface in exactly one point. As a result, there can be no vertical cliffs and no overhangs. Also no bridges. The surface defined by the elevation model can go under a bridge or over it, but not both.
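The corner-rounding complaint is easy to see in one dimension. Here is a toy sketch (hypothetical grid spacing and heights, not any mapping service's actual scheme): sample a flat-roofed building on a coarse grid, interpolate linearly between the samples, and the vertical wall comes back as a ramp.

```python
# A toy one-dimensional elevation model (made-up spacing and heights):
# a flat-roofed building sampled on a coarse grid, then linearly
# interpolated.  The vertical wall comes back as a 45-degree ramp.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]    # sample points
zs = [0.0, 0.0, 10.0, 10.0, 0.0]  # ground, ground, roof, roof, ground

def interp(x):
    """Piecewise-linear elevation between sample points."""
    for (x0, z0), (x1, z1) in zip(zip(xs, zs), zip(xs[1:], zs[1:])):
        if x0 <= x <= x1:
            return z0 + (z1 - z0) * (x - x0) / (x1 - x0)
    raise ValueError("outside the sampled grid")

print(interp(1.5))  # 5.0 -- halfway up a wall that should be vertical
```

Any smoother interpolant (splines, say) only makes the rounding worse at corners while improving it on gentle terrain, which is why the scheme suits mountains better than buildings.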

The latest mapping programs are apparently addressing these issues by building explicit three-dimensional models of selected landscape features. I see evidence of several different techniques. The Bayonne Bridge model that assembles itself in the four frames above is clearly based on a triangulated mesh: all the surfaces making up the envelope of the structure are decomposed into elementary triangles. The cylindrical tanks in the Apple Maps images seem to grow their circular form through an Archimedean process, in which a circle is defined as the limit of an n-gon as n goes to infinity. Elsewhere, I think we may be seeing some kind of spline curves or patches.
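That Archimedean limit can be sketched in a few lines (a toy illustration, not the renderer's actual algorithm): the half-perimeter of a regular n-gon inscribed in a unit circle creeps up toward π as n doubles, which is why the tanks look rounder at each refinement step—and why the faceted versions imply a π smaller than 3.14.

```python
import math

def ngon_half_perimeter(n):
    """Half the perimeter of a regular n-gon inscribed in a unit circle."""
    return n * math.sin(math.pi / n)

# Each doubling of n is one refinement step; the value converges to pi.
for n in (6, 12, 24, 48, 96):
    print(n, ngon_half_perimeter(n))
```

With n = 6 the "π" of the polyhedral tank is exactly 3; by n = 96 (Archimedes' own stopping point) it agrees with the true value to three decimal places.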

Having constructed the model and embedded it in the landscape, the next trick is to project the pixel pattern from a photograph onto the model surface, as a texture. This process too has a certain comical potential:

What we’re seeing here is apparently a tugboat nudging a barge equipped with a crane. The matching of surface pattern to three-dimensional form has gone badly awry, which gives the whole scene a toylike quality. I find it charming. In future years, when the Maptrix has become a hyperrealistic, real-time virtual world with day and night, weather and seasons—maybe with inhabitants who wave back at you—we’ll wax nostalgic over such quaint foibles.

Posted in computing, modern life, photography | 9 Comments

## Cosmic eggshells

In the creation myths of certain cultures, the world began as an egg, which broke open to form the earth and the sky. What if some intrepid explorer mounted an expedition to the ends of the earth and came back with fragments of the original eggshell, or maybe feathers from the bird that laid it? That’s what happened last week. Thanks to a group called BICEP2, which took a telescope to the South Pole and pointed it at a single patch of sky through three long Antarctic winters, we now have bits of eggshell providing strong evidence for our modern creation myth: the inflationary universe.

In referring to cosmic inflation as a creation myth, I don’t mean to be dismissive—not in the least. It’s just that when you listen to this narrative of how the world began, the story is every bit as fantastical as any tale of hatching eggs or turtles-upon-turtles. Last Tuesday I heard Alan Guth, who dreamed up cosmic inflation in 1981, tell the story to an overflow crowd at Harvard. The video of the colloquium is on the net. Here’s my paraphrase of a few highlights:

Once upon a time, the entire universe we know today was packed into a tiny dot, much smaller than an atom—only a millionth of a billionth the size of a proton. With so much matter and energy confined in such a small space, gravity was a very powerful force, which you might think would hold everything together forever. But one day something odd happened. Inside the microdot universe, gravity suddenly changed its sign; it became a repulsive force instead of an attractive one. That repulsion is what put the bang in the big bang. The universe doubled in size in $$10^{-37}$$ second. Then it doubled again, and again. After about 100 doublings, the dot had expanded by a factor of $$10^{28}$$, and our universe had a diameter of 1 centimeter—the size of a marble. It was all over in a trillionth of a trillionth of a trillionth of a second, or thereabouts. Afterwards, gravity resumed its familiar, convivial, let’s-all-stick-together personality, and the rest of the history of the universe unfolded at a more stately pace over the next 13.8 billion years.
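The round numbers in this story hang together, as a quick back-of-envelope check shows (my arithmetic, not part of Guth's talk):

```python
import math

# How many doublings does a factor of 1e28 require?
doublings = 28 / math.log10(2)
print(round(doublings))  # 93, i.e. "about 100" in round numbers

# At one doubling per 1e-37 second, inflation lasts on the order of 1e-35 s.
print(f"{doublings * 1e-37:.1e}")
```

So "about 100 doublings" is really 93, and the total elapsed time lands right around that trillionth-of-a-trillionth-of-a-trillionth of a second.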

Three decades ago, when I first read about (and wrote about) the inflationary universe, physics itself was in a bubbly, inflationary mood. Theorists, it seemed, couldn’t dream up an idea too outlandish for experimentalists to confirm. Let’s have three quarks for Muster Mark, decreed Murray Gell-Mann; sure enough, dense little quarky pits were found inside the proton and the neutron. We need a fourth quark—and let this one have charm—said Sheldon Glashow and his friends; and there it was in the J/ψ particle, discovered in 1974. Give us supermassive analogs of the massless photon, said Steven Weinberg and Abdus Salam; nature obliged with the $$W^{\pm}$$ and the $$Z^{0}$$. Now and then, the theorists had to scramble to catch up with experiments, as when Leon Lederman found evidence for a fifth quark. On the other hand, a few theoretical predictions were way out in front of experimental prowess: Finding the Higgs boson took 50 years.

Not every flight of fancy landed in Stockholm. Theorists asked experimenters to keep an eye out for decaying protons; they spent years looking and saw nothing. The magnetic monopole is still missing in action. Supersymmetry has not checked in yet. String theory remains… well, a theory. My own sentimental favorite was the preon or rishon model, a fantasy of recursive descent: If the proton is made of quarks, the quarks must be made of something still smaller. If only it were so.

In that giddy era, Guth’s inflation conjecture was far from the wildest or weirdest idea to come along, but it did seem to be among the proposals that would be most difficult to test. Yet here we are with solid observational evidence.

Guth’s talk on Tuesday was the first half of a double feature organized by the Harvard physics department. The second speaker was John Kovac, leader of the BICEP2 collaboration—the group that brought back the eggshells from the South Pole. Written on those eggshells was a distinctive pattern of polarization in the cosmic microwave background (CMB) radiation.

The peculiar distribution of the CMB was one of the puzzles that led to the inflationary hypothesis in the first place. The CMB is almost perfectly uniform across the entire sky, suggesting that whatever emitted the radiation must have been at a uniform temperature everywhere. In a non-inflationary cosmology, that widespread thermal equilibrium seems mysterious. Distant regions of a very young universe are causally disconnected: They have not yet had time to communicate with one another, even by signals moving at the speed of light. Some sort of miracle is needed to make all of these isolated regions so nearly uniform in temperature. Inflation dispenses with the miracle by squeezing the early universe into a much smaller volume, where all the parts can become thoroughly mixed.

Inflation predicts a CMB that is nearly uniform, but not quite. Quantum fluctuations introduce variations in density and temperature, which become slightly warmer and cooler spots in the CMB radiation field. (Later in the history of the universe, the denser spots evolve into clusters of galaxies.) A series of satellite observatories has mapped the CMB with high precision, finding splotchy patterns with an angular size scale of about 1 degree, and temperature differences of 1 part in 100,000. These results are in good agreement with the predictions of inflation. This is encouraging news for the theory; on the other hand, inflation was invented to explain these very characteristics of the CMB, so there’s some circularity in the argument.

Inflation makes another prediction. Quantum fluctuations in the high-density plasma of the inflating universe should give rise to powerful gravitational waves. These are propagating disturbances in the gravitational field, just as light waves are disturbances in the electromagnetic field. We are too late to detect the gravitational waves directly, but they should have left a distinctive signature in the microwave background—a swirly pattern of polarization, or wave alignment.

If the inflationary scenario is correct, two modes of polarization should co-exist in the CMB; they are called the E-mode and the B-mode, after the E and B vector fields of electromagnetism. In the argot of vector calculus, the E field has divergence but no curl: Think of electric lines of force streaming radially away from a charged particle. The B field has curl but zero divergence, like the magnetic lines of force that wrap around a current-carrying conductor. E-mode polarization comes from a conventional process of photon-electron scattering. B-mode polarization is the mark of gravitational waves.
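For reference, the static field equations behind those two mental pictures can be written compactly (SI units; $$\rho$$ is charge density, $$\mathbf{J}$$ is current density):

$$\nabla \cdot \mathbf{E} = \rho/\epsilon_0, \qquad \nabla \times \mathbf{E} = 0$$

$$\nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J}$$

Any polarization pattern on the sky can likewise be decomposed into a curl-free E-mode component plus a divergence-free B-mode component, which is where the names come from.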

The search for CMB polarization takes us to a rough neighborhood of the electromagnetic spectrum: the millimeter band, in the no man’s land between radio and optical wavelengths. Nearly anything can emit and absorb at these wavelengths, so it’s a noisy place. And the astronomical signals are faint: moisture in the atmosphere soaks up millimeter waves, and not much gets through. I gather that the best place on earth to do millimeter-wave astronomy is the South Pole, which has exceptionally dry air and dark skies. In the past decade half a dozen millimeter-wave telescopes have been set up at the Amundsen-Scott South Pole Station. E-mode polarization was first detected in 2002 by one of these instruments, called DASI. BICEP2 was installed in 2009 to look for B-mode signals.

The BICEP2 telescope was operated at the Amundsen-Scott South Pole Station, on a platform located some 800 meters from the geographic pole. The telescope is inside the conical shield on the roof of the building; the shield blocks stray radiation from nearby sources. The instrument at right is the South Pole Telescope, which also operates at millimeter wavelengths. Just visible in the background at left is the IceCube neutrino telescope. Photograph by Steffen Richter, who spent three “nights” at the station (each night being six months long). Reproduced courtesy BICEP2 collaboration.

The BICEP2 instrument is a curious hybrid of optical and radio technologies. The telescope is a refractor, with objective and eyepiece lenses—a design known to Galileo. But the lenses are not ground from optical glass; they are molded of high-density polyethylene (the material used to make plastic milk jugs). And the detector in the focal plane of the instrument is not the kind of imaging device seen in optical instruments; it is an array of tiny crossed antennas, etched into silicon wafers. The whole instrument is evacuated and cooled to liquid helium temperature. The detector is colder still: about 0.25 Kelvin.

The telescope was installed at the South Pole in December of 2009 and recorded data almost continuously until December of 2012. (It was shut down briefly every three days to replenish the helium and brush off any snow, and there were slightly longer interruptions in summer.) Most of the observing time was spent repeatedly scanning the same patch of sky, an area known as the Southern Hole, which is unusually free of “nearby” dust—that is, schmutz from our own galaxy. In a sense, the entire three-year observing campaign was an effort to take a single picture of that patch of sky. Here it is:

Polarization maps of the target area for BICEP2 reveal both E-mode and B-mode signals. E-mode polarization had been seen in earlier observations, but this is the first evidence of B-mode fluctuations, thought to result from gravitational waves in the inflationary era. Note that the intensity scale at right is six times larger for the E-mode map. Graphic reproduced courtesy BICEP2 collaboration.

The important features of these vector maps are the pinwheels in the B signal panel. They are interpreted as the telltale remnant of gravitational waves.

The BICEP2 group calculates that the B-mode signal is statistically significant at the 5σ level, which pretty much rules out the possibility of being fooled by mere random noise. More worrisome is the risk of some systematic error, such as a bias in the instrument, or polarization caused by non-cosmological sources, such as galactic dust. Kovac devoted much of his Harvard talk to these issues, and there is even more detailed discussion in two papers released on the BICEP web site. As far as I can tell, the checks for errors were very thorough and the evidence is very solid. In any case, we won’t have to wait another 30 years to see the result confirmed (or not). Another telescope, the Keck Array, is taking data now at the South Pole, and BICEP3 is being readied for installation.

In case it’s not obvious already, let me say that I’m really wowed by this story. I think it’s a big deal. The discovery of the CMB in 1964 transformed the big bang model from a plausible possibility to a prevailing theory, and the discovery of B-mode polarization will surely do the same for inflation.

To me it’s amazing that we can have such detailed knowledge of our most remote origins. A hundred years ago we knew nothing about the early universe. We didn’t even know that it had a beginning. We didn’t know how big it was or what it was made of. We had none of the conceptual tools that would prove essential to making sense of cosmic evolution: quantum field theory, general relativity, the nuclear chemistry that makes the stars shine. In one quick century it has all come together.

Or not quite all. We still don’t know what the universe is made of. Most of it is apparently “dark matter” and “dark energy” that we have yet to identify; all the stuff we can see is a minor constituent. And we still don’t have a satisfactory quantum field theory for gravity. Yet, somehow, we can reconstruct the dynamics of the exotic, singular event that created this place, and supply enough quantitative detail to detect the evidence 13.8 billion years later. It’s magnificent. It’s ridiculous.

Posted in physics, science | 3 Comments

## Net pests (not)

Last night I posted this note:

Since Sunday afternoon bit-player.org has been under some sort of mysterious DDoS attack, with a rotating suite of IP numbers repeatedly downloading the same PDF files several times a second. For the time being, I’ve taken most PDFs offline. If you urgently need something and you get a permission-denied error, please send me an email. Sorry for the inconvenience. Back soon, I hope.

Turns out it was not net pests. It was my 15 minutes of fame. A link to an old story of mine found its way to the front page of Hacker News, and I misinterpreted the resulting net traffic jam. A couple of hours later (and after a couple of messages from helpful HN readers), I realized there was nothing malicious going on, and I put the files back on line.

I’m an amateur in all things, but I’m especially inept as a sysadmin. Looking at the server logs this morning, I’m still unsure of exactly what I’m seeing, but I understand a little more than I did 12 hours ago.

What set me off in the first place was seeing long lists of requests like these, all from the same IP number:

```
16/Mar/2014:14:17:25 "GET /AmSci-2005-11-Hayes-NewOrleans.pdf HTTP/1.1" 200 79640
16/Mar/2014:14:17:26 "GET /AmSci-2005-11-Hayes-NewOrleans.pdf HTTP/1.1" 206 65886
16/Mar/2014:14:17:26 "GET /AmSci-2005-11-Hayes-NewOrleans.pdf HTTP/1.1" 206 8781
16/Mar/2014:14:17:27 "GET /AmSci-2005-11-Hayes-NewOrleans.pdf HTTP/1.1" 206 65891
16/Mar/2014:14:17:27 "GET /AmSci-2005-11-Hayes-NewOrleans.pdf HTTP/1.1" 206 65892
16/Mar/2014:14:17:27 "GET /AmSci-2005-11-Hayes-NewOrleans.pdf HTTP/1.1" 206 65893
```

Good grief, I thought: Somebody is downloading the same PDF file six times within three seconds. I failed to notice (or appreciate the significance of) the response codes near the end of each line. The “200” code on the first line is the normal HTTP “OK” signal, but the “206” on the next five lines signifies “partial content.” What’s going on here—if I now understand correctly—is not one person downloading the same file six times; it’s one person downloading a file in six pieces. (The size of the file in question is 270,570 bytes. The byte counts at the ends of the five 206 lines above add up to 272,343. I can’t account for the discrepancy; I’m still an amateur.)
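A few lines of Python make the bookkeeping explicit (a sketch, using the six quoted log lines as literal data): summing only the 206 responses reproduces the 272,343 figure.

```python
import re

log_lines = """\
16/Mar/2014:14:17:25 "GET /AmSci-2005-11-Hayes-NewOrleans.pdf HTTP/1.1" 200 79640
16/Mar/2014:14:17:26 "GET /AmSci-2005-11-Hayes-NewOrleans.pdf HTTP/1.1" 206 65886
16/Mar/2014:14:17:26 "GET /AmSci-2005-11-Hayes-NewOrleans.pdf HTTP/1.1" 206 8781
16/Mar/2014:14:17:27 "GET /AmSci-2005-11-Hayes-NewOrleans.pdf HTTP/1.1" 206 65891
16/Mar/2014:14:17:27 "GET /AmSci-2005-11-Hayes-NewOrleans.pdf HTTP/1.1" 206 65892
16/Mar/2014:14:17:27 "GET /AmSci-2005-11-Hayes-NewOrleans.pdf HTTP/1.1" 206 65893
"""

# Pull (status, bytes) off the end of each line; total the partial-content pieces.
pattern = re.compile(r'" (\d{3}) (\d+)$')
partial = sum(int(size)
              for status, size in (pattern.search(line).groups()
                                   for line in log_lines.splitlines())
              if status == "206")
print(partial)  # 272343
```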

The file mentioned in the six requests above is not the one linked to by Hacker News. That’s another reason for my confusion: I was seeing wholesale downloading of hundreds of different files. Apparently, when some people find an item that interests them on a web site, they `wget -r` the whole site. And I guess I understand why: If you don’t grab it immediately, the skittish site owner is likely to panic and take it offline. (But wouldn’t it be polite to throttle the request rate?)

The story that started all this fuss is a bit of whimsy I wrote almost 30 years ago for Computer Language, a magazine long defunct. In the past 18 hours the PDF has been successfully downloaded almost 12,000 times, which may be greater than the circulation of Computer Language.

Posted in meta | 4 Comments

## Notes from the JMM

The annual Joint Mathematics Meetings are one of those vast smörgåsbord meals where you can’t possibly eat everything. I managed to sample 1 percent of the menu this year. By my count, the four days of meetings included 2,743 scheduled events (mostly talks, but also panel discussions, cocktail parties, poetry readings, film showings, and other diversions); I attended 28 of them. There are $\binom{2743}{28} \approx 5 \times 10^{66}$ ways to choose such a sampling, so it’s unlikely any of the other 6,424 participants experienced exactly the same meeting I did.

Technicalities: I came up with the number 2,743 by scraping the online program and counting all `<li>` tags that do not have a `<ul>` as one of their children. Counting links to abstracts gives an estimate for the number of talks alone: 2,569. Also, I should note that the binomial calculation exaggerates a little: Some of those $10^{66}$ sets of 28 events are forbidden because of conflicts between simultaneous sessions.
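The binomial estimate is easy to verify; Python 3.8+ computes the count exactly with `math.comb`:

```python
from math import comb

# Exact count of ways to choose 28 events out of 2,743,
# ignoring conflicts between simultaneous sessions.
ways = comb(2743, 28)
print(f"{ways:.2e}")  # about 5e+66, matching the back-of-envelope figure
```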

As a journalist and a generalist, I try to maintain a balanced mathematical diet, but this year I pigged out on number theory. A third of the talks I heard were in that area. There’s been a lot of buzz lately about progress toward a proof of the twin-primes conjecture, starting with a surprise announcement last spring by Yitang Zhang of the University of New Hampshire. Twin primes, such as $$5$$ and $$7$$ or $$101111$$ and $$101113$$, are primes as closely spaced as two odd primes can be. The twin-primes conjecture says there are infinitely many of these pairs. Zhang didn’t prove there are infinitely many primes $$p$$ and $$q$$ with $$q-p=2$$, but he came really close. He proved there are infinitely many primes $$p$$ and $$q$$ with $$q-p \le 70\,000\,000$$. A few months later James Maynard, a postdoc at the Université de Montréal, brought the constant down from $$70\,000\,000$$ to $$600$$. (Maynard was inspired by Zhang’s work but used different methods. Terry Tao got the same result at the same time.) Then a Polymath collaboration organized by Tao made further progress. At last report, the minimum gap for which we can say with certainty there are infinitely many primes is 270.
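For the empirically minded, a toy sieve (nothing like the analytic machinery of Zhang's proof) confirms that the pair quoted above really is a twin pair:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, flag in enumerate(sieve) if flag]

# Consecutive primes that differ by exactly 2 are twin pairs.
ps = primes_up_to(102_000)
twins = [(p, q) for p, q in zip(ps, ps[1:]) if q - p == 2]

print(twins[:3])                   # [(3, 5), (5, 7), (11, 13)]
print((101111, 101113) in twins)   # True
```

The conjecture, of course, is about whether this list ever stops growing, and no amount of sieving can settle that.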

For more about the mathematics behind this work, I recommend two excellent articles by Erica Klarreich, published in the Simons Foundation’s Quanta magazine:

I’d like to add a few thoughts about the human side of Zhang’s story, as best I can piece that story together from published sources.

According to a profile by Virginia Stuart in UNH Magazine, Zhang was born in China in 1955 and was a boy when the Great Proletarian Cultural Revolution and the Red Guards movement convulsed the nation. With the rest of his family he was sent to the countryside for “re-education” through labor. The result of re-education was not much education, but he made up for lost time and in 1978 (at age 23) entered Peking University, earning bachelor’s and master’s degrees in mathematics.

In 1984 S. S. Chern took several American mathematicians to Beijing, where they spent a summer working with Chinese graduate students. Zhang was one of the students, and the following year he came to the U.S. to study at Purdue with T. T. Moh, who had been a member of the Chern delegation. Moh has recently written a short memoir on Zhang’s period at Purdue, from which I quote at length:

When he arrived, we had a cordial talk. Yitang expressed his desire to work in the field of Algebraic Geometry…. Yitang also mentioned that he wanted to study under my guidance…. I was surprised by Yitang’s next request of working on the Jacobian conjecture as his thesis topic. I felt it was odd to select such a difficult task.

The Jacobian conjecture is one of Steve Smale’s “problems for the 21st century,” which Smale describes as follows:

Suppose $$f : \mathbb{C}^n \to \mathbb{C}^n$$ is a polynomial map with the property that the derivative at each point is non-singular. Then must $$f$$ be one-to-one?

I can say nothing more about this question except that it remains open, and that Moh is the author of a long 1983 paper with computational results for all two-variable polynomials of degree less than 100. Moh continues:

Yitang spent all of his free time thinking of mathematics. After years, Yitang started to believe that he might have gotten a solution, one independent of my paper, to the Jacobian conjecture. As a gatekeeper of the palace of the Jacobian conjecture, I did my duty of examining every claim presented to me and denied the entrance of anybody (even if the claim has nothing to do with my work) if the proof was invalid. “Maybe the Jacobian conjecture is a problem for the future”, I thought.

By 1991 Zhang was approaching the seven-year limit for Ph.D. candidates at Purdue. He wrote a brief thesis on a specialization of the Jacobian conjecture, which earned him his degree. Then he took off. Moh writes:

Sometimes I regreted not fixing him a job. But really, who could tell whether it was a good decision or not? Maybe it was his destiny to endure and turn out to be great…. When I looked into his eyes, I found a disturbing soul, a burning bush, an explorer who wanted to reach the north pole, a mountaineer who determined to scale Mt. Everest, and a traveler who would brave thunders and lightnings to reach his destination. Yitang never came back to me requesting recommendation letters. Apparently, he did not seek a job.

An interview with Zhang, conducted by Michael Segal and published in Nautilus, puts a somewhat different spin on the matter of recommendations. In the interview Zhang remarks: “During that period it was difficult to find a job in academics. That was a job market problem. Also, my advisor did not write me letters of recommendation.”

Zhang spent most of the next decade without an academic position. Here’s the Wikipedia account of that period: “Prior to getting back to academia, he worked for several years as an accountant and a delivery worker for a New York City restaurant. He also worked in a motel in Kentucky and in a Subway sandwich shop.” The article by Virginia Stuart makes clear that the job as an accountant and the job at Subway were the same job. “In a pinch, he would help out behind the counter, a fact that has been exaggerated in the press and has inspired online banter about a mathematical genius making sandwiches for a living.”

In 1999 Zhang was hired as a lecturer in the department of mathematics and statistics at UNH. I haven’t been able to learn exactly how this came about. The chairman of the department at the time was Kenneth Appel, who had his own history of breakthrough proofs—in his case, the four-color theorem. Appel died just a few days after learning of Zhang’s triumph.

UNH saved Zhang from a life of making footlong sandwiches, but for 15 years his job was a nontenured lectureship, on a year-to-year contract. He apparently taught a lot of first-year calculus—and got rave reviews on ratemyprofessors.com. One review says, in its entirety: “HOT!!!” The UNH web site still lists him as a lecturer, but news reports this month say he has been promoted to full professor.

At the JMM, Andrew Granville of Montreal gave a big tutorial talk summing up the recent and ongoing work on bounded gaps between primes. But the main event happened a day earlier, when both Maynard and Zhang spoke in the same session. The room was jammed. Maynard went first, and gave a lucid and precise account of his own work. When he finished, Zhang stepped forward with a sheaf of transparencies, which he’d been scribbling on with a Sharpie pen during some of the earlier talks in the session. There was a brief delay while the video projector was set aside and the overhead projector was moved into place and switched on. Still more people flowed into the room from the doors at the rear. (I had arrived early to get a seat up front.) The hubbub continued until Granville rose and bellowed out “Quiet!” Then Zhang turned to face the audience and announced: “I know you all expect me to talk about twin primes, as the abstract says, but I’ve decided to talk about something else.” I was mildly shocked. I don’t think I’ve ever seen anyone pull such a switcheroo at a major conference.

The JMM still provides an overhead projector in every lecture room. Among the talks I attended this year, Zhang was the only speaker who made use of this quaint device.

“Something else” turned out to be the Goldbach conjecture—the assertion that every even integer greater than 2 can be expressed as the sum of two prime numbers. Zhang wasn’t claiming to have proved the Goldbach conjecture; he wasn’t even laying out a clear pathway to a proof; but he did vaguely hint that he has some idea of how to proceed. We’ll see, I guess. For the moment he has a healthy balance of credibility.

An hour after he gave his talk, Zhang was awarded a share of the Frank Nelson Cole Prize in Number Theory.

Nonmathematical geographic/autobiographic/photographic addendum. The JMM was held in Baltimore this year, a city I lived in from age 0 to age 4 and again from age 20 through 23. On this trip I spent my first few hours in town strolling around taking pictures, and belatedly discovered that my camera was set to one of the cute “illustration” modes that modern digital technology has to offer. I was shocked at this switcheroo, too, but after a few days the cartoonish pictures grew on me.

The image above shows the Morris A. Mechanic Theater, which was the centerpiece of an urban redevelopment project in the early 1970s, when I was a young man about town. The theater was supposed to bring Broadway tryouts and other forms of stagecraft back to the heart of the city. It closed in 2004, and now even the posters promising another phase of redevelopment are looking tattered.

As for the Hotel Junker/Envy Hotel, below, I didn’t stay there, but it was just around the corner. Echt Baltimore.

Posted in mathematics | 4 Comments

## The rhodometer

Years ago I lived in a house with a rhododendron outside the kitchen window, which served as our family thermometer on winter mornings. If the leaves were tightly rolled, you bundled up in gloves and scarf and ski cap; if not, just a jacket would do.

Rhododendron leaves with the temperature at 35º Fahrenheit (left) and 17º Fahrenheit (right).

Where I live now, we have a rhododendron by the front porch, and I’ve been keeping an eye on it for the past two winters. The qualitative relation between temperature and leaf-rolling is just as I remembered it, but I’ve been trying to add some quantitative precision. Can a rhododendron bush serve as a reliable temperature-measuring instrument? To find out, I bought a set of calipers, and I’ve been making daily measurements whenever the early-morning temperature is near or below freezing.

Here’s what the data look like (right). Each dot gives the diameter of a leaf, measured at its widest point, as a function of ambient temperature. The four colors distinguish measurements of different individual leaves. Last year I followed a single leaf (green dots); this winter I’ve been tracking three leaves I marked by wrapping twist-ties around the stems (tan, blue, magenta). If you’d like a look at the raw data, grab the csv file.

The overall shape of the relation is clear from the graph: Measured diameter increases monotonically with temperature, as expected. But there’s a fair amount of noise, as well as a sharp kink in the curve. If I had continued measurements to higher temperatures, we would surely see a second bend in the trend line. The full curve must be S-shaped, because the diameter of a rolled leaf has a finite maximum and minimum. No matter how high the temperature, the diameter can’t be greater than the width of the flattened leaf, and no matter how low the temperature, the diameter remains bounded away from zero.

Given the sigmoid shape, the right model for this data set is presumably a logistic curve, but I have done something simpler. I’ve partitioned the data into cold and warm subsets and fitted two linear functions piecewise. In the figure at left the data points are recolored to indicate which points go with which line. The crossover between the two functions is at 24.8 degrees, which maximizes the total R² goodness-of-fit for the two lines.
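The fitting procedure is easy to reproduce. Here’s a sketch in Python, using synthetic data with a built-in kink rather than my actual measurements; it scans every possible breakpoint and keeps the one that maximizes the summed R² of the two least-squares fits (function names are my own, purely for illustration):

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept (assumes the x values
    within a segment are not all identical)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def r_squared(x, y):
    """Goodness of fit of the least-squares line through (x, y)."""
    slope, intercept = linear_fit(x, y)
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    mean_y = sum(y) / len(y)
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1 - ss_res / ss_tot if ss_tot > 0 else 1.0

def best_breakpoint(temps, diams):
    """Try every split into cold and warm subsets; return the crossover
    temperature that maximizes the summed R^2 of the two line fits."""
    pairs = sorted(zip(temps, diams))
    t = [p[0] for p in pairs]
    d = [p[1] for p in pairs]
    best_r2, best_t = float("-inf"), None
    for i in range(2, len(t) - 1):      # at least two points per segment
        total = r_squared(t[:i], d[:i]) + r_squared(t[i:], d[i:])
        if total > best_r2:
            best_r2, best_t = total, (t[i - 1] + t[i]) / 2
    return best_t, best_r2

# Synthetic data with a deliberate kink at 25 degrees:
temps = list(range(10, 41))
diams = [5 + 0.1 * t if t <= 25 else 0.5 * t - 5 for t in temps]
```

On these made-up numbers the procedure recovers a crossover near 25 degrees; on the real data it lands at 24.8.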

The kink in the curve is quite sharp. It remains visible after a log transformation, or even log log. Perhaps there’s a real, physical discontinuity there. After all, it’s not unreasonable to suppose that the freezing point of rhododendron sap might be somewhere around 25°F—although I don’t actually know that a phase transition has anything to do with the behavior of the leaves.

As a practical matter, the nonlinearity is an inconvenience if we want to use the leaves as a thermometer. It’s probably best to regard the rhododendron as a simple binary indicator: Either it’s cold out, or it’s not.

My hiking friend Dinosaur has told me about a wilderness school for teenagers with a rule that the kids have to stay in camp whenever the temperature is below 16°F. When this rule was announced, the camp leader complained: “We’re in the wilderness; we don’t have a thermometer.” The reply was: “Just use the rhododendrons.” That’s a wonderful suggestion. However, even with careful measurements of the leaves, it might be hard to gauge the temperature accurately in the range between 10° and 20°. And I suspect that a hiking group without a thermometer also lacks vernier calipers.

In the course of my morning measuring-the-rhodo rituals, I became curious about the mechanism of this temperature-sensitive leaf rolling. An analogy that immediately comes to mind is the bimetallic strip, often used as a temperature sensor in thermostats and oven thermometers. The idea is to bond layers of two metals (typically brass and steel) that have different coefficients of thermal expansion. With heating or cooling, one side expands or contracts more than the other, and so the strip curls up.

Maybe the rhododendron leaf works this way too, with materials on the upper and lower surfaces that differ in thermal coefficient. But there are problems with this hypothesis. In the first place, a bimetallic strip curls end to end, not side to side. In other words, the axis of curvature is parallel to the shorter dimension; indeed, the device is fabricated as an elongated strip to avoid compound curvature, which could lead to buckling or other complex dynamics. If the same mechanism were at work in the rhododendron leaf, it would curl up from tip to stem, but in fact the curling axis is parallel to the long dimension of the leaf. It rolls up like the wrapper of a cigar. Another objection to the bimetal analogy is that really hot weather ought to cause curling in the opposite direction, with the underside of the leaf outermost. I’ve never seen that happen. Still another issue: The amount of curvature implies a huge difference in coefficient of expansion. In the photo above, a dissected leaf in near-zero-degree weather is rolled up tightly enough that opposite edges have passed each other once and are almost meeting for the second time.

The magnitude of the curling effect suggests there may be something more than just thermal expansion and contraction going on. Likewise the kink in the temperature-diameter curve hints at some discontinuous process. And the axis of curvature has me wondering about the possibility of a structural asymmetry—something that inhibits lengthwise curling and favors crosswise.

In the search for asymmetry, I pulled out the microscope. The images above show the upper (left) and lower (right) surfaces of a leaf at 30 times magnification. I see no network of crosswise contractile fibers. The only distinctively asymmetric structure is the leaf’s stout central vein—a continuation of the stem—but maybe that’s all we need. The central vein might be strong enough to inhibit lengthwise curling in the initial stages of the process, at temperatures near freezing, when the curling forces and the resulting curvature are slight. Later, the crosswise curling creates a tube, which offers mechanical resistance to lengthwise bending as the curling continues. But all this is just speculation.

A little casual Googling turns up lots of stuff on the folklore of rhododendrons as weather indicators, but the only scholarly discussion I’ve stumbled on is a 1990 article by Erik Tallak Nilsen of Virginia Polytechnic in Blacksburg (where lots of rhodos grow). Nilsen doesn’t have much to say about how the leaves curl; he’s interested in why. The widely prevalent theory—often stated as established fact—argues that leaf-rolling helps prevent water loss by covering the pores (called stomata) on the underside of the leaf. Nilsen doesn’t think much of this idea. He points out that the stomata close in cold weather, sealing off the moist internal milieu. And he backs up this assertion with his own experimental results.

Among several alternative explanations, Nilsen favors the following idea: Rhododendrons live in the understory of hardwood forests, where they are shaded in summer but exposed to full sunlight after the trees drop their leaves in winter. The photosynthetic membranes of shade plants are susceptible to damage from ultraviolet light, especially in freezing weather. Rolling the leaves reduces the exposed area.

Posted in modern life, science | 2 Comments

## My First 1000000 Years

Límites
Hay una línea de Verlaine que no volveré a recordar,
Hay una calle próxima que está vedada a mis pasos,
Hay un espejo que me ha visto por última vez,
Hay una puerta que he cerrado hasta el fin del mundo.
Entre los libros de mi biblioteca (estoy viéndolos)
Hay alguno que ya nunca abriré.
Este verano cumpliré cincuenta años:
La muerte me desgasta, incesante.
—Jorge Luis Borges, 1923


Passing half a century is a landmark for those who count on 10 fingers, but as a bit-player I celebrate powers of 2. Today I turn $$2^6$$, which is a mega-milestone: $$1000000_2$$ years. It’s a special occasion in several ways. Need I point out that it’s the last power of 2 I’ll ever see? Also, other than 0 and 1, it’s the only sixth power I’ll visit in my lifespan, and hence it is the only age that’s both a square and a cube.

As a young man—well shy of 50—I admired that poem by Borges, but as I grow older I find its mood of lugubrious foreboding less attractive. Yes, it’s true: There is a line of Borges I’ll never read again. But if my time is so limited, I shouldn’t be spending too much of it stuck in a tight loop, rereading the same lamentations over and over. Let the last time pass; I’ll move on to something new—something I haven’t yet done for the first time.

Life is all too full of never-to-be-repeated moments, right from the outset. Even the silly numerology of birthdays makes this apparent. Long before I reached my last power of 2, I celebrated my last Bell birthday (52), my last Catalan birthday (42), my last perfect birthday (28), my last factorial birthday (24), my last pair of birthdays satisfying $$x^m - y^n = 1$$ ($$2^3 = 8, 3^2 = 9$$). Never again will my age be an even prime number, a multiplicative identity, an additive identity. But I can live with that!

And I still have a few treats to look forward to. In a couple of years I come to my next triangular birthday—and that one may not be my last. Then of course there’s $$3^4 = 9^2$$. At the end of my eighties there’s a Fibonacci waiting for me, if I can make it. And, nearer at hand, I have marked my calendar for May 22, 2018, when I will celebrate my 25,000th spin around the earth’s axis. Maybe I should try to spend the day in orbit.
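All this birthday arithmetic is easy to check mechanically. Here’s a small illustrative script (my own sketch, nothing more) that confirms that 64 is the only sixth power in a normal lifespan, lists the triangular birthdays still ahead of it, and finds the Fibonacci number lurking at the end of the eighties:

```python
def triangular_numbers(limit):
    """All triangular numbers k*(k+1)/2 below limit."""
    k, out = 1, []
    while k * (k + 1) // 2 < limit:
        out.append(k * (k + 1) // 2)
        k += 1
    return out

def fibonacci_numbers(limit):
    """All Fibonacci numbers below limit (1, 2, 3, 5, ...)."""
    a, b, out = 1, 2, [1]
    while b < limit:
        out.append(b)
        a, b = b, a + b
    return out

# The only sixth power between 2 and 100 -- a square and a cube at once:
sixth_powers = [n for n in range(2, 100) if round(n ** (1 / 6)) ** 6 == n]
# -> [64]

# Triangular birthdays still to come after 64:
upcoming_triangles = [t for t in triangular_numbers(100) if t > 64]
# -> [66, 78, 91]

# The Fibonacci waiting at the end of the eighties:
fibs_in_eighties = [f for f in fibonacci_numbers(100) if 80 <= f < 90]
# -> [89]
```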

Posted in mathematics, modern life | 7 Comments

## Boldface names

Strolling across Killian Court, the only sizeable patch of public grass on the MIT campus, I glanced up at the entablature pictured above. Chiseled into the marble façade are the names of eleven celestial mechanics: astronomers and mathematicians who figured out how the solar system works. I could attach identities to nine of those names—or thirteen of them if you count all the astronomical Herschels (William, Caroline, John, Alexander). Two names stumped me. Do you know who Adams was? How about Hill?

In this age of Google and Wikipedia, it doesn’t take long to fill in the blanks. But tracking down Hill did require two searches because my first source—a 1922 article in The Tech, the MIT student newspaper—omitted that name. Maybe the Tech author was stumped, too.

Apart from Newton, who stands alone, the names are in birth order (assuming that Herschel is either Caroline or John). It’s not really surprising that the two names I didn’t know are among the later ones. When the buildings around Killian Court were erected in 1916, Galileo and Kepler had been heroes of science for more than 300 years; they had already passed the test of time. But Adams, Hill, and Poincaré were figures of the very recent past. Carving their names in stone amounted to making a bet on the judgment of posterity.

In the case of Henri Poincaré, the bet paid off. His reputation is soaring even higher now than it was at his death in 1912. (There’s a hefty recent biography by Jeremy Gray that I mean to read one of these days.)

Adams is a name that I suppose I should have recognized, because I’ve read his story several times, going back to childhood. But the bell didn’t ring. John Couch Adams (1819–1892) was one of two astronomers who predicted the existence of a planet beyond Uranus; the other was Urbain Le Verrier, who published first and whose work led directly to the discovery of Neptune. In choosing Adams and excluding Le Verrier, MIT was taking sides in a priority dispute. Some recent historical scholarship suggests they took the wrong side.

As for George William Hill (1838–1914), the MacTutor biography suggests he was an interesting character, and I am glad to have finally made his acquaintance. Apparently he spent his entire career—a lifetime, really—wrestling in solitude with the three-body problem. He didn’t win the match.

I don’t know enough about the history of celestial mechanics to say with any authority that Adams and Hill don’t merit a place on this list. On the other hand, I can state with firm conviction that there’s a scandalous omission from the MIT pantheon. We have Euler (or EVLER, in monumental capitals), but where oh where is Gauss? By modern measures he ranks among the very greatest mathematicians, and he first achieved fame for work in celestial mechanics—predicting the position of the planetoid Ceres. Yet he is not listed here, nor does his name appear on any of the nine other honor-roll tablets surrounding Killian Court.

(Another conspicuous absence in the photo above has an easy explanation. Along with Tycho, Galileo and Kepler, surely one should mention Copernicus, no? In fact he has a place in the court of honor. Like Newton, Copernicus is on the A-list. He gets sole possession of an architrave, with his name writ in double-size letters.)

The MIT archives have published a few of the letters from MIT faculty suggesting names for these tablets. I detect no evidence that any of the nominators were bearing in mind that their selections would still be on exhibit a century hence. No one suggested leaving a few blank spaces where future generations might express their own preferences. I guess that would go against the whole spirit of stone inscription, where the idea is to make a commitment, stake out a position, proclaim eternal truths. You just have to be bold—and foolish.

If the same exercise were carried out today, the roster of names would be different. For one thing, we’d want to make room for people who were still living—or not yet born—in 1916: Einstein, Hubble, Turing, Gödel, Bohr, Dirac, Crick…. And it’s not just the names that would change; the categories have shifted, too. Celestial mechanics does not figure quite so prominently in modern thought as it once did, and the close ties between mathematics and astronomy have frayed somewhat. New fields are being cultivated, such as computation and information theory, as well as nuclear and quantum physics.

My question is: Would the faculty today acknowledge that the measure of greatness is certain to change again over the next 100 years, in ways we can’t predict? Maybe they would be shrewd enough to choose some medium more fluid than stone—perhaps a plasma display with a Twitter feed showing how various scientists are trending, moment by moment.

In the meantime, nature has overgrown and obscured some of those early 20th century judgments. Trees planted 75 years ago around the perimeter of Killian Court are now taller than the buildings, with the result that some of the panels can be seen only through the bare boughs of winter. In the photo below another list of names is glimpsed through an opening in the leafy canopy. The thread binding together the members of this group seems to be mechanical engineering and invention. Could you pass a quiz on those dozen names?

Posted in mathematics, modern life, science | 1 Comment

## Try a little tendrilness

I was in Florida for the New Horizons in Science briefings of the Council for the Advancement of Science Writing. On my way home I took a detour to hike a few miles through the Green Swamp, west of Orlando. The place lived up to its name: There’s a lot of chlorophyll out there. There are plants, and plants that live on the plants, and plants that live on the plants that live on the plants…. I had the feeling that I had better keep moving or the vines and epiphytes might grow over me, too.

Among the epiphytes, the one everybody knows is Spanish moss (Tillandsia usneoides). In the photo above the long gray beards of Spanish moss droop from the limbs of a roadside live oak tree. Spanish moss is not really moss. (It’s not Spanish, either.) It’s an angiosperm—a flowering plant—in the bromeliad family. As parasites go, it seems to be reasonably well-behaved, draping itself from the major limbs of a tree but seldom blocking the foliage. It has no roots and takes nothing from the host but mechanical support.

The air potato, left and above, is a relative of the yam that propagates by means of “aerial tubers.”

Vines are another matter. For these guys it’s all about upward mobility, and they’ll step on anybody to get up into the sunshine zone. The bright green leaves in the foreground of the image above belong to a vine called the air potato (Dioscorea bulbifera). Strictly speaking it’s a yam rather than a potato, but it does produce above-ground spuds (see the detail above at right). A student I met at the CASW conference in Gainesville told me there’s a spring-loaded mechanism that launches the potatoes into the treetops. I’ve not found any confirmation of that story in published sources; perhaps the student was having some fun with a city boy from up north. D. bulbifera is native to Asia and Africa and is not well liked in Florida; the student also told me about air potato roundups in Gainesville, and that part of the story is definitely true.

The inch-thick, cable-like vine in the photo above is not the stem of the air potato plant. Rather, it is an older, woody vine—probably poison ivy—that was once attached to the tree trunk at right. (The scar is still visible on the bark.) The air potato vine is slender, green and herbaceous. It twines counterclockwise around the larger vine, thereby parasitizing another parasite.

The plant in the photo above is a passionflower vine, Passiflora suberosa. I first noticed this species infiltrating a hedge on the University of Florida campus in Gainesville; in the Green Swamp I found it everywhere I looked. Like the air potato, the passionflower vine makes a living by climbing toward the light on the backs of other plants, but it goes about it in a different way. Instead of spiralling around a stem, it puts out tendrils that lash themselves to whatever supporting structure they happen to touch. The photo above shows several new tendrils at the growing tip of a vine. Note that a few of them have gotten tangled up among themselves; I’ll have more to say about this phenomenon.

In the photo below, a P. suberosa vine has successfully seized upon a twig of a nearby shrub. The tip of the tendril has tied itself into a knot resembling a clove hitch, and the rest of the tendril has become kinky, reducing its length and drawing the vine closer to the host plant. Other nearby tendrils may also latch onto the twig, strengthening the attachment.

Before we continue with this subtropical expedition, please allow me a digression. Last year Edward O. Wilson published a grumpy essay arguing that mathematics is of little use to many scientists, especially in Wilson’s own field of biology. Members of the mathematical community took umbrage, of course. (The Notices of the American Mathematical Society reprinted Wilson’s essay with a rebuttal by Edward Frenkel.) As a nonmathematician and nonbiologist I don’t have standing in this contest, but I have an opinion all the same. It strikes me that learning mathematics not only supplies tools for getting answers but also cultivates habits of mind that can lead to interesting questions. Take this equation:

$$z = \frac{a + b}{a - b}$$

If you are the least bit math-minded, you are already asking yourself, “What about the case where $$a = b$$?” Similar issues come up in computer programming. When you write a procedure to select two items at random from a sequence, you need to think about the possibility that the two items are in fact the same item.
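In Python, for instance, calling `random.choice` twice can silently hand back the same element. A minimal sketch of one standard remedy (`pick_two_distinct` is my own name, not a library function):

```python
import random

def pick_two_distinct(seq):
    """Select items at two *distinct* positions, avoiding the a == b trap
    that two independent calls to random.choice would allow."""
    if len(seq) < 2:
        raise ValueError("need at least two items")
    i = random.randrange(len(seq))
    j = random.randrange(len(seq) - 1)
    if j >= i:          # skip over position i so j can never equal it
        j += 1
    return seq[i], seq[j]
```

The standard library’s `random.sample(seq, 2)` does the same job; the point is that the $$a = b$$ case has to be designed around, not ignored.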

What does all this have to do with Florida vines and tendrils? Let’s plunge deeper into the jungle.

I was walking a section of the General James A. Van Fleet State Trail, a former rail line that cuts through the swamp straight as a laser beam. (It was once part of the Seaboard Air Line Railroad—“Air Line” meaning “as the crow flies.”) Along the way, I stopped to admire the occasional butterfly. I saw birds, too, but I didn’t get the camera out in time. I spoke with other walkers, and I dodged bicycles. I wandered off the trail, got my feet wet, picked up a cargo of burs in my pant cuffs.

Left: Gulf fritillary (Agraulis vanillae). Right: zebra heliconian (Heliconius charitonius). As it happens, both of these butterflies spend their caterpillarhood munching on P. suberosa.

All the while I was puzzling over that vine whose modus vivendi is to send up a shoot, wave around in the breeze, and blindly grab hold of the first thing it touches. What bothered me about this strategy for cadging a free ride to the forest canopy was the hazard of grabbing hold of yourself—or of another vine of the same species, just as wimpy as you are. This is a case where if $$a$$ leans on $$b$$ for support, it’s important that $$a \ne b$$. To put it another way, self-reliance is not a virtue when you’re unreliable.

Young P. suberosa vines tend to sprout in fairly dense clusters; I saw lots of them with stems a meter long and only a few centimeters apart at ground level. Very likely the stems are connected by roots or runners and are all parts of a single individual plant, or a clone. For many of the shoots, their closest neighbors were other P. suberosa vines. Under the circumstances, the likeliest outcome seemed to be that all the vines would glom onto one another and collapse in a tangled heap.

I wondered if the plant might have some means of self-recognition that prevents or inhibits such an outcome. Animals put a lot of effort into identifying individuals, relatives, and conspecifics. We have both nervous systems and immune systems for the purpose. Why couldn’t plants do the same thing? All it would take is a substance in the cuticle of the stem that inhibits the grabbing action of the tendrils.

I can’t rule out the existence of such a mechanism. However, after some more poking around in the underbrush, I found evidence that the vines do indeed attack their own kind. In the photo below, two vines cross, and one of them has seized hold of a leaf stem on the other. Several dried and broken tendrils suggest this was not the first encounter between the two vines. I found other examples where three or more P. suberosa vines were all lashed together.

Time for a new theory, eh? One possibility is that I’m just plain wrong about the futility of vines clinging to one another. Each vine has many tendrils, and each tendril has its own chance of finding a ticket to the sky. Maybe all the vines in a connected cluster can benefit if any one of them hits the jackpot. That is, if one member finds a path upward, all the other members can follow along as well. That would totally alter the calculus of fitness. In the simplest model we might assume that each vine makes $$n$$ contacts with other plants, and each contact has probability $$\alpha$$ of connecting to an upward path. Further assume that all of the probabilities are independent. Then the probability that everyone gets to heaven is $$1 - (1 - \alpha)^n$$, which converges to 1 as $$n$$ increases, no matter how small $$\alpha$$ might be.
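A quick Monte Carlo check of that formula (a sketch with made-up parameters, not a model of any real vine):

```python
import random

def cluster_escape_prob(n, alpha, trials=200_000):
    """Monte Carlo estimate of the chance that at least one of n
    independent contacts, each succeeding with probability alpha,
    finds an upward path."""
    hits = sum(
        any(random.random() < alpha for _ in range(n))
        for _ in range(trials)
    )
    return hits / trials
```

With $$n = 5$$ and $$\alpha = 0.2$$ the formula gives $$1 - 0.8^5 \approx 0.672$$, and the simulated estimate lands close by.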

This scheme may well explain what’s going on with P. suberosa, but I’m not entirely satisfied with it. In the first place, the assumption that all events are independent is surely bogus. Plants are not like atoms in an ideal gas, where you might reasonably assume that any two atoms have the same probability of colliding. Plants have roots. They don’t roam at random.

Second, in the probability calculation above I have implicitly assumed that $$a \ne b$$, and this assumption is unjustified. A single vine can vainly try to support itself by attaching itself to itself. I submit my final photograph, documenting an act of botanical suicide in which the tendrils of a P. suberosa plant strangle one of the plant’s own leaves:

Posted in biology, mathematics, photography | 1 Comment

## The friendlier skies

It’s not the best picture I’ve ever snapped while climbing out of Logan Airport, but it’s the first one in years that I haven’t taken furtively, palming the camera to conceal it from the cabin crew. And what if the interference theory were right, and the emanations of iPods and Kindles and digital cameras could send us all plunging into Boston Harbor? How would I feel then?

This morning I was on an early flight out—apparently one of the first to adopt the new FAA rules allowing gate-to-gate electronics. The camera was my only way of celebrating this new freedom. I didn’t have anything like an iPod or a Kindle, and laptops are still forbidden below 10,000 feet, presumably because of their ballistic rather than their electronic hazards. (So I passed much of the takeoff and landing patterns reading in Rieffel and Polak’s Quantum Computing: A Gentle Introduction, which packs at least twice the wallop of a MacBook Air.)

Since I’m blogging window-seat pictures, I’ll add this one:

Mouseover to magnify.

About an hour into the flight I spotted a glorious glory—a halo of bright rainbow iridescence—tagging along with us on the cloud bank below. (Actually an inside-out rainbow: not ROY G BIV but VIB G YOR.)

Posted in modern life, photography | 1 Comment

## The keys to the keydom

Your security and privacy on the Internet (to the extent that such quaint notions still have meaning) depend on the difficulty of factoring certain large numbers. For example, take this 1,024-bit integer:

X = 123784517654557044204572030015647643260197571566202790882488143432336664289
53013160757127360381500856200680250007894557646372684708763884626821078230649
285613963510276802208436872101227718450860737080502115462982139570265498874808
3875440199841915223400907734192655713097895866822706517949507993107455319103401

The number printed above in squinty type is the product of two 512-bit prime factors. If you set out to find those factors, the project might well keep you busy for many megayears. But I can make the job much easier by giving you a second number of the same size:

Y = 139752806258570179719657334941265463008205415047073349942370461270597321020
717639292879992151626413610247750429267916230424010955054750502833517070395986
2897242371124108160005581486237854115688455171463034213840635250918248983182261
75234193815950597041627518140906384889218054867887058429444934835873139133193

Factoring both X and Y would appear to be twice as much work, but in fact you can do it lickety-split. On my laptop it took roughly 200 microseconds. From millions of years to millionths of a second—that’s quite a speedup!

There’s a trick, of course. Both X and Y are products of two large primes, but it so happens that one of the primes is a shared factor of both numbers. For finding that shared factor, we can rely on a very old, very famous, very simple and very efficient algorithm: Euclid’s algorithm for the greatest common divisor. In Python it looks like this:

def gcd(a, b):
    if b == 0:
        return a
    else:
        return gcd(b, a % b)


(The ‘%’ in line 5 is Python’s modulo or remainder operator.) When this function is applied to X and Y, the recursion is invoked 297 times before returning the common factor:

F = 1070467931937606706425630145948715022696962191248959648262850980092208031819
9635726117009340189103336170841315900354200725312700639146605265442630619090531

You don’t have to take my word for it that F divides both X and Y. Do the division: In that way you will also learn the co-factors of X and Y.

If X and Y were components of public keys in the RSA cryptosystem, their shared factor would create a huge hole in the security fence. And the problem is particularly insidious in that each of the two keys, when examined in isolation, looks perfectly sound; the weakness only becomes apparent when you have both members of the pair.

This potential vulnerability of factoring-based encryption methods has been known for decades, but it seemed there was no reason to worry because coincidentally shared factors are so utterly unlikely. A couple of weeks ago I heard an eye-opening talk by Nadia Heninger, a member of a group that has searched for such unlikely coincidences in the wild. They found 64,000 of them. Reason to worry.

Heninger and her colleagues polled every public IPv4 address in the known universe, requesting a connection on the ports commonly used for two secure communication protocols, TLS and SSH. For every address that responded to queries on those ports, they collected the server’s public encryption key, then closed the connection. Here I am going to discuss only the TLS servers with RSA keys; there were vulnerabilities in other cryptosystems as well, but the issues are slightly different.

Before telling the rest of this story, I have to pause here. For those of you in the born-digital generation, pinging every address on the Internet may sound like a routine walk around the block on a sunny afternoon, but I confess that I never would have dared to try anything so audacious. It’s like knocking on every door in America, or calling every possible telephone number—a task that’s not feasible for individuals of ordinary means, and that also seems unforgivably rude. But standards of feasibility and rudeness are different in the world of machine-to-machine communication. Computers don’t care if you make four billion hangup calls (although some system administrators might frown on the practice). And, after all, the encryption keys being collected are by definition public.

Back to Heninger’s story. They ran their scan of IP addresses from Amazon’s Elastic Compute Cloud service, where the data-collection phase of the project took a few days. Out of $$2^{32} \approx 4$$ billion addresses (less a few special-purpose or reserved areas) they found about 29 million servers accepting connections on the standard port for TLS, but only 12.8 million of those servers supplied public keys. Some 60 percent of the keys retrieved were not unique. Presumably, most of the duplicates are accounted for by organizations that have multiple servers all operating with the same cryptographic credentials, but there were also instances of apparently unaffiliated individuals sharing a key. This is rather like discovering that your house key also opens your neighbor’s front door. (And vice versa.)

After eliminating the duplicates, some 5.8 million distinct RSA keys needed to be tested for common factors. Even though Euclid’s GCD algorithm is highly efficient, running it on all possible pairings of keys would be a strain. There’s an ingenious shortcut, based on the observation that if $$Y$$ is relatively prime to each of $$X_1, X_2, \ldots, X_n$$, then it also has no factor in common with the product $$X_1 \times X_2 \times \dots \times X_n$$. Thus it’s possible to detect the presence of shared factors with just $$n$$ GCD operations, instead of $$n^2$$. A drawback of this approach is that the product of millions of RSA keys is a huge number, and intermediate results have to be swapped out to disk. Nevertheless, the processing was completed in an hour and a half on the Amazon cloud at a cost of \$5.

The output was a list of 64,081 compromised keys for TLS hosts, about 0.5 percent of all such keys collected. For obvious reasons, Heninger et al. are not publishing that list; they tried to contact the owners of vulnerable machines, and they offer a web lookup service where you can check to see if your key is on the list.

The good news is that none of the weak keys are guarding access to major web servers hosting bank accounts or medical records or stock markets or military installations. Most of them are found in embedded networked devices, such as routers and firewalls. That’s also the bad news. A programmer with malicious intent who can gain control of a well-placed router can make a lot of mischief.

Could the prevalence of common factors in RSA keys be explained as a product of pure bad luck? To answer this question we need to solve a birthday problem. The original version of this problem asks how many people you need to bring together before there’s a good chance that two or more of them will have the same birthday (assuming birthdays are distributed randomly over the 365 days of the year). An order-of-magnitude approximation is $$\sqrt{365}$$, or about 19. (The actual number is 23.) For the RSA variant of the problem, we ask how many 512-bit primes you need to generate—assuming you select them uniformly at random from the set of all such primes—before you have a good chance of seeing at least one prime twice. In this case we replace 365 with the number of 512-bit primes, which is in the neighborhood of $$10^{150}$$. Thus there’s scarcely any chance of a collision until the number of randomly generated primes approaches $$10^{75}$$. We’re only at $$10^{7}$$ so far. As Heninger said in her talk, we have enough 512-bit primes to assign a public encryption key to every atom in the universe, with little worry over possible duplicates.
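The arithmetic behind these estimates is easy to check with the standard birthday approximation, $$1 - e^{-n(n-1)/2N}$$ for $$n$$ draws from $$N$$ equally likely values, and with the prime number theorem for the count of 512-bit primes:

```python
import math

def collision_prob(n, N):
    # Birthday approximation: probability that n uniform draws
    # from N values include at least one repeat.
    return 1 - math.exp(-n * (n - 1) / (2 * N))

print(collision_prob(23, 365))           # about 0.5, the classic birthday result

# Primes near 2**512, estimated by the prime number theorem as x / ln(x):
n_primes = 2**512 / (512 * math.log(2))  # roughly 4e151
print(collision_prob(1e7, n_primes))     # vanishingly small
```

With $$10^7$$ primes drawn from a pool of roughly $$10^{151}$$, the collision probability is on the order of $$10^{-138}$$, which underflows to zero in floating point.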

According to this line of reasoning, it would be a colossal fluke to see even one duplicated RSA prime, and finding 64,000 of them is clear evidence that those primes are not being chosen uniformly at random. The blame apparently lies with pseudorandom number generators. It’s not that the algorithms are defective. In many cases, cryptographic keys are being generated immediately after a machine is booted, when it just can’t scrape together enough entropy to make a passable pseudorandom number.
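A toy simulation shows how this failure mode yields factorable keys. Suppose two devices boot with the same starved entropy pool and seed their generators identically: both derive the same first prime $$p$$, then a little entropy trickles in before the second prime is drawn, so the $$q$$s differ. (The seeds and the 64-bit primes here are illustrative only; real RSA primes are far larger.)

```python
import random
from math import gcd

def is_probable_prime(n, rounds=20):
    # Miller-Rabin primality test.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def next_prime(rng, bits=64):
    # Draw odd candidates with the top bit set until one is prime.
    while True:
        cand = rng.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(cand):
            return cand

# Both devices boot with the same entropy-starved seed...
p_a = next_prime(random.Random(12345))
p_b = next_prime(random.Random(12345))
assert p_a == p_b                  # ...so they share the prime p.

# By the time q is drawn, each device has gathered a little entropy.
q_a = next_prime(random.Random(67890))
q_b = next_prime(random.Random(54321))

# The moduli differ, but Euclid's algorithm exposes the shared factor.
print(gcd(p_a * q_a, p_b * q_b) == p_a)  # True
```

This is exactly the "Ps and Qs" pattern in the paper's title: a shared $$p$$ with distinct $$q$$s lets anyone holding both public moduli factor them with a single GCD.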

#### Sources:

Heninger gave her talk at a birthday party for Microsoft Research New England on October 9. Eventually, video may be available.

The paper describing the project is “Mining Your Ps and Qs: Detection of Widespread Weak Keys in Network Devices,” by Nadia Heninger, Zakir Durumeric, Eric Wustrow, and J. Alex Halderman, presented at the 2012 USENIX Security Symposium. Preprint.

At the Microsoft symposium Heninger also discussed a later study of other cryptographic weaknesses in certain smart cards. See “Factoring RSA keys from certified smart cards: Coppersmith in the wild,” by Daniel J. Bernstein, Yun-An Chang, Chen-Mou Cheng, Li-Ping Chou, Nadia Heninger, Tanja Lange, and Nicko van Someren. Preprint.

Arjen Lenstra and his colleagues have independently discovered and reported similar vulnerabilities. See “Ron was wrong, Whit is right,” by Arjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung, and Christophe Wachter. Preprint.

Open-source software developed by Zakir Durumeric, Eric Wustrow, and J. Alex Halderman at the University of Michigan for scanning large blocks of the public IP address space: ZMap. A descriptive paper from the 2013 USENIX Security Symposium.

The Michigan group’s web service for checking public keys against the collection of known factorable keys: factorable.net.
