Boldface names

Names in stone: Newton, Tycho Brahe, Galileo, Kepler, Euler, D'Alembert, Lagrange, Laplace, Herschel, Adams, Hill, Poincaré

Strolling across Killian Court, the only sizeable patch of public grass on the MIT campus, I glanced up at the entablature pictured above. Chiseled into the marble façade are the names of eleven celestial mechanics: astronomers and mathematicians who figured out how the solar system works. I could attach identities to nine of those names—or thirteen of them if you count all the astronomical Herschels (William, Caroline, John, Alexander). Two names stumped me. Do you know who Adams was? How about Hill?

In this age of Google and Wikipedia, it doesn’t take long to fill in the blanks. But tracking down Hill did require two searches because my first source—a 1922 article in The Tech, the MIT student newspaper—omitted that name. Maybe the Tech author was stumped, too.

Apart from Newton, who stands alone, the names are in birth order (assuming that Herschel is either Caroline or John). It’s not really surprising that the two names I didn’t know are among the later ones. When the buildings around Killian Court were erected in 1916, Galileo and Kepler had been heroes of science for more than 300 years; they had already passed the test of time. But Adams, Hill, and Poincaré were figures of the very recent past. Carving their names in stone amounted to making a bet on the judgment of posterity.

In the case of Henri Poincaré, the bet paid off. His reputation is soaring even higher now than it was at his death in 1912. (There’s a hefty recent biography by Jeremy Gray that I mean to read one of these days.)

Adams is a name that I suppose I should have recognized, because I’ve read his story several times, going back to childhood. But the bell didn’t ring. John Couch Adams (1819–1892) was one of two astronomers who predicted the existence of a planet beyond Uranus; the other was Urbain Le Verrier, who published first and whose work led directly to the discovery of Neptune. In choosing Adams and excluding Le Verrier, MIT was taking sides in a priority dispute. Some recent historical scholarship suggests they took the wrong side.

As for George William Hill (1838–1914), the MacTutor biography suggests he was an interesting character, and I am glad to have finally made his acquaintance. Apparently he spent his entire career—a lifetime, really—wrestling in solitude with the three-body problem. He didn’t win the match.

I don’t know enough about the history of celestial mechanics to say with any authority that Adams and Hill don’t merit a place on this list. On the other hand, I can state with firm conviction that there’s a scandalous omission from the MIT pantheon. We have Euler (or EVLER, in monumental capitals), but where oh where is Gauss? By modern measures he ranks among the very greatest mathematicians, and he first achieved fame for work in celestial mechanics—predicting the position of the planetoid Ceres. Yet he is not listed here, nor does his name appear on any of the nine other honor-roll tablets surrounding Killian Court.

(Another conspicuous absence in the photo above has an easy explanation. Along with Tycho, Galileo and Kepler, surely one should mention Copernicus, no? In fact he has a place in the court of honor. Like Newton, Copernicus is on the A-list. He gets sole possession of an architrave, with his name writ in double-size letters.)


The MIT archives have published a few of the letters from MIT faculty suggesting names for these tablets. I detect no evidence that any of the nominators were bearing in mind that their selections would still be on exhibit a century hence. No one suggested leaving a few blank spaces where future generations might express their own preferences. I guess that would go against the whole spirit of stone inscription, where the idea is to make a commitment, stake out a position, proclaim eternal truths. You just have to be bold—and foolish.

If the same exercise were carried out today, the roster of names would be different. For one thing, we’d want to make room for people who were still living—or not yet born—in 1916: Einstein, Hubble, Turing, Gödel, Bohr, Dirac, Crick…. And it’s not just the names that would change; the categories have shifted, too. Celestial mechanics does not figure quite so prominently in modern thought as it once did, and the close ties between mathematics and astronomy have frayed somewhat. New fields are being cultivated, such as computation and information theory, as well as nuclear and quantum physics.

My question is: Would the faculty today acknowledge that the measure of greatness is certain to change again over the next 100 years, in ways we can’t predict? Maybe they would be shrewd enough to choose some medium more fluid than stone—perhaps a plasma display with a Twitter feed showing how various scientists are trending, moment by moment.

In the meantime, nature has overgrown and obscured some of those early 20th century judgments. Trees planted 75 years ago around the perimeter of Killian Court are now taller than the buildings, with the result that some of the panels can be seen only through the bare boughs of winter. In the photo below another list of names is glimpsed through an opening in the leafy canopy. The thread binding together the members of this group seems to be mechanical engineering and invention. Could you pass a quiz on those dozen names?

MIT panel listing inventors and mechanical engineers: Archimedes, Gutenberg, Watt, Arkwright, Whitney, Perkins, Fulton, Fairbairn, Froude, Otto, De Laval, Wright


Try a little tendrilness

I was in Florida for the New Horizons in Science briefings of the Council for the Advancement of Science Writing. On my way home I took a detour to hike a few miles through the Green Swamp, west of Orlando. The place lived up to its name: There’s a lot of chlorophyll out there. There are plants, and plants that live on the plants, and plants that live on the plants that live on the plants…. I had the feeling that I had better keep moving or the vines and epiphytes might grow over me, too.

Beards of Spanish moss in a live oak near the Mabel end of the Van Fleet Trail, Florida.

Among the epiphytes, the one everybody knows is Spanish moss (Tillandsia usneoides). In the photo above the long gray beards of Spanish moss droop from the limbs of a roadside live oak tree. Spanish moss is not really moss. (It’s not Spanish, either.) It’s an angiosperm—a flowering plant—in the bromeliad family. As parasites go, it seems to be reasonably well-behaved, draping itself from the major limbs of a tree but seldom blocking the foliage. The parasites have no roots and take nothing from the host but mechanical support.

detail of an aerial tuber

The air potato, left and above, is a relative of the yam that propagates by means of “aerial tubers.”

Air potato vine climbing on another vine

Vines are another matter. For these guys it’s all about upward mobility, and they’ll step on anybody to get up into the sunshine zone. The bright green leaves in the foreground of the image above belong to a vine called the air potato (Dioscorea bulbifera). Strictly speaking it’s a yam rather than a potato, but it does produce above-ground spuds (see the detail above at right). A student I met at the CASW conference in Gainesville told me there’s a spring-loaded mechanism that launches the potatoes into the treetops. I’ve not found any confirmation of that story in published sources; perhaps the student was having some fun with a city boy from up north. D. bulbifera is native to Asia and Africa and is not well liked in Florida; the student also told me about air potato roundups in Gainesville, and that part of the story is definitely true.

The inch-thick, cable-like vine in the photo above is not the stem of the air potato plant. Rather, it is an older, woody vine—probably poison ivy—that was once attached to the tree trunk at right. (The scar is still visible on the bark.) The air potato vine is slender, green and herbaceous. It twines counterclockwise around the larger vine, thereby parasitizing another parasite.

Passion vine tendrils

The plant in the photo above is a passionflower vine, Passiflora suberosa. I first noticed this species infiltrating a hedge on the University of Florida campus in Gainesville; in the Green Swamp I found it everywhere I looked. Like the air potato, the passionflower vine makes a living by climbing toward the light on the backs of other plants, but it goes about it in a different way. Instead of spiralling around a stem, it puts out tendrils that lash themselves to whatever supporting structure they happen to touch. The photo above shows several new tendrils at the growing tip of a vine. Note that a few of them have gotten tangled up among themselves; I’ll have more to say about this phenomenon.

In the photo below, a P. suberosa vine has successfully seized upon a twig of a nearby shrub. The tip of the tendril has tied itself into a knot resembling a clove hitch, and the rest of the tendril has become kinky, reducing its length and drawing the vine closer to the host plant. Other nearby tendrils may also latch onto the twig, strengthening the attachment.

Passiflora tendril lashed to twig


Before we continue with this subtropical expedition, please allow me a digression. Last year Edward O. Wilson published a grumpy essay arguing that mathematics is of little use to many scientists, especially in Wilson’s own field of biology. Members of the mathematical community took umbrage, of course. (The Notices of the American Mathematical Society reprinted Wilson’s essay with a rebuttal by Edward Frenkel.) As a nonmathematician and nonbiologist I don’t have standing in this contest, but I have an opinion all the same. It strikes me that learning mathematics not only supplies tools for getting answers but also cultivates habits of mind that can lead to interesting questions. Take this equation:

\[z = \frac{a + b}{a - b}\]

If you are the least bit math-minded, you are already asking yourself, “What about the case where \(a = b\)?” Similar issues come up in computer programming. When you write a procedure to select two items at random from a sequence, you need to think about the possibility that the two items are in fact the same item.
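Here, for instance, is a minimal Python sketch of that random-selection chore (the function names are mine, purely for illustration); the naive version happily hands back the same element twice:

import random

def pick_two_distinct(seq):
    # Sample two distinct positions, so the two items are never the same element.
    i, j = random.sample(range(len(seq)), 2)
    return seq[i], seq[j]

def pick_two_naive(seq):
    # Two independent choices: the a == b case can and will happen.
    return random.choice(seq), random.choice(seq)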

What does all this have to do with Florida vines and tendrils? Let’s plunge deeper into the jungle.


I was walking a section of the General James A. Van Fleet State Trail, a former rail line that cuts through the swamp straight as a laser beam. (It was once part of the Seaboard Air Line Railroad—“Air Line” meaning “as the crow flies.”) Along the way, I stopped to admire the occasional butterfly. I saw birds, too, but I didn’t get the camera out in time. I spoke with other walkers, and I dodged bicycles. I wandered off the trail, got my feet wet, picked up a cargo of burs in my pant cuffs.

Gulf fritillary and zebra heliconian butterflies

Left: Gulf fritillary (Agraulis vanillae). Right: zebra heliconian (Heliconius charitonius). As it happens, both of these butterflies spend their caterpillarhood munching on P. suberosa.

All the while I was puzzling over that vine whose modus vivendi is to send up a shoot, wave around in the breeze, and blindly grab hold of the first thing it touches. What bothered me about this strategy for cadging a free ride to the forest canopy was the hazard of grabbing hold of yourself—or of another vine of the same species, just as wimpy as you are. This is a case where if \(a\) leans on \(b\) for support, it’s important that \(a \ne b\). To put it another way, self-reliance is not a virtue when you’re unreliable.

Young P. suberosa vines tend to sprout in fairly dense clusters; I saw lots of them with stems a meter long and only a few centimeters apart at ground level. Very likely the stems are connected by roots or runners and are all parts of a single individual plant, or a clone. For many of the shoots, their closest neighbors were other P. suberosa vines. Under the circumstances, the likeliest outcome seemed to be that all the vines would glom onto one another and collapse in a tangled heap.

I wondered if the plant might have some means of self-recognition that prevents or inhibits such an outcome. Animals put a lot of effort into identifying individuals, relatives, and conspecifics. We have both nervous systems and immune systems for the purpose. Why couldn’t plants do the same thing? All it would take is a substance in the cuticle of the stem that inhibits the grabbing action of the tendrils.

I can’t rule out the existence of such a mechanism. However, after some more poking around in the underbrush, I found evidence that the vines do indeed attack their own kind. In the photo below, two vines cross, and one of them has seized hold of a leaf stem on the other. Several dried and broken tendrils suggest this was not the first encounter between the two vines. I found other examples where three or more P. suberosa vines were all lashed together.

Self attack

Time for a new theory, eh? One possibility is that I’m just plain wrong about the futility of vines clinging to one another. Each vine has many tendrils, and each tendril has its own chance of finding a ticket to the sky. Maybe all the vines in a connected cluster can benefit if any one of them hits the jackpot. That is, if one member finds a path upward, all the other members can follow along as well. That would totally alter the calculus of fitness. In the simplest model we might assume that each vine makes \(n\) contacts with other plants, and each contact has probability \(\alpha\) of connecting to an upward path. Further assume that all of the probabilities are independent. Then the probability that everyone gets to heaven is \(1 - (1 - \alpha)^n\), which converges to 1 as \(n\) increases, no matter how small \(\alpha\) might be.
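Here is a tiny simulation sketch that checks the closed form, under the same dubious independence assumption (the function name and parameter values are my own, chosen only for illustration):

import random

def chance_of_reaching_canopy(n, alpha, trials=100_000):
    # Fraction of trials in which at least one of n independent
    # contacts (each succeeding with probability alpha) pays off.
    wins = sum(
        any(random.random() < alpha for _ in range(n))
        for _ in range(trials)
    )
    return wins / trials

# Compare the simulation with the closed form 1 - (1 - alpha)^n
print(chance_of_reaching_canopy(50, 0.02))
print(1 - (1 - 0.02) ** 50)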

This scheme may well explain what’s going on with P. suberosa, but I’m not entirely satisfied with it. In the first place, the assumption that all events are independent is surely bogus. Plants are not like atoms in an ideal gas, where you might reasonably assume that any two atoms have the same probability of colliding. Plants have roots. They don’t roam at random.

Second, in the probability calculation above I have implicitly assumed that \(a \ne b\), and this assumption is unjustified. A single vine can vainly try to support itself by attaching itself to itself. I submit my final photograph, documenting an act of botanical suicide in which the tendrils of a P. suberosa plant strangle one of the plant’s own leaves:

Suicide vine


The friendlier skies

Boston from Logan Airport during climbout, 2013-11-03.

It’s not the best picture I’ve ever snapped while climbing out of Logan Airport, but it’s the first one in years that I haven’t taken furtively, palming the camera to conceal it from the cabin crew. And what if the interference theory were right, and the emanations of iPods and Kindles and digital cameras could send us all plunging into Boston Harbor? How would I feel then?

This morning I was on an early flight out—apparently one of the first to adopt the new FAA rules allowing gate-to-gate electronics. The camera was my only way of celebrating this new freedom. I didn’t have anything like an iPod or a Kindle, and laptops are still forbidden below 10,000 feet, presumably because of their ballistic rather than their electronic hazards. (So I passed much of the takeoff and landing patterns reading in Rieffel and Polak’s Quantum Computing: A Gentle Introduction, which packs at least twice the wallop of a MacBook Air.)

Since I’m blogging window-seat pictures, I’ll add this one:


A glory photographed from the airplane window, 2013-11-03.

About an hour into the flight I spotted a glorious glory—a halo of bright rainbow iridescence—tagging along with us on the cloud bank below. (Actually it’s an inside-out rainbow: not ROY G BIV but VIB G YOR.)


The keys to the keydom

Your security and privacy on the Internet (to the extent that such quaint notions still have meaning) depend on the difficulty of factoring certain large numbers. For example, take this 1,024-bit integer:

X = 123784517654557044204572030015647643260197571566202790882488143432336664289
53013160757127360381500856200680250007894557646372684708763884626821078230649
285613963510276802208436872101227718450860737080502115462982139570265498874808
3875440199841915223400907734192655713097895866822706517949507993107455319103401

The number printed above in squinty type is the product of two 512-bit prime factors. If you set out to find those factors, the project might well keep you busy for many megayears. But I can make the job much easier by giving you a second number of the same size:

Y = 139752806258570179719657334941265463008205415047073349942370461270597321020
717639292879992151626413610247750429267916230424010955054750502833517070395986
2897242371124108160005581486237854115688455171463034213840635250918248983182261
75234193815950597041627518140906384889218054867887058429444934835873139133193

Factoring both X and Y would appear to be twice as much work, but in fact you can do it lickety-split. On my laptop it took roughly 200 microseconds. From millions of years to millionths of a second—that’s quite a speedup!

There’s a trick, of course. Both X and Y are products of two large primes, but it so happens that one of the primes is a shared factor of both numbers. For finding that shared factor, we can rely on a very old, very famous, very simple and very efficient algorithm: Euclid’s algorithm for the greatest common divisor. In Python it looks like this:

def gcd(a, b):
    if b == 0:
        return a
    else:
        return gcd(b, a % b)

(The ‘%’ in line 5 is Python’s modulo or remainder operator.) When this function is applied to X and Y, the recursion is invoked 297 times before returning the common factor:

F = 1070467931937606706425630145948715022696962191248959648262850980092208031819
9635726117009340189103336170841315900354200725312700639146605265442630619090531

You don’t have to take my word for it that F divides both X and Y. Do the division: In that way you will also learn the co-factors of X and Y.
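In code, the whole verification is a few lines. The sketch below assumes X and Y are bound to the integers printed above, pasted together into single Python ints, and uses the gcd defined earlier (the standard library’s math.gcd would do just as well):

def shared_factor(x, y):
    # Recover the common prime of two moduli that share a factor,
    # along with the two co-factors.
    f = gcd(x, y)
    assert 1 < f < x, "no nontrivial shared factor"
    return f, x // f, y // f

# F, p, q = shared_factor(X, Y)
# X == F * p and Y == F * q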

If X and Y were components of public keys in the RSA cryptosystem, their shared factor would create a huge hole in the security fence. And the problem is particularly insidious in that each of the two keys, when examined in isolation, looks perfectly sound; the weakness only becomes apparent when you have both members of the pair.

This potential vulnerability of factoring-based encryption methods has been known for decades, but it seemed there was no reason to worry because coincidentally shared factors are so utterly unlikely. A couple of weeks ago I heard an eye-opening talk by Nadia Heninger, a member of a group that has searched for such unlikely coincidences in the wild. They found 64,000 of them. Reason to worry.


Heninger and her colleagues polled every public IPv4 address in the known universe, requesting a connection on the ports commonly used for two secure communication protocols, TLS and SSH. For every address that responded to queries on those ports, they collected the server’s public encryption key, then closed the connection. Here I am going to discuss only the TLS servers with RSA keys; there were vulnerabilities in other cryptosystems as well, but the issues are slightly different.

Before telling the rest of this story, I have to pause here. For those of you in the born-digital generation, pinging every address on the Internet may sound like a routine walk around the block on a sunny afternoon, but I confess that I never would have dared to try anything so audacious. It’s like knocking on every door in America, or calling every possible telephone number—a task that’s not feasible for individuals of ordinary means, and that also seems unforgivably rude. But standards of feasibility and rudeness are different in the world of machine-to-machine communication. Computers don’t care if you make four billion hangup calls (although some system administrators might frown on the practice). And, after all, the encryption keys being collected are by definition public.

Back to Heninger’s story. They ran their scan of IP addresses from Amazon’s Elastic Compute Cloud service, where the data-collection phase of the project took a few days. Out of \(2^{32} \approx 4\) billion addresses (less a few special-purpose or reserved areas) they found about 29 million servers accepting connections on the standard port for TLS, but only 12.8 million of those servers supplied public keys. Some 60 percent of the keys retrieved were not unique. Presumably, most of the duplicates are accounted for by organizations that have multiple servers all operating with the same cryptographic credentials, but there were also instances of apparently unaffiliated individuals sharing a key. This is rather like discovering that your house key also opens your neighbor’s front door. (And vice versa.)

After eliminating the duplicates, some 5.8 million distinct RSA keys needed to be tested for common factors. Even though Euclid’s GCD algorithm is highly efficient, running it on all possible pairings of keys would be a strain. There’s an ingenious shortcut, based on the observation that if \(Y\) is relatively prime to each of \(X_1, X_2, \ldots, X_n\), then it also has no factor in common with the product \(X_1 \times X_2 \times \dots \times X_n\). Thus it’s possible to detect the presence of shared factors with just \(n\) GCD operations, instead of \(n^2\). A drawback of this approach is that the product of millions of RSA keys is a huge number, and intermediate results have to be swapped out to disk. Nevertheless, the processing was completed in an hour and a half on the Amazon cloud at a cost of $5.
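Here is a naive sketch of that n-GCD idea (my own illustration, not the group’s code): for each modulus, take the GCD with the product of all the others; a result greater than 1 flags a key with a shared factor.

from math import gcd, prod

def flag_shared_factors(keys):
    # Assumes duplicate keys have already been removed.
    total = prod(keys)
    flagged = {}
    for k in keys:
        g = gcd(k, total // k)  # product of all primes k shares with other keys
        if g > 1:
            flagged[k] = g
    return flagged

Even this naive version finds the weak moduli; the product-and-remainder-tree refinement is what makes the computation fast enough to run on millions of keys for a few dollars.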

The output was a list of 64,081 compromised keys for TLS hosts, about 0.5 percent of all such keys collected. For obvious reasons, Heninger et al. are not publishing that list; they tried to contact the owners of vulnerable machines, and they offer a web lookup service where you can check to see if your key is on the list.

The good news is that none of the weak keys are guarding access to major web servers hosting bank accounts or medical records or stock markets or military installations. Most of them are found in embedded networked devices, such as routers and firewalls. That’s also the bad news. A programmer with malicious intent who can gain control of a well-placed router can make a lot of mischief.


Could the prevalence of common factors in RSA keys be explained as a product of pure bad luck? To answer this question we need to solve a birthday problem. The original version of this problem asks how many people you need to bring together before there’s a good chance that two or more of them will have the same birthday (assuming birthdays are distributed randomly over the 365 days of the year). An order-of-magnitude approximation is \(\sqrt{365}\), or about 19. (The actual number is 23.) For the RSA variant of the problem, we ask how many 512-bit primes you need to generate—assuming you select them uniformly at random from the set of all such primes—before you have a good chance of seeing at least one prime twice. In this case we replace 365 with the number of 512-bit primes, which is in the neighborhood of \(10^{150}\). Thus there’s scarcely any chance of a collision until the number of randomly generated primes approaches \(10^{75}\). We’re only at \(10^{7}\) so far. As Heninger said in her talk, we have enough 512-bit primes to assign a public encryption key to every atom in the universe, with little worry over possible duplicates.

According to this line of reasoning, it would be a colossal fluke to see even one duplicated RSA prime, and finding 64,000 of them is clear evidence that those primes are not being chosen uniformly at random. The blame apparently lies with pseudorandom number generators. It’s not that the algorithms are defective. In many cases, cryptographic keys are being generated immediately after a machine is booted, when it just can’t scrape together enough entropy to make a passable pseudorandom number.


Sources:

Heninger gave her talk at a birthday party for Microsoft Research New England on October 9. Eventually, video may be available.

The paper describing the project is “Mining Your Ps and Qs: Detection of Widespread Weak Keys in Network Devices,” by Nadia Heninger, Zakir Durumeric, Eric Wustrow, and J. Alex Halderman, presented at the 2012 USENIX Security Symposium. Preprint.

At the Microsoft symposium Heninger also discussed a later study of other cryptographic weaknesses in certain smart cards. See “Factoring RSA keys from certified smart cards: Coppersmith in the wild,” by Daniel J. Bernstein, Yun-An Chang, Chen-Mou Cheng, Li-Ping Chou, Nadia Heninger, Tanja Lange, and Nicko van Someren. Preprint.

Arjen Lenstra and his colleagues have independently discovered and reported similar vulnerabilities. See “Ron was wrong, Whit is right,” by Arjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung, and Christophe Wachter. Preprint.

Open-source software developed by Zakir Durumeric, Eric Wustrow, and J. Alex Halderman at the University of Michigan for scanning large blocks of the public IP address space: ZMap. A descriptive paper from the 2013 USENIX Security Symposium.

The Michigan group’s web service for checking public keys against the collection of known factorable keys: factorable.net.


Tetrahedra with a twist


A year ago, when I was playing with Geomag models of sphere clusters, I stumbled upon this curious object, a helix assembled from tetrahedra linked face to face:

Geomag model of a tetrahelix

I certainly wasn’t the first to discover it. A. H. Boerdijk described the structure in 1952; notable later contributions and commentaries came from J. D. Bernal and H. S. M. Coxeter. (See references below.) Buckminster Fuller also had something to say on the subject, and he suggested a name: the tetrahelix.

As far as I can tell, the tetrahelix was not known to the ancients, although there is nothing about its geometry they could not have understood. Perhaps they overlooked it because Aristotle believed that regular tetrahedra could be packed together to fill all of three-dimensional space—a feat that would make this mere linear packing rather unimpressive by comparison. But Ari was wrong about tetrahedral tessellation; it can’t be done. The closest you can come is a packing of alternating tetrahedra and octahedra, as in the Geomag-like roof truss in the photograph below.

Roof truss in the atrium of the Marriott Wardman Park Hotel in Washington, photographed during the 2009 Joint Mathematics Meetings.

It turns out there’s a direct connection between the roof-truss structure and the tetrahelix. One way to construct a tetrahelix is to start with a line of alternating tetrahedra and half-octahedra (the latter having exposed square faces). Then you squeeze and twist each square face until it becomes a rhombus that can be spanned by a new strut (red in the lower panel below), creating two new triangular faces. Presto! The tetrahelix appears.

A truss made of alternating tetrahedra and half-octahedra twists to become a tetrahelix.

Tetrahelix inscribed in a cylinder.

Remarkably enough, after all this twisting and smushing, the helix still extends along a straight line. All the vertices of all the tetrahedra lie on the surface of a right circular cylinder. The center line of that cylinder does not pass through the centers of the individual tetrahedra, but it does act as an axis of symmetry for the helix as a whole.

Tetrahelix seen end on.

The symmetry in question is a screwy one: a translation combined with a rotation. Suppose you have an infinitely long tetrahelix assembled from tetrahedra of side length 1. Then if you slide the entire structure along its axis by a distance of \(1/\sqrt{10}\) (\(\approx 0.316\)), and also rotate it around the same axis through an angle of \(\arccos -2/3\) (\(\approx 131.8\) degrees), the new state is indistinguishable from the starting configuration. Note that the rotation angle is an irrational fraction of a full turn. As a consequence, the helix is aperiodic. When you look along the bore of the helix, you will never see two vertices lined up one behind the other.
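To make those numbers concrete, here is a small Python sketch that generates vertex coordinates for a unit-edge tetrahelix and checks them. The cylinder radius \(3\sqrt{3}/10\) matches the value given in R. W. Gray’s tetrahelix notes (cited below); the rise and twist are the values just quoted, and the assertions confirm that together they yield unit-length struts.

from math import sqrt, acos, cos, sin, dist

r = 3 * sqrt(3) / 10      # cylinder radius for edge length 1 (Gray)
h = 1 / sqrt(10)          # rise along the axis per vertex
theta = acos(-2 / 3)      # twist per vertex, about 131.8 degrees

def vertex(n):
    # The nth vertex, lying on a cylinder of radius r about the z axis.
    return (r * cos(n * theta), r * sin(n * theta), n * h)

# Each vertex is joined by unit-length struts to the next three vertices.
for n in range(20):
    for k in (1, 2, 3):
        assert abs(dist(vertex(n), vertex(n + k)) - 1.0) < 1e-9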

The tetrahelix is a chiral structure: It comes in left-handed and right-handed variants, as shown in the photograph below.

left-handed and right-handed tetrahelices

Playing with these two Geomag models, I began to wonder whether they could be joined end-to-end—or in other words whether a tetrahelix can change its handedness in midstream. The answer, as shown below, is yes. A helix peacefully curling to the left can abruptly shift to a right-handed way of life—but the price of becoming ambidextrous is a big kink in the chain.

An ambidextrous tetrahelix

You can even create a tetrahelix that changes direction every third tetrahedron (the smallest possible interval). The result (below) is no longer a helix at all but a weird sort of bridge with an arched spine and two zigzag rails. If you extend the arc further, will the two ends meet to form a closed loop? I don’t have enough Geomags to answer that question.

Zigzag helix bridge


Down through the ages, quite a lot of mystificatory cruft has attached itself to the Platonic solids. Plato himself argued that the universe is built of regular polyhedra, which he identified with the classical elements. (The tetrahedron was fire, because of the sharp points.) Then there was Kepler’s Mysterium Cosmographicum, where he tried to explain the orbital diameters of the planets in terms of nested polyhedral shells.

the squeeze-purse animation: an octahedron becomes a cluster of three tetrahedra


Bucky Fuller contributed his own share of overwrought polyhedrolatry. In Synergetics he describes the transformation of an octahedron into a chain of three tetrahedra, as in the animation at right. His account of the geometry is straightforward, explicit and accurate. But he doesn’t stop with geometry. He attributes to this rearrangement of edges and vertices a much larger significance:

Symmetric matter has been entropically transformed into asymmetrical and directionally focused radiation: one quantum of energy has seemingly disappeared. When the radiation impinges interferingly with any other energy event in Universe, precession recurs and the three-quantum electromagnetic wave retransforms syntropically into the four-quantum octahedron of energy-as-matter. And vice versa. Q.E.D.

A few paragraphs later, Fuller introduces the tetrahelix with another inscrutable rhapsody on geometry and physics:

With this fourth, invisible tetrahedral addition the overall triple-bonded tetrahedral array becomes either rightwardly or leftwardly spiraled to produce a four-tetrahedron tetrahelix, which is a potential, event embryo, electromagnetic-circuitry gap closer. Transmission may thereafter be activated as a connected chain of the inherently four-membered, individual-link continuity. This may explain the dilemma of the wave vs the particle.

I think I understand the temptation to see deep meaning or even magic in the way the most symmetrical solids arrange themselves in space. The building blocks are simple, the patterns they form wonderfully intricate and precise. It looks as if there’s something very special about all the dimensions and angles in these structures, so that the slightest change would ruin everything—would tear the very fabric of the universe.

Well, maybe. But in the case of the tetrahelix I’d like to argue that this object—however handsome and intriguing it might be—is not a freak of nature or a miracle. Its existence does not depend on some exquisitely refined and fragile property of the tetrahedron. In support of this notion I offer the exhibit below.

helix formed from pentagonal dipyramids

Let’s call this structure a pentahelix. If you focus on the three strands of red links, they appear to follow the same paths as those in a left-handed tetrahelix. Thus you might guess that what we have here is just a tetrahelix “decorated” with some extra bumps along the perimeter. But that’s not what you’re looking at; there are no tetrahedra at all in this construction. Instead, the component parts are pentagonal dipyramids. This polyhedron looks at first glance as if it might consist of five regular tetrahedra arranged around a central axis—and in Aristotle’s world it might indeed be built that way. In our universe, however, the lengths and angles aren’t quite right. To fill 360 degrees with five equal wedges takes 72 degrees per wedge; the angle between faces in a regular tetrahedron falls short by about a degree and a half. Thus the pentagonal dipyramid is a structure fundamentally different from anything built of tetrahedra. The vertex coordinates are full of \(\sqrt{5}\), whereas the tetrahedron is all about \(\sqrt{3}\). Yet, in spite of these differences, the pentagonal shapes can be knitted together to make a fine-looking helix—slightly less symmetrical than the true tetrahelix, but a recognizable member of the same family.
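The degree-and-a-half shortfall is a one-liner to check; here is a quick sketch, using the fact that the dihedral angle of a regular tetrahedron is \(\arccos 1/3\):

from math import acos, degrees

dihedral = degrees(acos(1 / 3))   # dihedral angle of a regular tetrahedron
print(dihedral)                   # about 70.53 degrees
print(72 - dihedral)              # each wedge falls short by about 1.47 degrees
print(360 - 5 * dihedral)         # total gap of about 7.36 degrees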

Drawing of the collagen triple helix by David Goodsell

The three-stranded helix of the collagen protein. Drawing courtesy David S. Goodsell and the RCSB Protein Data Bank.

The fact is, helices seem to be very robust and easy to produce. They’re everywhere in nature. For example, the collagen protein (illustration at right) is a three-stranded helix much like both the tetrahelix and the pentahelix. Collagen genes have accumulated thousands of mutations while still producing a functional molecule; thus an immense number of different amino acid sequences (and slightly different geometries) all give rise to very similar triple helices.

Funny world we live in. If you want to design an object that will tile all of three-dimensional space, the requirements are stringent. But helices aren’t so finicky. When you stack symmetrical objects along one axis, it seems hard not to make a helix.


Update 2013-11-25: I am late to the party with this note, but for the benefit of those who have not been reading the comments, let me point out that the question I raised about the “arched bridge”—Can it be extended to form a closed ring?—was asked and answered long ago, but that fact hasn’t stopped ingenious folks from having fun with it recently.

As explained in a comment below by Stan Wagon, the question was posed in 1957 by Hugo Steinhaus in the problem section of the Polish journal Colloquium Mathematicum. (A notation there suggests the matter may have been discussed a year earlier in The New Scottish Book.) A proof that no such closed ring of tetrahedra can exist was published two years later by Stanisław Świerczkowski, also in Colloquium Mathematicum.

Not to be deterred by a mere proof, Michael Elgersma and Wagon have produced a lovely tetrahedral torus, printed in genuine 3D by Shapeways:

3d-printed tetrahedral torus by Michael Elgersma and Stan Wagon

For the secret behind this magic trick, see the not-yet-published paper Closing a Platonic Gap by Elgersma and Wagon. Comments below by Nicholas Mecholsky and Robert Mathieson include links to other constructions.


Read more about it:

Bernal, J. D. 1964. The structure of liquids. (The Bakerian Lecture, 1962.) Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences 280(1382):299-322.

Boerdijk, A. H. 1952. Some remarks concerning close-packing of equal spheres. Philips Research Reports 7(4):303-313.

Coxeter, H. S. M. 1985. The simplicial helix and the equation \(\tan n\theta = n \tan \theta\). Canadian Mathematical Bulletin 28(4):385–393. PDF

Gray, R. W. Undated web page; consulted 2013-10-19. Tetrahelix data. http://www.rwgrayprojects.com/rbfnotes/helix/helix01.html

Lagarias, Jeffrey C., and Chuanming Zong. 2012. Mysteries in packing regular tetrahedra. Notices of the American Mathematical Society 59(11):1540-1549. PDF

Lord, E. A., and S. Ranganathan. 2004. The γ-brass structure and the Boerdijk–Coxeter helix. Journal of Non-Crystalline Solids 334–335:121–125.

Sadler, Garrett, Fang Fang, Julio Kovacs, and Klee Irwin. 2013 (preprint). Periodic modification of the Boerdijk-Coxeter helix (tetrahelix). arXiv 1302.1174.

Steinhaus, H. 1957. Problem 175. Colloquium Mathematicum 4:243.

Świerczkowski, S. 1959. On chains of regular tetrahedra. Colloquium Mathematicum 6:9–10.

Zheng, Chong, Roald Hoffmann, and David R. Nelson. 1990. A helical face-sharing tetrahedron chain with irrational twist, stella quadrangula, and related matters. Journal of the American Chemical Society 112:3784-3791.


Advances in Obfuscation

Richard M. Stallman, the gnuru of the free software movement, has launched a campaign to liberate JavaScript. His call to action begins: “You may be running nonfree programs on your computer every day without realizing it—through your web browser.”

My first reaction to this announcement was bafflement. Why pick on JavaScript? I had been thinking that one of JavaScript’s notable virtues is openness. If you include a JavaScript program in a web page, then anyone who can run that program can also inspect the source code. Just use the browser’s “View Source” command, and all will be revealed.

Reading on in Stallman’s complaint, I began to see more clearly what irks him:

Google Docs downloads into your machine a JavaScript program which measures half a megabyte, in a compacted form that we could call Obfuscript because it has no comments and hardly any whitespace, and the method names are one letter long. The . . . compacted code is not source code, and the real source code of this program is not available to the user.

I have to agree that obfuscated JavaScript is annoying; indeed, I have griped about it myself. Although there’s an innocent reason for distributing the code in this “minified” form—it loads faster—some software authors doubtless don’t mind that the compression also inhibits reverse engineering.


The very idea of software obfuscation turns out to have a fascinating theoretical background, as I learned last week at a talk by Craig Gentry of IBM Research. Gentry is also the inventor of a scheme of homomorphic encryption (computing with enciphered data) that I described in a column last year. The two topics are closely related.

Suppose you have a nifty new way to compute some interesting function—factoring large integers, solving differential equations, pricing financial derivatives. You want to let the world try it out, but you’re not ready to reveal how it works. One solution is to put the program on a web server, so that users can submit inputs and get back the program’s output, but they learn nothing about how the computation is carried out. This is called oracle access, since it is rather like interacting with the oracles of ancient Greek tradition, who gave answers but no explanations.

A drawback of running the program on your own server is that you must supply the computing capacity for all visitors to the site. If you could wrap your function up as a downloadable JavaScript program, it would run on the visitor’s own computer. But then the source code would be open to inspection by all. What you want is to let people have a copy of the program but in an incomprehensible form.

The kind of obfuscation provided by a JavaScript minifier is of no help in this context. The result of the minification process is garbled enough to peeve Richard Stallman, but it would not deter any serious attempt to understand a program. The debugging facilities in browsers as well as widely available tools such as jsbeautifier offer help in reformatting the minified program and tracing through its logic. The reconstruction may be tedious and difficult, but it can be done; there is no cryptographic security in minification.

Secure obfuscation is clearly a difficult task. In ordinary cryptography the goal is to make a text totally unintelligible, so that not even a single bit of the original message can be recovered without knowledge of the secret key. Software obfuscation has the same security goal, but with a further severe constraint: The obfuscated text must still be a valid, executable program, and indeed it must compute exactly the same function as the original.

In a way, program obfuscation is the mirror image of homomorphic encryption. In homomorphic encryption you give an untrusted adversary \(A\) an encrypted data file; \(A\) applies some computational function to the data and returns the encrypted result, without ever learning anything about the content of the file. In the case of obfuscation you give \(A\) a program, which \(A\) can then apply to any data of his or her choosing, but \(A\) never learns anything about how the program works.

For many years fully homomorphic encryption was widely considered impossible, but Gentry showed otherwise in 2009. His scheme was not of practical use because the ability to compute with encrypted data came at an enormous cost in efficiency, but he and others have been refining the algorithms. During his talk at the Harvard School of Engineering and Applied Sciences last week Gentry mentioned that the time penalty has been reduced to a factor of \(10^{7}\). Thus a one-minute computation on plaintext takes about 20 years with encrypted data.

Will the problem of secure obfuscation also turn out to be feasible? There is a bigger hurdle in this case: not just a widespread feeling that the task is beyond our reach but an actual impossibility proof. The proof appears in a 2012 paper by seven authors: Boaz Barak, Oded Goldreich, Russell Impagliazzo, Steven Rudich, Amit Sahai, Salil Vadhan, and Ke Yang (BGIRSVY). (Ref: “On the (im)possibility of obfuscating programs,” Journal of the ACM 59(2): Article 6; preprint.) Most of the work described in the 2012 article was actually done a dozen years earlier and was reported at CRYPTO 2001. They give the following informal definition of program obfuscation:

An obfuscator \(\mathcal{O}\) is an (efficient, probabilistic) “compiler” that takes as input a program (or circuit) \(P\) and produces a new program \(\mathcal{O}(P)\) satisfying the following two conditions.

— Functionality: \(\mathcal{O}(P)\) computes the same function as \(P\).

— “Virtual black box” property: Anything that can be efficiently computed from \(\mathcal{O}(P)\) can be efficiently computed given oracle access to \(P\).

In other words, having direct access to the obfuscated source text doesn’t tell you anything you couldn’t have learned just by submitting inputs to the function and examining the resulting outputs.

BGIRSVY then prove that the black-box obfuscator \(\mathcal{O}\) cannot exist. They do not prove that no program \(P\) can be obfuscated according to the terms of their definition. What they prove is that no obfuscator \(\mathcal{O}\) can obfuscate all possible programs in a certain class. The proof succeeds if it can point to a single unobfuscatable program—which BGIRSVY then proceed to construct. I won’t attempt to give the details, but the construction involves the kind of self-referential loopiness that has been a hallmark of theoretical computer science since before there were computers, harking back to Cantor and Russell and Turing. Think of programs whose output is their own source code (although it’s actually more complicated than that).
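For a taste of that loopiness, here is the classic two-line Python self-reproducer (a quine); it is nothing like the BGIRSVY construction, but it shows how easily a program can get its hands on its own text:

# The two lines below print themselves exactly (this comment aside).
s = 's = %r\nprint(s %% s)'
print(s % s)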

The BGIRSVY result looks like bad news for obfuscation, but Gentry has outlined a possible path forward. (Ref: “Candidate indistinguishability obfuscation and functional encryption for all circuits,” 2013 preprint.) His work is a collaboration with five co-authors: Sanjam Garg, Shai Halevi, Mariana Raykova, Amit Sahai, and Brent Waters (GGHRSW). They adopt a different definition of obfuscation, called indistinguishability obfuscation. The criterion for success is that an adversary who is given obfuscated versions of two distinct but equivalent programs—they both compute the same function—can’t tell which is which. This is a much weaker notion of obfuscation, but it seems to be the best that’s attainable given the impossibility of the black-box definition. An obfuscator of this kind conceals as much information about the source text as any other feasible method would.

GGHRSW introduce a “candidate” construction for an indistinguishability obfuscator, which works for all programs in an important complexity class called \(NC^{1}\). The algorithm relies on a transformation introduced almost 30 years ago by David Barrington. (Ref: Barrington, David A. 1986. Bounded-width polynomial-size branching programs recognize exactly those languages in \(NC^{1}\). In Proceedings of the Eighteenth Annual ACM Symposium on Theory of Computing, STOC ’86, pp. 1–5. Preprint.) The class \(NC^{1}\) is defined in terms of networks of Boolean gates, but Barrington showed that every such network is equivalent to a “branching program” in which the successive bits of the input determine which of two branches is chosen. The entire computation can then be expressed as the product of \(5 \times 5\) permutation matrices; the essence of the obfuscation step is to replace those matrices with random matrices that have the same product and therefore encode the same computation. There are exponentially many choices for the random matrices, and so any given program has many possible obfuscations.
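To show just the flavor of that last step, here is a toy sketch of the randomization trick: each matrix in a chain is blinded on the left and right so that the individual factors look scrambled while the overall product is untouched. This is only my illustration of the product-preserving idea, with random permutation matrices standing in for everything; the actual GGHRSW construction piles far more machinery on top to make the blinding cryptographically meaningful.

import numpy as np

rng = np.random.default_rng()

def random_perm_matrix(k=5):
    # A random k-by-k permutation matrix; its inverse is its transpose.
    return np.eye(k, dtype=int)[rng.permutation(k)]

def blind(matrices):
    # Replace M_i with R_(i-1)^T @ M_i @ R_i for random permutation
    # matrices R_i.  The R's cancel in pairs, so the product of the
    # whole chain is exactly what it was before.
    n = len(matrices)
    R = [random_perm_matrix() for _ in range(n - 1)]
    I = np.eye(5, dtype=int)
    return [
        (R[i - 1].T if i > 0 else I) @ M @ (R[i] if i < n - 1 else I)
        for i, M in enumerate(matrices)
    ]

chain = [random_perm_matrix() for _ in range(8)]   # stand-in for a branching program
blinded = blind(chain)
assert np.array_equal(np.linalg.multi_dot(chain), np.linalg.multi_dot(blinded))

Each run produces a different-looking blinded chain, but the assertion at the end always passes: the product, and hence the computation it encodes, never changes.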

Gentry calls the scheme a “candidate” because the security guarantee depends on some rather elaborate assumptions that have not yet been thoroughly checked. The team is working to put the method on a firmer foundation.

GGHRSW write that “it is not immediately clear how useful indistinguishability obfuscators would be,” but they offer some possibilities. Software vendors might use the technique for “secure patching”: fixing a flaw without revealing what’s being fixed. Or obfuscation could help in producing “crippleware”—try-out versions of a program that have certain functions disabled. In these applications indistinguishability obfuscation would make it impossible to identify the differences between two versions of a program. Watermarking, which tags each copy of a program with a unique identifier, is another likely use.

In his Harvard talk Gentry also looked a little further into the future to suggest that obfuscation might be valuable when it comes time to upload your brain to the Internet. The idea, presumably, is that you could allow others to borrow (or rent?) your cognitive abilities without disclosing all your private knowledge and personal memories. Since I already have only erratic access to my own memory, I’m not sure any further obfuscation is needed.


Two further comments.

“Use less big word.” Graffito, Russell Field, Cambridge, MA. Photographed 2013-09-20.

First, I find both homomorphic encryption and indistinguishability obfuscation to be fascinating topics, but linguistically they are somewhat awkward. The word sesquipedalian comes to mind. In the case of indistinguishability obfuscation we are just two syllables short of supercalifragilisticexpialidocious. I’m hoping that when Gentry makes his next breakthrough, he’ll choose a term that’s a little pithier. (The GGHRSW paper sometimes refers to garbled rather than obfuscated programs, which seems like a step in the right direction.)

Second, returning to the everyday world of JavaScript programs in web pages, I have to note that Richard Stallman will probably not applaud advances in obfuscation, particularly when they are applied to crippleware or watermarking. I can’t match Stallman’s ideological zeal for open software, but I do prefer to make my own code available without artificial impediments. For example, the bit of JavaScript that creates a magnifying loupe for the photograph above is available without obfuscation. (I have not included a copyleft license, but there’s a comment reading “Please feel free to use or improve.”)


Who was Aleph Null?

Thirty years ago this month Scientific American launched a new column called “Computer Recreations.” I wrote the first six installments, then passed the baton to A. K. Dewdney.

Soon after the first column appeared, we received a letter pointing out that the title “Computer Recreations” had already been used by the journal Software—Practice and Experience. I was embarrassed by this revelation, not because of any worries about legal conflicts but because I should have known what other publications were doing in the same genre. I went off to the library to look at some back issues of SP&E. Here’s what I found in Vol. 1, No. 1, published a dozen years earlier, in 1971:

Heading of the first “Computer Recreations” column in Software—Practice and Experience, 1971.

Through the rest of 1971 and into 1972 there were half a dozen more “Computer Recreations” columns attributed to \(\aleph_0\!\), followed by a couple of guest columns (the first by Paul Bratley and Jean Millo of the University of Montreal, the second by Andrew Tanenbaum of the Vrije Universiteit in Amsterdam). After that, “Computer Recreations” disappeared from the pages of Software—Practice and Experience, never to be mentioned again.

Recently I had occasion to take a fresh look at the SP&E columns, and I realized that I have never learned the true identity of the mysterious \(\aleph_0\!\). Maybe one of my readers can help unmask the author. Or maybe the real Aleph Null will step forward now to take credit for his or her work. I’m also curious about a figure called Archimedes, who appears in several of the columns as a mentor or sidekick of \(\aleph_0\!\). In one column Archimedes is said to be “a group of mathematicians.”

For a while I thought that Aleph Null and Archimedes might have been members of the Bell Labs gang that was so lively and prolific in the late 60s and early 70s—Dennis Ritchie, Ken Thompson, Brian Kernighan, Doug McIlroy, Michael Lesk and others—the crew who gave us C and Unix and so much else. But that can’t be right. One of the SP&E columns discusses Darwin, a game in which programs compete for space and survival in a computer’s memory. (Dewdney later wrote about a similar system called Core War.) Darwin was invented at Bell Labs, but the SP&E column describes the program from an outsider’s perspective: \(\aleph_0\!\) and Archimedes talk to the Bell Labs group and then build their own implementation of the system.

Jacket of A. G. Bell’s 1972 book, Games Playing with Computers.

In addition to seven “Computer Recreations” columns, \(\aleph_0\!\) is also listed as the author of a 1973 book review in SP&E. The book under review is Games Playing with Computers, by A. G. Bell. For a few days this summer I entertained the notion that \(\aleph_0\!\) = A. G. Bell. The review of Bell’s book is rather harsh, but that didn’t dissuade me; after all, every author is his own worst critic. A glance at the book would surely settle the matter, I thought, but none of the libraries within easy reach of Boston could supply a copy. Eventually I bought one online (for $3.44, plus shipping). I’m afraid I found no support for my cute self-reviewing hypothesis. Stylistically, Aleph Null and A. G. Bell have nothing in common.

Software—Practice and Experience is a publication with British roots. The first editors were D. W. Barron of the University of Southampton and C. A. Lang of Cambridge. The journal’s Cambridge connections were particularly strong in those early days, with a board of editors that included Maurice V. Wilkes, who had headed the Cambridge Computer Laboratory since 1945. I now incline to the view that Aleph Null and Archimedes were part of this milieu—almost surely from Britain and probably from Cambridge. But I have made no further progress in identifying them.

If you would like to join in the guessing game, here are a few clues to consider, culled from the seven published \(\aleph_0\!\) columns:

  • In the first column Aleph Null writes of himself or herself: “I confess to a personal interest in artificial intelligence.”
  • In another column the author mentions “visiting the northern parts of the British Isles,” from which I infer that Aleph Null is not native to those northern parts.
  • Archimedes is described in one column as “myopic” and in another as “cryptomnesiac.” Who knows what that’s about?
  • There’s a PDP-7 in the picture. Archimedes seems to be adept at hacking on it. (This was a factor in my initial guess about Bell Labs, since the first version of Unix was created there on a PDP-7. But the Cambridge Computer Lab had one as well.)
  • Most of the code in the columns is Fortran. There’s also some talk of BCPL (developed at Cambridge) and POP-2 (developed at Edinburgh). Plus lots of assembler. No hint of Algol.

Regrettably, the columns themselves are impounded behind a John Wiley paywall. Here are the topics:

1971 (1). The Military Game, or Napoleon.

1971 (2). MOO (also known as Bulls and Cows; similar to Master Mind).

1971 (3). The African board game kalah.

1971 (4). Space-filling curves with a plotter.

1972 (1). Darwin.

1972 (2). “April Foolery.” Puzzles on topics such as algorithms for card shuffling.

1972 (3). More curves with a plotter.

1972 (4). [By Bratley and Millo]. Self-reproducing programs.

1973 (4). [By Tanenbaum]. The game of JOTTO.


Update 2013-09-06: Commenter Shecky R has apparently solved the mystery. Using a nifty new technology called Google, he tracked down a bibliographic entry presented as follows:

Aleph Null (C. A. Lang). “Computer Recreations: Darwin.” Software: Practice and Experience v2, Jan-Mar 1972, p93-96.

Using another nifty technology called a library, I’ve looked at the book in which this entry appears (Analyzing Computer Security: A Threat/Vulnerability/Countermeasure Approach, by Charles P. Pfleeger and Shari Lawrence Pfleeger). There’s no hint in the book of how the Pfleegers know that Aleph Null = C. A. Lang, but I have no reason to doubt the identification.

With a little more sleuthing—it’s quite a marvel, this Google thing—I have learned that C. A. Lang is Charles Lang, who went on to a distinguished career in computer-aided design and three-dimensional modeling. From David E. Weisberg:

Charles Lang graduated from Cambridge University in 1960 with a degree in engineering. After several years at Mullard Research in England, Lang enrolled as a graduate student at MIT in 1963 and worked at Project MAC for more than 18 months with Doug Ross and Steven Coons. Lang was then recruited back to Cambridge University’s Computer laboratory by Professor Maurice Wilkes.

The British Government arranged funds and resources that underwrote the activities of the Cambridge CAD Group in 1965 — the group founded by Wilkes and headed by Lang until 1975. In 1969, the group started developing solid modeling software on a Digital PDP-7 system using assembly language programming. “Unfortunately no one told us it was impossible,” says Lang. “But then Ian Braid turned up.”

Now, what about this Archimedes bloke?


Update 2013-09-08: Shecky R solved the mystery and now Barry Cipra has unsolved it, with a 1971 document that includes the statement: “It should be noted that Aleph-Null and C. A. Lang are not the same person.”


Black and white in one dimension

At the 50th anniversary of the March on Washington I find myself writing again about racial segregation, and specifically about Thomas C. Schelling’s mathematical model of how people wind up living in monochrome neighborhoods. I have mentioned Schelling’s model several times before (e.g., 2004, 2006, 2010). Now it’s the subject of my new “Computing Science” column in the September-October American Scientist.

My reason for returning to the subject yet again is not just that the issue never goes away. It turns out there’s a new theoretical development, which to some extent alters the interpretation of the model.

In Schelling’s model individuals have a fixed intolerance threshold, τ, between 0 and 1. If the fraction of like-colored neighbors is less than τ, the resident will seek a new home elsewhere. Most often, τ is set equal to 1/2, and so each person’s preference is simply not to be in a minority. Thus a totally integrated housing pattern, in which everyone has equal numbers of black and white neighbors, would be acceptable to all. But that’s not the outcome the model predicts.

In one of my earlier articles, I described Schelling’s findings as follows:

The model suggests that extreme segregation can arise spontaneously even when no one in the population desires that outcome…. Neither race seeks to escape or exclude the other; the only racial preference is of the mildest sort—a desire not to be entirely cut off from people of one’s own group. But that desire is enough to cause the two populations to separate like oil and water.

There may be versions of the model for which those statements are true, but it’s now clear that my description is too strongly worded when applied to the specific models that Schelling studied in his first publications on the subject, circa 1970.

Schelling began with one-dimensional models, where individuals are arranged along a line. In a 1971 paper he presented his results in “typewriter diagrams,” representing the races by ‘+’ and ‘o’ symbols, with dots to mark those individuals who are unhappy with the racial composition of their neighborhood. After repeatedly applying his movement procedure (which I’ll explain below), everyone is happy but the races have separated into homogeneous blocks.

Schelling's typewriter diagram from the 1971 paper.

Schelling also mentioned experiments in which the model becomes a game played with coins. Again, a random initial state evolves into a stable, somewhat segregated final state:

Schelling's coin version of the model: a random initial state, and the stable final state it evolves into.

The new theoretical discoveries concern one-dimensional models much like Schelling’s original. The first results were reported last year at the Symposium on the Theory of Computing by a group I shall identify as BIKK: Christina Brandt, Nicole Immorlica, Gautam Kamath and Robert Kleinberg (arXiv preprint). The findings are based on analytical methods rather than computer simulations.

The BIKK model is one-dimensional, but with the ends of the line joined to form a ring. Three parameters define an instance of the model: The population size N, the intolerance threshold τ and the neighborhood radius w, which specifies how far to the left and right each agent looks when assessing neighborhood composition. Each agent’s neighborhood includes 2w + 1 sites, including the agent itself. The BIKK analysis was confined to models with τ = 1/2, with N large and w \(\ll\) N. The system evolves through an exchange process: At each step, two unhappy agents of opposite color are chosen at random and made to switch places; with τ = 1/2, the exchange is guaranteed to make them both happy. The moves continue until no more qualifying pairs can be found.
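For readers who want to watch this process run, here is a rough Python rendering of the dynamics just described. It is my own code, not the BIKK group's; the max_steps cap is a safety valve that matters only if you stray from τ = 1/2.

import random

def run_bikk(N=2000, w=5, tau=0.5, max_steps=None, seed=0):
    """Random-exchange dynamics on a ring of N two-colored agents.
    An agent is unhappy when like-colored agents make up less than
    tau of its (2w+1)-site neighborhood, itself included.  At each
    step two randomly chosen unhappy agents of opposite color swap
    places; the loop stops when no qualifying pair remains (or after
    max_steps, a precaution needed only when tau != 1/2)."""
    rng = random.Random(seed)
    ring = [rng.randint(0, 1) for _ in range(N)]

    def unhappy(i):
        color = ring[i]
        like = sum(ring[(i + d) % N] == color for d in range(-w, w + 1))
        return like / (2 * w + 1) < tau

    steps = 0
    while max_steps is None or steps < max_steps:
        zeros = [i for i in range(N) if ring[i] == 0 and unhappy(i)]
        ones  = [i for i in range(N) if ring[i] == 1 and unhappy(i)]
        if not zeros or not ones:
            break                      # no qualifying pair left
        i, j = rng.choice(zeros), rng.choice(ones)
        ring[i], ring[j] = ring[j], ring[i]
        steps += 1
    return ring

Rescanning the whole ring after every swap is wasteful, but it keeps the sketch short and honest.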

The BIKK group was able to prove that this process always terminates; it can’t get stuck in an endless cycle where the same agents keep shuffling from place to place. Furthermore, they were able to characterize the degree of segregation in the final state: The mean length of the monochromatic runs is a polynomial function of w, the neighborhood size; it is not a function of N, the population size. This is an important distinction. It means the model with τ = 1/2 cannot account for the kind of metropolitan-scale segregation seen in many large American cities. Instead it predicts smaller, neighborhood-size clusters of each race.
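To put a number on the degree of segregation, one just tallies the maximal single-color blocks in the final configuration. A small helper (again my own sketch, with the ring wraparound handled explicitly) does the bookkeeping:

def mean_run_length(ring):
    """Average length of the maximal monochromatic runs on a ring."""
    N = len(ring)
    if len(set(ring)) == 1:
        return float(N)                      # a single all-one-color run
    # rotate the ring so that position 0 starts a fresh run
    start = next(i for i in range(N) if ring[i] != ring[i - 1])
    r = ring[start:] + ring[:start]
    runs, length = [], 1
    for a, b in zip(r, r[1:]):
        if b == a:
            length += 1
        else:
            runs.append(length)
            length = 1
    runs.append(length)
    return sum(runs) / len(runs)

Computing mean_run_length(run_bikk(N=..., w=...)) for a range of w values, with N held much larger than w, is one way to convince yourself that the answer tracks the neighborhood size rather than the population size.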

The key idea behind this result is the observation that whenever a monochromatic cluster reaches a size of w + 1, it can never be invaded or broken up by the opposite color. The BIKK workers prove that many such “firewall” clusters will almost surely evolve in any large-N system, and so they cannot grow very large before they bump into one another. As Kleinberg put it in an email: “What prevents segregation from happening on a massive scale is that smaller-scale segregation happens first.” For a little more on how the proof works, see my column; for a lot more, see the BIKK paper.

As noted above, the BIKK result considers only the case of τ = 1/2. In a subsequent paper, George Barmpalias, Richard Elwes and Andy Lewis-Pye analyzed the same family of models across the entire spectrum of τ values, from 0 to 1. They discovered an intriguing phase diagram, which includes regions of more severe segregation (i.e., longer monochromatic runs) both for higher values of τ and also for certain ranges of lower values. The latter result seems paradoxical: A greater tolerance for racial diversity leads to a more segregated society. But the explanation is not so hard to fathom: Agents willing to accept minority status are less likely to assemble into monochromatic “firewalls”; with a lower concentration of firewalls, the few that do develop have room to grow larger before they collide.
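One can get a crude feel for this τ-dependence by sweeping the threshold in the sketch above, using run_bikk and mean_run_length from the earlier blocks. Keep in mind that for τ ≠ 1/2 a swap no longer guarantees that both movers end up happy, so the step cap does real work and the numbers are only suggestive, not a reproduction of the Barmpalias-Elwes-Lewis-Pye analysis.

# a quick, small-scale sweep; the parameter values are arbitrary choices
for tau in (0.3, 0.4, 0.5, 0.6):
    final = run_bikk(N=600, w=3, tau=tau, max_steps=10000, seed=1)
    print(tau, round(mean_run_length(final), 1))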

What does all this tell us about race relations among real people in real cities? The mismatch between the predicted and the observed scale of segregated neighborhoods calls for some explanation. Maybe the prevalence of huge, city-scale racial ghettos means we are much more intolerant than we choose to believe—that the true value of τ in the American population is significantly greater than 1/2. Or, following the result of Barmpalias et al., maybe τ is somewhat less than 1/2, putting us into the “paradoxical” regime. Another possible explanation lies in the one-dimensionality of the solved models; two-dimensional versions of the Schelling model may have significantly different behavior. (Extending the analytic methods to the two-dimensional case looks difficult.)

I can pile on one more hypothesis: The specific models analyzed in these two papers are “zero temperature” models, without any sort of thermodynamic noise. Two agents can exchange places only if the move is strictly beneficial to both of them; unfavorable or even neutral moves never take place. The zero temperature assumption may not be the best approximation to the real world, where people have imperfect knowledge and sometimes act outside the bounds of strict rationality. Perturbed models that admit some small probability of unfavorable actions seem to predict quite different global behavior. But, again, including nonzero temperature effects makes analysis harder.

Finally, one might also entertain the possibility that an emotionally fraught and historically complex social institution is unlikely to yield all its secrets to a very simple, abstract mathematical model.


There’s one further point to make about the model itself. The prediction of self-limiting segregation in the τ = 1/2 case came as a surprise to me, but it should not have. A glance at Schelling’s own diagrams (including the one reproduced above) shows that the small scale of the racially divided neighborhoods could have been guessed as early as 1971—but Schelling did not call attention to this fact. It was apparently first noted about five years ago by Dejan Vinkovic and Alan Kirman and then by Dietrich Stauffer and S. Solomon.

The BIKK model is very similar to Schelling’s original scheme, but not quite identical. The main difference is that instead of exchanging pairs of unhappy agents, Schelling moved each unhappy agent to the nearest position where it would be happy. The displaced agent inserted itself between two neighboring sites, with all the intervening agents shifting left or right to make room. Sometimes such a move would alter the status of other agents in the old or the new neighborhood, so that a previously happy individual would become discontented, or an unhappy one would become satisfied without having to move. As a model of residential segregation, this mechanism is rather peculiar; households don’t elbow their way in between neighboring houses. Nevertheless, the model has one important property that distinguishes it from all the later variants: It is wholly deterministic. Nothing about the model is random except for the initial distribution of the agents. Given a specific initial configuration, the evolution of the system can be predicted with certainty.
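The insertion move is fussier to program than a simple swap. Here is a rough Python sketch of one possible encoding; it is my own reading of the rules, with the processing order, the tie-breaking, and the fate of agents who can't be made happy all guessed rather than taken from Schelling. The defaults w = 4 and tau = 0.5 are meant to echo the eight-nearest-neighbors rule of the 1971 paper, as I understand it.

def insertion_pass(line, tau=0.5, w=4):
    """One left-to-right sweep of a deterministic insertion model in
    the spirit of Schelling's 1971 scheme.  Each unhappy agent is
    pulled out of the line and re-inserted at the nearest spot where
    it would be happy (ties broken toward lower indices).  The list
    is modified in place; returns True if anyone moved."""

    def happy(seq, pos):
        color = seq[pos]
        nbrs = seq[max(0, pos - w):pos] + seq[pos + 1:pos + 1 + w]
        like = sum(1 for c in nbrs if c == color)
        return like >= tau * len(nbrs)

    moved = False
    i = 0
    while i < len(line):
        if happy(line, i):
            i += 1
            continue
        agent = line.pop(i)
        best = None
        for j in range(len(line) + 1):           # try every insertion point
            trial = line[:j] + [agent] + line[j:]
            if happy(trial, j) and (best is None or abs(j - i) < abs(best - i)):
                best = j
        if best is None:
            line.insert(i, agent)                # nowhere to be happy: stay put
        else:
            line.insert(best, agent)             # others shift to make room
            moved = True
        i += 1                                   # a move can slide an unexamined
                                                 # agent past the cursor; the next
                                                 # pass will pick it up
    return moved

Iterating insertion_pass on a random line until it returns False (with a cap on the number of passes as a precaution, since I haven't proved that this particular variant always settles down) yields a final configuration whose block structure can then be measured; mean_run_length above treats the array as a ring, which for long lines makes little difference.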

As far as I know, Schelling’s deterministic model has not been studied much since he worked on it more than 40 years ago. (One reason might be that although it’s easy to implement the model with pencil and paper or with coins, it becomes quite a mess when you try to encode the rules in a computer program.) I was curious about whether this model also predicted self-limiting segregation—in other words, whether the mean run length is a function of w or of N. Some experiments provide pretty clear evidence:

For each value of w from 1 through 30 the dots give the mean length of monochromatic runs averaged over 1,000 repetitions. In all cases the population size N is 1,000 × w.

Mean length of monochromatic runs as a function of w.

The mean run length appears to have a simple linear relation to w. This is consistent with the BIKK group’s findings for their random-exchange version of the model. They have proved that the mean run length of monochromatic blocks grows no faster than \(w^2\), and they believe the true rate of growth is probably linear in \(w\).

Posted in mathematics, social science | 3 Comments

The Mother Ditch

I was in Santa Fe for a couple of weeks. When I went out for my morning run, I followed a paved pathway along the “river” (a parched, sandy channel, even in the monsoon season). I found my way back to town on another path, the Acequia Madre Trail, which follows the course of what I took to be a small creek (also dry). Later I learned that the Acequia Madre is not a creek but an irrigation ditch. Indeed, the name means “mother ditch”: It is the primary artery of an irrigation network built 400 years ago when the Spanish first came to this valley.

The Acequia Madre de Santa Fe. Pronounced “ah-SAY-keyah.” As my wife Ros explains: The stress is always on the next-to-last syllable, but in this case the last syllable has two syllables.

The Acequia Madre de Santa Fe, near the rail yards and the School for the Deaf

Cris Moore, one of my hosts at the Santa Fe Institute, told me that water is run through the ditch at least once a year in a legal ritual that maintains the right of landowners to claim a share of the flow. (Cris learned this while “serving two consecutive four-year sentences” on the Santa Fe City Council.)

Onions at the Crawford stand at the Santa Fe Farmer’s Market, August 2013.

I learned more about acequias from Stanley Crawford, who grows various kinds of alliums and writes various kinds of books on a small farm northeast of Santa Fe, halfway to Taos. I met Stanley and his wife Rose Mary at the Santa Fe farmer’s market when I stopped to photograph a heap of their immaculate onions—as hefty, round and polished as bocce balls. I left their booth carrying away a couple of Stanley’s books. One is a novel, which I remember seeing but not reading in its original 1972 edition: Log of the S.S. The Mrs. Unguentine. The other book, which I began reading on the flight home to Boston, is Mayordomo: Chronicle of an Acequia in Northern New Mexico.

The book recounts one year of Crawford’s tenure as mayordomo of the Acequia de la Jara, a ditch serving two dozen small farms in the foothills of the Sangre de Cristo mountains. (This is a work of nonfiction, but names of people and places are altered.) The year begins with the spring cleaning, when all the parciantes—those entitled to a share of the water—must either show up to help clear the ditch of silt, sand and vegetation or else pay a worker to take their place. With a crew of 17, the process takes two days, plus part of a third day to “bring the water down” from a diversion dam upriver. The mayordomo awaits the stream’s arrival on his own land:

Pitchfork in hand, I walk up the winding channel to meet the descending water. By the time it gets here the flow will be pushing a large roll of debris, and I will walk it back down through my place and into the next if need be, to fork out as much as I can…. All is quiet. This is a strange wait, as if for a train on tracks that curve out of sight behind a hill…. I am about to walk a little further up when a brown and grey tongue slips into view around a bend and rolls toward me, its dry leaves hissing softly, twigs snapping, approaching like some creature, a giant snake that has assumed a somewhat disorderly disguise.

For the rest of the season, gravity does most of the work, but there are floods and droughts to cope with, plus muskrats and beavers, leaks and plugged channels. Most of all there are the landowners to cope with, for the acequia is not only a technological artifact; it is a social and political and cultural institution as well. The affairs of the ditch are governed by a three-member commission, elected by all the parciantes; the commissioners in turn appoint the mayordomo. The mayordomo is responsible not just for keeping the water flowing but also for equitably distributing it among the parciantes. And what is equitable? At the Acequia de la Jara a fair share is not to be defined by statute or contract but rather by history, by unwritten customs and habits, and by ongoing face-to-face negotiations. The process is not friction-free, but it seems to work.


The water in an irrigation ditch is a shared resource, like the unfenced common pastures that so famously became so tragic in early-modern England. In fact, the irrigation ditch is even more vulnerable to selfish exploitation than a pasture that’s equally available to all: Those on the upper reaches of the ditch have it within their power to divert all the flow into their own fields, depriving those below. Yet they choose not to do so. What explains their forbearance?

At a zanja near Heber, CA, on the Mexican border, headgates regulate flow into farmers’ fields.

Some years ago I spent a few days exploring the irrigation works of the Imperial Valley in southern California, another place where water is distributed through public ditches—though out there they call the ditches zanjas rather than acequias, and the ditch boss is not a mayordomo but a zanjero. The Imperial Valley is a site of intensive, industrialized, large-scale agriculture; the growers have big bucks at stake. And water is the limiting resource: Every drop must be allocated and accounted for. The headgate controlling flow into each field is opened and closed on a precise schedule. Padlocks on the gates discourage cheating. In this social setting, the zanjero becomes a referee and an enforcer—an agent of a higher power. The rules of the game are rigidly legalistic.

Back on the Acequia de la Jara, attitudes are more casual—or at least they were in the era Crawford is describing, circa 1985. There are no corporate farms here; indeed, some of the growers are really gardeners rather than farmers. Most years, there’s more than enough water to go around. And even when some form of rationing is needed, the arrangement is informal. Neighbors meet, discuss, compromise. They “divide the ditch,” giving all the water to the upper farms during half the week, then shifting the flow to the lower farms. Even in the driest periods, everybody gets “a little water for the chickens.”

The continuing success of this small-scale self-organizing system—which has apparently been flourishing in the back country of New Mexico for hundreds of years—gives me hope for the future of democracy. The community’s problem-solving ability is especially impressive given the recent ethnic diversification, as long-tenured Hispanics have had to absorb an influx of undocumented Anglos, with concomitant differences along dimensions of language, religion, wealth, and education.

A whole school of history associates irrigated agriculture with totalitarian governments. The underlying notion is that organizing the labor to tame a river requires a large-scale bureaucracy and a top-down power structure, often imposing slavery or peonage on part of the population. The historian Karl Wittfogel wrote of “oriental despotism” in the irrigation-based civilizations of Mesopotamia, Egypt and the Indus valley; according to his “hydraulic theory of the state,” irrigation and repression are causally linked. The early history of the acequias in northern New Mexico may well have followed the same pattern; it would be surprising if the Spanish conquistadors and missionaries did not rely on the forced labor of the indigenous Pueblo communities to dig the first ditches. Nevertheless, the little band of farmers (and writers) along the Acequia de la Jara has made something more cheering from this dark legacy: a social collaboration that appears to meet their own needs, along with some beautiful onions and garlic.

Posted in off-topic, social science, technology | 2 Comments

The wisterium of memory

Huge droopy pinkish purplish blooms at the Davis Square T station in Somerville, MA, May 2011.

Every spring it’s the same story. Those pendulous clusters of pinkish purplish blossoms come droopling off a pergola in a little park around the corner, and I spend the next half hour desperately searching my memory for the word. I know it’s in there, if only because I’ve been through this before, last year and the year before and the year before that. It’s in there but it’s stuck in one of the crevices of my cerebral cortex, and I can’t pry it loose. What is the name of that flower?

This year I took notes—looking over my own shoulder as I ransacked my mind for the missing word. The first term that came to me was hydrangea: obviously wrong but at least hitting the right general category of flowering plants. Verbena is a little closer—the flowers of verbena are approximately the right color—but the strange thing is, I didn’t know that (or I didn’t know that I knew that) until I googled it just now. I do know what forsythia looks like, and I know it has almost nothing in common with the plant I’m trying to name, but my brain offers it up anyway.

Notes recording the struggle to find the missing word.

The next entry in the list, purple flower, reveals the futility of trying to force the issue. That’s not my brain talking to me; that’s me talking to my brain, ordering it to give me the word I want. Vine may be in the same category, although the initial v could be significant (see below).

There’s a whole campfire story behind the next entry. Three boy scouts, their troop leader and two weeks’ worth of camping gear are crammed into a Volkswagen Beetle, circa 1962. As we putter down a back-country road, the scoutmaster leans his head out the window, inhales with exaggerated gusto, and exclaims, “Smell that honeysuckle!” All through the summer camp session we mocked him mercilessly for this excess of enthusiasm, but he has had his revenge: The phrase is tattooed on my left temporal lobe. It shows up every year when I’m entangled in this crazy quest to name the flower that’s not a hydrangea, not a verbena, not a forsythia, and not a honeysuckle either.

Southern: Me getting bossy with my brain again. When I lived in the Carolinas, I had a tree out front that was overgrown by the purple-flowered vine I can’t name.

v, v, v… Suddenly I’m sure the word starts with v, and this is the first moment when I feel like I’m making progress, that I have the scent, I’m on the trail. A v on the tip of my tongue, but not verbena, not vine…

Not vertiginous either.

And hydrangea—what are you doing here again? I already told you to go home; you’re not wanted.

Wissahickon Creek runs along the northwest boundary of the city of Philadelphia, toward Germantown and Chestnut Hill. There was a period in my youth when the road that winds along the creek was the route I took to visit a girlfriend. Late one rainy night I drove my ’59 Chevy through deep water covering the road and soon had a white-knuckle moment when I discovered I had no brakes. It’s not surprising that I would remember all this, but what could it have to do with my unnamed flower?

And then, all of a sudden, the neuron finally fires. Wissahickon is not about the creek or the road or the girlfriend or the wet brakes; it’s all about the word itself. Not v but wwis, wis, wis… I’ve got it now: wisteria! What a relief. (Until next spring.)


What is it about this genus Wisteria that so troubles my mind? There are scads of other plants I can’t identify, but that’s because I truly don’t know their names; no amount of rummaging in the attic of memory will ever succeed in retrieving what I never possessed. On the other hand, there’s a whole garden full of flowers—hydrangea, verbena, forsythia—that come effortlessly to mind, even when they are not wanted. Wisteria is different: something I own but can’t command.

Freud would doubtless opine that I must have suppressed the memory for some dark reason. I can play at that game. Wisteria is supposedly named for Caspar Wistar, notable Philadelphia physician circa 1800. He was inspired to study medicine after tending the wounded at the Battle of Germantown in 1777—an event that took place within cannon range of Wissahickon Creek. (Surely that can’t be a coincidence!) Dr. Wistar is also commemorated in the Wistar Institute of Anatomy and Biology in Philadelphia, a place I used to visit when I was a boy of 12 or 13. In that benighted age, impressionable children were allowed to wander freely through the Wistar’s collection of anatomical specimens, where various pickled body parts were sealed in gallon jars. What formaldehyde-soaked vision of horror am I trying so hard not to remember?

The trouble with this story is that until a few days ago I knew nothing of the connection between wisteria and Wistar. Not that Dr. Freud would let me off the hook for such a flimsy reason as that.

Of course wisteria is not the only word that my mind seems to be censoring. There’s that actress whose name I can never remember, although I have no trouble recalling that she’s the sister of Warren Beatty. And, along the same lines, the actor I can’t name, even though I can tell you he is the brother of James Arness. Can I find a childhood trauma to account for this distinctive histrionic sibling suppression complex?

Francis Crick wrote of our attempts to understand the brain: “We are deceived at every level by our introspection.” I believe it. I don’t think that looking over my own shoulder and taking notes as I free associate on the theme of pendulous purple flowers is going to resolve the mystery of memory. I’m fascinated by the process all the same.

In my latest American Scientist column I write on some other approaches to the question of what’s in Brian’s brain.

Posted in science | 6 Comments