Divisive factorials!

The other day I was derailed by this tweet from Fermat’s Library:

Inverse factorial tweet

The moment I saw it, I had to stop in my tracks, grab a scratch pad, and check out the formula. The result made sense in a rough-and-ready sort of way. Since the multiplicative version of \(n!\) goes to infinity as \(n\) increases, the “divisive” version should go to zero. And \(\frac{n^2}{n!}\) does exactly that; the quadratic function \(n^2\) grows much more slowly than the factorial \(n!\) for large enough \(n\):

\[\frac{1}{1}, \frac{4}{2}, \frac{9}{6}, \frac{16}{24}, \frac{25}{120}, \frac{36}{720}, \frac{49}{5040}, \frac{64}{40320}, \frac{81}{362880}, \frac{100}{3628800}.\]

But why does the quotient take the particular form \(\frac{n^2}{n!}\)? Where does the \(n^2\) come from?

To answer that question, I had to revisit the long-ago trauma of learning to divide fractions, but I pushed through the pain. Proceeding from left to right through the formula in the tweet, we first get \(\frac{n}{n-1}\). Then, dividing that quantity by \(n-2\) yields

\[\cfrac{\frac{n}{n-1}}{n-2} = \frac{n}{(n-1)(n-2)}.\]

Continuing in the same way, we ultimately arrive at:

\[n \mathbin{/} (n-1) \mathbin{/} (n-2) \mathbin{/} (n-3) \mathbin{/} \cdots \mathbin{/} 1 = \frac{n}{(n-1) (n-2) (n-3) \cdots 1} = \frac{n}{(n-1)!}\]

To recover the tweet’s stated result of \(\frac{n^2}{n!}\), just multiply numerator and denominator by \(n\). (To my taste, however, \(\frac{n}{(n-1)!}\) is the more perspicuous expression.)
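
Just to reassure myself, here is a quick check in Julia, the language I’ll be using below. It’s a throwaway sketch (the name divfact_desc is my own invention); foldl(//, n:-1:1) performs exactly this left-to-right division through the descending factors, and factorial is Julia’s built-in.

# Divide left to right through n, n-1, ..., 1 and compare with both forms of the answer.
divfact_desc(n) = foldl(//, n:-1:1)

all(divfact_desc(n) == n // factorial(n - 1) == n^2 // factorial(n) for n in 2:10)    # true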


I am a card-carrying factorial fanboy. You can keep your fancy Fibonaccis; this is my favorite function. Every time I try out a new programming language, my first exercise is to write a few routines for calculating factorials. Over the years I have pondered several variations on the theme, such as replacing \(\times\) with \(+\) in the definition (which produces triangular numbers). But I don’t think I’ve ever before considered substituting \(\mathbin{/}\) for \(\times\). It’s messy. Because multiplication is commutative and associative, you can define \(n!\) simply as the product of all the integers from \(1\) through \(n\), without worrying about the order of the operations. With division, order can’t be ignored. In general, \(x \mathbin{/} y \ne y \mathbin{/}x\), and \((x \mathbin{/} y) \mathbin{/} z \ne x \mathbin{/} (y \mathbin{/} z)\).

The Fermat’s Library tweet puts the factors in descending order: \(n, n-1, n-2, \ldots, 1\). The most obvious alternative is the ascending sequence \(1, 2, 3, \ldots, n\). What happens if we define the divisive factorial as \(1 \mathbin{/} 2 \mathbin{/} 3 \mathbin{/} \cdots \mathbin{/} n\)? Another visit to the schoolroom algorithm for dividing fractions yields this simple answer:

\[1 \mathbin{/} 2 \mathbin{/} 3 \mathbin{/} \cdots \mathbin{/} n = \frac{1}{2 \times 3 \times 4 \times \cdots \times n} = \frac{1}{n!}.\]

In other words, when we repeatedly divide while counting up from \(1\) to \(n\), the final quotient is the reciprocal of \(n!\). (I wish I could put an exclamation point at the end of that sentence!) If you’re looking for a canonical answer to the question, “What do you get if you divide instead of multiplying in \(n!\)?” I would argue that \(\frac{1}{n!}\) is a better candidate than \(\frac{n}{(n - 1)!}\). Why not embrace the symmetry between \(n!\) and its inverse?

Of course there are many other ways to arrange the \(n\) integers in the set \(\{1 \ldots n\}\). How many ways? As it happens, \(n!\) of them! Thus it would seem there are \(n!\) distinct ways to define the divisive \(n!\) function. However, looking at the answers for the two permutations discussed above suggests there’s a simpler pattern at work. Whatever element of the sequence happens to come first winds up in the numerator of a big fraction, and the denominator is the product of all the other elements. As a result, there are really only \(n\) different outcomes—assuming we stick to performing the division operations from left to right. For any integer \(k\) between \(1\) and \(n\), putting \(k\) at the head of the queue creates a divisive \(n!\) equal to \(k\) divided by all the other factors. We can write this out as:

\[\cfrac{k}{\frac{n!}{k}}, \text{ which can be rearranged as } \frac{k^2}{n!}.\]

And thus we also solve the minor mystery of how \(\frac{n}{(n-1)!}\) became \(\frac{n^2}{n!}\) in the tweet.
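
Here too a quick sketch in Julia keeps me honest: put \(k\) at the head of the list (the order of the remaining factors doesn’t matter, since they all land in the denominator), divide left to right, and compare with \(\frac{k^2}{n!}\). The helper name divfact_leading is mine, not anything standard.

# Put k first, then divide left to right through the remaining factors of 1..n.
function divfact_leading(n, k)
    rest = [i for i in 1:n if i != k]
    return foldl(//, [k; rest])
end

all(divfact_leading(7, k) == k^2 // factorial(7) for k in 1:7)    # true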

It’s worth noting that all of these functions converge to zero as \(n\) goes to infinity. Asymptotically speaking, \(\frac{1^2}{n!}, \frac{2^2}{n!}, \ldots, \frac{n^2}{n!}\) are all alike.


Ta dah! Mission accomplished. Problem solved. Done and dusted. Now we know everything there is to know about divisive factorials, right?

Well, maybe there’s one more question. What does the computer say? If you take your favorite factorial algorithm, and do as the tweet suggests, replacing any appearance of the \(\times\) (or *) operator with /, what happens? Which of the \(n\) variants of divisive \(n!\) does the program produce?

Here’s my favorite algorithm for computing factorials, in the form of a Julia program:

function mul!(n)
    if n == 1
        return 1
    else
        return n * mul!(n - 1)
    end
end

This is the algorithm that has introduced generations of nerds to the concept of recursion. In narrative form it says: If \(n\) is \(1\), then \(mul!(n)\) is \(1\). Otherwise, evaluate the function \(mul!(n-1)\), then multiply the result by \(n\). You might ask what happens if \(n\) is zero or negative. You might ask, but please don’t. For present purposes, \(n \in \mathbb{N}\). Starting with any positive \(n\), the sequence of recursive calls must eventually bottom out with \(n = 1\).

The function can be written more tersely using Julia’s one-liner style of definition:

mul!(n)  =  n == 1 ? 1 : n * mul!(n - 1)

The right side of the assignment statement is a conditional expression, or ternary operator, which has the form a ? b : c. Here a is a boolean test clause, which must return a value of either true or false. If a is true, clause b is evaluated, and the result becomes the value of the entire expression. Otherwise clause c is evaluated.

Just to be sure I’ve got this right, here are the first 10 factorials, as calculated by this program:

[mul!(n) for n in 1:10]
10-element Array{Int64,1}:
       1
       2
       6
      24
     120
     720
    5040
   40320
  362880
 3628800

Now let’s edit that definition and convert the single occurrence of * to a /, leaving everything else (except the name of the function) unchanged.

div!(n)  =  n == 1 ? 1 : n / div!(n - 1)

And here’s what comes back when we run the program for values of \(n\) from \(1\) through \(20\):

[div!(n) for n in 1:20]
20-element Array{Real,1}:
 1                 
 2.0               
 1.5               
 2.6666666666666665
 1.875             
 3.2               
 2.1875            
 3.657142857142857 
 2.4609375         
 4.063492063492063 
 2.70703125        
 4.432900432900433 
 2.9326171875      
 4.773892773892774 
 3.14208984375     
 5.092152292152292 
 3.338470458984375 
 5.391690662278897 
 3.523941040039063 
 5.675463855030418 

Huh? That sure doesn’t look like it’s converging to zero—not as \(\frac{1}{n!}\) or as \(\frac{n}{(n - 1)!}\). As a matter of fact, it doesn’t look like it’s going to converge at all. The graph below suggests the sequence is made up of two alternating components, both of which appear to be slowly growing toward infinity as well as diverging from one another.

Graph of div!(n) for n from 1 to 20, showing two alternating, slowly diverging branches

In trying to make sense of what we’re seeing here, it helps to change the output type of the div! function. Instead of applying the division operator /, which returns the quotient as a floating-point number, we can substitute the // operator, which returns an exact rational quotient, reduced to lowest terms.
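
For example, here is the difference at a glance:

1 / 3     # 0.3333333333333333, a Float64 approximation
1 // 3    # 1//3, an exact Rational{Int64}
2 // 6    # also 1//3; rationals are always reduced to lowest terms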

div!(n)  =  n == 1 ? 1 : n // div!(n - 1)

Here’s the sequence of values for n in 1:20:

20-element Array{Real,1}:
       1      
      2//1    
      3//2    
      8//3    
     15//8    
     16//5    
     35//16   
    128//35   
    315//128  
    256//63   
    693//256  
   1024//231  
   3003//1024 
   2048//429  
   6435//2048 
  32768//6435 
 109395//32768
  65536//12155
 230945//65536
 262144//46189 

The list is full of curious patterns. It’s a double helix, with even numbers and odd numbers zigzagging in complementary strands. The even numbers are not just even; they are all powers of \(2\). Also, they appear in pairs—first in the numerator, then in the denominator—and their sequence is nondecreasing. But there are gaps; not all powers of \(2\) are present. The odd strand looks even more complicated, with various small prime factors flitting in and out of the numbers. (The primes have to be small—smaller than \(n\), anyway.)

This outcome took me by surprise. I had really expected to see a much tamer sequence, like those I worked out with pencil and paper. All those jagged, jitterbuggy ups and downs made no sense. Nor did the overall trend of unbounded growth in the ratio. How could you keep dividing and dividing, and wind up with bigger and bigger numbers?

At this point you may want to pause before reading on, and try to work out your own theory of where these zigzag numbers are coming from. If you need a hint, you can get a strong one—almost a spoiler—by looking up the sequence of numerators or the sequence of denominators in the Online Encyclopedia of Integer Sequences.


Here’s another hint. A small edit to the div! program completely transforms the output. Just flip the final clause, changing n // div!(n - 1) into div!(n - 1) // n.

div!(n)  =  n == 1 ? 1 : div!(n - 1) // n

Now the results look like this:

10-element Array{Real,1}:
  1                    
 1//2                  
 1//6                  
 1//24                 
 1//120                
 1//720                
 1//5040               
 1//40320              
 1//362880             
 1//3628800

This is the inverse factorial function we’ve already seen, the series of quotients generated when you march left to right through an ascending sequence of divisors \(1 \mathbin{/} 2 \mathbin{/} 3 \mathbin{/} \cdots \mathbin{/} n\).

It’s no surprise that flipping the final clause in the procedure alters the outcome. After all, we know that division is not commutative or associative. What’s not so easy to see is why the sequence of quotients generated by the original program takes that weird zigzag form. What mechanism is giving rise to those paired powers of 2 and the alternation of odd and even?

I have found that it’s easier to explain what’s going on in the zigzag sequence when I describe an iterative version of the procedure, rather than the recursive one. (This is an embarrassing admission for someone who has argued that recursive definitions are easier to reason about, but there you have it.) Here’s the program:

function div!_iter(n)
    q = 1
    for i in 1:n
        q = i // q
    end
    return q
end

I submit that this looping procedure is operationally identical to the recursive function, in the sense that if div!(n) and div!_iter(n) both return a result for some positive integer n, it will always be the same result. Here’s my evidence:

[div!(n) for n in 1:20]    [div!_iter(n) for n in 1:20]
            1                         1//1    
           2//1                       2//1    
           3//2                       3//2    
           8//3                       8//3    
          15//8                      15//8    
          16//5                      16//5    
          35//16                     35//16   
         128//35                    128//35   
         315//128                   315//128  
         256//63                    256//63   
         693//256                   693//256  
        1024//231                  1024//231  
        3003//1024                 3003//1024 
        2048//429                  2048//429  
        6435//2048                 6435//2048 
       32768//6435                32768//6435 
      109395//32768              109395//32768
       65536//12155               65536//12155
      230945//65536              230945//65536
      262144//46189              262144//46189

To understand the process that gives rise to these numbers, consider the successive values of the variables \(i\) and \(q\) each time the loop is executed. Initially, \(i\) and \(q\) are both set to \(1\); hence, after the first passage through the loop, the statement q = i // q gives \(q\) the value \(\frac{1}{1}\). Next time around, \(i = 2\) and \(q = \frac{1}{1}\), so \(q\)’s new value is \(\frac{2}{1}\). On the third iteration, \(i = 3\) and \(q = \frac{2}{1}\), yielding \(\frac{i}{q} \rightarrow \frac{3}{2}\). If this is still confusing, try thinking of \(\frac{i}{q}\) as \(i \times \frac{1}{q}\). The crucial observation is that on every passage through the loop, \(q\) is inverted, becoming \(\frac{1}{q}\).
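
If it helps to watch the inversion happen, here is a throwaway variant of the loop (not part of the original program) that prints \(i\) and \(q\) on every pass:

function div!_trace(n)
    q = 1 // 1
    for i in 1:n
        q = i // q                  # q is inverted, then scaled by i
        println("i = $i   q = $q")
    end
    return q
end

div!_trace(6)
# i = 1   q = 1//1
# i = 2   q = 2//1
# i = 3   q = 3//2
# i = 4   q = 8//3
# i = 5   q = 15//8
# i = 6   q = 16//5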

If you unwind these operations, and look at the multiplications and divisions that go into each element of the series, a pattern emerges:

\[\frac{1}{1}, \quad \frac{2}{1}, \quad \frac{1 \cdot 3}{2}, \quad \frac{2 \cdot 4}{1 \cdot 3}, \quad \frac{1 \cdot 3 \cdot 5}{2 \cdot 4}, \quad \frac{2 \cdot 4 \cdot 6}{1 \cdot 3 \cdot 5}, \quad \ldots\]

The general form is:

\[\frac{1 \cdot 3 \cdot 5 \cdot \cdots \cdot n}{2 \cdot 4 \cdot \cdots \cdot (n-1)} \quad (\text{odd } n) \qquad \frac{2 \cdot 4 \cdot 6 \cdot \cdots \cdot n}{1 \cdot 3 \cdot 5 \cdot \cdots \cdot (n-1)} \quad (\text{even } n).\]


The functions \(1 \cdot 3 \cdot 5 \cdot \cdots \cdot n\) for odd \(n\) and \(2 \cdot 4 \cdot 6 \cdot \cdots \cdot n\) for even \(n\) have a name! They are known as double factorials, with the notation \(n!!\). Terrible terminology, no? Better to have named them “semi-factorials.” And if I didn’t know better, I would read \(n!!\) as “the factorial of the factorial.” The double factorial of \(n\) is defined as the product of \(n\) and all smaller positive integers of the same parity. Thus our peculiar sequence of zigzag quotients is simply \(\frac{n!!}{(n-1)!!}\).
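
As far as I know Julia has no double factorial in its standard library, but it’s a one-liner, and it lets us confirm the identification (the name doublefact is mine; div! repeats the definition given earlier):

# n!! = the product of n and all smaller positive integers of the same parity.
doublefact(n) = n <= 1 ? 1 : n * doublefact(n - 2)

div!(n) = n == 1 ? 1 : n // div!(n - 1)

all(div!(n) == doublefact(n) // doublefact(n - 1) for n in 2:20)    # true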

A 2012 article by Henry W. Gould and Jocelyn Quaintance (behind a paywall, regrettably) surveys the applications of double factorials. They turn up more often than you might guess. In the middle of the 17th century John Wallis came up with this identity:

\[\frac{\pi}{2} = \frac{2 \cdot 2 \cdot 4 \cdot 4 \cdot 6 \cdot 6 \cdots}{1 \cdot 3 \cdot 3 \cdot 5 \cdot 5 \cdot 7 \cdots} = \lim_{n \rightarrow \infty} \frac{((2n)!!)^2}{(2n + 1)!!(2n - 1)!!}\]

An even weirder series, involving the cube of a quotient of double factorials, sums to \(\frac{2}{\pi}\). That one was discovered by (who else?) Srinivasa Ramanujan.
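
A quick numerical sanity check of the Wallis form, just a sketch with a made-up name, computes the partial product of the first \(n\) factors, which equals \(\frac{((2n)!!)^2}{(2n + 1)!!\,(2n - 1)!!}\):

# Partial Wallis product: (2·2·4·4·6·6···)/(1·3·3·5·5·7···), truncated after n factors.
wallis(n) = prod((2k)^2 / ((2k - 1) * (2k + 1)) for k in 1:n)

wallis(10^6)    # roughly 1.5707955, creeping up on pi/2
pi / 2          # 1.5707963267948966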

Gould and Quaintance also discuss the double factorial counterpart of binomial coefficients. The standard binomial coefficient is defined as:

\[\binom{n}{k} = \frac{n!}{k! (n-k)!}.\]

The double version is:

\[\left(\!\binom{n}{k}\!\right) = \frac{n!!}{k!! (n-k)!!}.\]

Note that our zigzag numbers fit this description and therefore qualify as double factorial binomial coefficients. Specifically, they are the numbers:

\[\left(\!\binom{n}{1}\!\right) = \left(\!\binom{n}{n - 1}\!\right) = \frac{n!!}{1!! (n-1)!!}.\]

The regular binomial \(\binom{n}{1}\) is not very interesting; it is simply equal to \(n\). But the doubled version \(\left(\!\binom{n}{1}\!\right)\), as we’ve seen, dances a livelier jig. And, unlike the single binomial, it is not always an integer. (The only integer values are \(1\) and \(2\).)

Seeing the zigzag numbers as ratios of double factorials explains quite a few of their properties, starting with the alternation of evens and odds. We can also see why all the even numbers in the sequence are powers of \(2\). Consider the case of \(n = 6\), where the quotient is \(\frac{2 \cdot 4 \cdot 6}{1 \cdot 3 \cdot 5}\). The numerator is \(48\), which acquires a factor of \(3\) from the \(6\), but the denominator is \(15\), and the \(3\)s above and below cancel, leaving \(\frac{16}{5}\). Such cancellations will happen in every case. Whenever an odd factor \(m\) enters the even sequence, it must do so in the form \(2 \cdot m\), but at that point \(m\) itself must already be present in the odd sequence.


Is the sequence of zigzag numbers a reasonable answer to the question, “What happens when you divide instead of multiply in \(n!\)?” Or is the computer program that generates them just a buggy algorithm? My personal judgment is that \(\frac{1}{n!}\) is a more intuitive answer, but \(\frac{n!!}{(n - 1)!!}\) is more interesting.

Furthermore, the mere existence of the zigzag sequence broadens our horizons. As noted above, if you insist that the division algorithm must always chug along the list of \(n\) factors in order, at each stop dividing the number on the left by the number on the right, then there are only \(n\) possible outcomes, and they all look much alike. But the zigzag solution suggests wilder possibilities. We can formulate the task as follows. Take the set of factors \(\{1 \dots n\}\), select a subset, and invert all the elements of that subset; now multiply all the factors, both the inverted and the upright ones. If the inverted subset is empty, the result is the ordinary factorial \(n!\). If all of the factors are inverted, we get the inverse \(\frac{1}{n!}\). And if every second factor is inverted, starting with \(n - 1\), the result is an element of the zigzag sequence.

These are only a few among the many possible choices; in total there are \(2^n\) subsets of \(n\) items. For example, you might invert every number that is prime or a power of a prime \((2, 3, 4, 5, 7, 8, 9, 11, \dots)\). For small \(n\), the result jumps around but remains consistently less than \(1\):

Prime powers

If I were to continue this plot to larger \(n\), however, it would take off for the stratosphere. Prime powers get sparse farther out on the number line.
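
For anyone who wants to play the same game, here is a sketch of how I might code the subset-inversion idea in Julia; both function names are my own, and isprimepower is rolled by hand rather than pulled from a library:

# Multiply the factors 1..n, inverting those that the predicate selects.
function subset_div_fact(n, invert)
    q = 1 // 1
    for i in 1:n
        q = invert(i) ? q // i : q * i
    end
    return q
end

# A number is a prime power if dividing out its smallest prime factor leaves 1.
function isprimepower(m)
    m < 2 && return false
    p = 2
    while m % p != 0
        p += 1
    end
    while m % p == 0
        m = div(m, p)
    end
    return m == 1
end

[float(subset_div_fact(n, isprimepower)) for n in 1:12]    # jumps around but stays at or below 1 for small n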


Here’s a question. We’ve seen factorial variants that go to zero as \(n\) goes to infinity, such as \(1/n!\). We’ve seen other variants grow without bound as \(n\) increases, including \(n!\) itself, and the zigzag numbers. Are there any versions of the factorial process that converge to a finite bound other than zero?

My first thought was this algorithm:

function greedy_balance(n)
    q = 1
    while n > 0
        q = q > 1 ? q / n : q * n   # divide if q has drifted above 1, otherwise multiply
        n -= 1
    end
    return q
end

We loop through the integers from \(n\) down to \(1\), calculating the running product/quotient \(q\) as we go. At each step, if the current value of \(q\) is greater than \(1\), we divide by the next factor; otherwise, we multiply. This scheme implements a kind of feedback control or target-seeking behavior. If \(q\) gets too large, we reduce it; too small and we increase it. I conjectured that as \(n\) goes to infinity, \(q\) would settle into an ever-narrower range of values near \(1\).

Running the experiment gave me another surprise:

Greedy balance linear

That sawtooth wave is not quite what I expected. One minor peculiarity is that the curve is not symmetric around \(1\); the excursions above have higher amplitude than those below. But this distortion is more visual than mathematical. Because \(q\) is a ratio, the distance from \(1\) to \(10\) is the same as the distance from \(1\) to \(\frac{1}{10}\), but it doesn’t look that way on a linear scale. The remedy is to plot the log of the ratio:

Greedy balance

Now the graph is symmetric, or at least approximately so, centered on \(0\), which is the logarithm of \(1\). But a larger mystery remains. The sawtooth waveform is very regular, with a period of \(4\), and it shows no obvious signs of shrinking toward the expected limiting value of \(\log q = 0\). Numerical evidence suggests that as \(n\) goes to infinity the peaks of this curve converge on a value just above \(q = \frac{5}{3}\), and the troughs approach a value just below \(q = \frac{3}{5}\). (The corresponding base-\(10\) logarithms are roughly \(\pm 0.222\).) I have not worked out why this should be so. Perhaps someone will explain it to me.

The failure of this greedy algorithm doesn’t mean we can’t find a divisive factorial that converges to \(q = 1\). If we work with the logarithms of the factors, this procedure becomes an instance of a well-known computational problem called the number partitioning problem. You are given a set of real numbers and asked to divide it into two sets whose sums are equal, or as close to equal as possible. It’s a certifiably hard problem, but it has also been called (PDF) “the easiest hard problem.” For any given \(n\), we might find that inverting some other subset of the factors gives a product closer to \(1\). For small \(n\), we can solve the problem by brute force: Just look at all \(2^n\) subsets and pick the best one.

I have computed the optimal partitionings up to \(n = 30\), where there are a billion possibilities to choose from.

Optimum balance graph

The graph is clearly flatlining. You could use the same method to force convergence to any other value between \(0\) and \(n!\).
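
For the record, a brute-force search might look something like the following sketch (not necessarily the program that produced the graph above; the name best_balance is mine). It tries every subset of \(\{2 \dots n\}\), treats membership as “inverted,” and keeps the subset whose product lands closest to \(1\) on a log scale. Inverting \(1\) never changes anything, so it is left out.

# Brute-force number partitioning over the factors 2..n.
function best_balance(n)
    factors = 2:n
    best_logq = Inf
    best_subset = Int[]
    for mask in 0:(2^(n - 1) - 1)
        logq = 0.0
        subset = Int[]
        for (j, f) in enumerate(factors)
            if isodd(mask >> (j - 1))       # bit j set: invert this factor
                logq -= log(f)
                push!(subset, f)
            else
                logq += log(f)
            end
        end
        if abs(logq) < abs(best_logq)
            best_logq, best_subset = logq, subset
        end
    end
    return best_logq, best_subset
end

best_balance(10)    # log of the best achievable ratio, and which factors to invert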

And thus we have yet another answer to the question in the tweet that launched this adventure. What happens when you divide instead of multiply in \(n!\)? Anything you want.


A Room with a View

Overview photo of the Greene Street substation

On my visit to Baltimore for the Joint Mathematics Meetings a couple of weeks ago, I managed to score a hotel room with a spectacular scenic view. My seventh-floor perch overlooked the Greene Street substation of the Baltimore Gas and Electric Company, just around the corner from the Camden Yards baseball stadium.

Some years ago, writing about such technological landscapes, I argued that you can understand what you’re looking at if you’re willing to invest a little effort:

At first glance, a substation is a bewildering array of hulking steel machines whose function is far from obvious. Ponderous tanklike or boxlike objects are lined up in rows. Some of them have cooling fins or fans; many have fluted porcelain insulators poking out in all directions…. If you look closer, you will find there is a logic to this mélange of equipment. You can make sense of it. The substation has inputs and outputs, and with a little study you can trace the pathways between them.

If I were writing that passage now, I would hedge or soften my claim that an electrical substation will yield its secrets to casual observation. Each morning in Baltimore I spent a few minutes peering into the Greene Street enclosure. I was able to identify all the major pieces of equipment in the open-air part of the station, and I know their basic functions. But making sense of the circuitry, finding the logic in the arrangement of devices, tracing the pathways from inputs to outputs—I have to confess, with a generous measure of chagrin, that I failed to solve the puzzle. I think I have the answers now, but finding them took more than eyeballing the hardware.


Basics first. A substation is not a generating plant. BGE does not “make” electricity here. The substation receives electric power in bulk from distant plants and repackages it for retail delivery. At Greene Street the incoming supply is at 115,000 volts (or 115 kV). The output voltage is about a tenth of that: 13.8 kV. How do I know the voltages? Not through some ingenious calculation based on the size of the insulators or the spacing between conductors. In an enlargement of one of my photos I found an identifying plate on a transformer with the blurry and partially obscured but still legible notation “115/13.8 KV.”

The biggest hunks of machinery in the yard are the transformers (photo below), which do the voltage conversion. Each transformer is housed in a steel tank filled with oil, which serves as both insulator and coolant. Immersed in the oil bath are coils of wire wrapped around a massive iron core. Stacks of radiator panels, with fans mounted underneath, help cool the oil when the system is under heavy load. A bed of crushed stone under the transformer is meant to soak up any oil leaks and reduce fire hazards.

Photo of one of the Greene Street transformers

Electricity enters and leaves the transformer through the ribbed gray posts, called bushings, mounted atop the casing. A bushing is an insulator with a conducting path through the middle. It works like the rubber grommet that protects the power cord of an appliance where it passes through the steel chassis. The high-voltage inputs attach to the three tallest bushings, with red caps; the low-voltage bushings, with dark gray caps, are shorter and more closely spaced. Notice that each high-voltage input travels over a single slender wire, whereas each low-voltage output has three stout conductors. That’s because reducing the voltage to one-tenth increases the current tenfold.

What about the three slender gray posts just to the left of the high-voltage bushings? They are lightning arresters, shunting sudden voltage surges into the earth to protect the transformer from damage.

Photo of the underground feeder cables and their terminations (potheads)

Perhaps the most distinctive feature of this particular substation is what’s not to be seen. There are no tall towers carrying high-voltage transmission lines to the station. Clearing a right of way for overhead lines would be difficult and destructive in an urban center, so the high-voltage “feeders” run underground. In the photo above, near the bottom left corner, a bundle of three metal-sheathed cables emerges from the earth. Each cable, about as thick as a human forearm, has a copper or aluminum conductor running down the middle, surrounded by insulation. I suspect these cables are insulated with layers of paper impregnated with oil under pressure; some of the other feeders entering the station may be of a newer design, with solid plastic insulation. Each cable plugs into the bottom of a ceramic bushing, which carries the current to a copper wire at the top. (You can tell the wire is copper because of the green patina.)

Photo of the high-voltage bus bars

Connecting the feeder input to the transformer is a set of three hollow aluminum conductors called bus bars, held high overhead on steel stanchions and ceramic insulators. At both ends of the bus bars are mechanical switches that open like hinged doors to break the circuit. I don’t know whether these switches can be opened when the system is under power or whether they are just used to isolate components for maintenance after a feeder has been shut down. Beyond the bus bars, and hidden behind a concrete barrier, we can glimpse the bushings atop a different kind of switch, which I’ll return to below.

Three-phase waveform graph

At this point you might be asking, why does everything come in sets of three—the bus bars, the feeder cables, the terminals on the transformer? It’s because electric power is distributed as three-phase alternating current. Each conductor carries a voltage oscillating at 60 Hertz, with the three waves offset by one-third of a cycle. If you recorded the voltage between each of the three pairs of conductors (AB, AC, BC), you’d see a waveform like the one shown above.

At the other end of the conducting pathway, connected to three more bus bars on the low-voltage side of the transformer, is an odd-looking stack of three large drums. These

Photo of the stack of three large drums

are current-limiting reactors (no connection with nuclear reactors). They are coils of thick conductors wound on a stout concrete armature. Under normal operating conditions they have little effect on the transmission of power, but in the milliseconds following a short circuit, the sudden rush of current generates a strong magnetic field in the coils, absorbing the energy of the fault current and preventing damage to other equipment.


So those are the main elements of the substation I was able to spot from my hotel window. They all made sense individually, and yet I realized over the course of a few days that I didn’t really understand how it all works together. My doubts are easiest to explain with the help of a bird’s eye view of the substation layout, cribbed from Google Maps:

Google Maps view of Greene Street substation

My window vista was from off to the right, beyond the eastern edge of the compound. In the Google Maps view, the underground 115 kV feeders enter at the bottom or southern edge, and power flows northward through the transformers and the reactor coils, finally entering the building that occupies the northeast corner of the lot. Neither Google nor I can see inside this windowless building, but I know what’s in there, in a general way. That’s where the low-voltage (13.8 kV) distribution lines go underground and fan out to their various destinations in the neighborhood.

Let’s look more closely at the outdoor equipment. There are four high-voltage feeders, four transformers, and four sets of reactor coils. Apart from minor differences in geometry (and one newer-looking, less rusty transformer), these four parallel pathways all look alike. It’s a symmetric four-lane highway. Thus my first hypothesis was that four independent 115 kV feeders supply power to the station, presumably bringing it from larger substations and higher-voltage transmission lines outside the city.

However, something about the layout continued to bother me. If we label the four lanes of the highway from left to right, then on the high-voltage side, toward the bottom of the map view, it looks like there’s something connecting lanes 1 and 2, and there’s a similar link between lanes 3 and 4. From my hotel window the view of this device is blocked by a concrete barricade, and unfortunately the Google Maps image does not show it clearly either. (If you zoom in for a closer view, the goofy Google compression algorithm will turn the scene into a dreamscape where all the components have been draped in Saran Wrap.) Nevertheless, I’m quite sure of what I’m looking at. The device connecting the pairs of feeders is a high-voltage three-phase switch, or circuit breaker, something like the ones seen in the image at right (photographed at another substation, in Missouri). The function of this device is essentially the same as that of a circuit breaker in your home electrical panel. You can turn it off manually to shut down a circuit, but it may also “trip” automatically in response to an overload or a short circuit. The concrete barriers flanking the two high-voltage breakers at Greene Street hint at one of the problems with such switches. Interrupting a current of hundreds of amperes at more than 100,000 volts is like stopping a runaway truck: It requires absorbing a lot of energy. The switch does not always survive the experience.

When I first looked into the Greene Street substation, I was puzzled by the absence of breakers at the input end of each main circuit. I expected to see them there to protect the transformers and other components from overloads or lightning strikes. I think there are breakers on the low-voltage side, tucked in just behind the transformers and thus not clearly visible from my window. But there’s nothing on the high side. I could only guess that such protection is provided by breakers near the output of the next substation upstream, the one that sends the 115 kV feeders into Greene Street.

That leaves the question of why pairs of circuits within the substation are cross-linked by breakers. I drew a simplified diagram of how things are wired up:

Circuit sketch

Two adjacent 115 kV circuits run from bottom to top; the breaker between them connects corresponding conductors—left to left, middle to middle, right to right. But what’s the point of doing so?

I had some ideas. If one transformer were out of commission, the pathway through the breaker could allow power to be rerouted through the remaining transformer (assuming it could handle the extra load). Indeed, maybe the entire design simply reflects a high level of redundancy. There are four incoming feeders and four transformers, but perhaps only two are expected to operate at any given time. The breaker provides a means of switching between them, so that you could lose a circuit (or maybe two) and still keep all the lights on. After all, this is a substation supplying power to many large facilities—the convention center (where the math meetings were held), a major hospital, large hotels, the ball park, theaters, museums, high-rise office buildings. Reliability is important here.

After further thought, however, this scheme seemed highly implausible. There are other substation layouts that would allow any of the four feeders to power any of the four transformers, allowing much greater flexibility in handling failures and making more efficient use of all the equipment. Linking the incoming feeders in pairs made no sense.


I would love to be able to say that I solved this puzzle on my own, just by dint of analysis and deduction, but it’s not true. When I got home and began looking at the photographs, I was still baffled. The answer eventually came via Google, though it wasn’t easy to find. Before revealing where I went wrong, I’ll give a couple of hints, which might be enough for you to guess the answer.

Hint 1. I was led astray by a biased sample. I am much more familiar with substations out in the suburbs or the countryside, partly because they’re easier to see into. Most of them are surrounded by a chain-link fence rather than a brick wall. But country infrastructure differs from the urban stuff.

Hint 2. I was also fooled by geometry when I should have been thinking about topology. To understand what you’re seeing in the Greene Street compound, you have to get beyond individual components and think about how it’s all connected to the rest of the network.

The web offers marvelous resources for the student of infrastructure, but finding them can be a challenge. You might suppose that the BGE website would have a list of the company’s facilities, and maybe a basic tutorial on where Baltimore’s electricity comes from. There’s nothing of the sort (although the utility’s parent company does offer thumbnail descriptions of some of their generating plants). Baltimore City websites were a little more helpful—not that they explained any details of substation operation, but they did report various legal and regulatory filings concerned with proposals for new or updated facilities. From those reports I learned the names of several BGE installations, which I could take back to Google to use as search terms.

Photo of the Baltimore RESCO waste-to-energy plant

One avenue I pursued was figuring out where the high-voltage feeders entering Greene Street come from. I discovered a substation called Pumphrey about five miles south of the city, near BWI airport, which seemed to be a major nexus of transmission lines. In particular, four 115 kV feeders travel north from Pumphrey to a substation in the Westport neighborhood, which is about a mile south of downtown. The Pumphrey-Westport feeders are overhead lines, and I had seen them already. Their right of way parallels the light rail route I had taken into town from the airport. Beyond the Westport substation, which is next to a light rail stop of the same name, the towers disappear. An obvious hypothesis is that the four feeders dive underground at Westport and come up at Greene Street. This guess was partly correct: Power does reach Greene Street from Westport, but not exclusively.

At Westport BGE has recently built a small, gas-fired generating plant, to help meet peak demands. The substation is also near the Baltimore RESCO waste-to-energy power plant (photo above), which has become a local landmark. (It’s the only garbage burner I know that turns up on postcards sold in tourist shops.) Power from both of these sources could also make its way to the Greene Street substation, via Westport.

I finally began to make sense of the city’s wiring diagram when I stumbled upon some documents published by the PJM Interconnection, the administrator and coordinator of the power “pool” in the mid-Atlantic region. PJM stands for Pennsylvania–New Jersey–Maryland, but it covers a broader territory, including Delaware, Ohio, West Virginia, most of Virginia, and parts of Kentucky, Indiana, Michigan, and Illinois. Connecting to such a pool has important advantages for a utility. If an equipment failure means you can’t meet your customers’ demands for electricity, you can import power from elsewhere in the pool to make up the shortage; conversely, if you have excess generation, you can sell the power to another utility. The PJM supervises the market for such exchanges.

The idea behind power pooling is that neighbors can prop each other up in times of trouble; however, they can also knock each other down. As a condition of membership in the pool, utilities have to maintain various standards for engineering and reliability. PJM committees review plans for changes or additions to a utility’s network. It was a set of Powerpoint slides prepared for one such committee that first alerted me to my fundamental misconception. One of the slides included the map below, tracing the routes of 115 kV feeders (green lines) in and around downtown Baltimore.

Baltimore 115 kV ring from PJM map

I had been assuming—even though I should have known better—that the distribution network is essentially treelike, with lines radiating from each node to other nodes but never coming back together. For low-voltage distribution lines in sparsely settled areas, this assumption is generally correct. If you live in the suburbs or in a small town, there is one power line that runs from the local substation to your neighborhood; if a tree falls on it, you’re in the dark until the problem is fixed. There is no alternative route of supply. But that is not the topology of higher-voltage circuits. The Baltimore network consists of rings, where power can reach most nodes by following either of two pathways.

In the map we can see the four 115 kV feeders linking Pumphrey to Westport. From Westport, two lines run due north to Greene Street, then make a right turn to another station named Concord Street. (As far as I can tell, there is no Concord Street in Baltimore; there’s a Concord Road, but it’s miles away in the northwest corner of the city. The substation is actually at 750 East Pratt Street, occupying the lower floors of an 18-story office tower.) The lines continue east to Monument Street, then north again to Erdman, where the ring receives additional power from other lines coming down from the north. The ring then continues west to Center Street and returns to Westport, closing the loop. The arrangement has some clear advantages for reliability. You can break any one link in a ring without cutting power to any of the substations; the power simply flows around the ring in the other direction.

This double-ring architecture calls for a total reinterpretation of how the Greene Street substation works. I had imagined the four 115 kV inputs as four lanes of one-way traffic, all pouring into the substation and dead-ending in the four transformers. In reality we have just two roadways, both of which enter the substation and then leave again, continuing on to further destinations. And they are not one-way; they can both carry traffic in either direction. The transformers are like exit ramps that siphon off a portion of the traffic while the main stream passes by.

At Greene Street, two of the underground lines entering the compound come from Westport, but the other two proceed to Concord Street, the next station around the ring. What about the breakers that sit between the incoming and outgoing branches of each circuit? They open up the ring to isolate any section that experiences a serious failure. For example, a short circuit in one of the cables running between Greene Street and Concord Street would cause breakers at both of those stations to open up, but both stations would continue to receive power coming around the other branch of the loop.

This revised interpretation was confirmed by another document made available by PJM, this one written by BGE engineers as an account of their engineering practices for transmission lines and substations. It includes a schematic diagram of a typical downtown Baltimore substation. The diagram makes no attempt to reproduce the geometric layout of the components; it rearranges them to make the topology clearer.

Schematic diagram of a typical downtown Baltimore distribution substation

The two 115 kV feeders that run through the substation are shown as horizontal lines; the solid black squares in the middle are the breakers that join the pairs of feeders and thereby close the two rings that run through all the downtown substations. The transformers are the W-shaped symbols at the ends of the branch lines.

A mystery remains. One symbol in the schematic represents a disconnect switch, a rather simple mechanical device that generally cannot be operated when the power line is under load. Another symbol is identified in the BGE document as a circuit switcher, a more elaborate device capable of interrupting a heavy current. In the Greene Street photos, however, the switches at the two ends of the high-voltage bus bars appear almost identical. I’m not seeing any circuit switchers there. But, as should be obvious by now, I’m capable of misinterpreting what I see.


Glauber’s dynamics

Roy J. Glauber, Harvard physics professor for 65 years, longtime Keeper of the Broom at the annual Ig Nobel ceremony, and winner of a non-Ig Nobel, has died at age 93. Glauber is known for his work in quantum optics; roughly speaking, he developed a mathematical theory of the laser at about the same time that device was invented, circa 1960. His two main papers on the subject, published in Physical Review in 1963, did not meet with instant acclaim; the Nobel committee’s recognition of their worth came more than 40 years later, in 2005. A third paper from 1963, titled “Time-dependent statistics of the Ising model,” also had a delayed impact. It is the basis of a modeling algorithm now called Glauber dynamics, which is well known in the cloistered community of statistical mechanics but deserves wider recognition.

Before digging into the dynamics, however, let us pause for a few words about the man himself, drawn largely from the obituaries in the New York Times and the Harvard Crimson.

Glauber was a member of the first class to graduate from the Bronx High School of Science, in 1941. From there he went to Harvard, but left in his sophomore year, at age 18, to work in the theory division at Los Alamos, where he helped calculate the critical mass of fissile material needed for a bomb. After the war he finished his degree at Harvard and went on to complete a PhD under Julian Schwinger. After a few brief adventures in Princeton and Pasadena, he was back at Harvard in 1952 and never left. A poignant aspect of his life is mentioned briefly in a 2009 interview, where Glauber discusses the challenge of sustaining an academic career while raising two children as a single parent.


Here’s a glimpse of Glauber dynamics in action. Click the Go button, then try fiddling with the slider.

Interactive Ising simulation; the slider controls the temperature (initial readout 3.00)

In the computer program that drives this animation, the slider controls a variable representing temperature. At high temperature (slide the control all the way to the right), you’ll see a roiling, seething mass of colored squares, switching rapidly and randomly between light and dark shades. There are no large-scale or long-lived structures. At low temperature (slide to the left), the tableau congeals into a few writhing blobs of contrasting color. Then the minority blobs are likely to evaporate, and you’ll be left with an unchanging, monochromatic panel. (Occasionally the end point is not a monochromatic field; instead the panel is divided into broad stripes—horizontal, vertical, or diagonal. This is an artifact of the finite size of the lattice and the use of wraparound boundary conditions. On an infinite lattice, the stripes would not occur.) Between these extremes there’s some interesting behavior. Adjust the slider to a temperature near 2.27 and you can expect to see persistent fluctuations at all possible scales, from isolated individual blocks to patterns that span the entire array.

What we’re looking at here is a simulation of a model of a ferromagnet—the kind of magnet that sticks to the refrigerator. The model was introduced almost 100 years ago by Wilhelm Lenz and his student Ernst Ising. They were trying to understand the thermal behavior of ferromagnetic materials such as iron. If you heat a block of magnetized iron above a certain temperature, called the Curie point, it loses all traces of magnetization. Slow cooling below the Curie point allows it to spontaneously magnetize again, perhaps with the poles in a different orientation. The onset of ferromagnetism at the Curie point is an abrupt phase transition.

Lenz and Ising created a stripped-down model of a ferromagnet. In the two-dimensional version shown here, each of the small squares represents the spin vector of an unpaired electron in an iron atom. The vector can point in either of two directions, conventionally called up and down, which for graphic convenience are represented by two contrasting colors. There are \(100 \times 100 = 10{,}000\) spins in the array. This would be a minute sample of a real ferromagnet. On the other hand, the system has \(2^{10{,}000}\) possible states—quite an enormous number.

The essence of ferromagnetism is that adjacent spins “prefer” to point in the same direction. To put that more formally: The energy of a pair of neighboring spins is lower when they are parallel rather than antiparallel. For the array as a whole, the energy is minimized if all the spins point the same way, either up or down. Each spin contributes a tiny magnetic moment. When the spins are parallel, all the moments add up and the system is fully magnetized.

If energy were the only consideration, the Ising model would always settle into a magnetized configuration, but there is a countervailing influence: Heat tends to randomize the spin directions. At infinite temperature, thermal fluctuations completely overwhelm the spins’ tendency to align, and all states are equally likely. Because the vast majority of those \(2^{10{,}000}\) configurations have nearly equal numbers of up and down spins, the magnetization is negligible. At zero temperature, nothing prevents the system from condensing into the fully magnetized state. The interval between these limits is a battleground where energy and entropy contend for supremacy. Clearly, there must be a transition of some kind. For Lenz and Ising in the 1920s, the crucial question was whether the transition comes at a sharply defined critical temperature, as it does in real ferromagnets. A more gradual progression from one regime to the other would signal the model’s failure to capture important aspects of ferromagnet physics.

In his doctoral dissertation Ising investigated the one-dimensional version of the model—a chain or ring of spins, each one holding hands with its two nearest neighbors. The result was a disappointment: He found no abrupt phase transition. And he speculated that the negative result would also hold in higher dimensions. The Ising model seemed to be dead on arrival.

It was revived a decade later by Rudolf Peierls, who gave suggestive evidence for a sharp transition in the two-dimensional lattice. Then in 1944 Lars Onsager “solved” the two-dimensional model, showing that the phase transition does exist. The phase diagram looks like this:

Graph of the magnetization-versus-temperature phase diagram for the Ising model.

As the system cools, the salt-and-pepper chaos of infinite temperature evolves into a structure with larger blobs of color, but the up and down spins remain balanced on average (implying zero magnetization) down to the critical temperature \(T_C\). At that point there is a sudden bifurcation, and the system will follow one branch or the other to full magnetization at zero temperature.


If a model is classified as solved, is there anything more to say about it? In this case, I believe the answer is yes. The solution to the two-dimensional Ising model gives us a prescription for calculating the probability of seeing any given configuration at any given temperature. That’s a major accomplishment, and yet it leaves much of the model’s behavior unspecified. The solution defines the probability distribution at equilibrium—after the system has had time to settle into a statistically stable configuration. It doesn’t tell us anything about how the lattice of spins reaches that equilibrium when it starts from an arbitrary initial state, or how the system evolves when the temperature changes rapidly.

It’s not just the solution to the model that has a few vague spots. When you look at the finer details of how spins interact, the model itself leaves much to the imagination. When a spin reacts to the influence of its nearest neighbors, and those neighbors are also reacting to one another, does everything happen all at once? Suppose two antiparallel spins both decide to flip at the same time; they will be left in a configuration that is still antiparallel. It’s hard to see how they’ll escape repeating the same dance over and over, like people who meet head-on in a corridor and keep making mirror-image evasive maneuvers. This kind of standoff can be avoided if the spins act sequentially rather than simultaneously. But if they take turns, how do they decide who goes first?

Within the intellectual traditions of physics and mathematics, these questions can be dismissed as foolish or misguided. After all, when we look at the procession of the planets orbiting the sun, or at the colliding molecules in a gas, we don’t ask who takes the first step; the bodies are all in continuous and simultaneous motion. Newton gave us a tool, calculus, for understanding such situations. If you make the steps small enough, you don’t have to worry so much about the sequence of marching orders.

However, if you want to write a computer program simulating a ferromagnet (or simulating planetary motions, for that matter), questions of sequence and synchrony cannot be swept aside. With conventional computer hardware, “let everything happen at once” is not an option. The program must consider each spin, one at a time, survey the surrounding neighborhood, apply an update rule that’s based on both the state of the neighbors and the temperature, and then decide whether or not to flip. Thus the program must choose a sequence in which to visit the lattice sites, as well as a sequence in which to visit the neighbors of each site, and those choices can make a difference in the outcome of the simulation. So can other details of implementation. Do we look at all the sites, calculate their new spin states, and then update all those that need to be flipped? Or do we update each spin as we go along, so that spins later in the sequence will see an array already modified by earlier actions? The original definition of the Ising model is silent on such matters, but the programmer must make a commitment one way or another.

This is where Glauber dynamics enters the story. Glauber presented a version of the Ising model that’s somewhat more explicit about how spins interact with one another and with the “heat bath” that represents the influence of temperature. It’s a theory of Ising dynamics because he describes the spin system not just at equilibrium but also during transitional stages. I don’t know if Glauber was the first to offer an account of Ising dynamics, but the notion was certainly not commonplace in 1963.


There’s no evidence Glauber was thinking of his method as an algorithm suitable for computer implementation. The subject of simulation doesn’t come up in his 1963 paper, where his primary aim is to find analytic expressions for the distribution of up and down spins as a function of time. (He did this only for the one-dimensional model.) Nevertheless, Glauber dynamics offers an elegant approach to programming an interactive version of the Ising model. Assume we have a lattice of \(N\) spins. Each spin \(\sigma\) is indexed by its coordinates \(x, y\) and takes on one of the two values \(+1\) and \(-1\). Thus flipping a spin is a matter of multiplying \(\sigma\) by \(-1\). The algorithm for updating the lattice looks like this:

Repeat \(N\) times:

  1. Choose a spin \(\sigma_{x, y}\) at random.
  2. Sum the values of the four neighboring spins, \(S = \sigma_{x+1, y} + \sigma_{x-1, y} + \sigma_{x, y+1} + \sigma_{x, y-1}\). The possible values of \(S\) are \(\{-4, -2, 0, +2, +4\}\).
  3. Calculate \(\Delta E = 2 \, \sigma_{x, y} \, S\), the change in interaction energy if \(\sigma_{x, y}\) were to flip.
  4. If \(\Delta E \lt 0\), set \(\sigma_{x, y} = -\sigma_{x, y}\).
  5. Otherwise, set \(\sigma_{x, y} = -\sigma_{x, y}\) with probability \(\exp(-\Delta E/T)\), where \(T\) is the temperature.

Display the updated lattice.

Step 4 says: If flipping a spin will reduce the overall energy of the system, flip it. Step 5 says: Even if flipping a spin raises the energy, go ahead and flip it in a randomly selected fraction of the cases. The probability of such spin flips is the Boltzmann factor \(\exp(-\Delta E/T)\). This quantity goes to \(0\) as the temperature \(T\) falls to \(0\), so that energetically unfavorable flips are unlikely in a cold lattice. The probability approaches \(1\) as \(T\) goes to infinity, which is why the model is such a seething mass of fluctuations at high temperature.

(If you’d like to take a look at real code rather than pseudocode—namely the JavaScript program running the simulation above—it’s on GitHub.)
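
For readers who would rather see it in Julia, the language of the factorial experiments above, here is a minimal sketch of the same update rule. It is not the code behind the animation; the names glauber_sweep! and spin are my own, and the lattice uses wraparound boundaries.

# One sweep of N random-site updates on an L×L lattice of ±1 spins.
function glauber_sweep!(spin, T)
    L = size(spin, 1)
    wrap(i) = mod1(i, L)                     # periodic (wraparound) indexing
    for _ in 1:length(spin)
        x, y = rand(1:L), rand(1:L)          # step 1: choose a spin at random
        S = spin[wrap(x + 1), y] + spin[wrap(x - 1), y] +
            spin[x, wrap(y + 1)] + spin[x, wrap(y - 1)]      # step 2
        ΔE = 2 * spin[x, y] * S              # step 3: energy change if it flips
        if ΔE < 0 || rand() < exp(-ΔE / T)   # steps 4 and 5
            spin[x, y] = -spin[x, y]
        end
    end
    return spin
end

spin = rand([-1, 1], 100, 100)    # random initial configuration
for _ in 1:1000
    glauber_sweep!(spin, 2.27)    # a temperature near the critical point
end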

Glauber dynamics belongs to a family of methods called Markov chain Monte Carlo algorithms (MCMC). The idea of Markov chains was an innovation in probability theory in the early years of the 20th century, extending classical probability to situations where the next event depends on the current state of the system. Monte Carlo algorithms emerged at post-war Los Alamos, not long after Glauber left there to resume his undergraduate curriculum. He clearly kept up with the work of Stanislaw Ulam and other former colleagues in the Manhattan Project.

Within the MCMC family, the distinctive feature of Glauber dynamics is choosing spins at random. The obvious alternative is to march methodically through the lattice by columns and rows, examining every spin in turn. That procedure can certainly be made to work, but it requires care in implementation. At low temperature the Ising process is very nearly deterministic, since unfavorable flips are extremely rare. When you combine a deterministic flip rule with a deterministic path through the lattice, it’s easy to get trapped in recurrent patterns. For example, a subtle bug yields the same configuration of spins on every step, shifted left by a single lattice site, so that the pattern seems to slide across the screen. Another spectacular failure gives rise to a blinking checkerboard, where every spin is surrounded by four opposite spins and flips on every time step. Avoiding these errors requires much fussy attention to algorithmic details. (My personal experience is that the first attempt is never right.)

Choosing spins by throwing random darts at the lattice turns out to be less susceptible to clumsy mistakes. Yet, at first glance, the random procedure seems to have hazards of its own. In particular, choosing 10,000 spins at random from a lattice of 10,000 sites does not guarantee that every site will be visited once. On the contrary, a few sites will be sampled six or seven times, and you can expect that 3,679 sites (that’s \(1/e \times 10{,}000\)) will not be visited at all. Doesn’t that bias distort the outcome of the simulation? No, it doesn’t. After many iterations, all the sites will get equal attention.

The nasty bit in all Ising simulation algorithms is updating pairs of adjacent sites, where each spin is the neighbor of the other. Which one goes first, or do you try to handle them simultaneously? The column-and-row ordering maximizes exposure to this problem: Every spin is a member of such a pair. Other sequential algorithms—for example, visiting all the black squares of a checkerboard followed by all the white squares—avoid these confrontations altogether, never considering two adjacent spins in succession. Glauber dynamics is the Goldilocks solution. Pairs of adjacent spins do turn up as successive elements in the random sequence, but they are rare events. Decisions about how to handle them have no discernible influence on the outcome.


Years ago, I had several opportunities to meet Roy Glauber. Regrettably, I failed to take advantage of them. Glauber’s office at Harvard was in the Lyman Laboratory of Physics, a small isthmus building connecting two larger halls. In the 1970s I was a frequent visitor there, pestering people to write articles for Scientific American. It was fertile territory; for a few years, the magazine found more authors per square meter in Lyman Lab than anywhere else in the world. But I never knocked on Glauber’s door. Perhaps it’s just as well. I was not yet equipped to appreciate what he had to say.

Now I can let him have the last word. This is from the introduction to the paper that introduced Glauber dynamics:

If the mathematical problems of equilibrium statistical mechanics are great, they are at least relatively well-defined. The situation is quite otherwise in dealing with systems which undergo large-scale changes with time. The principles of nonequilibrium statistical mechanics remain in largest measure unformulated. While this lack persists, it may be useful to have in hand whatever precise statements can be made about the time-dependent behavior of statistical systems, however simple they may be.

Posted in computing, physics | 1 Comment

Empty nest season

I am past the stage of life when the kids go off to school in the fall, but nonetheless the house is feeling a bit desolate at this time of year. My companions of the summer have gone to earth or flown away.

Last spring a pair of robins built two-and-a-half nests on a sheltered beam just outside my office door. They raised two chicks that fledged by the end of June, and then two more in August. Both clutches of eggs were incubated in the same nest (middle photo below), which was pretty grimy by the end of the season. A second nest (upper photo) served as a hangout for the nonbrooding parent. I came to think of it as the man-cave, although I’m not at all sure about the sex of those birds. As for the half nest, I don’t know why that project was abandoned, or why it was started in the first place.

Three robin nests.

Elsewhere, a light fixture in the carport has served as a nesting platform for a phoebe each summer we’ve lived here. Is it the same bird every year? I like to think so, but if I can’t even identify a bird’s sex I have little hope of recognizing individuals. This year, after the tenant decamped, I discovered an egg that failed to hatch.

Phoebe nest

We also had house wrens in residence—noisy neighbors, constantly partying or quarreling, I can never tell the difference. It was like living next to a frat house. I have no photo of their dwelling: It fell apart in my hands.

Paper wasp nests

Under the eaves above our front door we hosted several small colonies of paper wasps. All summer I watched the slow growth of these structures with their appealing symmetries and their equally interesting imperfections. (Skilled labor shortage? Experiments in non-Euclidean geometry?) I waited until after the first frost to cut down the nests, thinking they were abandoned, but I discovered a dozen moribund wasps still huddling behind the largest apartment block. They were probably fertile females looking for a place to overwinter. If they survive, they’ll likely come back to the same spot next year—or so I’ve learned from Howard E. Evans, my go-to source of wasp wisdom.

Another mysterious dwelling unit clung to the side of a rafter in the carport. It was a smooth, fist-size hunk of mud with no visible entrances or exits. When I cracked it open, I found several hollow chambers, some empty, some occupied by decomposing larvae or prey. Last year in the same place we had a few delicate tubes built by mud-dauber wasps, but this one is an industrial-strength creation I can’t identify. Any ideas?

Mud dauber wasp nest, cross section

The friends I’ll miss most are not builders but squatters. All summer we have shared our back deck with a population of minifrogs—often six or eight at a time—who took up residence in tunnel-like spaces under flowerpots. In nice weather they would join us for lunch alfresco.

Frog under flowerpot video

As of today two frogs are still hanging on, and I worry they will freeze in place. I should move the flowerpots, I think, but it seems so inhospitable.

May everyone return next year.

Posted in biology | Leave a comment

Another technological tragedy

One Thursday afternoon last month, dozens of fires and explosions rocked three towns along the Merrimack River in Massachusetts. By the end of the day 131 buildings were damaged or destroyed, one person was killed, and more than 20 were injured. Suspicion focused immediately on the natural gas system. It looked like a pressure surge in the pipelines had driven gas into homes where stoves, heaters, and other appliances were not equipped to handle the excess pressure. Earlier this week the National Transportation Safety Board released a brief preliminary report supporting that hypothesis.

A house in Lawrence, Mass., burned on September 13, 2018, as a result of a natural gas pressure surge. (Image from the NTSB report.)

I had believed such a catastrophe was all but impossible. The natural gas industry has many troubles, including chronic leaks that release millions of tons of methane into the atmosphere, but I had thought that pressure regulation was a solved problem. Even if someone turned the wrong valve, failsafe mechanisms would protect the public. Evidently not. (I am not an expert on natural gas. While working on my book Infrastructure, I did some research on the industry and the technology, toured a pipeline terminal, and spent a day with a utility crew installing new gas mains in my own neighborhood. The pages of the book that discuss natural gas are online here.)


The hazards of gas service were already well known in the 19th century, when many cities built their first gas distribution systems. Gas in those days was not “natural” gas; it was a product manufactured by roasting coal, or sometimes the tarry residue of petroleum refining, in an atmosphere depleted of oxygen. The result was a mixture of gases, including methane and other hydrocarbons but also a significant amount of carbon monoxide. Because of the CO content, leaks could be deadly even if the gas didn’t catch fire.

Every city needed its own gasworks, because there were no long-distance pipelines. The output of the plant was accumulated in a gasholder, a gigantic tank that confined the gas at low pressure—less than one pound per square inch above atmospheric pressure (a unit of measure known as pounds per square inch gauge, or psig). The gas was gently wafted through pipes laid under the street to reach homes at a pressure of 1/4 or 1/2 psig. Overpressure accidents were unlikely because the entire system worked at the same modest pressure. As a matter of fact, the greater risk was underpressure. If the flow of gas was interrupted even briefly, thousands of pilot lights would go out; then, when the flow resumed, unburned toxic gas would seep into homes. Utility companies worked hard to ensure that would never happen.

Gasholders looming over a neighborhood in Genoa, Italy, once held manufactured gas for use at a steel mill. The photograph was made in 2001; Google Maps shows the tanks have since been demolished.

Gas technology has evolved a great deal since the gaslight era. Long-distance pipelines carry natural gas across continents at pressures of 1,000 psig or more. At the destination, the gas is stored in underground cavities or as a cryogenic liquid. It enters the distribution network at pressures in the neighborhood of 100 psig. The higher pressures allow smaller diameter pipes to serve larger territories. But the pressure must still be reduced to less than 1 psig before the gas is delivered to the customer. Having multiple pressure levels complicates the distribution system and requires new safeguards against the risk of high-pressure gas going where it doesn’t belong. Apparently those safeguards didn’t work last month in the Merrimack valley.

Liquefied natural gas storage tanks in Everett, Mass.

Cryogenic storage tanks in Everett, Mass., near Boston, hold liquefied natural gas that supplies utilities in surrounding communities.

The gas system in that part of Massachusetts is operated by Columbia Gas, a subsidiary of a company called NiSource, with headquarters in Indiana. At the time of the conflagration, contractors for Columbia were upgrading distribution lines in the city of Lawrence and in two neighboring towns, Andover and North Andover. The two-tier system had older low-pressure mains—including some cast-iron pipes dating back to the early 1900s—fed by a network of newer lines operating at 75 psig. Fourteen regulator stations handled the transfer of gas between systems, maintaining a pressure of 1/2 psig on the low side.

The NTSB preliminary report gives this account of what happened around 4 p.m. on September 13:

The contracted crew was working on a tie-in project of a new plastic distribution main and the abandonment of a cast-iron distribution main. The distribution main that was abandoned still had the regulator sensing lines that were used to detect pressure in the distribution system and provide input to the regulators to control the system pressure. Once the contractor crews disconnected the distribution main that was going to be abandoned, the section containing the sensing lines began losing pressure.

As the pressure in the abandoned distribution main dropped about 0.25 inches of water column (about 0.01 psig), the regulators responded by opening further, increasing pressure in the distribution system. Since the regulators no longer sensed system pressure they fully opened allowing the full flow of high-pressure gas to be released into the distribution system supplying the neighborhood, exceeding the maximum allowable pressure.

When I read those words, I groaned. The cause of the accident was not a leak or an equipment failure or a design flaw or a worker turning the wrong valve. The pressure didn’t just creep up beyond safe limits while no one was paying attention; the pressure was driven up by the automatic control system meant to keep it in bounds. The pressure regulators were “trying” to do the right thing. Sensor readings told them the pressure was falling, and so the controllers took corrective action to keep the gas flowing to customers. But the feedback loop the regulators relied on was not in fact a loop. They were measuring pressure in one pipe and pumping gas into another.
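
To see how fast such a non-loop runs away, here is a toy simulation of a regulator whose sensor watches the abandoned main while its valve feeds the live one. Every number and the control law itself are invented for illustration; this is a cartoon of the failure mode, not a model of the actual Columbia Gas equipment.

```javascript
// A toy illustration of a feedback "loop" that isn't a loop.
const setPoint = 0.5;          // desired low-side pressure, psig
let valve = 0.1;               // valve opening, 0 (closed) to 1 (fully open)
let livePressure = 0.5;        // pressure in the new main the valve actually feeds
let sensedPressure = 0.5;      // pressure in the abandoned main the sensor watches

for (let t = 0; t < 20; t++) {
  // The abandoned main is bleeding down; that's all the regulator can see.
  sensedPressure = Math.max(0, sensedPressure - 0.05);
  // Integral-style control law: keep opening the valve while sensed pressure is low.
  valve = Math.min(1, Math.max(0, valve + 2 * (setPoint - sensedPressure)));
  // Meanwhile the live main, which the sensor ignores, tracks the valve opening.
  livePressure = 75 * valve;   // wide open admits the full 75-psig supply
  console.log(t, sensedPressure.toFixed(2), valve.toFixed(2), livePressure.toFixed(1));
}
```

Within three or four steps the valve is pinned wide open and the pressure in the live main heads toward the full 75 psig of the supply network, while the only pressure the regulator can see keeps falling.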

Schematic diagram of the gas distribution system, with a misconfigured control system measuring pressure in an abandoned pipeline.

The conjectured cause of the fires and explosions in Lawrence and nearby towns is a misconfigured pressure-control system, according to the NTSB preliminary report. Service was switched to a new low-pressure gas main, but the pressure regulator continued to monitor sensors attached to the old pipeline, now abandoned and empty.

The NTSB’s preliminary report offers no conclusions or recommendations, but it does note that the contractor in Lawrence was following a “work package” prepared by Columbia Gas, which did not mention moving or replacing the pressure sensors. Thus if you’re looking for someone to blame, there’s a hint about where to point your finger. The clue is less useful, however, if you’re hoping to understand the disaster and prevent a recurrence. “Make sure all the parts are connected” is doubtless a good idea, but better still is building a failsafe system that will not burn the city down when somebody goofs.

Suppose you’re taking a shower, and the water feels too warm. You nudge the mixing valve toward cold, but the water gets hotter still. When you twist the valve a little further in the same direction, the temperature rises again, and the room fills with steam. In this situation, you would surely not continue turning the knob until you were scalded. At some point you would get out of the shower, shut off the water, and investigate. Maybe the controls are mislabeled. Maybe the plumber transposed the pipes.

Since you do so well controlling the shower, let’s put you in charge of regulating the municipal gas service. You sit in a small, windowless room, with your eyes on a pressure gauge and your hand on a valve. The gauge has a pointer indicating the measured pressure in the system, and a red dot (called a bug) showing the desired pressure, or set point. If the pointer falls below the bug, you open the valve a little to let in more gas; if the pointer drifts up too high, you close the valve to reduce the flow. (Of course there’s more to it than just open and close. For a given deviation from the set point, how far should you twist the valve handle? Control theory answers this question.)
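
In the simplest case the control law is nothing more than a proportional response, sketched below; choosing the gain Kp, and deciding whether to add integral and derivative terms, is where control theory earns its keep. (The function and its parameters are illustrative, not drawn from any real regulator.)

```javascript
// A proportional control law for the windowless-room operator.
// Kp is the gain: how hard to twist the valve for a given deviation from the set point.
function adjustValve(valve, setPoint, measuredPressure, Kp = 0.5) {
  const error = setPoint - measuredPressure;   // pointer below the bug => positive error
  const newValve = valve + Kp * error;         // open a little when pressure is low
  return Math.min(1, Math.max(0, newValve));   // valve travel is limited to [0, 1]
}
```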

It’s worth noting that you could do this job without any knowledge of what’s going on outside the windowless room. You needn’t give a thought to the nature of the “plant,” the system under control. What you’re controlling is the position of the needle on the gauge; the whole gas distribution network is just an elaborate mechanism for linking the valve you turn with the gauge you watch. Many automatic control systems operate in exactly this mindless mode. And they work fine—until they don’t.

As a sentient being, you do in fact have a mental model of what’s happening outside. Just as the control law tells you how to respond to changes in the state of the plant, your model of the world tells you how the plant should respond to your control actions. For example, when you open the valve to increase the inflow of gas, you expect the pressure to increase. (Or, in some circumstances, to decrease more slowly. In any event, the sign of the second derivative should be positive.) If that doesn’t happen, the control law would call for making an even stronger correction, opening the valve further and forcing still more gas into the pipeline. But you, in your wisdom, might pause to consider the possible causes of this anomaly. Perhaps pressure is falling because a backhoe just ruptured a gas main. Or, as in Lawrence last month, maybe the pressure isn’t actually falling at all; you’re looking at sensors plugged into the wrong pipes. Opening the valve further could make matters worse.

Could we build an automatic control system with this kind of situational awareness? Control theory offers many options beyond the simple feedback loop. We might add a supervisory loop that essentially controls the controller and sets the set point. And there is an extensive literature on predictive control, where the controller has a built-in mathematical model of the plant, and uses it to find the best trajectory from the current state to the desired state. But neither of these techniques is commonly used for the kind of last-ditch safety measures that might have saved those homes in the Merrimack Valley. More often, when events get too weird, the controller is designed to give up, bail out, and leave it to the humans. That’s what happened in Lawrence.

Minutes before the fires and explosions occurred, the Columbia Gas monitoring center in Columbus, Ohio [probably a windowless room], received two high-pressure alarms for the South Lawrence gas pressure system: one at 4:04 p.m. and the other at 4:05 p.m. The monitoring center had no control capability to close or open valves; its only capability was to monitor pressures on the distribution system and advise field technicians accordingly. Following company protocol, at 4:06 p.m., the Columbia Gas controller reported the high-pressure event to the Meters and Regulations group in Lawrence. A local resident made the first 9-1-1 call to Lawrence emergency services at 4:11 p.m.

Columbia Gas shut down the regulator at issue by about 4:30 p.m.


I admit to a morbid fascination with stories of technological disaster. I read NTSB accident reports the way some people consume murder mysteries. The narratives belong to the genre of tragedy. In using that word I don’t mean just that the loss of life and property is very sad. These are stories of people with the best intentions and with great skill and courage, who are nonetheless overcome by forces they cannot master. The special pathos of technological tragedies is that the engines of our destruction are machines that we ourselves design and build.

Looking on the sunnier side, I suspect that technological tragedies are more likely than Oedipus Rex or Hamlet to suggest a practical lesson that might guide our future plans. Let me add two more examples that seem to have plot elements in common with the Lawrence gas disaster.

First, the meltdown at the Three Mile Island nuclear power plant in 1979. In that event, a maintenance mishap was detected by the automatic control system, which promptly shut down the reactor, just as it was supposed to do, and started emergency pumps to keep the uranium fuel rods covered with cooling water. But in the following minutes and hours, confusion reigned in the control room. Because of misleading sensor readings, the crowd of operators and engineers believed the water level in the reactor was too high, and they struggled mightily to lower it. Later they realized the reactor had been running dry all along.

Second, the crash of Air France 447, an overnight flight from Rio de Janeiro to Paris, in 2009. In this case the trouble began when ice at high altitude clogged pitot tubes, the sensors that measure airspeed. With inconsistent and implausible speed inputs, the autopilot and flight-management systems disengaged and sounded an alarm, basically telling the pilots “You’re on your own here.” Unfortunately, the pilots also found the instrument data confusing, and formed the erroneous opinion that they needed to pull the nose up and climb steeply. The aircraft entered an aerodynamic stall and fell tail-first into the ocean with the loss of all on board.

In these events no mechanical or physical fault made an accident inevitable. In Lawrence the pipes and valves functioned normally, as far as I can tell from press reports and the NTSB report. Even the sensors were working; they were just in the wrong place. At Three Mile Island there were multiple violations of safety codes and operating protocols; nevertheless, if either the automatic or the human controllers had correctly diagnosed the problem, the reactor would have survived. And the Air France aircraft over the Atlantic was airworthy to the end. It could have flown on to Paris if only there had been the means to level the wings and point it in the right direction.

All of these events feel like unnecessary disasters—if we were just a little smarter, we could have avoided them—but the fires in Lawrence are particularly tormenting in this respect. With an aircraft 35,000 feet over the ocean, you can’t simply press Pause when things don’t go right. Likewise a nuclear reactor has no safe-harbor state; even after you shut down the fission chain reaction, the core of the reactor generates enough heat to destroy itself. But Columbia Gas faced no such constraints in Lawrence. Even if the pressure-regulating system is not quite as simple as I have imagined it, there is always an escape route available when parameters refuse to respond to control inputs. You can just shut it all down. Safeguards built into the automatic control system could do that a lot more quickly than phone calls from Ohio. The service interruption would be costly for the company and inconvenient for the customers, but no one would lose their home or their life.
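
What might such a built-in safeguard look like? Here is one crude sketch of a watchdog that notices when the world stops responding the way the controller’s implicit model says it should: if repeated “open the valve” commands fail to raise the sensed pressure, it concludes that something is deeply wrong and calls for a shutdown rather than pushing harder. The strike count, the threshold, and the shutdown action are all hypothetical.

```javascript
// A sketch of a last-ditch sanity check, not any real utility's protection logic.
// If we keep commanding more gas and the sensed pressure refuses to respond,
// stop controlling and shut down.
function makeWatchdog(maxStrikes = 3) {
  let strikes = 0;
  let lastPressure = null;
  return function check(valveCommandIncreased, pressure) {
    if (lastPressure !== null && valveCommandIncreased && pressure <= lastPressure) {
      strikes += 1;        // we asked for more gas, but pressure did not rise
    } else {
      strikes = 0;         // plant responded as expected; reset the count
    }
    lastPressure = pressure;
    return strikes >= maxStrikes ? "SHUT_DOWN" : "CONTINUE";
  };
}
```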

Control theory and control engineering are now embarking on their greatest adventure ever: the design of self-driving cars and trucks. Next year we may see the first models without a steering wheel or a brake pedal—there goes the option of asking the driver (passenger?) to take over. I am rooting for this bold undertaking to succeed. I am also reminded of a term that turns up frequently in discussions of Athenian tragedy: hubris.

Posted in computing, technology | 22 Comments