## Sudden Deaf

My erstwhile employer, mentor, and dearest friend was Dennis Flanagan, who edited Scientific American for 37 years. He is the larger of the two aquatic specimens in the photograph below.

One of the quirks of life with Dennis was that he didn’t hear well, as a result of childhood ear infections. In an unpublished memoir he lists his deafness as a major influence on his path through life. It was a hardship in school, because he missed much of what his teachers were saying. On the other hand, it kept him out of the military in World War II.

Later in life, hearing aids helped considerably, but only on one side. When we went to lunch, I learned to sit to his right, so that I could speak to the better ear. When we took someone out to lunch, the guest got the favored chair. In our monthly editorial meetings, however, he turned his deaf ear to Gerard Piel, the magazine’s co-founder and publisher. (They didn’t always get along.) In Dennis’s last years, after both of us had left the magazine, we would take long walks through Lower Manhattan, with stops in coffee shops and sojourns on park benches, and again I made sure I was the right-hand man. Dennis died in 2005. I miss him all the time.

Although I was always aware of Dennis’s hearing impairment, I never had an inkling of what his asymmetric sensory experience might feel like from inside his head. Now I have a chance to find out. A few days ago I had a sudden failure of hearing in my left ear. At the time I had no idea what was happening, so I can’t reconstruct an exact chronology, but I think the ear went from normal function to zilch in a matter of seconds or minutes. It was like somebody pulled the plug.

I have since learned that this is a rare phenomenon (5 to 20 cases per 100,000 population) but well-known to the medical community. It has a name: Sudden Sensorineural Hearing Loss. It is a malfunction of the cochlea, the inner-ear transducer between mechanical vibration and neural activity. An audiological exam confirmed that my eardrum and the delicate linkage of tiny bones in the middle ear are functioning normally, but the signal is not getting through to the brain. In most cases of SSNHL, the cause is never identified. I’m under treatment, and there’s a decent chance that at least some level of hearing will be restored.

I don’t often write about matters this personal, and I’m not doing so now to whine about my fate or to elicit sympathy. I want to record what I’m going through because I find it fascinating as well as distressing. A great deal of what we know about the human brain comes from accidents and malfunctions, and now I’m learning some interesting lessons at first hand.

The obvious first-order effect of losing an ear is cutting in half the amplitude of the received acoustic signal. This is perhaps the least disruptive aspect of the impairment, and the easiest to mitigate.

The second major effect is more disturbing: trouble locating the source of a sound. Binaural hearing is key to localization. For low-pitched sounds, with wavelengths greater than the diameter of the head, the brain detects the phase difference between waves reaching the two ears. The phase measurement can yield an angular resolution of just a few degrees. At higher frequencies and shorter wavelengths, the head effectively blocks sound, and so there is a large intensity difference between the two ears, which provides another localizing cue. This mechanism is somewhat less accurate, but you can home in on a source by turning your head to null the intensity difference.
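For a rough sense of scale (assuming a head diameter of about 20 centimeters and a speed of sound of 343 meters per second, figures not given above), the largest interaural delay and the crossover frequency between the two regimes work out to roughly

$\Delta t_{\max} \approx \frac{0.20\ \text{m}}{343\ \text{m/s}} \approx 0.6\ \text{ms}, \qquad f_{\text{crossover}} \approx \frac{343\ \text{m/s}}{0.20\ \text{m}} \approx 1.7\ \text{kHz}.$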

With just one ear, both kinds of directional guidance are lacking. This did not come as a surprise to me, but I had never thought about what it would be like to perceive nonlocalized sounds. You might imagine it would be like switching the audio system from stereophonic to monaural. In that case, you lose the illusion that the strings are on the left side of the stage and the brasses on the right; the whole orchestra is all mixed up in front of you. Nevertheless, in your head you are still localizing the sounds; they are all coming from the speakers across the room. Having one ear is not like that; it’s not just life in mono.

In my present state I can’t identify the sources of many sounds, but they don’t come from nowhere. Some of them come from everywhere. The drone of the refrigerator surrounds me; I hear it radiating from all four walls and the floor and ceiling; it’s as if I’m somehow inside the sound. And one night there was a repetitive thrub-a-dub that puzzled me so much I had to get out of bed and go searching for the cause. The search was essentially a random one: I determined it was not the heating system, nor anything in the kitchen or bathroom. Finally I discovered that the noise was rain pouring off the roof into the gutters and downspouts.

The failures of localization are most disturbing when the apparent source is not vague or unknown but rather quite definite—and wrong! My phone rings, and I reach out to my right to pick it up, but in fact it’s in my shirt pocket. While driving the other day, I heard the whoosh of a car that seemed to be passing me on the right, along the shoulder of the road. I almost veered left to make room. If I had done so, I would have run into the overtaking vehicle, which was of course actually on my left. (Urgent priority: Learn to ignore deceptive directional cues.)

In the first hour or so after this whole episode began, I did not recognize it as a loss of hearing; what I noticed instead was a distracting barrage of echoes. I was chatting with three other people in a room that has always seemed acoustically normal, but words were coming at me from all directions like high-velocity ping-pong balls. The echoes have faded a little in the days since, but I still hear double in some situations. And, interestingly, the echo often seems to be coming from the nonfunctioning ear. I have a hypothesis about what’s going on. Echoes are real, after all; sounds really do bounce off walls, so that the ears receive multiple instances of a sound separated by millisecond delays. Normally, we don’t perceive those echoes. The ears must be sensing them, but some circuitry in the brain is suppressing the perception. (Telephone systems have such circuitry too.) Based on my experience, I suspect that the suppression mechanism depends on the presence of signals from both ears.

Similar to echo suppression is noise suppression. I find I have lost the benefit of the “cocktail party effect,” whereby we select a single voice to attend to and filter out the background chatter. The truth is, I was never very good at that trick, but I’m notably worse now. A possibly related development is that I have the illusion of enhanced hearing acuity for some kinds of noise. The sound of water running from a faucet carries all through the house now. And the sound of my own chewing can be thunderous. In the past, perhaps the binaural screening process was turning down the gain on such commonplace distractions.

Even though no sounds of the outside world are reaching me from the left side of my head, that doesn’t mean the ear is silent. It seems to emit a steady hiss, which I’m told is common in this condition. Occasionally, in a very quiet room, I also hear faint chimes of pure sine tones. Do any of these signals actually originate in the affected cochlea, or are they phantoms that the brain merely attributes to that source?

The most curious interior noise is one that I’ve taken to calling the motor. In the still of the night, if I turn my head a certain way, I hear a putt-putt-putt with the rhythm of a sputtering lawn-mower engine, though very faint and voiceless. The intriguing thing is, the sound is altered by my breathing. If I hold my breath for a few seconds, the putt-putting slows and sometimes stops entirely. Then when I take a breath, the motor revs up again. Could this response indicate sensitivity to oxygen levels in the blood reaching my head? I like to imagine that the source of the noise is a single lonely neuron in the cochlea, bravely tapping out its spike train—the last little drummer boy in my left ear. But I wouldn’t be surprised to learn it comes from somewhere higher up in the auditory pathway.

One of the first manuscripts I edited at Scientific American (published in October 1973) was an article by the polymath Gerald Oster on auditory beats.

Ordinary beat tones are elementary physics: Whenever two waves of nearby frequencies combine and interfere, they produce a beat whose frequency is equal to the difference between the two original frequencies. In the case of sound waves at frequencies a few hertz apart, we perceive the beat tone as a throbbing modulation of the sound intensity. Oster asked what happens when the waves are not allowed to combine and interfere but instead are presented separately to the two ears. In certain frequency ranges it turns out that most people still hear the beats; evidently they are generated by some interference process within the auditory networks of the brain. Oster suggested that a likely site is the superior olivary nucleus. There are two of these bodies arrayed symmetrically just to the left and right of the midline in the brainstem. They both receive signals from both ears.

Whatever the mechanism generating the binaural beats, it has to be happening somewhere inside the head. It’s a dramatic reminder that perception is not a passive process. We don’t really see and hear the world; we fabricate a model of it based on the sensations we receive—or fail to receive.

I’m hopeful that this little experiment of nature going on inside my cranium will soon end, but if it turns out to be a permanent condition, I’ll cope. As it happens, my listening skills will be put to the test over the next several months, as I’m going to be spending a lot of time in lecture halls. There’s the annual Joint Mathematics Meeting coming up in early January, then I’m spending the rest of the spring semester at the Simons Institute for the Theory of Computing in Berkeley. Lots of talks to attend. You’ll find me in the front of the room, to the left of the speaker.

My years with Dennis Flanagan offer much comfort when I consider the prospect of being half-deaf. His deficit was more severe than mine, and he put up with it from childhood. It never held him back—not from creating one of the world’s great magazines, not from leading several organizations, not from traveling the world, not from spearing a 40-pound bass while free diving in Great South Bay.

One worry I face is music—will I ever be able to enjoy it again?—but Dennis’s example again offers encouragement. We shared a great fondness for Schubert. I can’t know exactly what Dennis was hearing when we listened to a performance of the Trout Quintet together, but he got as much pleasure out of it as I did. And in his sixties he went beyond appreciation to performance. He had wanted to learn the cello, but a musician friend advised him to take up the brass instrument of the same register. He did so, and promptly learned to play a Bach suite for unaccompanied cello on the slide trombone.


## Approximately Yours

Today, I’m told, is Rational Approximation Day. It’s 22/7 (for those who write dates in little-endian format), which differs from π by about 0.04 percent. (The big-endians among us are welcome to approximate 1/π.)

Given the present state of life in America, what we really need is an Approximation to Rationality Day, but that may have to wait for 20/1/21. In the meantime, let us merrily fiddle with numbers, searching for ratios of integers that brazenly invade the personal space of famous irrationals.

When I was a teenager, somebody told me about the number 355/113, which is an exceptionally good approximation to π. Written out in full double precision, its value is

3.141592920353982520964564173482358455657958984375,

correct through the first six digits after the decimal point. In other words, it differs from the true value by less than one-millionth. I was intrigued, and so I set out to find an even better approximation. My search was necessarily a pencil-and-paper affair, since I had no access to any electronic or even mechanical aids to computation. The spiral-bound notebook in which I made my calculations has not survived, and I remember nothing about the outcome of the effort.

A dozen years later I acquired some computing machinery: a Hewlett-Packard programmable calculator, called the HP-41C. Here is the main loop of an HP-41C program that searches for good rational approximations. Note the date at the top of the printout (written in middle-endian format). Apparently I was finishing up this program just before Approximation Day in 1981.

What’s that you say? You’re not fluent in Hewlett-Packard’s decades-old dialect of reverse Polish notation? All right, here’s a program that does roughly the same thing, written in an oh-so-modern language, Julia.

function approximate(T, dmax)
    # Naive search for good rational approximations n/d to the target T,
    # trying every denominator d from 1 up to dmax.
    d = 1
    leastError = T
    while d <= dmax && leastError > 0
        n = Int(round(d * T))              # best numerator for this denominator
        err = abs(T - n/d) / T             # relative error of n/d
        merit = 1 / ((n + d)^2 * err)      # the 1981 figure of merit (see below)
        if err < leastError                # new record: print it and remember it
            println("$n/$d = $(n/d)  error = $err  merit = $merit")
            leastError = err
        end
        d += 1
    end
end

The algorithm is a naive, sequential search for fractions $$n/d$$ that approximate the target number $$T$$. For each value of $$d$$, you need to consider only one value of $$n$$, namely the integer nearest to $$d \times T$$. (What happens if $$d \times T$$ falls halfway between two integers? That can’t happen if $$T$$ is irrational.) Thus you can begin with $$d = 1$$ and continue up to a specified largest denominator $$d = dmax$$. The accuracy of the approximation is measured by the error term $$|T - n/d| / T$$. Whenever a value of $$n/d$$ yields a new minimum error, the program prints a line of results. (This version of the algorithm works correctly only for $$T \gt 1$$, but it can readily be adapted to $$T \lt 1$$.)
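To reproduce the table below, one would call the function with π as the target and a bound on the denominator; the following invocation is illustrative (the 33,215 limit matches the last denominator in the table):

```julia
approximate(π, 33215)    # search denominators 1 through 33,215 for approximations to π
```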

The HP-41C has a numerical precision of 10 decimal digits, and so the closest possible approximation to π is 3.141592654. Back in 1981 I ran the program until it found a fraction equal to this value—a perfect approximation, from the program’s point of view. According to a note on the printout, that took 13 hours. The Julia program above, running on a laptop, completes the same computation in about three milliseconds. You’re welcome to take a scroll through the results, below. (The numbers are not digit-for-digit identical to those generated by the HP-41C because Julia calculates with higher precision, about 16 decimal digits.)

     3/1     = 3.0                 error = 0.045070341573315915    merit =  1.3867212410256813
13/4     = 3.25                error = 0.03450712996224109     merit =  0.10027514940370374
16/5     = 3.2                 error = 0.018591635655129744    merit =  0.12196741256165179
19/6     = 3.1666666666666665  error = 0.007981306117055373    merit =  0.20046844169789904
22/7     = 3.142857142857143   error = 0.0004024993041452083   merit =  2.9541930379680195
179/57    = 3.1403508771929824  error = 0.00039526983405584675  merit =  0.04542368072920613
201/64    = 3.140625            error = 0.0003080138345651019   merit =  0.04623150469956595
223/71    = 3.140845070422535   error = 0.00023796324342470652  merit =  0.04861781754719378
245/78    = 3.141025641025641   error = 0.0001804858353094197   merit =  0.053107007660473673
267/85    = 3.1411764705882352  error = 0.00013247529441315622  merit =  0.060922789404334425
289/92    = 3.141304347826087   error = 9.177070539240495e-5    merit =  0.07506646742266793
311/99    = 3.1414141414141414  error = 5.6822320879624425e-5   merit =  0.10469195703580983
333/106   = 3.141509433962264   error = 2.6489760736525772e-5   merit =  0.19588127575835135
355/113   = 3.1415929203539825  error = 8.478310581938076e-8    merit = 53.85164473263654
52518/16717 = 3.1415923909792425  error = 8.37221074104896e-8     merit =  0.00249177288308447
52873/16830 = 3.141592394533571   error = 8.259072954625822e-8    merit =  0.0024921016732136797
53228/16943 = 3.1415923980404887  error = 8.147444291923546e-8    merit =  0.0024926612882136163
53583/17056 = 3.141592401500938   error = 8.03729477091334e-8     merit =  0.0024934520351304946
53938/17169 = 3.1415924049158366  error = 7.928595172899531e-8    merit =  0.0024944743578840687
54293/17282 = 3.141592408286078   error = 7.821317056655376e-8    merit =  0.0024957288257085445
54648/17395 = 3.141592411612532   error = 7.715432730151448e-8    merit =  0.002497216134767719
55003/17508 = 3.1415924148960475  error = 7.610915194012454e-8    merit =  0.0024989371196291283
55358/17621 = 3.1415924181374497  error = 7.507738155653036e-8    merit =  0.0025008927426067996
55713/17734 = 3.1415924213375437  error = 7.405876001006156e-8    merit =  0.0025030840968725283
56068/17847 = 3.1415924244971145  error = 7.305303737979925e-8    merit =  0.002505512419906649
56423/17960 = 3.1415924276169265  error = 7.20599703886498e-8     merit =  0.002508179074048983
56778/18073 = 3.141592430697726   error = 7.107932141383905e-8    merit =  0.0025110855755419263
57133/18186 = 3.14159243374024    error = 7.01108591937022e-8     merit =  0.002514233565685482
57488/18299 = 3.1415924367451775  error = 6.915435783817789e-8    merit =  0.0025176248413626597
57843/18412 = 3.1415924397132304  error = 6.820959725288218e-8    merit =  0.0025212613363967255
58198/18525 = 3.141592442645074   error = 6.727636243231866e-8    merit =  0.002525145143834103
58553/18638 = 3.141592445541367   error = 6.635444374259433e-8    merit =  0.0025292785028112976
58908/18751 = 3.141592448402752   error = 6.544363663870371e-8    merit =  0.0025336638062423296
59263/18864 = 3.141592451229856   error = 6.454374152317083e-8    merit =  0.002538303603848205
59618/18977 = 3.1415924540232916  error = 6.365456332197522e-8    merit =  0.002543200616913158
59973/19090 = 3.1415924567836564  error = 6.277591190862598e-8    merit =  0.002548357720152209
60328/19203 = 3.1415924595115348  error = 6.190760125601375e-8    merit =  0.0025537779743748956
60683/19316 = 3.1415924622074964  error = 6.10494500018427e-8     merit =  0.0025594646031786867
61038/19429 = 3.1415924648720983  error = 6.020128088319864e-8    merit =  0.002565421015548036
61393/19542 = 3.141592467505885   error = 5.936292059519092e-8    merit =  0.0025716508123781218
61748/19655 = 3.141592470109387   error = 5.853420007366852e-8    merit =  0.0025781577749599853
62103/19768 = 3.1415924726831244  error = 5.771495407114599e-8    merit =  0.002584945883912429
62458/19881 = 3.141592475227604   error = 5.690502101544554e-8    merit =  0.002592019327133724
62813/19994 = 3.141592477743323   error = 5.6104242868339024e-8   merit =  0.0025993825084809985
63168/20107 = 3.1415924802307655  error = 5.531246526690591e-8    merit =  0.0026070400439016164
63523/20220 = 3.1415924826904056  error = 5.4529537523533324e-8   merit =  0.0026149967637792084
63878/20333 = 3.141592485122707   error = 5.375531191912607e-8    merit =  0.002623257749852838
64233/20446 = 3.141592487528123   error = 5.2989644268538606e-8   merit =  0.0026318283126966317
64588/20559 = 3.141592489907097   error = 5.22323933551431e-8     merit =  0.0026407140236596287
64943/20672 = 3.1415924922600618  error = 5.148342135490336e-8    merit =  0.002649920699086574
65298/20785 = 3.1415924945874427  error = 5.0742592988226976e-8   merit =  0.002659454449139831
65653/20898 = 3.1415924968896545  error = 5.0009776226755164e-8   merit =  0.0026693216486930156
66008/21011 = 3.141592499167103   error = 4.928484186928889e-8    merit =  0.002679528965991537
66363/21124 = 3.1415925014201855  error = 4.8567663400430846e-8   merit =  0.0026900833784673454
66718/21237 = 3.1415925036492913  error = 4.7858116990585446e-8   merit =  0.0027009921818650063
67073/21350 = 3.141592505854801   error = 4.715608149595883e-8    merit =  0.0027122629998437182
67428/21463 = 3.1415925080370872  error = 4.6461438175842924e-8   merit =  0.002723903810648984
67783/21576 = 3.1415925101965145  error = 4.577407111668933e-8    merit =  0.002735922933992634
68138/21689 = 3.1415925123334407  error = 4.5093866383961494e-8   merit =  0.0027483290931549346
68493/21802 = 3.1415925144482157  error = 4.442071258756658e-8    merit =  0.002761131395876878
68848/21915 = 3.141592516541182   error = 4.375450074049751e-8    merit =  0.002774339356802981
69203/22028 = 3.1415925186126747  error = 4.309512411747499e-8    merit =  0.0027879629217230834
69558/22141 = 3.1415925206630235  error = 4.244247783087354e-8    merit =  0.002802012512429091
69913/22254 = 3.14159252269255    error = 4.179645953751142e-8    merit =  0.0028164989998024
70268/22367 = 3.1415925247015695  error = 4.115696873186072e-8    merit =  0.0028314337694556623
70623/22480 = 3.1415925266903915  error = 4.0523907028763286e-8   merit =  0.002846828724926181
70978/22593 = 3.141592528659319   error = 3.989717788071482e-8    merit =  0.00286269633032941
71333/22706 = 3.1415925306086496  error = 3.9276686719222797e-8   merit =  0.0028790496258831624
71688/22819 = 3.1415925325386738  error = 3.86623409548065e-8     merit =  0.0028959022542887716
72043/22932 = 3.141592534449677   error = 3.805404969428105e-8    merit =  0.0029132685103826087
72398/23045 = 3.1415925363419395  error = 3.7451723882115376e-8   merit =  0.0029311633622333107
72753/23158 = 3.1415925382157353  error = 3.685527615907423e-8    merit =  0.002949602495467867
73108/23271 = 3.1415925400713336  error = 3.626462086221821e-8    merit =  0.002968602349703417
73463/23384 = 3.1415925419089974  error = 3.567967430761971e-8    merit =  0.002988180133716996
73818/23497 = 3.141592543728987   error = 3.510035365949903e-8    merit =  0.003008353961046636
74173/23610 = 3.1415925455315543  error = 3.452657862652023e-8    merit =  0.003029142753805288
74528/23723 = 3.1415925473169497  error = 3.395826962413729e-8    merit =  0.0030505664465106676
74883/23836 = 3.141592549085417   error = 3.339534904681598e-8    merit =  0.0030726459300795604
75238/23949 = 3.1415925508371956  error = 3.283774056124397e-8    merit =  0.003095403169820992
75593/24062 = 3.141592552572521   error = 3.228536938904675e-8    merit =  0.0031188612412389144
75948/24175 = 3.1415925542916234  error = 3.173816202407169e-8    merit =  0.0031430444223940115
76303/24288 = 3.14159255599473    error = 3.1196046373746034e-8   merit =  0.0031679782521683033
76658/24401 = 3.141592557682062   error = 3.065895190043484e-8    merit =  0.0031936895918127546
77013/24514 = 3.1415925593538385  error = 3.01268089146511e-8     merit =  0.0032202067806171002
77368/24627 = 3.141592561010273   error = 2.9599549423203633e-8   merit =  0.003247559639023363
77723/24740 = 3.1415925626515766  error = 2.9077106281049175e-8   merit =  0.0032757796556622983
78078/24853 = 3.1415925642779543  error = 2.8559414180798277e-8   merit =  0.0033048999843237645
78433/24966 = 3.14159256588961    error = 2.804640823913544e-8    merit =  0.003334955716987436
78788/25079 = 3.1415925674867418  error = 2.753802541039899e-8    merit =  0.0033659838476231357
79143/25192 = 3.141592569069546   error = 2.703420321435919e-8    merit =  0.003398023556100075
79498/25305 = 3.1415925706382137  error = 2.6534880725724155e-8   merit =  0.0034311162371627422
79853/25418 = 3.141592572192934   error = 2.6039997867349902e-8   merit =  0.0034653057466538235
80208/25531 = 3.141592573733892   error = 2.554949569295635e-8    merit =  0.0035006385417218717
80563/25644 = 3.14159257526127    error = 2.5063316245769302e-8   merit =  0.0035371638899188347
80918/25757 = 3.1415925767752455  error = 2.4581402841236452e-8   merit =  0.0035749340371894456
81273/25870 = 3.1415925782759953  error = 2.410369936023742e-8    merit =  0.003614004535709633
81628/25983 = 3.1415925797636914  error = 2.3630150955873712e-8   merit =  0.00365443439340209
81983/26096 = 3.141592581238504   error = 2.3160703488036753e-8   merit =  0.00369628643041249
82338/26209 = 3.141592582700599   error = 2.2695304230197833e-8   merit =  0.003739627468693587
82693/26322 = 3.1415925841501404  error = 2.2233900879902193e-8   merit =  0.0037845288174018898
83048/26435 = 3.1415925855872895  error = 2.1776442124200985e-8   merit =  0.0038310665494126084
83403/26548 = 3.1415925870122043  error = 2.1322877639651253e-8   merit =  0.00387932189896066
83758/26661 = 3.1415925884250404  error = 2.087315795095796e-8    merit =  0.003929381726572982
84113/26774 = 3.1415925898259505  error = 2.0427234430973973e-8   merit =  0.003981339007706688
84468/26887 = 3.1415925912150855  error = 1.9985059017984126e-8   merit =  0.004035293430477111
84823/27000 = 3.1415925925925925  error = 1.9546584922495102e-8   merit =  0.004091351857390988
85178/27113 = 3.1415925939586176  error = 1.9111765637729565e-8   merit =  0.004149629190123568
85533/27226 = 3.1415925953133033  error = 1.868055578777407e-8    merit =  0.004210248941258058
85888/27339 = 3.141592596656791   error = 1.825291042078912e-8    merit =  0.004273344214343279
86243/27452 = 3.1415925979892174  error = 1.78287858571571e-8     merit =  0.004339058439193095
86598/27565 = 3.1415925993107203  error = 1.7408138417260385e-8   merit =  0.004407546707464268
86953/27678 = 3.1415926006214323  error = 1.6990925835061217e-8   merit =  0.004478976601684539
87308/27791 = 3.1415926019214853  error = 1.6577106127237806e-8   merit =  0.004553529781140699
87663/27904 = 3.1415926032110093  error = 1.6166637875900305e-8   merit =  0.004631403402447433
88018/28017 = 3.141592604490131   error = 1.5759480794022753e-8   merit =  0.0047128116308472546
88373/28130 = 3.141592605758976   error = 1.5355594877295166e-8   merit =  0.004797987771392931
88728/28243 = 3.1415926070176683  error = 1.4954940686839493e-8   merit =  0.0048871863549194705
89083/28356 = 3.141592608266328   error = 1.4557479914641577e-8   merit =  0.004980685405908598
89438/28469 = 3.141592609505076   error = 1.4163174252687263e-8   merit =  0.005078789613658918
89793/28582 = 3.1415926107340284  error = 1.3771986523826276e-8   merit =  0.005181833172630217
90148/28695 = 3.141592611953302   error = 1.338387969226633e-8    merit =  0.005290183824183623
90503/28808 = 3.1415926131630103  error = 1.2998817570363058e-8   merit =  0.005404246870669908
90858/28921 = 3.1415926143632653  error = 1.2616764535904027e-8   merit =  0.005524470210563737
91213/29034 = 3.141592615554178   error = 1.2237685249392783e-8   merit =  0.005651350205744754
91568/29147 = 3.1415926167358563  error = 1.1861545360838771e-8   merit =  0.005785438063205309
91923/29260 = 3.1415926179084073  error = 1.1488310802967408e-8   merit =  0.005927347979056494
92278/29373 = 3.141592619071937   error = 1.111794779122008e-8    merit =  0.006077766389438445
92633/29486 = 3.141592620226548   error = 1.0750423671902066e-8   merit =  0.006237462409303776
92988/29599 = 3.1415926213723435  error = 1.0385705649960649e-8   merit =  0.006407301439430316
93343/29712 = 3.1415926225094237  error = 1.0023761778491034e-8   merit =  0.00658826005755035
93698/29825 = 3.1415926236378877  error = 9.664560534662385e-9    merit =  0.006781444748602359
94053/29938 = 3.1415926247578327  error = 9.308070961075804e-9    merit =  0.006988114128701429
94408/30051 = 3.1415926258693556  error = 8.954262241690382e-9    merit =  0.007209706348604964
94763/30164 = 3.14159262697255    error = 8.603104549971112e-9    merit =  0.007447871540046976
95118/30277 = 3.14159262806751    error = 8.254567918024995e-9    merit =  0.007704513406469473
95473/30390 = 3.141592629154327   error = 7.90862336746494e-9     merit =  0.007981838717667477
95828/30503 = 3.1415926302330917  error = 7.565241919903853e-9    merit =  0.008282421184374838
96183/30616 = 3.1415926313038933  error = 7.224395162386583e-9    merit =  0.008609280341750632
96538/30729 = 3.1415926323668195  error = 6.8860552473899216e-9   merit =  0.008965982432171553
96893/30842 = 3.1415926334219573  error = 6.550194468748648e-9    merit =  0.009356770586561815
97248/30955 = 3.141592634469391   error = 6.216785968445456e-9    merit =  0.009786732283709331
97603/31068 = 3.1415926355092054  error = 5.885802747105052e-9    merit =  0.010262022067809991
97958/31181 = 3.1415926365414837  error = 5.557218370784088e-9    merit =  0.010790155391967196
98313/31294 = 3.1415926375663066  error = 5.231007112329143e-9    merit =  0.011380406450991833
98668/31407 = 3.1415926385837554  error = 4.907143103228812e-9    merit =  0.012044356667029002
99023/31520 = 3.1415926395939087  error = 4.585601323119603e-9    merit =  0.01279665696194468
99378/31633 = 3.141592640596845   error = 4.266356751638026e-9    merit =  0.013656119502875172
99733/31746 = 3.1415926415926414  error = 3.9493849338525334e-9   merit =  0.014647305857352692
100088/31859 = 3.141592642581374   error = 3.6346615561895634e-9   merit =  0.015802906908552822
100443/31972 = 3.1415926435631176  error = 3.322162870507497e-9    merit =  0.017167407267272748
100798/32085 = 3.1415926445379463  error = 3.0118652700227016e-9   merit =  0.018802933529623964
101153/32198 = 3.141592645505932   error = 2.703745854741474e-9    merit =  0.020798958087527405
101508/32311 = 3.1415926464671475  error = 2.397781441954139e-9    merit =  0.02328921472604781
101863/32424 = 3.141592647421663   error = 2.0939496970989362e-9   merit =  0.026482916558483883
102218/32537 = 3.1415926483695484  error = 1.7922284269720909e-9   merit =  0.03072676661447583
102573/32650 = 3.141592649310873   error = 1.492595579727815e-9    merit =  0.03664010548445531
102928/32763 = 3.141592650245704   error = 1.195029527594277e-9    merit =  0.04544847105306477
103283/32876 = 3.1415926511741086  error = 8.995092082315892e-10   merit =  0.05996553050516452
103638/32989 = 3.1415926520961532  error = 6.060132765838922e-10   merit =  0.0883984797913258
103993/33102 = 3.1415926530119025  error = 3.1452123574324146e-10  merit =  0.16916355170353897
104348/33215 = 3.141592653921421   error = 2.5012447443706518e-11  merit =  2.1127131430431656

The error values in the middle column of the table above shrink steadily as you read from the top of the list to the bottom. Each successive approximation is more accurate than all those above it. Does that also mean each successive approximation is better than those above it? I would say no. Any reasonable notion of “better” in this context has to take into account the size of the numerator and the denominator.

If you want an approximation of $$\pi$$ accurate to seven digits, I can give you one off the top of my head: $$3141593/1000000$$. But the numbers making up that ratio are themselves seven digits long. What makes $$355/113$$ impressive is that it achieves seven-digit accuracy with only three digits in the numerator and the denominator. Accordingly, I would argue that a “better” approximation is one that minimizes both error and size. The rightmost column of the table, filled with numbers labeled “merit,” is meant to quantify this intuition.

When I wrote that program in 1981, I chose a strange formula for merit, one that now baffles me:

$\frac{1}{(n + d)^2 \cdot err}.$

Adding the numerator and denominator and then squaring the sum is an operation that makes no sense, although the formula as a whole does have the correct qualitative behavior, favoring both smaller errors and smaller values of $$n$$ and $$d$$. In trying to reconstruct what I had in mind back in 1981, my best guess is that I was trying to capture a geometric insight, and I flubbed it when translating math into code. On this assumption, the correct figure of merit would be:

$\frac{1}{\sqrt{n^2 + d^2} \cdot err}.$

To see where this formula comes from, consider a two-dimensional lattice of integers, with a ray of slope $$\pi$$ drawn from the origin and going on to infinite distance.

Because the line’s slope is irrational, it will never pass through any point of the integer lattice, but it will have many near misses. The near-miss points, with coordinates interpreted as numerator and denominator, are the accurate approximations to $$\pi$$. The diagram suggests a measure of the merit based on distances. An approximation gets better when we minimize the distance of the lattice point from the origin as well as the vertical distance from the point to the $$\pi$$ line. That’s the meaning of the formula with $$\sqrt{n^2 + d^2}$$ in the denominator.

Another approach to defining merit simply counts digits. The merit is the ratio of the number of correctly predicted digits in the irrational target $$T$$ to the number of digits in the denominator. A problem with this scheme is that it’s rather coarse. For example, $$13/4$$ and $$16/5$$ both have single-digit denominators and they each get one digit of $$\pi$$ correct, but $$16/5$$ actually has a smaller error.

To smooth out the digit-counting criterion, and distinguish between values that differ in magnitude but have the same number of digits, we can take logarithms of the numbers. Let merit equal: $$-\log(err) / \log(d)$$. (The $$\log(err)$$ term is negated because the error is always less than $$1$$ and so its logarithm is negative.)
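As a sketch of how the three measures might be computed for a candidate fraction $$n/d$$ (a reconstruction, not the original code; in particular the use of relative error, matching the search program above, is an assumption):

```julia
# Relative error of the approximation n/d to the target T, as in the search program above.
relerr(n, d, T) = abs(T - n/d) / T

merit1981(n, d, T) = 1 / ((n + d)^2 * relerr(n, d, T))        # the baffling 1981 formula
meritdist(n, d, T) = 1 / (sqrt(n^2 + d^2) * relerr(n, d, T))  # lattice-distance version
meritlog(n, d, T)  = -log(relerr(n, d, T)) / log(d)           # digit-counting version (needs d > 1)

# For example: merit1981(355, 113, π), meritdist(355, 113, π), meritlog(355, 113, π)
```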

Here’s a comparison of the three merit criteria for some selected approximations to $$\pi$$:

     n/d           1981 merit                  distance merit              log merit

3/1        1.3867212448620723            7.016316181613145         --
13/4        0.10027514901117529           2.1306165422053285        2.4284808488226544
16/5        0.12196741168912356           3.208700907602539         2.4760467349663537
19/6        0.20046843839209055           6.288264070960828         2.6960388788612515
22/7        2.954192079226498           107.61458138965322          4.017563128080901
179/57       0.04542369572848121          13.467303354323912         1.9381258641568968
201/64       0.04623152429195394          15.390920494844842         1.9441196398907357
223/71       0.04861784421796857          17.956388291625093         1.9573120958787444
245/78       0.05310704607396699          21.548988850935377         1.9785253787278367
267/85       0.06092284944437125          26.93965209372642          2.0098618723780515
289/92       0.07506657421887829          35.92841360228601          2.055872071177696
311/99       0.10469219759604646          53.921550739835986         2.1273838230139175
333/106      0.1958822412726219          108.02438852795403          2.259868093766371
355/113     53.76883630752973          31610.90993685001             3.444107245852723
52163/16604    0.002495514149618044        215.57611105028013          1.6757260012234105
•                  •                         •                            •
•                  •                         •                            •
•                  •                         •                            •
103993/33102    0.2892417579456485        49813.04849576935             2.1538978293241056
104348/33215    0.5006051667655171        86508.24042805366             2.2065386096084607
208341/66317    0.3403602724772912       117433.39822796892             2.1589243556399245
312689/99532    0.6343809166515098       328504.0552596196              2.207421489352196


All three measures agree that $$22/7$$ and $$355/113$$ are quite special. In other respects they give quite different views of the data. My weird 1981 formula compares $$(n + d)^{-2}$$ with $$err^{-1}$$; the asymmetry in the exponents suggests the merit will tend to zero as $$n$$ and $$d$$ increase, at least in the average case. The maximum of the distance-based measure, on the other hand, appears to grow without bound. And the logarithmic merit function seems to be settling on a value near 2.0. This implies that we shouldn’t expect to see many $$n/d$$ approximations where the number of correct digits is greater than twice the number of digits in $$d$$. The late Tom Apostol and Mamikon A. Mnatsakanian proved a closely related proposition (“Surprisingly accurate rational approximations,” Mathematics Magazine, Vol. 75, No. 4 (Oct. 2002), pp. 307-310).

The final joke on my 1981 self is that all this searching for better approximants can be neatly sidestepped by a bit of algorithmic sophistication. The magic phrase is “continued fractions.” The continued fraction for $$\pi$$ begins:

$\pi = 3+\cfrac{1}{7+\cfrac{1}{15+\cfrac{1}{1+\cfrac{1}{292+\cfrac{1}{1 + \cdots}}}}}$

Evaluating the successive levels of this expression yields a sequence of “convergents” that should look familiar:

$3/1, 22/7, 333/106, 355/113, 103993/33102, 104348/33215.$

It is a series of “best” approximations to $$\pi$$, generated without bothering with all the intervening non-“best” values. I produced this list in CoCalc (a.k.a. SageMathCloud), following the excellent tutorial in William Stein’s Elementary Number Theory. Even much larger approximants gush forth from the algorithm in milliseconds. Here’s the 100th element of the series:

$\frac{4170888101980193551139105407396069754167439670144501}{1327634917026642108692848192776111345311909093498260}$
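For anyone who would rather not fire up Sage, the same convergents can be generated with a few lines of Julia. The sketch below (not the notebook mentioned above) uses the standard recurrence for continued-fraction convergents and high-precision arithmetic so that the partial quotients come out reliably.

```julia
# Convergents of the continued fraction of x, via the recurrence
# h_k = a_k*h_{k-1} + h_{k-2},  k_k = a_k*k_{k-1} + k_{k-2}.
function convergents(x::BigFloat, nterms::Integer)
    h, hprev = big(1), big(0)        # numerators   h_{-1} = 1, h_{-2} = 0
    k, kprev = big(0), big(1)        # denominators k_{-1} = 0, k_{-2} = 1
    result = Rational{BigInt}[]
    for _ in 1:nterms
        a = floor(BigInt, x)                     # next partial quotient
        h, hprev = a * h + hprev, h
        k, kprev = a * k + kprev, k
        push!(result, h // k)
        frac = x - a
        frac == 0 && break                       # x was exactly rational; stop
        x = 1 / frac
    end
    return result
end

setprecision(BigFloat, 2000)                     # plenty of bits for 100-odd terms
convergents(BigFloat(π), 6)                      # 3//1, 22//7, 333//106, 355//113, ...
```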

A question remains: In what sense are these approximations “best”? It’s guaranteed that every element of the series is more accurate than all those that came before, but it’s not clear to me that they also satisfy any sort of compactness criterion. But that’s a question to be taken up another day. Perhaps on Continued Fraction Day.


## Counting Your Chickens Before They’re Pecked

It started with a brief story in the New York Times about Luke Robitaille, a 13-year-old student from Euless, Texas, who won the Raytheon Mathcounts National Competition by correctly answering the following question:

In a barn, 100 chicks sit peacefully in a circle. Suddenly, each chick randomly pecks the chick immediately to its left or right. What is the expected number of unpecked chicks?

Robitaille took less than a second to buzz in with the correct answer, according to the Times.

The next day, Jordan Ellenberg tweeted a followup problem:

Since I don’t have to squeeze this story into 140 characters, I’ll fill in some details of Ellenberg’s question, as I understand it. Where the original problem called for a single round of synchronized random pecking, we now have multiple rounds. During a round, each chick randomly turns either left or right and pecks one of its neighbors. However, once a chick has been pecked, it will never peck again, even if it continues to receive pecks. When two adjacent chicks peck each other in the same round, they both drop out of the pecking game for all future rounds. If an unpecked chick winds up sitting between two pecked neighbors, it can never be pecked and will therefore keep on pecking forever. The question is, what proportion of the flock will survive to become invulnerable peckers?

Spoilers below, so now’s the time to work out the answers for yourself. While you’re busy with that, I’m going to say a few words about chickens, and about the rhetoric and semiotics of mathematical “word problems.”

My only direct knowledge of poultry comes from boyhood visits to my Aunt Noretta’s farm in southern New Jersey. That’s not much of a claim to expertise, but for what it’s worth I never saw her chickens sit in a circle, and they didn’t peck randomly. (They had a pecking order!) Furthermore, nothing I observed in their social interactions resembled the turn-the-other-cheek behavior of the chickens described in this problem. Why does a pecked chick never peck again? This is a bigger riddle than the quantitative question we are asked to address. Has the chick suddenly discovered the wisdom and power of nonviolence? I can think of another explanation, but it’s not for the squeamish: Maybe pecked chicks don’t peck back because pecks are lethal.

I know it’s silly to demand narrative realism in a story like this one. Mathematical word problems belong to a genre where no one expects verisimilitude. They are set in a world where knaves always lie and knights always speak the truth, where shipwrecked sailors obsess about the divisibility properties of a pile of coconuts, where people don’t know the color of the hat on their own head. Even the laws of physics yield to mathematical necessity: A fly shuttling between oncoming locomotives instantaneously reverses direction. Those chicks sitting in a circle are not fluffy bundles of yellow plumage; they are mathematical abstractions. They have coordinates and state variables rather than feathers.

I’m okay with abstraction; by all means, let us strip away extraneous detail. Nevertheless, isn’t the point of word problems to connect the mathematics to some aspect of familiar experience? Consider the ancient and famous river-crossing problem, where the fox must not be left alone with the chicken, which must not be left alone with the bag of corn. These constraints are easy to understand when you know something about the dietary preferences of foxes and chickens. That kind of intuitive boost is not to be found in the pecking problem. On the contrary, a little knowledge of avian behavior actually makes the problem more perplexing.

The single-round problem from the Mathcounts Competition yields to the oldest trick in the probability book. A chick remains unpecked only if both of its neighbors turn away and peck in the other direction. On both the left and the right, the probability of escaping a peck is $$\frac{1}{2}$$, and the two events are independent, so the probability of staying unpecked on both sides is $$\frac{1}{2} \times \frac{1}{2} = \frac{1}{4}$$. This argument applies identically to all the birds in the circle, so you can expect 25 percent of the chicks to come through unscathed.

Do you agree with this analysis? I came up with it pretty quickly when I read the Times article (though not nearly fast enough to beat Luke Robitaille to the buzzer). But then I began to have doubts. Is it strictly true that a chick’s left and right neighbors are totally independent? After all, they are connected by a chain of other chicks. Perhaps some influence can propagate around the circle, creating a correlation between left and right and altering the probability of survival.

Time for an experiment: Write the program, run the simulation. Set up a ring of 100 unpecked chickens and allow a single round of random simultaneous pecking. Repeat many times and calculate the mean number of unpecked birds remaining. (Some quick notation: Let $$N$$ be the number of chicks in the ring and $$S$$ be the number that survive unpecked. I’ll use $$\bar{S}$$ for the mean value of $$S$$ averaged over $$R$$ repetitions of the experiment.) My results:

| $$R$$ | $$\bar{S}$$ |
| --- | --- |
| 100 | 24.79 |
| 10,000 | 24.9881 |
| 1,000,000 | 25.000274 |
| 100,000,000 | 24.99991518 |
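The simulation program itself isn’t shown above, but a minimal sketch of this kind of experiment might look like the following (function names and details are illustrative):

```julia
using Statistics

# One synchronized round of pecking in a ring of N chicks.
# Returns the number of chicks that come through unpecked.
function peck_once(N)
    pecked = falses(N)
    for i in 1:N
        target = rand(Bool) ? mod1(i + 1, N) : mod1(i - 1, N)   # peck right or left at random
        pecked[target] = true
    end
    return count(!, pecked)
end

# Mean number of survivors over R repetitions; mean_survivors(100, 1_000_000) ≈ 25.
mean_survivors(N, R) = mean(peck_once(N) for _ in 1:R)
```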

As expected, the mean is quite close to 25 survivors. Furthermore, each time the sample size increases by a factor of 100, the accuracy of the approximation improves about tenfold. This pattern conforms to a statistical rule of thumb: the random fluctuations in an average shrink in proportion to the square root of the sample size. Thus the slight departures from $$\bar{S} = 25$$ appear to be innocent random noise, not some systematic bias.

So that settles it, right?

Well, the simulation looks pretty convincing for the specific case of $$N = 100$$ chicks, but the result might differ for other values of N. In particular, perhaps there’s some finite-size effect that becomes apparent only when N is small. Consider a “circle” of just two chicks. In this situation the left neighbor and the right neighbor are one and the same chicken! No matter what random choices are made, the two chicks immediately peck each other, and the proportion of survivors is not 25 percent but zero.

The next-larger “circle” consists of three chicks arranged in a triangle. The two neighbors of a chick are distinct, but they are also neighbors of each other. What happens when the three chickens are set loose on one another? The system has $$2^3 = 8$$ possible pecking patterns, and we can easily examine all of them. In the diagram, the arrows indicate where the chicks choose to direct their pecking.

In two cases, where all the chicks peck left or all peck right, there are no survivors. In every other instance exactly one chick remains unpecked. Aggregating the eight patterns, we find six unpecked chicks out of 24 total chicks, for a survival proportion of $$\frac{1}{4}$$. Thus it appears the finite-size anomaly afflicts only the two-chick version of the problem.

But wait! There’s another possible confounding factor. Can we be sure of seeing the same outcome for both even and odd numbers of chicks? For any odd value of N there is just one kind of pattern that annihilates all the chicks in a single round: They must all peck in the same direction, either all left or all right. For even N, however, another pattern also leads to immediate extinction: Adjacent chicks can pair up, knocking each other out. Won’t this extra pathway slightly alter the overall probability of survival?

Let’s see what happens with N = 4. Now there are $$2^4 = 16$$ possible outcomes:

As expected, four patterns leave no survivors at all. On the other hand, there are also four patterns that leave two chicks unpecked rather than just one. Miraculously, the extra losses and the extra gains balance exactly. In all we have 16 survivors out of 64 chicks, so the surviving proportion is again $$\frac{1}{4}$$.
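These small cases can also be checked by brute force. The sketch below (illustrative code, not taken from the analysis above) enumerates all $$2^N$$ simultaneous pecking patterns for a small ring and reports the expected fraction of survivors.

```julia
# Enumerate every pattern of simultaneous pecks in a ring of N chicks
# (bit i-1 of the pattern: chick i pecks right if 1, left if 0)
# and return the expected fraction of unpecked survivors.
function exact_survival_fraction(N)
    total = 0
    for pattern in 0:(2^N - 1)
        pecked = falses(N)
        for i in 1:N
            target = (pattern >> (i - 1)) & 1 == 1 ? mod1(i + 1, N) : mod1(i - 1, N)
            pecked[target] = true
        end
        total += count(!, pecked)
    end
    return total / (N * 2^N)
end

# exact_survival_fraction(2) == 0.0; for N = 3 and N = 4 the function returns 0.25.
```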

After that long and twisty detour through the combinatorics of chicken pecking, we are right back where we started. The probability of surviving unpecked after a single round of pecking is $$\frac{1}{4}$$ for any $$N \gt 2$$. All of my fretting about finite-size effects and odd-even disparities was a waste of time. So why have I inflicted it on you? Well, although those worries turn out to be unfounded, they are not farfetched. Making just a small change to the pecking protocol leads to a different outcome. Let the pecking be sequential rather than simultaneous. Some designated chick initiates the sequence of pecks, and then the birds take turns, proceeding clockwise around the circle. When a chick’s turn comes, if it has already been pecked, it does nothing. If it is unpecked, it pecks either its left or its right neighbor, choosing randomly. The round ends when every chick has had a turn.

For $$N = 2$$ it’s easy to see that the first chick to peck always survives and the other chick always dies, for a survival rate of $$\frac{1}{2}$$. With a little more pencil-and-paper chicken scratching, you can establish that the 50 percent survival rate also holds for $$N = 3$$. Looking at very large values of $$N$$, computer experiments indicate that the survival fraction again approaches $$\frac{1}{2}$$ as N goes to infinity. Between these extremes, however, there’s some funny business:

At $$N = 4$$ the survivor rate dips below 0.47. (The exact probability is $$\frac{30}{64} = 0.46875$$.) This is a minimum. But as the rate recovers back toward 0.5, there is some telltale wiggling in the curve that reveals an odd-even bias: The survival probability is depressed further for even N than for odd N. This is just the kind of behavior I was looking for (but not finding) in the original Mathcounts version of the problem.
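A Monte Carlo sketch for the sequential protocol might look like this (again illustrative; the actual experiments may have been organized differently):

```julia
using Statistics

# One sequential round: chicks take their turns in order 1..N; a chick that has
# already been pecked does nothing, otherwise it pecks a randomly chosen neighbor.
function sequential_round(N)
    pecked = falses(N)
    for i in 1:N
        pecked[i] && continue
        target = rand(Bool) ? mod1(i + 1, N) : mod1(i - 1, N)
        pecked[target] = true
    end
    return count(!, pecked)
end

# Estimated survival fraction; for N = 4 it hovers near 30/64 = 0.46875.
survival_fraction(N, R) = mean(sequential_round(N) for _ in 1:R) / N
```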

Let us now take up Ellenberg’s problem of iterated pecking (using the simultaneous rather than the sequential protocol). We already know that after the first round we can expect to find about one-fourth of the chicks still unpecked. Clearly, the unpecked fraction cannot increase after multiple rounds. Thus in the final state the expected surviving fraction $$\bar{S}$$ must lie somewhere between zero and $$\frac{1}{4}$$.

It’s helpful to look at a typical configuration of pecked (●) and unpecked (○) chicks after a single round of synchronized pecking:

●○●●●○○●●●●○●●○●●○○●●●○○●●●●●○●●●●●●●●●○○●●●●●●○○●●●○●●●●●●○●●●○○●●●●●

(You’ll have to use your imagination to connect the left and right ends of this array and thereby form a ring.) Notice that there are long strings of pecked chicks, but the unpecked chicks appear in only two configurations. They are either singletons (●○●) or pairs (●○○●). The cause of this pattern is not hard to understand. After a round of pecking, a group of three consecutive unpecked chicks (●○○○●) is impossible. The middle chick must have pecked either left or right, and so it cannot have two unpecked neighbors.

These constraints simplify the analysis of subsequent rounds. The singletons are essentially immortal and unchangeable: The unpecked chick in the middle can never be pecked, and the pecked neighbors can never be unpecked. For the pairs, there are four possible fates, corresponding to the four ways the two active chicks could choose to peck:

In any one round, all four of these events have the same probability, namely $$\frac{1}{4}$$. The first three result states are terminal, in the sense that further rounds of pecking will leave them unchanged. In the fourth case we are left with an adjacent pair again, which will therefore face the same set of choices in the next round. Eventually, as the number of rounds goes to infinity, the fourth case must yield one of the other outcomes, and thus in the long run we can consider the fourth case to have probability zero and each of the other three cases to have probability $$\frac{1}{3}$$.

And now it’s time to bring all these contingent events together and work out a chicken’s long-term probability of survival. The diagram below presents the scheme. In the first round of pecking, three-fourths of the chicks are eliminated immediately. Of the remaining one-fourth, half are singletons, which survive indefinitely. The other surviving chicks are members of pairs, with another pecking chick as either a right neighbor or a left neighbor.

The lower part of the diagram summarizes the effect of all subsequent rounds, which are assumed to continue until all pairs have been either annihilated or reduced to singletons. (I call this pecking to completion.) For each pathway that leads to a surviving singleton, the probability is the product of the individual probabilities encountered along that pathway. There are three such pathways, with probabilities $$\frac{1}{8}, \frac{1}{48}$$, and $$\frac{1}{48}$$, for a sum of $$\frac{1}{6}$$.
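Spelled out, the arithmetic implied by the diagram is:

$\underbrace{\frac{1}{4} \cdot \frac{1}{2}}_{\text{singleton}} \;+\; \underbrace{\frac{1}{4} \cdot \frac{1}{4} \cdot \frac{1}{3}}_{\text{pair, partner on the right}} \;+\; \underbrace{\frac{1}{4} \cdot \frac{1}{4} \cdot \frac{1}{3}}_{\text{pair, partner on the left}} \;=\; \frac{1}{8} + \frac{1}{48} + \frac{1}{48} \;=\; \frac{8}{48} \;=\; \frac{1}{6}.$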

I have to confess that I did not come up with this analysis—or with the correct answer—on my first try. I was able to work it out only after I had run a simulation and thus knew what I was looking for. Even then I had trouble with double counting.

Here are the simulation results:

| $$R$$ | $$\bar{S}$$ |
| --- | --- |
| 100 | 16.53 |
| 10,000 | 16.6835 |
| 1,000,000 | 16.664404 |
| 100,000,000 | 16.66701664 |
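Here, for completeness, is a minimal peck-to-completion sketch in the same spirit (again an illustration, not the program that produced the table above):

```julia
# Iterated pecking: repeat synchronized rounds until no two unpecked chicks
# are adjacent, then return the number of unpecked survivors.
function peck_to_completion(N)
    pecked = falses(N)
    while any(i -> !pecked[i] && (!pecked[mod1(i - 1, N)] || !pecked[mod1(i + 1, N)]), 1:N)
        hits = falses(N)
        for i in 1:N                       # every still-unpecked chick pecks this round
            pecked[i] && continue
            target = rand(Bool) ? mod1(i + 1, N) : mod1(i - 1, N)
            hits[target] = true
        end
        pecked .|= hits                    # apply all of this round's pecks simultaneously
    end
    return count(!, pecked)
end

# e.g.  using Statistics; mean(peck_to_completion(100) for _ in 1:1_000_000)  ≈ 16.7
```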

Again note that accuracy seems to improve as the square root of the sample size, although the sample means wander a bit more here than they did in the single-round experiment.

What about finite-size effects? In circles with only two or three members, the fate of the chicks is fully decided after a single round of pecking: the expected proportion of survivors is $$0$$ and $$\frac{1}{4}$$ respectively. Thus these smallest rings escape the $$\frac{1}{6}$$ rule, but it appears that circles of all larger sizes converge to $$\frac{1}{6}$$. There’s no evidence of even-odd discrepancies.

Another approach to understanding the iterated chicken-pecking problem is through the theory of Markov chains. For a ring of $$N$$ chicks we list all $$2^N$$ states of the flock and assign a probability to each transition between states. Consider a ring of four chicks, which has 16 states. Symmetries allow us to consolidate some sets of states, and other states can be ignored because they are unreachable from the starting state of four unpecked chicks (○○○○).

Only the four states in the red box need to be retained in the model. The transitions between them are recorded in a directed graph, where each arrow is labeled with the corresponding probability. Note that the starting state ○○○○ has only outgoing arrows; there is no way to re-enter the state once you leave. The states ●●●○ and ●●●● are absorbing: The only outgoing arrow leads directly back to the same state; thus, once you reach one of those states, you never escape it.

The essential information from the directed graph can be captured in a $$4 \times 4$$ matrix, where the rows and columns are labeled with the four states, and the matrix entries represent the probability of a transition from the row state to the column state. The entries in each row sum to 1, as they must if they are to represent probabilities.
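The matrix itself is not reproduced here, but it can be reconstructed from the probabilities worked out above. With the states ordered ○○○○ (the start), ●●○○ (an adjacent pair of survivors), ●●●○ (a lone survivor), and ●●●● (no survivors), an ordering chosen for this reconstruction, it would read:

$P \;=\; \begin{pmatrix} 0 & \frac{1}{4} & \frac{1}{2} & \frac{1}{4} \\[4pt] 0 & \frac{1}{4} & \frac{1}{2} & \frac{1}{4} \\[4pt] 0 & 0 & 1 & 0 \\[4pt] 0 & 0 & 0 & 1 \end{pmatrix}$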

The pattern of zero entries in the transition matrix implies that certain states can’t be reached from other states, even by an indirect route. For this reason the Markov model is said to be irregular. That’s a bit awkward, because regular Markov models are easier to analyze and understand. In a regular model, when you take successive powers of the transition matrix, it converges to a steady state, where all the rows are identical and every column consists of a single, repeated value. This fixed point reveals the system’s long-term probability distribution. An irregular Markov model may not even have a stable limiting distribution, but this one does, and it seems to offer some insight. Every ring of four chickens must wind up in one of the two absorbing states. With probability two-thirds that terminal state will be ●●●○ and with probability one-third ●●●●. This result is consistent with the finding that one-sixth of the chickens survive unpecked.

So, finally, that wraps it up, right? Both the contest problem and Ellenberg’s iterative extension asked for the expected number of surviving chickens, and we have supplied the answers: for a circle of $$N$$ chickens, the expected number of survivors $$\bar{S}$$ is $$\frac{N}{4}$$ after a single round of pecking and $$\frac{N}{6}$$ upon pecking to completion. Ironically, though, the expected value of a probabilistic process doesn’t necessarily tell you what to expect. Consider a simpler problem: When you flip a fair coin 100 times, how many heads do you expect to see? The obvious answer is 50, and it’s correct in the sense that no other number has a higher likelihood of correctly predicting the outcome of the experiment. However, the probability of seeing exactly 50 heads is only about 0.08, and thus some other number will turn up more than 90 percent of the time.
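That 0.08 figure is just the central binomial probability; a one-line check (for the skeptical):

```julia
binomial(big(100), 50) / big(2)^100    # ≈ 0.0796, the chance of exactly 50 heads in 100 flips
```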

Instead of looking only at the expected value, let’s examine the range of possible $$S$$ values in the pecking game. We’ve already established that zero survivors is a possible outcome, so that forms a lower bound. What is the upper bound—the maximum number of survivors? In the single-round process, every chick pecks, and so after that round every chick must have at least one pecked neighbor. On the basis of this fact I claim that the surviving population can never be greater than $$\frac{N}{2}$$. (Do you agree? It took me a while to persuade myself it’s true.)

If $$S$$ can never be greater than $$\frac{N}{2}$$, the next question is whether it can ever attain that bound. And if we can have equal numbers of pecked (●) and unpecked (○) chicks, how are they arranged in the ring? It’s tempting to propose the following configuration:

●○●○●○●○●○●○

This is a stable state: The unpecked chicks can never be pecked, so no further changes are possible. And the fraction of survivors is $$\frac{1}{2}$$. But there’s a problem with this pattern: It cannot be reached from the starting state. Look at any of the black pecked chicks and ask yourself: Which of its neighbors did it peck? Neither of them, evidently, since they are both unpecked. But that’s not possible, given that every chicken must peck in the first round.

Although the alternating black and white arrangement is ruled out, we’re on the right track. There’s another configuration that also leaves one-half of the chicks unpecked after a single round, and that pattern is achievable from the starting state:

●●○○●●○○●●○○

When you join the ends to form a ring, every chick, whether pecked or not, has one pecked neighbor. It turns out this is the only way—after allowing for some obvious symmetries—to reach 50 percent survivorship. (Strictly speaking, 50 percent is attainable only when $$N$$ is divisible by 4, but the maximum attainable $$S$$ is never less than $$\frac{N-2}{2}$$.)

When the pecking continues to completion, the upper bound of $$S = \frac{N}{2}$$ is no longer reachable. Suppose we tried to maintain $$\frac{N}{2}$$ over multiple rounds of pecking. Clearly we would have to start in the first round with the maximal-survivor state ●●○○●●○○●●○○. However, at least half of the unpecked chicks in this configuration must succumb in subsequent rounds, leaving no more than $$\frac{N}{4}$$ survivors.

Does this argument mean that $$S = \frac{N}{4}$$ is the greatest possible after pecking to completion? No, it doesn’t. There’s another pattern where one of every three chicks survives:

●●○●●○●●○●●○

This configuration is reachable in a single round and stable indefinitely, since none of the pecking chicks has any pecking neighbors. No other arrangement has a higher density of survivors once the pecking process goes to completion.

To summarize: After one round of pecking the number of surviving chicks must lie somewhere between zero and $$\frac{N}{2}$$, and the expected number $$\bar{S}$$ is right in the middle at $$\frac{N}{4}$$. After all further rounds of pecking are completed, the count of unpecked chicks is between zero and $$\frac{N}{3}$$, with the expected value again in the middle, at $$\bar{S} = \frac{N}{6}$$.

“How many chickens survive?” is a question that seems to call for a numeric answer, but in truth the most informative response is not a number at all; it is a distribution:

Each curve records the results of a million experiments with a ring of 100 chicks, giving the frequency of each possible value of $$S$$. As expected, the one-round distribution has a peak at 25 survivors, and the iterated curve peaks at 17 (the closest integer to $$\frac{100}{6}$$). Note that the red curve is not only shifted to the left but is also slightly taller and narrower.
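Here is a minimal Julia sketch of the simulation. It is not the exact program behind these graphs, but it encodes the rules as I have described them: every still-unpecked chick pecks a random neighbor in each round, a pecked chick stops pecking, and the iterated process ends when no unpecked chick has an unpecked neighbor.

function peck!(pecked)
    N = length(pecked)
    hits = falses(N)
    for i in 1:N
        pecked[i] && continue                      # a pecked chick no longer pecks
        target = rand(Bool) ? mod1(i - 1, N) : mod1(i + 1, N)
        hits[target] = true                        # all pecks land simultaneously
    end
    pecked .|= hits
end

function survivors(N; rounds = :all)
    pecked = falses(N)
    active(i) = !pecked[i] && (!pecked[mod1(i - 1, N)] || !pecked[mod1(i + 1, N)])
    if rounds == :all
        while any(active, 1:N)                     # some unpecked chick is still at risk
            peck!(pecked)
        end
    else
        for _ in 1:rounds
            peck!(pecked)
        end
    end
    return count(!, pecked)                        # number of unpecked chicks
end

# Mean survival fractions for a ring of 100 chicks, 10,000 trials each:
sum(survivors(100, rounds = 1) for _ in 1:10_000) / 1_000_000   # ≈ 1/4
sum(survivors(100) for _ in 1:10_000) / 1_000_000               # ≈ 1/6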

To get a better view of the details, let’s zoom in. For the sake of smoother curves, I’m going to switch to experiments with $$N = 10{,}000$$ chickens. First the green single-round curve, then the red one for the iterated pecking experiment:

With the larger value of $$N$$, the curves now peak at 2500 and at 1666.67—exactly the positions expected for $$\frac{N}{4}$$ and $$\frac{N}{6}$$. Finding the peaks at these positions is no surprise, but what governs the width and the overall shape of the curve? In other words, what is the mathematical nature of the distributions?

One guess that’s always worth a try is the normal (or Gaussian) distribution. For the pecking problem, a normal distribution defines $$P(S)$$, the probability of observing $$S$$ survivors, as follows:

$P(S) = \frac{1}{\sigma\sqrt{2 \pi}} \exp\left[-\frac{1}{2}\left(\frac{S - \mu}{\sigma}\right)^2\right].$

That’s a pretty messy equation for such a familiar concept, but it’s possible to tease out the basic meaning. The equation defines a symmetric curve with a peak where $$S$$ is equal to $$\mu$$, the mean of the distribution. The width of the peak depends on $$\sigma$$, the standard deviation. Because the area under the curve is a constant, $$\sigma$$ also effectively determines the height: A narrower peak has to be taller.

We can fit a normal distribution to the pecking data using a procedure that finds the optimal values of $$\mu$$ and $$\sigma$$—those that minimize the discrepancy between the data points and the mathematical model. In the two graphs below the fitted models are superimposed on the two data plots, first for one round of pecking and then for pecking to completion:

The fits appear to be quite close indeed, with the theoretical curves splitting the experimental ones from end to end. In some sense this result has to be counted a success, and yet I don’t find this approach to the problem fully satisfying. The normal curve provides a very good descriptive model of the pecking process, but not a predictive or explanatory one. Remember, the curve is fitted to the data, not the other way around. I see no obvious way to construct a specific normal distribution from what I know about the underlying interactions of pecking chickens. In particular, where do the values of $$\sigma$$ in the two models come from? Why is $$\sigma \approx 25$$ in the one-round model and $$\sigma \approx 23.6$$ in the iterated model? These values look like free parameters, which we have to tune to suit the data. Moreover, they will differ for every value of $$N$$. Another issue: the normal curve is a continuous distribution, defined over the entire real number line. The pecking function is discrete; it makes sense only for integer numbers of chickens.

Let’s set aside the normal curve and consider another plausible model: the binomial distribution, which is discrete, and which turns up in many probabilistic contexts. Suppose you roll 10,000 dice and count how many of them come to rest with a 1 showing on the upper face. When you repeat this experiment many times, the expected number of 1s is one-sixth of 10,000, the same as the expected number of survivors in the iterated chicken-pecking experiment. With dice, there’s a well-known mathematical expression that defines not just the expected value but also the form of the entire distribution. Assume that every die has probability $$p$$ of showing a $$1$$. We are going to roll $$N$$ dice and we want to know the probability of seeing $$k$$ $$1$$s for any $$k$$ between $$0$$ and $$N$$. The formula that supplies this information is:

$P(k) = {N \choose k} p^k (1 - p)^{N - k}.$

Here $$p^k (1 - p)^{N - k}$$ gives the probability of any specific arrangement of $$k$$ $$1$$s among $$N$$ dice. The binomial coefficient $$N \choose k$$, equal to $$\frac{N!}{k!\,(N-k)!}$$, counts the number of such arrangements.

With $$N = 10000$$ and $$p = \frac{1}{6}$$ we get a curve showing the outcome of the dice-rolling experiment mentioned above. Perhaps the same curve also describes what happens to the iterated pecking model, which has the same expected value? Alas no.
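A quick width comparison shows how far off the dice model is. The standard deviation of a binomial distribution is $$\sqrt{Np(1-p)}$$, and for the dice parameters that comes to about 37, much broader than the $$\sigma \approx 23.6$$ found for the fitted normal curve above:

sqrt(10_000 * (1/6) * (5/6))    # ≈ 37.3, the width of Binomial(10000, 1/6)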

The binomial curve is wider and flatter than the distribution of iterated pecking survivors. What has gone wrong? When I first saw the graph, I had an inkling. As noted above, the binomial coefficient $$N \choose k$$ counts all the ways of choosing $$k$$ items from a set of size $$N$$. This is appropriate for an experiment with dice, since all possible arrangements of $$k$$ successes among $$N$$ trials are equally likely. In particular, when you roll $$10{,}000$$ dice, you could conceivably see no $$1$$s at all, or all $$10{,}000$$ dice could land with a $$1$$ showing face up; the entire range of outcomes has probability greater than zero.

The pecking problem is different. It’s not possible for 100 percent of the chickens to remain unpecked. Thus only a subset of the $$N \choose k$$ arrangements are attainable. If the binomial distribution is going to work in this context, we need to adjust it somehow to include only the feasible outcomes.

With the thought that it’s easier to solve a problem if you already know the answer, I tried fiddling with the parameters of the distribution to see how the graph responded. My goal was to squeeze the curve into a narrower and taller profile while keeping it centered at the same mean. The mean is equal to $$Np$$, so if we decrease $$N$$ we have to increase $$p$$ by the same factor. Here are the results of some experiments:

The dark green curve is the one we’ve already seen, for a binomial distribution with $$N = 10000$$ and $$p = \frac{1}{6}$$. Going to $$N = 5000$$ and $$p = \frac{1}{3}$$ appears to be a step in the right direction, and $$N = 3333$$ and $$p = \frac{1}{2}$$ is even better. Then, with $$N = 2500$$ and $$p = \frac{2}{3} \ldots$$ Bingo! The yellow curve is an excellent match to the pecking data. Thus it appears we can predict the survivorship of an $$N$$-member pecking ring by constructing a binomial distribution with parameters $$N^\prime = \frac{N}{4}$$ and $$p^\prime = 4p$$.

I can pull the same trick to find a binomial distribution that matches the single-round pecking data. This time the magic numbers that bend the curve to the correct trajectory are $$N’ = \frac{N}{3} = 3333$$ and $$p’ = 3p = \frac{3}{4}$$.

Unlike the normal distribution, the binomial model is constructive, or predictive. From the two parameters $$N’$$ and $$p’$$ we can calculate both the mean of the distribution and the standard deviation. The mean is simply $$N’ p’$$; the standard deviation is $$\sqrt{N’ p’ (1 - p’)}$$. For the example of the $$10{,}000$$ chickens pecking to completion, the mean $$\mu$$ works out to $$1{,}666.666 \dots$$ (as expected), and the standard deviation $$\sigma$$ is $$23.570226$$. (The fitted normal distribution had $$\sigma = 23.567337$$.) For the single-round case, $$\mu$$ is exactly $$2500$$ and $$\sigma$$ is $$25$$. (To avoid roundoff errors, I am taking $$N$$ to be $$9999$$ instead of $$10{,}000$$.)
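Those numbers take only a line of Julia to verify (the helper name is mine):

binom_stats(Np, pp) = (Np * pp, sqrt(Np * pp * (1 - pp)))   # mean and standard deviation

binom_stats(2500, 2/3)        # ≈ (1666.67, 23.57)   pecking to completion
binom_stats(10_000 / 3, 3/4)  # = (2500.0, 25.0)     single round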

Hooray, eh? At last we have a formula for calculating the shape and location of the chicken-pecking distribution, based on a few simple parameters—$$N’$$ and $$p’$$. But I’m still grumpy, indeed more perplexed and frustrated than ever. Maybe the model explains the data, but what explains the model? With $$10{,}000$$ chickens and a first-round survivor probability of $$\frac{1}{4}$$, why does the formula call for $$N’ = 3333$$ and $$p’ = \frac{3}{4}$$? Where do those numbers come from? And why $$N’ = 2500$$ and $$p’ = \frac{2}{3}$$ for the iterated case?

I am embarrassed to admit how long I have spent helplessly flailing and thrashing in the bogs of probability theory, trying to solve these mysteries. (I even turned to a recent book called The Probability Lifesaver, which I highly recommend—but it didn’t save my life.) In the search for answers I have investigated the multinomial extensions of binomials. I have looked into convolutions of distributions and computed contingent probabilities. I have filled whole pads of scratch-paper with soldierly rows of ●s and ○s, searching for patterns that would explain those enigmatic fractions $$\frac{N}{3}$$ paired with $$\frac{3}{4}$$, and $$\frac{N}{4}$$ with $$\frac{2}{3}$$. Night after night I’ve gone to bed with a promising idea, only to awaken and recognize a fatal flaw.

Now I believe I do have a correct explanation. It has passed the overnight test several nights in a row. I’m going to reveal it, but not until the end of this essay. Perhaps you’ll figure it out on your own before then. In the meantime, I’m going to widen the horizons of the chicken problem.

Our cozy circle of chickens is a one-dimensional structure. You can go clockwise or counterclockwise around the ring; there are no other meaningful directions in this little universe. Now suppose that instead of getting all our chickens in a row, we arrange them in a grid, an array of columns and rows, covering a region of a two-dimensional surface. To avoid leaving a subset of chickens on the exposed edges of a rectangular array, we can mate the left edge with the right edge and the top edge with the bottom edge. (Topologically, this turns the rectangle into a torus.) Getting real chickens to cooperate in this experiment would be even harder than in the one-dimensional version, but no matter; we’ve long since lost all touch with barnyard reality.

The most important fact about the two-dimensional flock is that each chicken has four neighbors instead of two. With twice as many hostile neighbors, one might well guess that a chicken would be more vulnerable to a pecking attack. On the other hand, each of those neighbors spreads its pecking over twice as many potential targets. How do these competing effects balance out?

For a single round of pecking, we can calculate the survival probability in the same way we did for the one-dimensional system. A chick remains unpecked only if all of its neighbors turn elsewhere to peck. Each neighbor does so with probability $$\frac{3}{4}$$, and so the probability that all of them turn away is $$\left(\frac{3}{4}\right)^4$$. Numerically, this works out to about 0.3164, compared with 0.25 in the circle. Thus the fraction surviving is greater in two dimensions than in one; the distraction of having more targets outweighs the danger of having more attackers. The distribution observed in computer experiments confirms this finding.

Here’s what a $$40 \times 40$$ lattice of chicks looks like after a single round of pecking.

There are 1,600 chicks in the two-dimensional array. If you count the unpecked ○s, you’ll find there are 501, for a survival fraction of 0.3131, close to the theoretical value of 0.3164. Simulations confirm the expected survival rate of $$\left(\frac{3}{4}\right)^4$$ for $$N \times N$$ lattices with any value of $$N$$ greater than $$2$$. (For the $$2 \times 2$$ grid, the survival rate is $$\frac{1}{4}$$, as in the one-dimensional system. There’s a reason why!)
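The two-dimensional experiment is just as easy to sketch in Julia. The function below (again a sketch of my own, not the exact program used for the figures) runs a single round of pecking on an $$L \times L$$ torus:

function survivors_2d(L)
    pecked = falses(L, L)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]       # the four lattice neighbors
    for i in 1:L, j in 1:L
        di, dj = rand(steps)                          # each chick pecks one neighbor
        pecked[mod1(i + di, L), mod1(j + dj, L)] = true
    end
    return count(!, pecked)
end

# Mean survival fraction on a 40 × 40 torus, to compare with (3/4)^4 ≈ 0.3164:
sum(survivors_2d(40) for _ in 1:10_000) / (1_600 * 10_000)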

When I stare at the pattern above, I notice a certain stringy or loopy texture, with chains of ○s separating blobs of ●s. This might be a trick of the eye and mind, but I think not. In two dimensions the no-three-in-a-row restriction is lifted; the array includes rows and columns with as many as six consecutive unpecked chicks, as well as diagonal lines. But you will not see a solid $$3 \times 3$$ block of unpecked chicks, or even a $$3 \times 3$$ cross. Such patterns cannot exist because the chick in the middle of the block must have pecked one of its four neighbors. More generally, the system is still bound by the rule that every chick, whether pecked or unpecked, must have at least one pecked neighbor.

Since more chicks survive the first round of pecking in a two-dimensional world, it seems plausible there might also be a greater proportion of survivors when the pecking continues to completion. Let’s try the experiment:

In this $$40 \times 40$$ array there are 238 survivors out of 1,600 chicks, which is less than the one-sixth survival rate seen in one dimension. In a sample of a million such pecking grids, I found that the mean survival rate $$\bar{S}$$ is about 0.1533. Compare the distributions for one- and two-dimensional systems:

In going from 1D to 2D the peak shifts to the left, with the mean moving from 0.1667 to 0.1533. The 2D hump is also a little taller and skinnier, thus showing reduced variance.

Why stop at two dimensions? Let us ask our ever-accommodating chickens to roost in a three-dimensional lattice, again with opposite boundaries joined to create the 3D equivalent of a toroidal surface. It’s not hard to guess how this experiment is going to turn out. Back in one dimension, where every chick had two neighbors, the fraction of survivors after a single round of pecking was $$\left(\frac{1}{2}\right)^2 = \frac{1}{4}$$. In two dimensions, with four neighbors, the corresponding number was $$\left(\frac{3}{4}\right)^4 = \frac{81}{256}$$. In the three-dimensional pecking party each chick has six neighbors, so the obvious extrapolation is $$\left(\frac{5}{6}\right)^6 = \frac{15625}{46656}$$, with a value of $$\approx 0.3349$$. Running the simulation supports this surmise, and shows a clear trend when we construct chicken lattices with still higher numbers of dimensions.

From this series of results we can boldly generalize: When every chick has $$n$$ neighbors, the fraction expected to survive a single round of pecking is:

$\left(\frac{n - 1}{n}\right)^n.$

As $$n$$ increases, this expression converges on a value of approximately $$0.36787944$$. Does that number look familiar? It is $$\frac{1}{e}$$. (Changing the minus sign to a plus generates $$e$$ itself, $$2.71828$$.) When I stumbled upon this formula, the sudden appearance of $$\frac{1}{e}$$ took me by surprise, but it shouldn’t have. The constant turns up in the same way in a model of rumor spreading that I wrote about some years ago.
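The convergence is easy to watch numerically:

[((n - 1) / n)^n for n in (2, 4, 6, 8, 100, 10_000)]
# 0.25, 0.3164, 0.3349, 0.3436, 0.3660, 0.3679   creeping up toward 1/e ≈ 0.36788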

What about the iterated pecking process in higher dimensions? The fraction of survivors shows a steady decline as the number of dimensions increases:

The proportion of chicks that never get pecked falls from 16.7 percent in one dimension to about half that when we embed our intrepid chickens in seven-dimensional space. In other words, a higher-dimensional space raises the initial survival rate (after one round of pecking), but depresses long-term survival (after pecking to completion). Here’s another way of showing the effect of dimension—tracking the mean number of survivors remaining after each round of pecking in one dimension through seven dimensions.

I can offer a rough, hand-wavy rationale for this trend. If you are a chick in a one-dimensional ring, your chance of surviving the first round of pecking is only $$\frac{1}{4}$$, but if you make it through that round, your chance of avoiding a peck in the second round is at least $$\frac{1}{2}$$. Why the improvement? It’s because of your own actions: Your pecking in the first round eliminated the threat from one of your two neighbors. Your odds continue improving in subsequent rounds: The longer you last, the greater the chance that you will hang on until all your neighbors are pacified.

The same trend holds in higher dimensions, but the magnitude of the effect tapers off. In four dimensions, for example, you have eight neighbors, and your chance of surviving the first round is $$\left(\frac{7}{8}\right)^8$$, or about 0.34. Because you peck one of those neighbors, your probability of making it through the second round is better, but only slightly so: $$\left(\frac{7}{8}\right)^7$$, or 0.39.

Looking at the graphs above, one might surmise that as the dimension $$D$$ goes to infinity, the number of survivors (after pecking to completion) will drop to zero. To explore this idea, we don’t actually need infinite-dimensional space. What matters most is not the geometric arrangement of the chickens but the number of neighbors, and we can approximate an infinite-dimensional lattice just by declaring that all chickens are nearest neighbors. In other words, the who-pecks-whom graph becomes complete, with an arc from every chick to every other chick. This does seem to be a recipe for annihilation; you can’t be safe as long as even one other chicken continues to peck. But the details of the end game allow a little room for variation. Will there be one survivor or none?

Peter Winkler discusses a similar problem, “Group Russian Roulette,” in Mathematical Puzzles: A Connoisseur’s Collection (p. 33). The actors in his version are not chickens but “armed and angry people,” who engage in rounds of simultaneously shooting random neighbors. Winkler observes that the probability of a survivor does not approach a limit as $$N$$ increases. I don’t see this effect in the chicken problem: There is almost always a last chicken standing. What makes the difference, I believe, is that Winkler’s roulette players don’t waste their ammunition on players who have already been shot, whereas the chickens continue to peck at neighbors who don’t peck back.

Finally, I return to the narrow confines of one dimension and to the mysterious binomial distributions that seem to predict the statistics of chicken pecking in this system. To review: If you roll 10,000 dice and count those that show a $$1$$, you can expect to find about 1667. If you put 10,000 chicks in a circle and wait until all the pecking is done, you can expect about 1667 unpecked survivors. The dice experiment is described by a binomial distribution with parameters $$N = 10000$$ and $$p = \frac{1}{6}$$. The same model doesn’t work for the chickens: The predicted distribution is much broader than the observed one. But that’s not the weird part. The real puzzler is why a different binomial model, with parameters $$N’ = 2500$$ and $$p’ = \frac{2}{3}$$, does seem to match the experimental results.

The dice model’s failure to work for chicken pecking is not really a surprise. A key assumption underlying the binomial distribution is that the events or objects being counted are independent. That’s true for dice; one die doesn’t care what the others do. But the circle of pecking chickens is all about interactions between neighbors. If you have already been pecked, that alters the odds that your neighbors will eventually be pecked. Independence enters the binomial distribution through the coefficient $$N \choose k$$. Given $$N$$ dice with $$k$$ of them showing $$1$$s, all possible interleavings of the $$1$$s among the other dice are equally likely; the binomial coefficient counts those arrangements. But given $$N$$ chicks with $$k$$ of them unpecked, it’s not true that all arrangements are equally likely. Indeed, many patterns, such as ○○○, are impossible.

If neighbor interactions spoil the binomial model with $$N = 10{,}000$$ and $$p = \frac{1}{6}$$, how are those interactions overcome in the model with $$N’ = 2500$$ and $$p’ = \frac{2}{3}$$? For the longest time I was beguiled by the observation that 2500 is the expected number of survivors after a single round of pecking, and two-thirds of those individuals can be expected to survive all subsequent rounds. Surely, having those two numbers turn up in the binomial distribution cannot be a meaningless coincidence. Maybe not, but I was able to make sense of the situation only when I gave up on that line of inquiry.

What’s needed is a model in which we count the arrangements of 2500 objects, where two-thirds of the objects can be considered successes or survivors. I have found such a model. The objects are not individual chickens. They are groups of four chickens. Consider this set of 4-tuples:

a = ○●●●
b = ●○●●
c = ●●●●

If you select elements from this set at random and string them together, any sequence you create could be an output of the iterated pecking process. A typical result looks like this:

●●●●○●●●●○●●○●●●●●●●○●●●●○●●●●●●●○●●○●●●●○●●●●●●○●●●●○●●●○●●○●●●●●●●○●●

Note that this sequence satisfies all the rules for a flock of chickens that has pecked to completion. All unpecked ○s are singletons, surrounded by pecked neighbors. At least two ●s separate every pair of ○s, and this ensures that every element of the sequence has at least one ● neighbor. There is no way of concatenating any selection of a, b, and c elements that violates these rules. Furthermore, if a, b, and c are chosen with equal probability, the expected proportion of ○s in the sequence is $$\frac{1}{6}$$.
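A few lines of Julia demonstrate the construction: choose blocks at random, glue them together, and the proportion of ○s settles near one-sixth.

blocks = ["○●●●", "●○●●", "●●●●"]       # the tuples a, b, c
ring = join(rand(blocks, 2500))          # 2500 blocks → 10,000 chicks
count(==('○'), ring) / length(ring)      # ≈ 1/6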

I am deeply ambivalent about this discovery. On the one hand, it’s always a relief to get to the bottom of a problem that has stumped you. On the other hand, what we have here is a recipe for creating a sequence with the same structure and statistics as the product of the pecking process, but it offers no insight into the nature of that process. There’s no connection with the behavior of the chickens. Worse, it’s not even a true or exact model. Although the curve appears to coincide with the data, it’s only an approximation. The proof of this fact is simple. The binomial distribution with $$N’ = 2500$$ and $$p’ = \frac{2}{3}$$ has an absolute cutoff at $$2500$$. For any number of survivors greater than $$2500$$, the model assigns a probability of zero. Yet the flock of $$10{,}000$$ pecking chickens can in fact leave up to $$3333$$ survivors.

The defect becomes visible in a smaller model, such as this one with $$N = 24$$:

The predicted and observed curves exhibit slight mismatches everywhere, but pay particular attention to the right tail of the distribution, where the binomial curve (purple) dives to zero for all survivor numbers greater than six, whereas the experimental data (red) include 6718 instances with seven survivors and 49 instances with eight survivors.

A similar model for the one-round pecking process uses a set of four 3-tuples:

a = ○●●
b = ○●●
c = ●●○
d = ●●●

Note that the tuple ○●● appears twice, giving it double weight; with four equally likely blocks, three of them carry a survivor, matching $$p’ = \frac{3}{4}$$, and no concatenation can ever place a ● between two ○s, a pattern that is impossible after a single round of pecking, since that chick would have pecked neither of its neighbors. Again the construction generates a sequence that looks very much like the outcome of a pecking experiment, but fails to reproduce the tail of the distribution. In the model the highest possible density of survivors is $$\frac{1}{3}$$, whereas it should be $$\frac{1}{2}$$.

Perhaps you’re thinking that a cute high school problem about chicks pecking their neighbors doesn’t really merit an 8,000-word screed on Markov chains and probability distributions, with tables and equations and 25 graphs and diagrams. That thought has crossed my mind, too. However, I want to add just a few more words to argue that the exercise is not totally frivolous.

First, mathematics does not owe us a tidy, closed-form, one-line solution to every problem, but we’d be foolish to give up the quest too easily. In this case, computer simulations are easy and productive. By running a program for five minutes I can get answers to a multitude of detailed questions, and I don’t have serious doubts about the correctness of those answers. But they don’t help me make the connection between the microscopic mechanisms (a chicken pecks left or right at random) and the macroscopic observations (for $$N = 10{,}000$$ the distribution has $$\mu = \frac{N}{6}$$ and $$\sigma \approx 23.6$$). Richard Hamming’s old chestnut says the purpose of computing is insight, not numbers, but insight is just what I’m missing.

Second, this is not really a problem about chickens, whether real or abstract. It is a gateway to a collection of other many-body problems in statistical physics and dynamical systems and cellular automata.

Finally, I’ve had fun, and what’s the harm in that? Maybe the fun’s not over. What about zombie chickens, whose pecks bring other chickens back to life?

Update 2017-07-11: Carl Witty has worked out the correct probability distribution for the single-round case. See his comment below.

## The short arm of coincidence

James Tanton tosses off number theory problems the way John D. Rockefeller handed out dimes. I wrote about one of Tanton’s problems back in January. Then a few weeks ago this tweet about factorials and squares snagged my attention, and it hasn’t let go:

With pencil and paper it’s easy to show that $$6!$$ doesn’t work. The factorial of $$6$$ is $$1 \times 2 \times 3 \times 4 \times 5 \times 6 = 720$$; adding $$1$$ brings us to $$721$$, which is not a square. (It factors as $$7 \times 103$$.) On the other hand, $$7!$$ is $$5040$$, and adding $$1$$ yields $$5041$$, which is equal to $$71^2$$. This makes for a very cute equation:

$7! + 1 = 71^2.$

Continuing on, you can establish that $$8! + 1$$, $$9! +1$$ and $$10! + 1$$ are not square numbers. But to extend the search much further, we need mechanized assistance. Here’s a Julia function that does the obvious thing, generating successive factorials and checking each one to see if it is $$1$$ less than a perfect square:

function search_fac_sqr(maxn)
    fac = big(1)                        # bigints needed for n > 20
    for n in 1:maxn
        fac *= n                        # incremental factorial
        r = isqrt(fac + 1)              # floor of sqrt
        if r * r == fac + 1
            println(n, "! + 1 = ", r, "^2 = ", r^2)
        end
    end
    println("That's all folks!")
end

With this tool in hand, let’s check out $$n! + 1$$ for all $$n$$ between $$1$$ and $$100$$. Here’s what the program reports:

search_fac_sqr(100)
4! + 1 = 5^2 = 25
5! + 1 = 11^2 = 121
7! + 1 = 71^2 = 5041
That's all folks!

Those are the three cases we’ve already discovered with pencil and paper—and no more are listed. In other words, among all values of $$n! + 1$$ up to $$n = 100$$, only $$n = 4$$, $$n = 5$$, and $$n = 7$$ yield squares. When I continued the search up to $$n = 1{,}000$$, I got exactly the same result: no more squares. Likewise $$n = 10{,}000$$ and $$n = 100{,}000$$. Allow me to mention that the factorial of $$100{,}000$$ is a rather large number, with $$456{,}574$$ decimal digits. At this point in the search, I began to grow weary; furthermore, I began to lose hope. When $$99{,}993$$ successive values of $$n$$ fail to produce a single square, it’s hard to sustain faith that success might be just around the corner. Nevertheless, I persisted. I got as far as $$n = 500{,}000$$, whose factorial has $$2{,}632{,}341$$ decimal digits. Not one more perfect square in the whole lot.

What can we learn from this evidence—or lack of evidence? Are 4, 5, and 7 the only values of $$n!$$ that lie $$1$$ short of a perfect square? Or are there more such cases somewhere out there along the number line, maybe just beyond my reach, waiting to be found? Could there be infinitely many? If so, where are they? If not, why not?

To my taste, the most satisfying way to resolve these questions would be to find some number-theoretical principle ensuring that $$n! + 1 \ne m^2$$ for $$n \gt 7$$. I have not discovered any such principle, but in a dreamy sort of way I can imagine what a proof might look like. Suppose we eliminate the “$$+1$$” part of the formula, and search for integers such that $$n! = m^2$$. It turns out there is just one solution to this equation, with $$n = m = 1$$. You needn’t bother lathering up your laptop in the quest for larger examples; there’s a simple proof they don’t exist. In any square number, all the prime factors must be present an even number of times, as in $$36 = 2 \times 2 \times 3\times 3$$. In a factorial, at least one prime factor—the largest one—always appears just once. (If you’re not sure why, check out Bertrand’s postulate/Chebyshev’s theorem.)

Of course when we put the “$$+1$$” back into the formula, this whole line of reasoning falls to pieces. In general, the factorizations of $$n!$$ and $$n! + 1$$ are totally different. But maybe there’s some other property of $$n! + 1$$ that conflicts with squareness. It might have something to do with congruence classes, or quadratic residues. From the definition of a factorial, we know that $$n!$$ is divisible by all positive integers less than or equal to $$n$$, which means that $$n! + 1$$ cannot be divisible by any of those numbers (except $$1$$). This observation rules out certain kinds of squares, namely those that have small primes in their factorization. But for all $$n \gt 4$$ the square root of $$n!$$ greatly exceeds $$n$$, so there’s plenty of room for larger factors, as in the case of $$7! + 1 = 71^2$$.

Here’s another avenue that might be worth exploring. The decimal representation of any large factorial ends with a string of $$0$$s, formed as the products of $$5$$s and $$2$$s among the factors of the number. Thus $$n! + 1$$ must look like

$XXXXX \ldots XXXXX00000 \ldots 00001,$

where $$X$$ represents any decimal digit, and the trailing sequence of $$0$$s now ends with a single terminal $$1$$. Can we figure out a way to prove that a number of this form is never a square? Well, if the final digit were anything other than $$1, 4,$$ or $$9$$, the proof would be easy, but lots of squares end in $$\ldots 01$$, such as $$10{,}201 = 101^2$$ and $$62{,}001 = 249^2$$. If there’s some algebraic argument along these lines showing that $$n! + 1$$ can’t be a square, it will have to be something subtler.

All of the above is make-believe mathematics. I have stirred up some ingredients that look like they might make a tasty confection, but I have no idea how to bake the cake. Perhaps someone else will supply the recipe. In the meantime, I want to entertain an alternative hypothesis: that nothing prevents $$n! + 1$$ from being a square except improbability.

The pattern observed in the $$n! + 1 = m^2$$ problem—a few matches among the smallest elements of the sequences, and then nothing more for many thousands of terms—is not unique to factorials and squares. Other pairs of sequences exhibit similar behavior. For example, I have tried matching factorials with triangular numbers. The triangulars, beginning $$1, 3, 6, 10, 15, 21, \ldots$$, are defined by the formula $$T(m) = m(m + 1)/2$$. If we look for factorials that are also triangular, we get $$1! = T(1) = 1$$, then $$3! = T(3) = 6$$, and finally $$5! = T(15) = 120$$. No more examples appear through $$n = 100{,}000$$.
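Mechanizing that search takes only a small variation on search_fac_sqr (the function below is my own sketch); the test uses the standard fact that a number $$F$$ is triangular exactly when $$8F + 1$$ is a perfect square.

function search_fac_tri(maxn)
    fac = big(1)
    for n in 1:maxn
        fac *= n                        # incremental factorial
        r = isqrt(8fac + 1)
        if r * r == 8fac + 1            # n! is triangular
            println(n, "! = T(", (r - 1) ÷ 2, ") = ", fac)
        end
    end
end

Running it out to a few thousand (or, with patience, to $$100{,}000$$) reports only the three cases above.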

What about factorials that are $$1$$ less than a triangular, satisfying the equation $$n! + 1 = T(m)$$? I know of only one case: $$2! + 1 = 3$$. Broadening the search a little, I found that $$n! + 4$$ is triangular for $$n \in \{2, 3, 4\}$$, again with no more hits up to $$100{,}000$$.

For another experiment we can bring back the square numbers and swap out the factorials, replacing them with the ever-popular Fibonacci sequence, $$1, 1, 2, 3, 5, 8, 13, \ldots$$, defined by the recurrence $$F(n) = F(n - 1) + F(n - 2)$$, with $$F(1) = F(2) = 1$$. It’s been known since the 1960s that $$1$$ and $$144$$ are the only positive integers that are both Fibonacci numbers and perfect squares. Looking for Fibonacci numbers that are $$1$$ less than a square, I found that $$F(4) + 1 = 4$$ and $$F(6) + 1 = 9$$, with no other instances up to $$F(500{,}000)$$.

We can do the same sort of thing with the Catalan numbers, $$1, 1, 2, 5, 14, 42, 132 \ldots$$, another sequence with a huge fan club. I find no squares other than $$1$$ among the Catalan numbers up to $$n = 100{,}000$$; I don’t know if anyone has proved that none exist. A search for cases where $$C(n) + 1 = m^2$$ also comes up empty, but there are a few low-lying matches for $$C(n) + k = m^2$$ for $$k \in \{2, 3, 4\}$$.

Finding similar behavior in all of these diverse sequences changes the complexion of the problem, in my view. If we discover some obscure, special property of $$n! + 1$$ that explains why it never lands on a square (for large values of $$n$$), do we then have to invent another mechanism for Fibonacci numbers and still another for Catalan numbers? Isn’t it more plausible that some single, generic cause lies behind all the observations?

But the cause can’t be too generic. It’s not the case that you can take any two numeric sequences and expect to see the same kind of pattern in their intersections. Consider the factorials and the prime numbers. By the very nature of a factorial, none of them except 2! = 2 can possibly be prime, but there’s no obvious reason that $$n! + 1$$ can’t be a prime. And, indeed, for $$n \le 100$$ nine values of $$n! + 1$$ are prime. Extending the search to $$n \le 1000$$ turns up another seven. Here is the full set of known numbers for which $$n! + 1$$ is prime:

$1, 2, 3, 11, 27, 37, 41, 73, 77, 116, 154, 320, 340, 399, 427, 872, 1477, \\ 6380, 26951, 110059, 150209$

They get rare as $$n$$ increases, but there’s no hint of a sharp cutoff, as there is in the other cases explored above. Does the sequence continue indefinitely? That seems a reasonable conjecture. (For more on this sequence, including references, see Chris K. Caldwell’s factorial prime page.)

My question is this: Can we understand these curious patterns in terms of mere chance coincidence? The values of $$n! + 1$$ form an infinite sequence of integers spread over the number line, dense near the origin but becoming extremely sparse as $$n$$ increases. The values of $$m^2$$ form another infinite sequence, again with diminishing density, although the dropoff is not as steep. Maybe factorials bump into squares among the smallest integers because there just aren’t enough of those integers to go around, and some of them have to do double duty. But in the vast open spaces out in the farther reaches of the number line, a factorial can wander around for years—maybe forever—and not meet a square.

Let me try to state this idea more precisely. Since $$n!$$ cannot be a square, we know that it must lie somewhere between two square numbers; the arrangement on the number line is $$(m - 1)^2 \lt n! \lt m^2$$. The distance between the end points of this interval is $$m^2 - (m - 1)^2 = 2m - 1$$. Now choose a number $$k$$ at random from the interval, and ask whether $$n! + k = m^2$$. Exactly one value of $$k$$ must satisfy this condition, and so the probability of success is $$1/(2m - 1)$$, or roughly $$1 / (2 \sqrt{n!})$$. Because $$\sqrt{n!}$$ increases very rapidly, this probability takes a nosedive toward zero as $$n$$ increases. It is represented by the red curve in the graph below. Note that by $$n = 100$$ the red curve has already reached $$10^{-80}$$.

The green curve gives the probability of a collision between Fibonacci numbers and squares; the shape is similar, though it dives off the precipice a little later. The Fibonacci-square curve approximates a negative exponential: Because $$\sqrt{F(n)}$$ grows like $$\phi^{n/2}$$, the probability is roughly proportional to $$\phi^{-n/2}$$, where $$\phi = (\sqrt{5} + 1) / 2 \approx 1.618$$. The factorial-square curve is even steeper because the factorial function is superexponential: $$n!$$ grows faster than $$c^n$$ for any fixed $$c$$.

The blue curve, recording the probability of coincidences between factorials and primes, has a very different shape. In the neighborhood of $$n!$$ the average distance between consecutive primes is approximately $$\log n!$$, which grows just a little faster than $$n$$ itself and very much slower than $$n!$$. The probability of collision between factorials and primes is roughly $$1 / \log n!$$. The continuous blue curve corresponds to this smooth approximation. The blue dots sprinkled near that line give the probability based on actual distances between consecutive primes.

What to make of those curves? Is it legitimate to apply probability theory to these totally deterministic sequences of numbers? I’m not quite sure. Before confronting the question directly, I’d like to retreat a few steps and look at a simpler model where probability is clearly entitled to a seat at the table.

Let us borrow one of Jacob Bernoulli’s famous urns, which have room to hold an infinite number of ping pong balls. Start with one black ball and one white ball in the urn, then reach in and take a ball at random. Clearly, the probability of choosing black is $$1/2$$. Put the chosen ball back in the urn, and also add another white ball. Now there are three balls and only one is black, so the probability of drawing black is $$1/3$$. Add a fourth ball, and the probability of black falls to $$1/4$$. Continuing in this way, the probability of black on the $$n$$th draw must be $$\frac{1}{n + 1}$$.

If we go on with this protocol forever—always choosing a ball at random, putting it back, and adding an extra white ball—what is the probability of eventually seeing the black ball at least once? It’s easier to answer the complement of this question, calculating the probability of never seeing the black ball. This is the infinite product $$\frac{1}{2} \times \frac{2}{3} \times\frac{3}{4} \times\frac{4}{5} \ldots$$, or:

$P(\textrm{never black}) = \prod_{n = 1}^{\infty} \left(1 - \frac{1}{n+1}\right)$

The product goes to zero as $$n$$ goes to infinity; the partial products telescope, $$\frac{1}{2} \cdot \frac{2}{3} \cdot \frac{3}{4} \cdots \frac{N}{N+1} = \frac{1}{N+1}$$, which shrinks toward zero. In other words, in an endless series of trials, the probability of never drawing black is $$0$$, which means the probability of seeing black at least once must be $$1$$. (“Probability $$1$$” is not exactly the same thing as “certain,” but it’s mighty close.)

Now let’s try a different experiment. Again start with one black ball and one white ball, but after the first draw-and-replace cycle add two white balls, then four white balls, and so on, so that the total number of balls in the urn at stage $$n$$ is $$2^n$$; throughout the process all of the balls but one are white. Now the probability of never seeing the black ball is $$\frac{1}{2} \times \frac{3}{4} \times\frac{7}{8} \times\frac{15}{16} \ldots$$, or:

$P(\textrm{never black}) = \prod_{n = 1}^{\infty} \left(1 - \frac{1}{2^n}\right)$

This product does not go to zero, no matter how large $$n$$ becomes. Neither does it go to $$1$$. The product converges to a constant with the approximate value $$0.288788095$$. Strange, isn’t it? Even in an infinite series of draws from the urn, you can’t be sure whether the black ball will turn up or not.
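Sixty factors are already enough to see where the product is headed:

prod(1 - 2.0^-n for n in 1:60)    # ≈ 0.288788095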

These two urn experiments do not correspond directly to any of the sequence coincidence problems described above; they simply illustrate a range of possible outcomes. But we can rig up an urn process that mimics the probabilistic treatment of the factorials-and-squares problem. At the $$n$$th stage, the urn holds $$1 + 2 \sqrt{n!}$$ balls, only one of which is black. The probability of never seeing the black ball, even in an infinite series of trials, is

$\prod_{n = 1}^{\infty} \left(1 - \frac{1}{1 + 2 \sqrt{n!}}\right).$

This expression converges to a value of approximately $$0.2921426977$$. It follows that the probability of seeing black at least once is $$1 - 0.2921426977$$, or $$0.7078573023$$. (No, that number is not $$1/\sqrt{2}$$, although it’s close.)

An urn process resembling the factorials-and-primes problem gives a somewhat different result. Here the number of balls in the urn at stage $$n$$ is $$\log n!$$, again with just one black ball. The infinite product governing the cumulative probability is

$\prod_{n = 2}^{\infty} \left(1 - \frac{1}{\log n!}\right).$

On numerical evidence this expression seems to dwindle away to zero as $$n$$ goes to infinity (although I’m not $$100$$ percent sure of that). If it does go to $$0$$, then the complementary probability that the black ball will eventually appear must be $$1$$.

Some of these results leave me feeling befuddled, and even a little grumpy. Call me old-fashioned, but I always thought that rolling the dice infinitely many times ought to be enough to settle beyond doubt whether a pattern appears or not. In the harsh light of eternity, I would have said, everything is either forbidden or mandatory; as $$n$$ goes to infinity, probability goes to $$0$$ or it goes to $$1$$. But apparently that’s not so. In the factorial urn model the probability of never seeing a black ball is neither $$0$$ nor $$1$$ but lies somewhere in the neighborhood of $$0.2921426977$$. What does that mean, exactly? How am I supposed to verify the number, or even check its first few digits? Running an infinite series of trials is not enough; you need to collect a statistically significant sample of infinite experiments. For an exact result, try an infinite series of infinite experiments. Sigh.

The urn model corresponds in a natural way to the randomized version of the factorial-square problem, where we look at $$n! + k = m^2$$ and choose $$k$$ at random from an appropriate range of values. But what about the original problem of $$n! + 1 = m^2$$? In this case there’s no random variable, and hence there’s no point in running multiple trials for each value of $$n$$. The system is deterministic. For each $$n$$ the factorial of $$n$$ has a definite value, and either it is or it isn’t adjacent to a perfect square. There’s no maybe.

Nevertheless, there might be a way to sneak probabilities in through the back door. To do so we have to assume that factorials and squares form a kind of ergodic system, where observing one chain of events for a long period is equivalent to watching many shorter chains. Suppose that factorials and squares are uncorrelated in their positions on the number line—that when a factorial lands between two squares, its distance from the larger square can be treated as a random variable, with every possible distance being equally likely. If this assumption holds, then instead of looking at one value of $$n!$$ and trying many random values of $$k$$, we can adopt a single value of $$k$$ (namely $$k = 1$$) and look at $$n!$$ for many values of $$n$$.

Is the ergodic assumption defensible? Not entirely. Some distances between $$n!$$ and $$m^2$$ are known to be more likely than others, and indeed some distances are impossible. However, the empirical evidence suggests that the deviations must be slight. The histogram below shows the distribution of distances between a factorial and the next larger square for the first $$100{,}000$$ values of $$n!$$. The distances have all been normalized to the range $$(0, 1)$$ and classified in $$100$$ bins. There is no obvious sign of bias. Calculating the mean and standard deviation of the same $$100{,}000$$ relative distances yields values within $$1$$ percent of those expected for a uniform random distribution. (The expected values are $$\mu = 1/2$$ and $$\sigma = 1/\sqrt{12} \approx 0.29$$.)
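Here is a sketch of that check, run over a couple of thousand factorials rather than the full $$100{,}000$$ (the function name is mine):

using Statistics

function reldist(maxn)
    fac, d = big(1), Float64[]
    for n in 2:maxn
        fac *= n
        m = isqrt(fac) + 1                    # m^2 is the next square above n!
        push!(d, Float64((m^2 - fac) / (2m - 1)))
    end
    return d
end

d = reldist(2000)
mean(d), std(d)     # should land near 1/2 and 1/√12 ≈ 0.289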

If this probabilistic approach can be taken seriously, I can make some quantitative statements about the prospects for ever finding a large factorial adjacent to a perfect square. As mentioned above, the overall probability that one or more values of $$n! + 1$$ are equal to squares is about $$0.7078573023$$. Thus we should not be too surprised that three such cases are already known, namely the examples with $$n = 4, 5,$$ and $$7$$. Now we can apply the same method to calculate the probability of finding at least one more case with $$n \gt 7$$. Let’s make the question more general: “Whether or not I have seen any squares among the first $$C$$ values of $$n! + 1$$, what are the chances I’ll see any thereafter?” To answer this question, we can just remove the first $$C$$ elements from the infinite product:

$\prod_{n = C+1}^{\infty} \left(1 - \frac{1}{1 + 2\sqrt{n!}}\right).$

For $$C = 7$$, the answer is about $$0.0037$$. For $$C = 100$$, it’s about $$5.7 \times 10^{-80}$$. We are sliding down the steep slope of the red curve.
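Here is a sketch of that computation in Julia (the function name and truncation at 200 terms are my own choices). The terms shrink so fast that a couple of hundred factors are plenty; the BigFloat precision is raised so the product can resolve the astronomically small tails that appear for larger values of $$C$$.

function p_hit_after(C; terms = 200)
    setprecision(BigFloat, 1024) do           # default precision can't resolve such tiny tails
        fac = factorial(big(C))
        p_none = big(1.0)
        for n in C+1:C+terms
            fac *= n
            p_none *= 1 - 1 / (1 + 2 * sqrt(fac))
        end
        1 - p_none                            # probability of at least one square in the tail
    end
end

p_hit_after(7)    # ≈ 0.0037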

As a practical matter, further searching for another factorial-square couple does not look like a promising way to spend time and CPU cycles. The probability of success soon falls into the realm of ridiculously small numbers like $$10^{-1{,}000{,}000}$$. And yet, from the mathematical point of view, the probability never vanishes. Removing a finite number of terms from the front of an infinite product cannot change its convergence properties. If the original product converged to a nonzero value, then so will the truncated version. Thus we have wandered into the canyon of maximal frustration, where there’s no realistic hope of finding the prize, but the probabilities tell us it still might exist.

I am going to close this shambling essay by considering one more example—a cautionary one. Suppose we apply probabilistic reasoning to the search for a cube that is $$1$$ less than a square. If we were looking for exact matches between cubes and squares, we’d find plenty of them: They are the sixth powers: $$1, 64, 729, \ldots$$. But integer solutions to the equation $$n^3 + 1 = m^2$$ are not so abundant. One low-lying example is easy to find: $$2^3 + 1 = 3^2$$, but after 8 and 9 where can we expect to see the next consecutive cube and square?

The probabilistic approach suggests there might be reason for optimism. Compared with factorials and Fibonaccis, cubes grow quite slowly; the rate is polynomial rather than exponential or superexponential. As a result, the probability of finding a cube at a given distance from a square falls off much less steeply than it does for $$n!$$ or $$F(n)$$. In the graph below, $$P(n^3 + k = m^2)$$ is the orange curve.

Note that the orange curve lies just below the blue one, which represents the probability that $$n!$$ lies near a prime. The proximity of the two curves suggests that the two problems—factorials adjacent to primes, cubes adjacent to squares—might belong to the same class. We already know that factorial primes do seem to go on and on, perhaps endlessly. The analogy leads to a surmise: Maybe cube-square coincidences are also unbounded. If we keep looking, we’ll find lots more besides $$8$$ and $$9$$.

The surmise is utterly wrong. The problem has a long history. In 1844 Eugène Catalan conjectured that $$8$$ and $$9$$ are the only consecutive perfect powers among the integers; the conjecture was finally proved in 2004 by Preda Mihăilescu. For the special case of squares and cubes, Euler had already settled the matter in the 18th century. Thus, probabilities are beside the point.

All of the questions considered here belong to the category of Diophantine analysis—the study of equations whose solutions are required to be integers. It is a field notorious for problems that are easy to state but hard to solve. Catalan’s conjecture is one of the most famous examples, along with Fermat’s Last Theorem. When Diophantine problems are ultimately resolved, the proofs tend to be non-elementary, drawing on sophisticated tools from distant realms of mathematics—algebraic geometry in the proof of Fermat’s Last Theorem by Andrew Wiles and Richard Taylor, cyclotomic fields in Mihăilescu’s proof of the Catalan conjecture. As far as I know, probability theory has not played a central role in any such proof.

When I started wrestling with these questions a few weeks ago, I did not expect to discover a definitive solution. I’ve certainly fulfilled my expectations! As a matter of fact, in my own head the situation is more muddled now than it was at the outset. The realization that even an infinite series of experiments would not necessarily resolve some of the questions is deeply unsettling, and makes me wonder how much I really understand about probability theory. But that’s hardly unprecedented in mathematics. I suppose I’ll just have to get used to it.

Update: Thanks to a further tip from Tanton, I have learned that the problem has an extensive history, and also a name: Brocard’s problem, after Henri Brocard, who published on it in 1876 and 1885. Ramanujan mentioned it in 1913. Erdős conjectured there are no more solutions. Marius Overholt connected it with the abc conjecture. Bruce C. Berndt and William F. Galway established that there are no more solutions up to $$10^9$$. All this comes from the Wikipedia entry on Brocard’s problem. That article also mentions (but does not explain) that the solutions are called Brown numbers.

I have some more reading to do.


## The uniqueness constraint

For the past few weeks the Sunday New York Times has been publishing a puzzle called Capsules, devised by Wei-Hwa Huang. Here are the instructions:

Place numbers in the grid so that each outlined region contains the numbers 1 to n, where n is the number of squares in the region. The same number can never touch itself, not even diagonally.

Here is a partially completed example:

The black, pre-printed numbers are the “givens,” supplied by the puzzle creator. I filled in the pencil-written numbers in a sequence of “forced” moves dictated by two simple rules:

1. A number can be placed in a square if no other number is allowed there. For example, the three singleton squares in the bottom row must each hold a 1, and these squares are the obvious place to start solving the puzzle. After the 1s are written in, the square outlined in yellow in the diagram below can also be filled in; its neighbors forbid any number other than 3.
2. A number can be placed in a square if the number has no other possible home within a region. The blue-outlined 1 in the diagram below was determined by this rule. There must be a 1 somewhere in the region, but none of the other squares can accommodate it.

At this point in the solution process, with the grid in the state shown above, I was unable to find any other blank squares whose contents could be decided by following these two rules and no others. But I did spot a move based on a different kind of reasoning. Consider the two pairs of open squares marked in color:

The salmon-pink squares must hold the numbers 2 and 5, but it’s not immediately clear which number goes in which square. Likewise the lime-green squares must hold 2 and 4, in one order or the other. I submit that the numbers must have the following arrangement:

How do I justify that choice? Suppose the green 2 and 4 were transposed:

Then the pink 2 and 5 could be placed in either permutation, and no later moves elsewhere in the puzzle would ever resolve the ambiguity. This outcome is not acceptable if we assume the puzzle must have a unique solution. The uniqueness constraint might be expressed as a third rule:

3. A number can be placed in a square if it is needed to prevent other squares from having multiple legal configurations.

I have vague qualms about this mode of puzzle-solving. It’s surely not cheating, but the third rule has a different character from the others. It exploits an assumed global property of the solution, rather than relying on local interactions. We are not making a choice because it is forced on us; we are choosing a configuration that will force a choice elsewhere.

In this particular puzzle it’s not actually necessary to apply the uniqueness constraint. There is at least one other pathway to a solution—which I’ll leave to you to find. Can we devise a puzzle that requires rule 3? I’m not quite sure the question is even well-formed. All constraint-satisfaction problems can be solved by a mindless brute-force algorithm: Just write in some numbers at random until you reach a contradiction, then backtrack. So if we want to force the solver to use a specific tool, we somehow have to outlaw that universal jackhammer.

The uniqueness constraint is not unique to the Capsules puzzle. I’ve encountered it often in kenkens, and occasionally in sudokus. I even have a sense of déjà lu as I write this. I feel sure I’ve read a discussion of this very issue, somewhere in recent years, but I haven’t been able to lay hands on it. Pointers to precedents are welcome.

Addendum 2017-03-19: Jim Propp reminds me of his marvelous Self Referential Aptitude Test. The instructions begin:

The solution to the following puzzle is unique; in some cases the knowledge that the solution is unique may actually give you a short-cut to finding the answer to a particular question.

I completed the 20-question puzzle when SRAT first went public some years ago. This morning I found I was able to do it again with no diminution in enjoyment—or effort. I remembered none of the answers or the sequence of deductions needed to find them.

Highly recommended. And while you’re at it, check out Propp’s Mathematical Enchantments blog and his Twitter feed: @JimPropp.
