Truth, Trump, and Trisectors

What is truth? said jesting Pilate, and did not stay for an answer.

Lately there’s been a lot of news about fake news (some of it, for all I know, fake). Critics are urging Facebook, Google, and Twitter to filter out the fraudulent nonsense. This seems like a fine idea, but it presupposes that the employees—or algorithms—doing the filtering can reliably distinguish fact from fiction. Even if they can tell the difference, can we count on the companies to stand up to the prevaricators? Sure, Facebook can block traffic from a clickbait website run by a teenager in Macedonia. But what if the lies were to come from an account registered in the .gov domain?

When misinformation is stamped with the imprimatur of the president or other high government officials, there’s not much hope of shutting it down at the source or breaking the chain of transmission. This problem was not created by the new communication technologies of the internet age, and it is not unique to the incoming Trump administration. I have probably been lied to by every president who has served during my lifetime, and I could name seven of those presidents whose fibs are well documented. But Trump is different. He is not a devious liar, careful not to be caught in a contradiction. He is simply indifferent to truth. When challenged to support a dubious claim, he shrugs or changes the subject. The question of veracity seems not to interest him. And his election suggests that some part of the voting public feels the same way.

What to do? The only practical remedy I can suggest is to work diligently to uncover the truth, to publish it widely, and to help the public reach sound judgments about what to believe. All three of these tasks are difficult, but the last one, in my view, is the real stumper. As the signal-to-noise ratio in public discourse dives toward zero, we would all do well to sharpen our powers of discrimination. But I worry most about that subpopulation for whom strict factual accuracy is not the primary criterion when they choose stories to pass on to their friends and to embrace as the basis of important decisions. I don’t know how to change this, but I feel it’s important to try.

I’d like to begin with a more personal and less political anecdote. Some years ago, when the internet was young, a friend began sending me emails with subject lines like “Save 7 y.o. Jessica Mydek from cancer” or “Fw: Fw: FW: Fw: Bill No. 602P 5-cent tax on every email.” I would reply with a link to the debunking report at Snopes. My friend would thank me and sheepishly apologize, then the next month she would forward a message warning me not to blink my high beams if I saw a car with the headlights off—I’d be attacked by gang members conducting a rite of initiation. Email exchanges like these continued for a year or so, then they tapered off. Had my friend developed a measure of skepticism? Yes, but not in the way I had hoped. She had become skeptical of snopes.com. After all, it’s a website with a funny name, run by smug, self-appointed know-it-alls who make fun of gullible people. Why should she trust them?

Instead of an ad hoc watchdog like Snopes, maybe we should have an official arbiter of factuality, a certified and sanctified public agency. Call it the Ministry of Truth. And let’s give it enforcement powers: No social network or news outlet is allowed to publish anything unless the ministry attests to its accuracy.

Okay, that’s not such a hot idea after all.

In any case, no amount of scrupulous fact-checking would have cured my friend’s addiction to hoax email. There was something in those messages she wanted to believe. Even if 7 y.o. Jessica Mydek doesn’t exist, a world where chain letters can cure cancer is more appealing and empowering than the Snopesian world of grim facts, where you can only watch helplessly while a child dies. When you see a car driving without headlights, it’s more exciting to imagine a murderer at the wheel than a forgetful old fool. I’m sure my friend had her own doubts about some of these breathless pleas and warnings, but she was willing to overlook dodgy evidence or flawed logic for the sake of a good story.

As far as I know, my friend’s lax attitude toward factuality never caused grievous harm to herself or anyone else. But sometimes credulity can be disastrous.


Those who can make you believe absurdities can make you commit atrocities.

—Voltaire (paraphrase)

Let’s talk about Edgar Maddison Welch, the young man who showed up at the Comet Ping Pong pizzeria with a rifle and a handgun. By his own account, he sincerely believed he was going to rescue children being held captive in a basement room and subjected to unspeakable acts by Hillary Clinton and her associates. Where did that idea come from? Apparently it began with leaked emails from the hacked account of John Podesta, Clinton’s campaign chairman. According to an outline in the New York Times, eager sleuths on Reddit and 4chan discovered the phrase “cheese pizza” in the email texts, and recognized it as a code word for “child pornography.” Connecting the rest of the dots was easy and obvious: Podesta had corresponded with the owner of Comet Ping Pong, and Barack Obama had been photographed playing ping pong with a small boy, and so the basement of the restaurant must be where the Democrats slaughter their child sex slaves. However, the would-be rescuer with the AR-15 found no basement kill room—in fact, no basement at all. “The intel on this wasn’t 100 percent,” he told a Times reporter.

In case there’s even the slightest doubt, let me say plainly that I don’t believe a word of that grotesque tale about child abuse in the pizza parlor. Indeed, I can make sense of it only as a stupid joke, a parody, a deliberately preposterous confection. If I were fabricating such a malicious fiction, and if I wanted people to believe it, I would come up with something that’s not such a total affront to plausibility. Yet at least one reader of these fantasies took them in deadly earnest. We’ll never know how many more believe there might be a “grain of truth” in the story, even if specific details are wrong. And the purveyors of the myth are not backing down. In an AP story that ran in the Times on December 9 they propose that the Comet Ping Pong event was a “false flag,” yet another twist in the larger plot:

James Fetzer, a longtime conspiracy theorist who also believes the Sandy Hook school shooting was a hoax, told The Associated Press that Welch’s visit to the pizzeria was staged to distract the public from the truth of the “pizzagate” allegations. . . .

Fetzer and other conspiracy theorists seized on the fact that Welch had dabbled in movie acting as a giveaway that his visit to the restaurant was staged. . . . Blogger Joachim Hagopian, a false-flag proponent, told the AP that conspirators look for “a patsy or stooge” to pose as a lone gunman with an assault rifle. Welch, he said, “fits the pattern” with his acting background.

“He’s got an IMDB (Internet Movie Database) profile,” Hagopian said.

It’s easy to heap ridicule on these ideas. Indeed, by quoting them at length, that’s exactly what I’m doing. How could anyone possibly believe in such contrived and convoluted schemes, such teetering towers of improbabilities? But it’s useful to keep in mind that the incredulity goes both ways. The conspiracy theorists would snigger at my naiveté for believing what I read in the New York Times. Anyone who’s paying attention knows that all the big papers and TV networks are parties to the conspiracy. (Snopes is surely in on it too.)


Mathematics alone proves, and its proofs are held to be of universal and absolute validity, independent of position, temperature or pressure. You may be a Communist or a Whig or a lapsed Muggletonian, but if you are also a mathematician, you will recognize a correct proof when you see one.

—Philip J. Davis, American Mathematical Monthly, 79(3):254 (March 1972)

A high-stakes presidential election and accusations of child rape and murder certainly add force and immediacy to a discourse on the nature of truth, but they also distract. I would like to retreat from these incendiary themes, at least for a few paragraphs, and look at the calmer universe of mathematics, where we have well-developed mechanisms for distinguishing between truth and falsehood.

Take the case of angle trisectors—people who claim they can divide an arbitrary angle into equal thirds with the standard Euclidean toolkit of straightedge and compass. In some respects, trisectors are like peddlers of pizza parlor pedophilia, but when a trisector comes before you, you can give a stronger response than: “What you claim is contrary to common sense.” You can offer an absolute refutation: “What you claim is impossible. Pierre Laurent Wantzel proved it 180 years ago.” But I wouldn’t count on the trisector meekly accepting this answer and going away.

A few years ago, writing in American Scientist, I made an earnest effort to explain the Wantzel proof in some detail and in plain words, and I provided an English translation of Wantzel’s own paper from 1837. Soon after the article appeared, I began receiving letters festooned with elaborate geometric diagrams, some of them quite pretty, which the authors presented as proper straightedge-and-compass trisections. I wasn’t surprised at this development, but I was at a loss for how to respond. If a mathematical proof fails to persuade the reader of the truth of a mathematical proposition, what other kind of argument could possibly be more effective?

In the past few weeks I’ve given this incident further thought, and I’ve come to see it in a different light. The task of “persuading the reader,” even in mathematics, is not just about truth; it’s also about trust, or rapport, or social solidarity. The quip by Philip Davis that I reproduce above has long been a favorite of mine, but at this point I am tempted to turn it inside out. What I would say is not “If you’re a mathematician, you’ll recognize a proof” but “If you recognize a proof, you’re a mathematician.” The ability and willingness to engage in a certain style of reasoning, and to accept the consequences of that mental process no matter what the outcome, marks you as a member of the mathematical tribe. And, conversely, if you respond to a proof by saying “It may be impossible but I can do it anyway,” then you are not a member of this particular affinity group.

I am not arguing here that mathematical truth is some kind of socially determined quantity, and no more fundamental than religious or political doctrines. Quite the contrary, I am one of those stubborn prepostmodernists who believes in a reality that’s not just my private daydream. I’m convinced we all share one universe, where certain things are true and others aren’t, where certain events happened and others didn’t. The interior angles of a plane triangle will always sum to 180 degrees no matter what I say. Nevertheless, the process by which we recognize such truths and reach consensus about them is a social one, and it’s not infallible.

The same essay in which I discussed Wantzel’s proof also mentioned the infamous Monty Hall problem.

In 1990 Marilyn vos Savant, a columnist for Parade magazine, discussed a hypothetical situation on the television game show “Let’s Make a Deal,” hosted by Monty Hall. A prize is hidden behind one of three doors. When a contestant chooses door 1, Hall opens door 3, showing that the prize is not there, and offers the player the option of switching to door 2. Vos Savant argued . . . that switching improves the odds from 1/3 to 2/3.

In the following weeks thousands of letter writers berated vos Savant for her blatant error, insisting that the two remaining closed doors were equally likely to conceal the prize. Quite a few of those critics identified themselves as mathematicians or mathematics teachers. Even Paul Erdős took this side of the controversy (although he didn’t write a letter to Parade). But of course vos Savant was right all along.
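
For readers who would rather watch the 2/3 emerge from an experiment than from an argument, here is a minimal simulation sketch in Python. The door-numbering details are my own, but the rules are the ones described above: the host always opens a losing door the player did not pick, then offers the switch.

```python
# Minimal Monty Hall simulation: compare the "stay" and "switch" strategies.
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)       # door hiding the prize
        choice = random.randrange(3)      # player's initial pick
        # Host opens a door that is neither the player's pick nor the prize.
        opened = next(d for d in range(3) if d != choice and d != prize)
        if switch:
            # Switch to the one remaining closed door.
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print("stay:  ", play(switch=False))   # hovers near 1/3
print("switch:", play(switch=True))    # hovers near 2/3
```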

This story was already well known when I told it in my American Scientist essay, but I have a reason for retelling it yet again now. Along with the mail from angle trisectors I also received irate messages from Monty Hall deniers, who insisted that the probabilities really are 1/2 and 1/2. But this time it wasn’t professional mathematicians who raised objections; they had long since resolved their differences and settled on the correct answer. Now it was outsiders, dissidents, who attacked what they perceived to be an ignorant, entrenched orthodoxy enforced by the professoriat. In other words, the same two factions continued to fight over the same question, but they had switched positions.

The point I’m making here is the unsurprising one that social factors influence judgment. We are all predisposed to go along with the views of those we know and trust, and we are skeptical, at least initially, of ideas that come from outsiders. We listen more attentively and sympathetically when the speaker is a trusted colleague. The scrawled manuscript from an unknown author claiming a simple proof of the Riemann hypothesis gets a cursory reading or none at all. There’s nothing wrong with making such distinctions. The alternative—equal treatment for the competent and the crackpot—would certainly not help advance the cause of truth. But it has to be acknowledged that these practices further alienate outsiders. By pushing them away and closing off the channel of communication—treating them as irredeemables and deplorables—we diminish the chance that they will ever find a path into the community.


Why do the nations so furiously rage together, and why do the people imagine a vain thing?

—Psalms, 2:1, via George Frideric Handel

Do these skirmishes over minor mathematical questions have anything to do with “Fakebook” news that might have turned the tide of a presidential election? I submit there is a connection. In both cases the nub of the problem is not discovering the truth but persuading people to recognize and own it. The mathematical examples show that even the most irrefutable kind of evidence—a deductive proof—is not always enough to win over skeptics or opponents.

Proof is said to “compel belief”: You embrace the result even against your will. Once you grant the premises, and you work through the chain of implications, accepting the validity of each step in turn, you have no choice but to accept the ultimate conclusion. Or so one might think. But this view of proof as an irresistible engine of reason underestimates the flexibility and creativity of the human mind. In fact we are all capable of believing impossible things before breakfast, and denying certainties after dinner, if we choose to. Mathematicians—members of the tribe—promise not to do so, but that pledge is not binding on anyone else.

When I look back over my various encounters with angle trisectors and other mathematical mavericks, I can’t recall a single instance where I successfully persuaded someone to give up an erroneous belief and accept the truth. Not one soul saved. This record of failure does not give me great confidence when I think of venturing forth to combat fake political news, where we don’t even have the secret weapon of deductive proof.

I’m left with the thought that compelling people to acknowledge a truth may be the wrong approach, the wrong attitude. Voltaire was a great hero of free-thinking, but his motto “Écrasez l’infâme!” is a bit too militaristic for my taste. However you choose to translate that phrase, he meant it as a call to arms. Let us crush superstition, wipe out error and ignorance, put an end to fanaticism and irrationality. I’m for all that, but I don’t want to be bludgeoning people into accepting the truth. It doesn’t really change their minds, and at some point they bludgeon you back.

Rather than force the people to give up their false notions and vain things, I would let the truth seduce them. Let them fall in love with it. Doesn’t that sound grand? If only I had the slightest clue about how to make it happen.


I like mathematics largely because it is not human and has nothing particular to do with this planet or with the whole accidental universe—because, like Spinoza’s God, it won’t love us in return.

—Bertrand Russell

At this point my only consolation is a cold and severe one. Trump may be indifferent to truth, but the universe, in the long run, is utterly indifferent to him and his foibles. Our new president can declare that climate change is a hoax, and purge government agencies of all those who disagree, but those acts will not lower the concentration of carbon dioxide in the atmosphere.

Mathematical truths are even more aloof from human interference. In Orwell’s 1984 the Thought Police boast of making citizens believe that two plus two equals five. But all the sophistry of the Ministry of Truth and all the torture chambers of the Ministry of Love cannot alter the equation itself. They cannot make two and two equal five.

These are very small islands of certainty in a vast maelstrom of confusion, but they offer refuge, and maybe a place to build from.


ABC and FLT

This weekend I’ll be attending a workshop at the University of Vermont, “Kummer Classes and Anabelian Geometry: An introduction to concepts involved in Mochizuki’s work on the ABC conjecture, intended for non-experts.” It was the last phrase in this title that gave me the courage to sign up for the meeting, but I have no illusions. There are degrees of non-expertness, and mine is really quite advanced. The best I can hope for is to go home a little less ignorant than I arrived.

I’m glad to be here all the same. The ABC conjecture is one of those beguiling artifacts in number theory that seem utterly simple one moment and utterly baffling the next. As for Shinichi Mochizuki’s 500-page treatise on the conjecture, that’s baffling from start to finish, and not just for me. Four years after the manuscript was released, it remains a proof-on-probation because even the non-non-experts have yet to fully digest it. The workshop here in Burlington is part of the community’s digestive effort. I’m grateful for an opportunity to see the process at work, and naturally I’m curious about the eventual outcome. Will we finally have a proof, or will a gap be discovered? Or, as has happened in other sad cases, will the question remain unresolved? For me the best result will be not just a proof certified by a committee of experts but a proof I can understand, at least in outline, if I make the effort—a proof I might be able to explain to a broader audience.

The drive up to Burlington gave me a few hours of solitude to think about the conjecture and how it fits in with more familiar ideas in number theory. One notable connection is summed up as “ABC implies FLT.” Proving the ABC conjecture will bring us a new proof of Fermat’s Last Theorem, independent of the celebrated Andrew Wiles–Richard Taylor proof published 20 years ago. Interesting. But how are the two problems linked? As I cruised through the chlorophyll-soaked hills of the Green Mountain state, I noodled away at this question.

[Correction: ABC actually implies only that FLT has no more than a finite number of counterexamples, and only for exponent \(n \ge 4.\)]


I have written about the ABC conjecture twice before (2007 and 2012). Here’s a third attempt to explain what it’s all about.

The basic ingredients are three distinct positive integers, \(a\), \(b\), and \(c\), that satisfy the equation \(a + b = c\). Given this statement alone, the problem is so simple it’s silly. Even I can solve that equation. Pick any \(a\) and \(b\) you please, and I’ll give you the value of \(c\).

To make the exercise worth bothering with, we need to put some constraints on the values of \(a\), \(b\), and \(c\). One such constraint is that the three integers should have no factors in common. In other words they are relatively prime, or in still other words their greatest common divisor is 1. Excluding common factors doesn’t really make it any harder to find solutions to the equation, but it eliminates redundant solutions. Suppose that \(a\), \(b\), and \(c\) are all multiples of 7; then dividing out this common factor yields a set of smaller integers that also satisfy the equation: \(a\,/\,7 + b\,/\,7 = c\,/\,7\).

The ABC conjecture imposes a further constraint, and this is where the arithmetic finally gets interesting. We are to restrict our attention to triples of distinct positive integers that pass a certain test. First we find the prime factors of \(a\), \(b\), and \(c\), and cast out any duplicates. For example, given the triple \(a = 4, b = 45, c = 49\), the prime factors are \(2, 2, 3, 3, 5, 7, 7\); eliminating duplicates leaves us with the set \(\{2, 3, 5, 7\}\). Now we multiply all the distinct primes in the set, and call the product the radical, \(R\), of \(abc\). Here’s the punchline: The solution is admissible—it is an “\(abc\)-hit”—only if \(R \lt c\). For the example of \(a = 4, b = 45, c = 49\), this condition is not met: \(2 \times 3 \times 5 \times 7 = 210\), which of course is not less than 49.
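
For concreteness, here is the radical test as a small Python sketch. The trial-division factorizer and the helper names are my own conveniences, not part of any official statement of the conjecture.

```python
# Compute the radical of a triple and apply the abc-hit test described above.
import math

def prime_factors(n):
    """Prime factors of n, with multiplicity, by naive trial division."""
    factors, p = [], 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

def radical(*numbers):
    """Product of the distinct primes dividing any of the given numbers."""
    primes = set()
    for n in numbers:
        primes.update(prime_factors(n))
    product = 1
    for p in primes:
        product *= p
    return product

def is_abc_hit(a, b, c):
    """True if a + b = c, a and b are relatively prime, and radical(abc) < c."""
    return a + b == c and math.gcd(a, b) == 1 and radical(a, b, c) < c

print(radical(4, 45, 49))      # 210
print(is_abc_hit(4, 45, 49))   # False: 210 is not less than 49
```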

The ABC conjecture holds that \(abc\)-hits are rare, in some special sense. Hits do exist; try working out the radical of \(a = 5, b = 27, c = 32\) to see an example. In fact, there are infinitely many \(abc\)-hits, with constructive algorithms for generating endless sequences of them. Yet, it’s part of the maddening charm of modern mathematics that objects can be both infinitely abundant and vanishingly rare at the same time. The particular kind of rareness at issue here says that \(R\) can be less than \(c\), but seldom by very much. As a measure of how much, define the power \(P\) of an \(abc\)-hit as \(\log(c) \,/\, \log(R)\). Then one version of the ABC conjecture states that there are only finitely many \(abc\)-hits with \(P \gt (1 + \epsilon)\) for any \(\epsilon \gt 0\).
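
To get a feel for just how sparse the hits are, one can brute-force every coprime triple with \(c\) below some modest bound and compute the power of each hit. This rough sketch reuses the radical helper from the snippet above; it is an illustration, not an efficient search.

```python
# Brute-force search for abc-hits with c up to a small bound.
import math

def abc_hits(c_max):
    """Yield (a, b, c, radical, power) for every abc-hit with c <= c_max."""
    for c in range(3, c_max + 1):
        for a in range(1, c // 2 + 1):
            b = c - a
            if a != b and math.gcd(a, b) == 1:
                r = radical(a, b, c)       # radical() defined in the sketch above
                if r < c:
                    yield a, b, c, r, round(math.log(c) / math.log(r), 3)

for hit in abc_hits(1000):
    print(hit)
# The first line printed is (1, 8, 9, 6, 1.226), a triple discussed below.
```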


On first acquaintance, all this rigmarole about radicals seems arbitrary and baroque. Who came up with that, and why? The answer to the who question is Joseph Oesterlé of the University of Paris and David W. Masser of the University of Basel, in 1985. If you play around long enough with integers and their prime factors, you can find all sorts of curious relations, so what’s so special about this one? Whether or not the conjecture is true—and whether or not it’s provable—why should we care?

As I tooled along the Interstate, I tried to answer this question to my own satisfaction. I made a little progress by thinking about what kinds of numbers we might expect to produce \(abc\)-hits. Are they big or small, nearly equal or of very different magnitudes? Are they primes or composites? Do we find squares or other perfect powers among them?

Primes are always a good place to start. Can we have an \(abc\)-hit with \(a\), \(b\), and \(c\) all prime? One complication is that either \(a\) or \(b\) will have to be equal to 2, because 2 is the only even prime, and we can’t have \(a + b = c\) with all three numbers odd. But that’s all right; there are still lots of triples (and conjecturally infinitely many) of the form \(2 + b = c\) with \(b\) and \(c\) prime; they are called twin primes. However, none of them are \(abc\)-hits. It’s easy to see why: If \(a\), \(b\), and \(c\) are all prime, then their radical is simply the product \(abc\), which for numbers larger than 1 is always going to be greater than \(a + b\).

We can extend this reasoning from the primes to all squarefree numbers, that is, numbers that have no repeated prime factors. (They are called squarefree because they are not divisible by any square.) For example, \(10, 21,\) and \(31\) form a squarefree \(abc\) triple, with prime factorizations \(2 \times 5\), \(3 \times 7\), and \(31\). But they do not produce an \(abc\)-hit, because their radical \(2 \times 3 \times 5 \times 7 \times 31 = 6510\) is clearly larger than \(a + b = c = 31\). And the same argument that rules out all-prime \(abc\)-hits applies here to exclude all-squarefree hits.

These results suggest that we look in the opposite direction, at squarefull numbers—and even cubefull numbers, and so on. We want lots of repeated factors. This strategy immediately pays off in the search for \(abc\)-hits. It lets the sum \(a + b = c\) grow large without overly enlarging the radical—the product of the distinct primes. The very first of all \(abc\)-hits (in any reasonable ordering) offers an example. It is \(1 + 8 = 9\), or in factored form \(1 + 2^3 = 3^2\). This is a high-power hit, with \(\log(9) \,/\, \log(6) = 1.23\). The triple with highest known power is \(2 + (3^{10} \times 109) = 23^5\), yielding \(\log(6436343) \,/\, \log(15042) = 1.63.\)

Let’s look more closely at that first \(abc\)-hit, \(1 + 8 = 9\). Note that \(8\) and \(9\) are not just squarefull numbers; they are perfect powers. This triple is the subject of another famous conjecture. Eugène Catalan asked if \(8\) and \(9\) are the only consecutive perfect powers. Preda Mihailescu answered affirmatively in 2002. Thus we know that the equation \(1 + x^m = y^n\) has only this single solution. However, if we relax the rules just a little bit, we can find solutions to \(a + x^m = y^n\) where \(a\) has some value greater than 1. For example, there’s \(3 + 125 = 128\) (or \(3 + 5^3 = 2^7\)), which is another high-power \(abc\)-hit.

Suppose we tighten the rules instead of relaxing them and ask for solutions to \(x^n + y^n = z^n\), where the three members of the triple are all nth powers of integers. If we could find solutions of this equation with large values of n, they would surely be a rich ore for high-power \(abc\)-hits. But alas, that’s the equation that Fermat’s Last Theorem tells us has no solutions in integers for \(n \gt 2\). The ABC conjecture turns this implication on its head. It says (if it can be proved) that the rarity of \(abc\)-hits implies there are no solutions to the Fermat equation.


That’s as far as I was able to get while musing behind the wheel—a vague intuition about the balance between addition and multiplication, a tradeoff between increasing the sum and reducing the radical, a hint of a connection between ABC and FLT. Not much, but a better sense of why it’s worth focusing some attention on this particular relation among numbers.

Now morning breaks over Burlington. Time to go learn something from those who are less non-expert.


The secret life of tweets

On Twitter you can say anything you want as long as it fits in 140 characters. The length limit is one of those frozen accidents of history, like QWERTY and the genetic code. In olden days (2006), tweets had to fit into cell-phone text messages, which imposed a limit of 160 characters. (Twitter reserves 20 characters for the sender’s @handle.) Back then, resources were so scarce the company had to squeeze the vowels out of its name: “twttr,” they called it. Now, we have bandwidth to burn. On the other hand, human attention is still a constraint.

[Screen capture: a tweet being composed. The text reads: 'To my prolific tweeps: I dote upon every precious character you send my way, which is why I am sometimes grateful you can send me no more than 140.' This message exceeds the 140-character limit by 5 characters.]

Last year, a proposal to raise the limit to 10,000 characters was shouted down in a storm of very terse but intense tweets.

The 140-character limit is enforced by the Twitter software. When you compose a tweet, a counter starts at 140 and is decremented with each character you type; if the number goes negative, the Tweet button is disabled (as in the screen capture above). Based on this observation, I had long believed that every tweet was indeed a little snippet of pure text composed of no more than 140 characters. Was I naïve, or what?


My belated enlightenment began earlier this week, when I began having trouble with links embedded in tweets. Clicking on a link opened a new browser tab, but the requested page failed to load. The process got stuck waiting to connect to a URL such as https://t.co/E0R99xtQng. The “t.co” domain gave me a clue to the source of the problem. A long URL (http://bit-player.org/2016/bertrand-russell-donald-trump-and-archimedes, for example) can use up your 140-character quota in a hurry, and so twitterers long ago turned to URL-shortening services such as bit.ly and TinyURL, which allow you to substitute an abbreviated URL for the original web address. The shortening services work by redirection. When your browser issues the request “GET http://bit.ly/xyz123”, what comes back is not the web page you’re seeking but a redirect response: an HTTP status such as 301, with a Location header naming the ultimate destination page. The browser then automatically issues a second GET request to that destination address.
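
You can watch this handoff happen by resolving a shortened link one hop at a time. Here is a rough sketch using Python and the requests library; the t.co address is the one quoted above, and a real shortener may answer somewhat differently (some serve an HTML page rather than a bare HTTP redirect, depending on the client).

```python
# Resolve a shortened URL by hand, one redirect at a time.
import requests

def redirect_chain(url, max_hops=10):
    """Return the list of URLs visited while resolving url."""
    chain = [url]
    for _ in range(max_hops):
        resp = requests.get(url, allow_redirects=False, timeout=10)
        if resp.status_code not in (301, 302, 303, 307, 308):
            break                          # not a redirect; we have arrived
        url = resp.headers["Location"]
        chain.append(url)
    return chain

for step in redirect_chain("https://t.co/E0R99xtQng"):
    print(step)
```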

In 2011 Twitter introduced its own shortening service, t.co. Use of this service is automatic and inescapable. That is, any link included in a tweet will be converted into a 23-character t.co URL, whether you want it to be or not, and even if it’s already shorter than 23 characters. The displayed link may appear to refer to the original URL, but when you click on it, the browser will go first to a t.co address and only afterwards to the true target. Embedded images also have t.co URLs.

A drawback of all redirection services is that they become a bottleneck and a potential point of failure for the sites that depend on them. If t.co goes down, every link posted on Twitter becomes unreachable, and every image disappears. Is that what happened earlier this week when I was having trouble following Twitter links? Probably not; a disruption of that scale would have been widely noted. Indeed, I soon discovered that the problem was quite localized: It plagued all browsers on my computer, but other machines in the household were unaffected.

When I did a web search for “t.co broken links,” I quickly discovered a long discussion of the issue in the Twitter developer forum, with 87 messages going back to 2012. Grouchy complaints are interspersed with a welter of conflicting diagnoses and inconsistent remedies. Much attention focused on Apple hardware and software (which I use). A number of contributors argued that the problem is not in the browser but somewhere upstream—in the operating system, the router, the cable interface, or even the internet service provider.

After a day or two, my problem with Twitter links went away, and I never learned the exact cause. I hate it when that happens, although I hate it more when the problem doesn’t go away. However, that’s not why I’m writing this. What I want to talk about is something I stumbled upon in the course of my troubleshooting. I found a plugin for the Google Chrome browser, Goodbye t.co, that promised to bypass t.co and thereby fix the problem. How could it do that? If t.co is not responding, or if the response is not getting through to the browser, how can code running in the browser make any difference? It seems like tinkering with your television set when the broadcaster is off the air.

The source code for Goodbye t.co is on GitHub, so I took a look. The program is just a couple dozen lines of JavaScript. What I saw there sent me running back to my Twitter feed, to examine the web page using the browser’s developer tools.

Here’s a tweet I posted a few days ago, as it is displayed by the Twitter web site. Note the link to an arXiv paper:

[Screenshot: the Beckett tweet as displayed on the Twitter web site.]

And here’s the HTML that encodes that tweet in the web page:

[Screenshot: the HTML markup that encodes the Beckett tweet.]

The text of the tweet (“A problem in coding theory that comes from a Samuel Beckett play: ”) amounts to 66 characters, plus 25 more for the link (“arxiv.org/abs/1608.06001 “). But that’s not all that Twitter is sending out to my followers. Far from it. The block of HTML shown above is 751 characters, and the complete markup for this one tweet comes to just under 7,000 characters, or 50 times the nominal limit.

Take a closer look at the anchor tag in that HTML block:

[Screenshot: the anchor tag from the Beckett tweet’s HTML.]

The href attribute of the anchor tag is a t.co URL; that’s where the browser will go when you click the link. But, reading on, we come to a data-expanded-url attribute that gives the final destination link in full. And then that same final destination URL appears again in the title attribute. This explains immediately how Goodbye t.co can “bypass” the t.co service. It simply retrieves the data-expanded-url and sends the browser there, without making the detour through t.co.
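
That, presumably, is all a bypass tool needs to do. Goodbye t.co itself is a few dozen lines of JavaScript running in the browser, but the idea fits comfortably in a few lines of Python. The HTML fragment below is my own reconstruction of the structure described above, not Twitter’s actual markup, and the sketch assumes the BeautifulSoup (bs4) package is available.

```python
# Toy sketch of the bypass idea: prefer data-expanded-url over the t.co href.
from bs4 import BeautifulSoup

# Reconstructed anchor tag, for illustration only.
html = """
<a href="https://t.co/E0R99xtQng"
   data-expanded-url="https://arxiv.org/abs/1608.06001"
   title="https://arxiv.org/abs/1608.06001">arxiv.org/abs/1608.06001</a>
"""

soup = BeautifulSoup(html, "html.parser")
for a in soup.find_all("a", attrs={"data-expanded-url": True}):
    print("wrapped link:", a["href"])
    print("true target: ", a["data-expanded-url"])
```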

I have two questions. First, if you’re going to use a shortened, redirected URL, why also include the full-length URL in the page markup? The apparent answer is: So that the web browser can show the user the true destination. This is clearly the point of the title attribute. When you hover on a link, the content of the title attribute is displayed in a “tooltip.” I’m not so sure about the purpose of the data-expanded-url attribute. It’s surely not there to help the author of Goodbye t.co. Twitter presumably has some JavaScript of its own that accesses that field.

The second question is the inverse of the first: If you’re going to include the full-length URL, why bother with the shortening-and-redirecting rigmarole? Twitter could shut down the t.co servers and doubtless save a pile of money. Those servers have to deal with all the links and images in some 200 billion tweets per year. The use of redirection doubles the number of requests and responses—that’s a lot of internet bandwidth—and introduces delays of a few hundred milliseconds (even when the service works correctly). Note that Twitter could still display a shortened URL within the text of the tweet, without requiring redirection.

Twitter’s own developer documents offer an answer to the second question:

Tens of millions of links are tweeted on Twitter each day. Wrapping these shared links helps Twitter protect users from malicious content while offering useful insights on engagement.

The promise to “protect users from malicious content” presumably means that if I link to a sufficiently sleazy site, Twitter will refuse to redirect readers there, or perhaps will just warn them of the danger. (I don’t know which because I’ve never encountered this behavior.) As for “offering useful insights on engagement,” I believe that phrase could be translated as “helping us target advertising and collect data with potential market value.” In other words, t.co is not just a cost center but also a revenue center. Every time you click on a link within a tweet, Twitter knows exactly where you’re going and can add that information to your profile.


A few months ago, Twitter announced a slight change to the 140-character rule. @handles included in the text will no longer count toward the character total, and neither will images or other media attachments. Some press reports suggested that links would also be excluded from the count, but the official announcement made no mention of links. And t.co redirection is clearly here to stay.

I can suggest two takeaway messages from this little episode in my life as an internaut.

If you want to limit the “insights on engagement” that Twitter accumulates about your activities, you might consider installing a plugin to bypass t.co redirection. There’s an ongoing argument about the wisdom and morality of such actions, focused in particular on ad-blocking software. I have my own views on this issue, but I’m not going to air them here and now.

The other small lesson I’ve learned is that using alternative URL-shortening services with Twitter is worse than pointless. Pre-shrinking the URL has no effect on the character count. It also obscures the true destination from the reader (since the title attribute is “bit.ly/whatever”). Most important, it interposes two layers of redirection, with two delays, two potential points of failure, and two opportunities to collect saleable data. Yet I still see lots of bit.ly and goo.gl links in tweets. Am I missing or misunderstanding something?


Bertrand Russell, Donald Trump, and Archimedes

A habit of finding pleasure in thought rather than in action is a safeguard against unwisdom and excessive love of power, a means of preserving serenity in misfortune and peace of mind among worries. A life confined to what is personal is likely, sooner or later, to become unbearably painful; it is only by windows into a larger and less fretful cosmos that the more tragic parts of life become endurable.

—Bertrand Russell, “Useless” Knowledge, 1935

For Russell, mathematics was one of those windows opening on a calmer universe. So it is for me too, and for many others. When you are absorbed in solving a problem, understanding a theorem, or writing a computer program, the world’s noisy bickering is magically muted. For a little while, at least, you can hold back life’s conflicts, heartaches, and disappointments.

But Russell was no self-absorbed savant, standing aloof from the issues of his time. On the contrary, he was deeply engaged in public discourse. During World War I he took a pacifist position (and went to prison for it), and he continued to speak his peace into the Vietnam era.

Bit-player.org is meant to be a little corner of Russell’s less fretful cosmos, both for me and, I hope, for my readers. In this space I would prefer to shut out the clamor of the hustings and the marketplace. And yet there comes a time to look up from bit-playing and listen to what’s going on outside the window.

A candidate for the U.S. presidency is goading his followers to murder his opponent. Here are his words (in the New York Times transcription):

Hillary wants to abolish—essentially abolish—the Second Amendment. By the way, and if she gets to pick—if she gets to pick her judges, nothing you can do folks. Although the Second Amendment people—maybe there is, I don’t know.

A day later, Donald Trump said he was merely suggesting that gun owners might be roused to come out and vote, not that they might assassinate a president. Yeah, sure. And when Henry II of England mused, “Who will rid me of this troublesome priest?” he was just asking an idle question. But soon enough Thomas Becket was hacked to death on the floor of Canterbury Cathedral.

This is not the first time Trump has strayed beyond tasteless buffoonery into reckless incitement. But this instance is so egregious I just cannot keep quiet. His words are vile and dangerous. I have to speak out against them. We face a threat to the survival of democracy and civil society.

Mathematics, after all, is one of those luxuries we can afford only so long as the thugs do not come crashing through the door. Another great mathematician offers a lesson here, in a tale told by Plutarch. During the sack of Syracuse, according to one version of the legend, Archimedes was puzzling out a mathematical problem. He was staring at a diagram sketched in the sand when Roman soldiers came upon him. Deep in thought, he refused to turn away from his work until he had finished the proof of his theorem. One of the soldiers drew a sword and ran him through.

[Image: The Death of Archimedes. Engraving after a painting by Gustave Courtois (1853-1923).]


The 39th Root of 92

My new office space suffers from a shortage of photons, so I’ve been wiring up light fixtures. As I was snaking 14-gauge cable through the ceiling cavity, I began to wonder: Why is it called 14-gauge? I know that the gauge specifies the size of the copper conductors, but how exactly? The number can’t be a simple measure of diameter or cross-sectional area, because thicker wires have smaller gauge numbers. Twelve-gauge is heavier than 14-gauge, and 10-gauge is even beefier. Going in the other direction, larger numbers denote skinnier wires: 20-gauge for doorbells and thermostats, 24-gauge for telephone wiring. [Image: stripped end of 14/2 Romex cable, showing two insulated 14-gauge conductors and a bare ground wire.]

Why do the numbers run backwards? Could there be a connection with shotguns, whose sizes also seem to go the wrong way? A 20-gauge shotgun has a smaller bore than a 12-gauge, which in turn is smaller than a 10-gauge gun. Mere coincidence?

Answers are not hard to find. The Wikipedia article on American Wire Gauge (AWG) is a good place to start. And there’s a surprising bit of mathematical fun along the way. It turns out that American wire sizes make essential use of the 39th root of 92, a somewhat frillier number than I would have expected to find in this workaday, blue-collar context.

Wire is made by pulling a metal rod through a die—a block of hard material with a hole in it. In cross section, the hole is shaped something like a rocket nozzle, with conical walls that taper down to a narrow throat. [Image: sketch of a wire-drawing die, with copper wire moving left to right through a round aperture with a double-cone profile.] As the rod passes through the die, the metal deforms plastically, reducing the diameter while increasing the length. But there’s a limit to this squeezing and stretching; you can’t transform a short, fat rod into a long, thin wire all in one go. On each pass through a die, the diameter is only slightly reduced—maybe by 10 percent or so. To make a fine wire, you need to shrink the thickness in stages, drawing the wire through several dies in succession. And therein lies the key to wire gauge numbers: The gauge of a wire is the number of dies it must pass through to reach its final diameter. Zero-gauge is the thickness of the original rod, without any drawing operations. Fourteen-gauge wire has been pulled through 14 dies in series.

Or at least that was how it worked back when wire-drawing was a hand craft, and nobody worried too much about exact specifications. If two wires had both been pulled through 14 dies, they would both be labeled 14-gauge, but they might well have different diameters if the dies were not identical. By the middle of the 19th century this sort of variation was becoming troublesome; it was time to adopt some standards.

The AWG standard keeps the traditional sequence of gauge numbers but changes their meaning. The gauge is no longer a count of drawing operations; instead each gauge number corresponds to a specific wire diameter. Even so, there’s an effort to keep the new standardized sizes reasonably close to what they were under the old die-counting system.

[Image: a wire gauge measuring tool. Credit: Wikipedia.] The mapping from gauge numbers to diameters has two fixed reference points. At the thin end of the scale, \(36\)-gauge wire is defined as having a diameter of exactly \(0.005\) inch. At the stout end, a wire size designated \(0000\)-gauge is assigned a diameter of \(0.46\) inch. This quadruple-zero gauge is three steps larger than \(0\)-gauge (and might more sensibly be named \(-3\)-gauge). Thus there are \(40\) integer-valued gauge numbers, and \(39\) steps between them. The extreme values are known, and we need to devise some interpolation process to assign diameters to all the gauges between \(36\) and \(0000\).

The wire-drawing process itself suggests how to do this. Each pass through a die reduces the wire diameter to some fraction of its former size, but the value of the fraction might vary a little from one die to the next. The standard simply decrees that the fraction is exactly the same in all cases. In other words, for every pair of adjacent gauge numbers, the corresponding wire diameters have the same ratio, \(R\).

What remains is to work out the value of \(R\). If we start with \(d_{36} = 0.005\) and multiply by \(R\), we’ll get \(d_{35}\); then, multiplying \(d_{35}\) by \(R\) yields \(d_{34}\), and so on. Continuing in this way, after multiplying by \(R\) \(39\) times, we should arrive at \(d_{-3} = 0.46.\) This iterative process can be summarized as:

\[\frac{d_{-3}}{d_{36}} = R^{39}.\]

Filling in the numeric values, we get:

\[\frac{0.46}{0.005} = 92 = R^{39}, \quad \textrm{and thus}\quad R = \sqrt[39]{92}.\]

And there the number lies before us, the \(39\)th root of \(92\). The numerical value is about \(1.122932\), with \(1/R \approx 0.890526\).

With this fact in hand we can now write down a formula that gives the AWG gauge number \(G\) as a function of wire diameter \(d\) in inches:

\[G(d) = -39 \log_{92} \frac{d}{0.005} + 36.\]

That’s a fairly bizarre-looking formula, with base-92 logarithms and a bunch of arbitrary constants floating around. On the other hand, at least it’s a genuine mathematical function, with a domain covering all the positive real numbers. It’s also smooth and invertible. That’s more than you can say for some other standards, such as the British Imperial Wire Gauge, which pastes together several piecewise linear segments.
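
Here is the formula and its inverse as a small Python sketch; the handful of diameters it prints should agree, to rounding, with the published AWG tables.

```python
# The AWG formula above and its inverse (all diameters in inches).
import math

R = 92 ** (1 / 39)    # ratio between adjacent gauge diameters, about 1.122932

def awg_gauge(d):
    """Gauge number G as a function of wire diameter d."""
    return -39 * math.log(d / 0.005, 92) + 36

def awg_diameter(g):
    """Diameter for gauge number g (use -3 for the 0000 gauge)."""
    return 0.005 * 92 ** ((36 - g) / 39)

for g in (-3, 0, 10, 12, 14, 24, 36):
    print(f"{g:>3}-gauge: {awg_diameter(g):.4f} in")

print(awg_gauge(0.0641))   # approximately 14
```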

Who came up with the rule of \(\sqrt[39]{92}\)? As far as I can tell it was Lucian Sharpe, of Brown and Sharpe, a maker of precision instruments and machine tools in Providence, Rhode Island. A history of the company published in 1949 gives this account:

Another activity begun in the [1850s] was the production of accurate gages. The brass business of Connecticut, centered in the Naugatuck Valley, required sheet metal and wire gauges for measuring their products. Mr. Sharpe, with his methodical mind, conceived the idea of producing sizes of wire in a regular progression, choosing a geometric series as best adapted to these needs. Such gages as were in use prior to this time were the product of English manufacture and were very irregular in their sizes.

The first Brown and Sharpe wire gauge was produced in 1857 and later became the basis of an American standard, which is now administered by ASTM.


Wire gauges are not the only numbers defined by a weird-and-wonderful root-taking procedure. The equal-tempered scale of music theory is based on the 12th root of 2. A musical octave represents a doubling of frequency, and the scale divides this interval into 12 semitones. In the equal-tempered version of the scale, any two adjacent semitones differ by a ratio of \(\sqrt[12]{2}\), or about \(1.05946\). It’s worth noting that instruments were being tuned to this scale well before the invention of logarithms. I assume it was done by ear or perhaps by geometry, not by algebra. Around 1600 Simon Stevin did attempt to calculate numerical values for the pitch intervals by decomposing 12th roots into combinations of square and cube roots; his results were not flawless. What would he have done with 39th roots?

Another example of a backward-running logarithmic progression is the magnitude scale for the brightness of stars and other celestial objects. For the astronomers, the magic number is the fifth root of \(100\), or about \(2.511886\); if two stars differ by one unit of magnitude, this is their brightness ratio. A difference of five magnitudes therefore works out to a hundredfold brightness ratio. Brighter bodies have smaller magnitudes. The star Vega defines magnitude \(0\); the sun has magnitude \(-27\); the faintest stars visible without a telescope are at magnitude \(6\) or \(7\).
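
All three scales rest on the same device: pick a convenient round number, take a fixed root of it, and let that ratio separate adjacent steps. For the record, here are the three step ratios side by side, in a trivial Python sketch.

```python
# The three logarithmic step ratios mentioned in this post.
ratios = {
    "AWG wire gauge (39th root of 92)":         92 ** (1 / 39),
    "equal-tempered semitone (12th root of 2)":  2 ** (1 / 12),
    "stellar magnitude (5th root of 100)":      100 ** (1 / 5),
}
for name, r in ratios.items():
    print(f"{name}: {r:.6f}")
# 1.122932, 1.059463, 2.511886
```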

The idea of stellar magnitudes is ancient, but the numerical scheme in current use was developed by the British/Indian astronomer N. R. Pogson in 1856. That was just a year before Sharpe came up with his wire gauge scale. Could there be a connection? It would make a nice story if we could find some timely account of Pogson’s work that Sharpe might plausibly have read (maybe in Scientific American, founded 1845), but that’s a pure flight of fancy for now.

And what about those shotguns? Are their gauges also governed by some sort of logarithmic law? No, the numerical similarity of gauges for wires and shotguns really is nothing but coincidence. The shotgun law is not logarithmic but reciprocal. Wikipedia explains:

The gauge of a firearm is a unit of measurement used to express the diameter of the barrel. Gauge is determined from the weight of a solid sphere of lead that will fit the bore of the firearm, and is expressed as the multiplicative inverse of the sphere’s weight as a fraction of a pound, e.g., a one-twelfth pound ball fits a 12-gauge bore. Thus there are twelve 12-gauge balls per pound, etc. The term is related to the measurement of cannon, which were also measured by the weight of their iron round shot; an 8 pounder would fire an 8 lb (3.6 kg) ball.

Addendum 2016-08-08: Leon Harkleroad has brought to my attention his excellent article on “Tuning with Triangles” (College Mathematics Journal, Vol. 39, No. 5 (Nov. 2008), pp. 367–373). He describes a simple geometric procedure that Vincenzo Galilei (father of Galileo) used for fretting stringed instruments. In essence it takes \(18/17 \approx 1.05882\) as an approximation to \(\sqrt[12]{2} \approx 1.05946\).
