Has living through a pandemic made us all better at maths?

David Sumpter
5 min read · Oct 13, 2020

When Boris Johnson addressed the nation to announce new coronavirus restrictions last month, he talked about how the virus would “spread again in an exponential way” and warned us that the “iron laws of geometric progression [shout] at us from the graphs”.

My first reaction, as an applied mathematician, was to smile to myself at his careless use of mathematical ideas. Disease spread is nearly always exponential; “exponential” is just another way of saying that the virus multiplies over time. So it is not the exponential nature of the growth itself that has changed, but the multiplication constant (the R number) that has increased. The term “geometric progression”, meanwhile, implies that the virus spreads at evenly spaced, discrete intervals, rather than continuously at any time of the day.
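
For readers who want the distinction written down, here is one way to do it (the notation is mine, chosen purely for illustration): exponential growth is continuous in time, while a geometric progression jumps forward in discrete steps.

```latex
% Continuous exponential growth: x(t) cases at time t, growth rate r
x(t) = x_0 \, e^{rt}

% Geometric progression: x_n cases after n discrete generations, each
% generation multiplying the count by a constant factor R
x_n = x_0 \, R^n
```

Sampling the continuous curve once per generation of infections recovers the progression, with R playing the role of the per-generation multiplication factor; roughly speaking, it is that factor, not the exponential form itself, that restrictions aim to push back below one.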

The prime minister’s faux-academic style isn’t everyone’s cup of tea, but most of us have a sense of what he is trying to get at (even if he is taking liberties with the terms). We have seen the graphs of cases and deaths; we have understood log scales (where 1, 10, 100, 1000 … are equally spaced on the y-axis of the graphs); we know that we want the R number to be less than one; and we get why exponential growth leads to sudden outbreaks.

Our collective mathematical knowledge has increased greatly during the pandemic. Days before the new restrictions were announced, talkRadio’s Julia Hartley-Brewer took Matt Hancock to task over whether he understood the implications of a 1% false positive rate for tests (a false positive occurs when a person who doesn’t have the disease tests positive because of an error in the test).

Hartley-Brewer argued that if the FPR (yes, we are even using initialisms now) was 0.8% then 91% of “cases” were false positives. Her analysis was built on an explainer by Carl Heneghan, professor of evidence-based medicine at Oxford University. He pointed out that testing 10,000 people at an FPR of 0.1% would on average give 10 false positive tests. Then he noted that if only 1 in 1,000 people had the disease then within that same population of 10,000 there were 10 real cases on average. If the test picked up 80% of these real cases, we would expect only eight of the positive test results to be for people who really had the disease. Thus, out of the total of 18 positive tests, 10/18 or roughly 56% of the “cases” were false positives.

Hartley-Brewer’s calculation follows the same logic. Testing 10,000 people at an FPR of 0.8% would, on average, give 80 false positive tests. And if there are eight true positive tests, then 80/(80+8), or 91%, of the reported “cases” would be false positives. Exactly as she claimed.
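
To check the arithmetic in both versions, here is a short Python sketch (my own, not code from Heneghan or Hartley-Brewer) that plugs the numbers into the same logic: false positives come from the large uninfected group, true positives from the small infected one.

```python
def false_positive_share(n_tested, prevalence, fpr, sensitivity):
    """Expected share of positive tests that are false positives."""
    infected = n_tested * prevalence
    uninfected = n_tested - infected
    true_positives = infected * sensitivity    # real cases the test catches
    false_positives = uninfected * fpr         # testing errors among the healthy
    return false_positives / (false_positives + true_positives)

# Heneghan's numbers: 10,000 tested, 1 in 1,000 infected, FPR 0.1%, 80% sensitivity
print(false_positive_share(10_000, 1 / 1000, 0.001, 0.8))  # ~0.56, roughly 56%

# Hartley-Brewer's numbers: the same population and sensitivity, but FPR 0.8%
print(false_positive_share(10_000, 1 / 1000, 0.008, 0.8))  # ~0.91, roughly 91%
```

(The explainer applies the FPR to all 10,000 people rather than only the 9,990 uninfected ones, which is how it arrives at the rounder 10/18 and 80/88; the difference is negligible.)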

When I hear these arguments playing out in public discourse, I get goosebumps. Not because Heneghan is necessarily correct in his conclusions, but because it’s the type of intellectual approach we need more of. The daily news is starting to sound like one of my university research group meetings. Models are built, data is collected and assumptions are challenged. Yes, it gets heated, we don’t all agree and all but one of us end up being proved wrong. But the debate is passionate and scientific.

The mathematical equation used to explain false positives was discovered by the Reverend Thomas Bayes in the mid-18th century. It was first applied by Richard Price, a friend of Bayes, to argue for the plausibility of religious miracles. Price attacked an argument by the philosopher David Hume, who held that when something miraculous has occurred, like the resurrection of Christ, we should consider all the occasions similar events had not occurred, ie all the times people did not come back from the dead, as evidence against accepting the possibility of the miracle. In modern terminology, Hume was arguing that miracles were best explained as false positives: witnesses were mistaken when they saw someone come back to life.

In Price’s counterargument, miracles take the role of people who have got the virus: the fact that a small proportion are infected and false positives occur does not imply that people aren’t ever infected. Similarly, if miracles are rare (which they are by definition) then the existence of false positive miracles now and again is not strong evidence against their existence. Price dismissed Hume’s argument as “contrary to every reason”. Hume never provided an effective counterargument.

Such arguments have their limitations, but they illustrate that equations can sharpen the thinking of even the greatest philosophers, and talkshow hosts. In fact, Bayes’ rule can provide better judgment in most things. For instance, imagine you are an experienced traveller, having flown 100 times before. But on this flight the plane starts to rattle and shake in a way you have never experienced before. Should you be worried?

What you need to do is think about the baseline rate of plane crashes (something like one in 10m) then think about the fact this is “only” your worst ever flight — one out of 100 earlier flights. The probability that you are experiencing a true positive (a shaky ride ending in a crash) is then roughly 100/10,000,000 or 0.001%. You are very probably not going to die.
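
Written out as a rough Bayesian calculation (my framing of the numbers above, not a serious aviation risk model), the shaky flight plays the role of the test and a crash the role of the disease.

```python
# Numbers taken from the example above; only the Bayesian framing is added.
p_crash = 1 / 10_000_000   # baseline rate: roughly one crash in 10m flights
p_this_rough = 1 / 100     # a ride this rough: about 1 of your 100 flights

# Generously assume every crash is preceded by shaking this bad, so that
# P(rough ride | crash) is close to 1. Bayes' rule then gives:
p_crash_given_rough = 1.0 * p_crash / p_this_rough
print(f"{p_crash_given_rough:.3%}")   # 0.001%: you are very probably fine
```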

The same reasoning can be used to help us to judge our friends less harshly. For example, if a longstanding friend lets you down, even very badly, you should consider the let-down as likely to be a false positive — that they made a mistake this time — rather than “proof” of their flawed character. Before you make a judgment, you need to weigh up the likelihood of all alternative explanations.

Johnson’s “iron law of geometric progression” is an example of a different and equally important equation: the influencer equation (also known as the less catchy “stationary distribution of a Markov chain”). It is used by Google and Instagram to look for the most influential webpages and people on their networks. They first use webcrawlers — automated bots that hop in discrete-time jumps from one person to another in the network — to collect data on our social connections. The influencer equation then allows these companies’ engineers to measure the rate at which information spreads between us. It is the continuous-time version of this same equation that allows epidemiologists to measure, through physical connections, how a virus spreads through the population.
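
To give a flavour of the influencer equation, here is a toy sketch (an illustration of the stationary-distribution idea, not Google's or Instagram's actual code): a crawler hops around a three-person network, and the share of time it spends with each person becomes that person's influence score.

```python
import numpy as np

# Column j holds the probabilities that a crawler currently with person j
# hops to each of the others next (every column sums to one).
P = np.array([
    [0.0, 0.5, 1.0],
    [0.5, 0.0, 0.0],
    [0.5, 0.5, 0.0],
])

# Start the crawler anywhere and let it hop; the distribution it settles
# into is the stationary distribution of this Markov chain.
pi = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    pi = P @ pi

print(pi)   # roughly [0.44, 0.22, 0.33]: person 0 is the most influential
```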

This new openness to mathematical ideas might be one of the few positive things to come out of the current crisis. I look forward to seeing debates where, instead of simply hurling numbers at each other, we use equations and models to structure our thinking. We may even hear future prime ministers talking about financial crises in terms of inaccurate assumptions in their market equation or admitting that artificial productivity targets in academia and healthcare result from poorly thought-out skill equations.

Bayes would tell us that whether or not this last “miracle” occurs remains somewhat uncertain. But what is true is that equations allow us to better explain our assumptions and reasoning, even in the most heated of debates. And, if we want to, we can use them to create a better world.

David Sumpter is the author of The Ten Equations that Rule the World: And How You Can Use Them Too (Allen Lane)

Originally published at https://www.theguardian.com on October 13, 2020.

David Sumpter

Books: Four Ways of Thinking (2023); The Ten Equations (2020); Outnumbered (2018); Soccermatics (2016) and Collective Animal Behavior (2010).