The Problems of Prediction, Part 1

Michael Gove was an unlikely representative of a supposedly emerging disdain for expertise. The Oxford-educated member of the British Parliament is unusually cerebral and intellectual for a politician, and his tenure as Education Secretary, although controversial, featured waves of idea-based reform. But it was through his prominent role in the Leave campaign, during the 2016 referendum on UK membership in the European Union, that Gove left an unexpected mark. Faced with a barrage of negative predictions from governmental and other prominent organizations about the effects of exiting the EU, Gove began to explain to an interviewer that people had “had enough of experts from organizations with acronyms saying that they know what is best and getting it consistently wrong.”

It was a statement about predictions and the degree to which both the expertise of predictors and their record of success should govern whether we treat their predictions as definitive in debate and use them as a basis for action. Those who argued that the UK should leave the EU—there were and are cogent arguments on both sides—could point to cataclysmic and ultimately inaccurate predictions, from many of the same authorities, about the consequences of Britain’s decision not to join the European single currency, the euro. There could hardly have been a more pertinent topic than predictions and their veracity: throughout the unpleasant and vituperative campaign, and despite the UK’s 43-year experience in the EU, the British public was battered and bullied from both sides by proclamations about what would certainly happen if the nation did or did not remain. But Gove was cut off by an interviewer incandescent at the idea of “having enough of experts,” and another misleading narrative was given life.

“In recent years, a narrative has emerged of elections being contests between the educated and the knuckle-draggers, the ‘post-fact’ loons,” Fraser Nelson contends in the British political magazine The Spectator, reflecting on Gove’s experience. “This has led to the musings of experts being presented as fact, and to the rise of the argument-without-argument: x is correct because experts say so. And if you disbelieve x, you’re a post-truth Flat Earther.” This rather smug and self-satisfied perspective seems to be taking hold of large sections of educated populations in Europe and America. As it does so, the casualty is a collective and individual self-understanding that incorporates both inherent human limitations and the innate complexity of reality. Compounded by the technologically driven demand that our predictive power match our increasingly Promethean ability to act, these persistent limitations leave us ever more vulnerable to the unintended and unpredictable consequences that the same technological way of life only multiplies and deepens.

In other words, not only do we know less than we think about a world less knowable than we think; we are also making our life in this world even less predictable through the same means that make us less tolerant of unpredictability. There is no excuse for a callous or careless relationship to truth, but identifying such a relationship in others helps no one who pairs the diagnosis with an assumption that humans—whether represented by experts, elites, or themselves—have definitively mastered either the present or the future. That may be the most dangerous delusion of all.

We Know Less Than We Think We Know
We need to place a little less faith in experts and their predictions not because knowledge and truth are unimportant but because we ought to place a little less faith in ourselves. This would be true if humans were merely fallible and limited. But it is even more the case because people tend, even in the teeth of their inherent limitations, to be over-confident. Philip Tetlock, in his book Expert Political Judgment: How Good Is It? How Can We Know?, summarized the results of asking 300 experts for a total of 27,000 predictions. He found that “experts thought they knew more than they knew” and that “there was a systematic gap between subjective probabilities that experts were assigning to possible futures and the objective likelihoods of those futures materializing.”
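To make that “systematic gap” concrete, here is a minimal calibration check, sketched in Python with entirely hypothetical numbers (not Tetlock’s data): forecasters state probabilities, and we compare each band of stated confidence with how often the forecast events actually occur.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 27_000

# Hypothetical stated probabilities: the experts are confident...
stated = rng.uniform(0.60, 0.95, size=n)
# ...but the events occur at a systematically lower rate.
outcomes = rng.random(n) < (stated - 0.15)

# Compare stated confidence with observed frequency, band by band.
bands = np.linspace(0.60, 0.95, 8)
for lo, hi in zip(bands[:-1], bands[1:]):
    in_band = (stated >= lo) & (stated < hi)
    print(f"stated {lo:.2f}-{hi:.2f}: occurred {outcomes[in_band].mean():.2f}")
```

A well-calibrated forecaster’s 70-percent calls come true about 70 percent of the time; the gap Tetlock found is the persistent shortfall between those two columns.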

Undue confidence in their own predictive ability rendered the experts’ predictions even less reliable than they would otherwise have been. This is what Nassim Taleb, author of The Black Swan, calls “illusions of certainty,” which are particularly pertinent in the area of modern life simultaneously most reliant on predictions and most laden with over-confidence: finance. “We act as though we are able to predict historical events,” Taleb reflects, thinking about the long-term projections of deficits, profits, and prices that are routinely used to determine policy. “What is surprising is not the magnitude of our forecast errors, but our absence of awareness of it.” Predictions that utterly mislead and misalign practice year after year slow neither the tide of predictions nor the importance assigned to them and to those who make them. And this is exacerbated by two characteristic elements of our technological way of thinking and acting: information saturation and the false objectivity that enthrones abstraction.

It is consistent with our solutionism and technologically driven way of life to imagine that the main thing standing between fallible humans and infallible predictive power is more information. But excessive data can distract as much as it informs. As the phrase “illusions of certainty” indicates, Taleb identifies not only a tendency to overestimate our knowledge but also to “underestimate uncertainty, by compressing the range of possible uncertain states.” In other words, the deluge of information we throw at a situation misleads us about how much remains both unknown and unknowable. If we imagine that remainder is small or non-existent when it is in fact great, we confidently blunder into messes of our own making.
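A toy numerical illustration (mine, not Taleb’s) of what “compressing the range of possible uncertain states” can cost: fit a tidy normal model to data from a fat-tailed world, then count how often reality exceeds what the model calls a one-in-a-thousand event.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reality: fat-tailed data (Student's t with 3 degrees of freedom).
data = rng.standard_t(3, size=1_000_000)

# The model: a normal distribution fitted by mean and standard deviation.
mu, sigma = data.mean(), data.std()
z = 3.2905  # two-sided normal quantile for a 1-in-1,000 event

# How often does reality land beyond the model's "1-in-1,000" threshold?
actual = np.mean(np.abs(data - mu) > z * sigma)
print("model's predicted rate: 0.100%")
print(f"actual rate:            {actual:.3%}")
```

In a run like this the compressed model under-counts extreme outcomes by roughly an order of magnitude; the range of uncertain states was wider than the abstraction allowed.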

[Image: Cracked crystal ball. iStock.com / petesaloutos + Manuel-F-O composite]

Furthermore—as anyone who has recently tried to teach young people research skills will tell you—the more information you have access to, the more important it becomes to distinguish helpful from unhelpful information. And yet, armed with voluminous data, people (even experts) often take this less, not more, seriously. “The more information you give someone, the more hypotheses they will formulate along the way, and the worse off they will be,” Taleb argues. “The more detailed knowledge one gets of empirical reality, the more one will see the noise (i.e. the anecdote) and mistake it for actual information.” Along these lines, Taleb describes an experiment conducted by the psychologist Paul Slovic on another profession that relies on assigning probabilities and predicting outcomes. Slovic gave a group of bookmakers, in increments, relevant information about past racehorse performance and had them predict the results of a series of horse races, renewing their predictions after each additional increment. Their accuracy hovered around 15% despite the growing amount of data, and even dropped when they had the most; yet “their confidence in their choices…went up markedly.”
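The mechanism is easy to reproduce in miniature. The sketch below (a toy model, not Slovic’s experiment) trains a classifier on purely random “form guide” features: feeding it more of them inflates its confidence in its picks while out-of-sample accuracy stays stuck at chance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_train, n_test = 200, 2_000

for n_features in (5, 20, 80):
    # "Form guide" features that are pure noise, unrelated to outcomes.
    X_train = rng.normal(size=(n_train, n_features))
    y_train = rng.integers(0, 2, size=n_train)
    X_test = rng.normal(size=(n_test, n_features))
    y_test = rng.integers(0, 2, size=n_test)

    model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
    # Average confidence in its own picks (in-sample)...
    confidence = model.predict_proba(X_train).max(axis=1).mean()
    # ...versus accuracy on races it has not seen.
    accuracy = model.score(X_test, y_test)
    print(f"{n_features:>2} features: confidence {confidence:.2f}, "
          f"accuracy {accuracy:.2f}")
```

Each extra column of noise gives the model, like the bookmaker, more hypotheses to fit, so confidence climbs even as genuine predictive power stays flat.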

Our enduring and even deepening inability to predict accurately, as we drown in a deluge of data that we mistake for knowledge and truth, is shielded from our perception by the hubris of being information emperors. There is also a concomitant false objectivity inherent in the technological way of thinking (see Parts 1 and 2 of “Principles of Technoskepticism”), in which we understand reality only through our own abstractions; this distances us from reality while deluding us into believing we have attained ultimate objective insight. The English philosopher Alfred North Whitehead described this error, as it already appeared in the late 19th and early 20th centuries, as the “fallacy of misplaced concreteness.” There could hardly be a better motto for our own age. “This fallacy consists in neglecting the degree of abstraction involved when an actual entity is considered merely so far as it exemplifies certain categories of thought. There are aspects of actualities which are simply ignored so long as we restrict thought to these categories.”

We amass data, plug it into our abstract ways of interpreting reality, and produce clean statistics of probability and prediction. But these statistics are a façade: they lend a sense of authority to conclusions that rest not on objective truth but on fallible assumptions, a dismissal of the unknown, and artificial categories with which we interact in place of the reality beneath them. We have lost the ability to distinguish in any meaningful way between our systems for understanding reality and reality itself. And so, with our fallacies of misplaced concreteness, we predict as if we know more than we know and, like Charlie Brown once more kicking the air rather than the football, seem genuinely shocked that we end up flat on our backs.

Reality Is Less Knowable Than We Think
But it is not just our own limitations that we have to understand if we are to replace our hubris regarding the present and future with appropriate humility. Our inability also has something to do with the nature of what we seek to know. If I put copies of both Pride and Prejudice and The Cat in the Hat in front of a chicken, its complete inability to understand the plot of either would tell us nothing useful about the intelligibility of the books. Placing the same works in the hands of a series of humans would provide insight into both the books (the sophistication of the language used and the concepts described) and the readers (their relative ability to understand the same). But what if we found a book so complex that it exceeded any human’s ability to understand it? Suppose this hypothetical book were shown to the finest minds and none of them could wrap their heads around it. What would we conclude? Certainly we could and would admit that our own human limitations had been highlighted. But we would also judge that there was something about the book itself that put it beyond us, and we would ask what that was. In the same way, when we see reality eluding our ability to predict, we ought not only to admit our fallibility but also to acknowledge that reality is less knowable than we think.

Acknowledging this is not to deny that many things about reality as we find it are, in fact, delightfully intelligible to us. Albert Einstein famously commented that “the most incomprehensible thing about the universe is that it is comprehensible.” That is a revealingly precise statement. There are things we can comprehend because “it is comprehensible,” not because we are great thinkers. Indeed, when we find ourselves able to comprehend elements of the universe’s comprehensible reality, we are recognizing a natural alignment between our minds and the world for which they were fitted. But in the same way that our comprehension is a meaningful statement about comprehensibility, our incomprehension, which is far greater, points to a deeper and wider incomprehensibility. At times we are adults enjoying Pride and Prejudice, written for us to enjoy; at other times we are chickens pecking at it dumbly.