The British political philosopher John Gray has written that “today, for the mass of humanity, science and technology embody ‘miracle, mystery, and authority.’ Science promises that the most ancient human fantasies will at last be realized. Sickness and aging will be abolished; scarcity and poverty will be no more; the species will become immortal.”
Science here is shorthand for the scientific knowledge used to develop commercial and consumer technologies and products. But classically defined, science refers to processes of inquiry that enable verifiable, replicable data to be gathered and classified. Perhaps it’s time to stop pairing science and tech together. Although susceptible to many forms of corruption, science is possibly the best tool we have to get at “truths” about the planet and ourselves, whereas technology serves humans’ material needs and aspirations. What propels continual innovation is a reflexive impulse toward “more, bigger, faster.” At a meta level, humans see tech as our best hope at cheating death and extending our reach across the universe.
John Lanchester’s review of James C. Scott’s recent work Against the Grain: A Deep History of the Earliest States advances a similar point about the dichotomy between science and tech, albeit with a different thrust: “For much of human history, though, technology had nothing to do with science. Many of our most significant inventions are pure tools, with no scientific method behind them,” such as fire.
What changed? In the last century, creeping marketization has driven a wedge between the acquisition of inert scientific knowledge and monetizable technologies or processes. The rise of the so-called military-industrial complex—a close cooperation among civilian government, defense contractors, academia, and the military in the aftermath of World War II—has also played a role. The Internet emerged out of precisely this arrangement.
But ideology and vestigial religious belief also play a role. Darwin’s theory of evolution, for example, is broadly accepted within the scientific community as the account of how life forms came to populate Earth. And yet, according to Gallup, in 2014, 42 percent of Americans continued to believe in a creationist account of human origins, with another 31 percent subscribing to a view that blended creationist and evolutionary elements, and only 19 percent believing in “pure” evolution.
According to Science magazine, the US ranks below Europe and Japan in terms of public support for evolution. At the same time, the US espouses the outwardly secular philosophy of humanism, which has at its core the idea that human beings are rational and perfectible.
At the heart of America’s two opposing commitments, the glue that syncretizes religious faith and humanism into a semblance of coherence, is anthropocentrism.
In the Bible, Genesis states: “God created man in his own image; male and female created he them. And God blessed them, and said, Be fruitful, and multiply, and replenish the earth, and subdue it: and have dominion over the fish, and over the fowl, and over every living thing that moves on the earth.”
The secular pillar on which humanist ideals rest is the fairly common misconception that evolution is necessarily a teleological or progressive transition, which in turn feeds the myth of human history as a process of continuing, perhaps limitless, material and technological advancement.
In this view, humans have moved from the precariousness of hunting and gathering through sedentarism and feudalism to reach the Enlightenment and the Industrial Revolution, which mark the beginning of history proper. Our present standing as technological and intellectual masters of the universe is unchallenged but also incomplete.
This reading renders slightly more palatable the idea that we evolved to our present form, and importantly enshrines mastery of science and innovation as the engines of human ascendancy. Still, it is false. If anything, unfathomable layers of genetic and historical contingency and randomness have led us—and the planet—to the current state.
Techno-optimism, including its most extreme form, techno-utopianism, ignores a painful truth: science has enabled humans to advance our understanding and standard of living but has not yet shown itself capable of making us more ethical or moral, or less inclined towards cruelty, exploitation, or violence.
Rather, techno-optimism, an ideology underpinned by faith in the power of science, drives us toward traversing two existential frontiers. One is the quest to eliminate death and sickness. It’s worth remembering that at one time religion alone promised humans eternal life. Today more and more of us hedge our bets by placing chips on the colors of both pharma and God. Silicon Valley investor Peter Thiel provides only the most famous example. Thiel has sometimes been described as an evangelical Christian, but he diverts considerable investment toward life extension technologies and has also said, “probably the most extreme form of inequality is between people who are alive and people who are dead.”
A second frontier is the unexplored worlds of the deep sea and celestial bodies like Mars. Colonizing the latter is the pet project of another Silicon Valley billionaire, Elon Musk, who has said, “If there’s a third world war we want to make sure there’s enough of a seed of human civilization somewhere else to bring it back and shorten the length of the dark ages.” Musk seems to perceive no human culpability in the unfolding of such a doomsday scenario.
Techno-optimism leaves humans little room to embrace scientific findings that contradict the view of humans as agents of their own destiny or as deserving stewards of the planet. Robert Sapolsky, a Stanford neuroscientist, proposes in his recent work, Behave, that humans, as biological animals, lack free will as commonly conceived, meaning autonomy and agency over our thoughts and actions. He has therefore urged reform of the US justice system, but it is hard to see who would fund the research needed for such reform, or who would stand to benefit from a system of incarceration underpinned by a scientific understanding of human behavior.
Our techno-optimism has become not just a distraction from truly pressing problems facing the United States and the wider world—socioeconomic inequality, surging obesity, the disappearance of work, mass extinction—but a potential impediment to tackling them.
Nowhere is this truer than in the human struggle to mount a response to anthropogenic climate change. Despite a great deal of political and corporate obfuscation of the science, there is considerable expert consensus on the causes of climate change. There’s also strong consensus that humanity’s best hope lies with reducing greenhouse gas emissions, and doing so urgently. Geoengineering technologies may have a role to play, but as David Keith, a Harvard geoengineering expert, explains, “At the best, climate engineering is a supplement.”
So what are the stumbling blocks? Reducing emissions will require very difficult sacrifices from all of us. “No amount of psychological awareness will overcome people’s reluctance to lower their standard of living,” according to the Nobel Prize-winning behavioral economist Daniel Kahneman, who has detailed how human aversion to loss is far more powerful than the reward from equal gain. As Elizabeth Kolbert has put it, “When you tell people what it would actually take to radically reduce carbon emissions, they turn away. They don’t want to give up air travel or air conditioning or HDTV or going to the mall or the family car.”
In the US, where apathy about anthropogenic climate change is as strong as anywhere in the world, religion and techno-optimism combine to promote deep skepticism of the science, while free-market fundamentalists argue for market-based solutions to the externalities of climate change. It’s worth noting that all of these may serve as a pretext—a mask for the more fundamental reason Kahneman identifies: an aversion to giving up one’s standard of living.
Hostility to efforts at tackling climate change also originates from unexpected quarters. Harvard cognitive psychologist and linguist Steven Pinker recently argued that “the way to deal with pollution is not to rail against consumption.” Climate change can instead be addressed, he claims, “if we sustain the benevolent forces of modernity that have allowed us to solve problems so far, including societal prosperity, wisely regulated markets, international governance, and investments in science and technology.”
Pinker’s techno-optimism is underpinned not by religion but by a faith in the fruits of the Enlightenment and humanity’s capacity to be rational. Pinker supports the Ecomodernist manifesto, a 2015 document signed by scholars, ecologists, and technologists. It claims, absurdly, that “knowledge and technology, applied with wisdom, might allow for a good, or even great, Anthropocene…. As demand for material goods is met, developed economies see higher levels of spending directed to materially less-intensive service and knowledge sectors. This dynamic might be even more pronounced in today’s developing economies, which may benefit from being late adopters of resource-efficient technologies.”
The nature and scale of anthropogenic climate change forces a kind of cognitive dissonance on Pinker: it is the one sphere in his new book Enlightenment Now in which he can muster no charts to the effect that things are getting better, and he relies instead on sunny predictions about technological solutions and on tetchily attacking exponents of climate action for indulging in what he calls “Nazi-like comparisons of human beings to vermin, pathogens, and cancer.”
In celebrating the triumph of Enlightenment reason over all worldly challenges, Pinker and the Ecomodernists are guilty of motivated reasoning. The same is true when they advance theses of human history which are tendentiously Hobbesian: that hunter-gatherer life was short, unpleasant, and unremittingly violent by contrast with modern life. Increasingly we know this to be false or, at best, incomplete.
Scott’s Against the Grain would surely deepen Pinker’s sense of cognitive dissonance. Grain makes a compelling argument that humans resisted for centuries joining the “civilization” of early urban centers in Mesopotamia and chose instead to remain “barbarians,” to retain their freedom and health rather than accept drudgery and disease.
Regarding these early settlements, where a number of early innovations took hold—taxation, writing, intensive agriculture—Scott quotes a speculative but illuminating passage by the anthropologist C. C. Lambert-Karlovsky: “Perhaps far from being less intellectually qualified to deal with complexity, the peripheral peoples were smart enough to avoid its oppressive command structures for at least another 500 years until it was imposed on them by military conquest.”