Sam Seitz

One of the books I read earlier in the year was Toby Ord’s The Precipice, which offers an expansive and provocative look at the threats facing humanity and the reasons for prioritizing the survival of the human race above everything else. It’s worth noting at the outset that the book is really quite good, and I think Ord’s work is a worthy addition to the writings of Bostrom and others in the existential risk community. Ord is especially compelling when justifying his prioritization of human existence. As he correctly notes, the extinction of the only intelligent species known to exist in the universe would be a crushing blow because it would extinguish the possibility of so many discoveries and advances, including near utopias for our descendants. It would also permanently eliminate billions or even trillions of future lives that otherwise might have received the opportunity to experience the wonder of existence. Ord is also quite effective at communicating a broad range of risks in a clear and accessible way. For example, he correctly shows that climate change is almost certainly not an existential risk. This is not to detract from the urgency of addressing that very real challenge but rather to demonstrate the resiliency of humanity and the planet. It also suggests that our focus should shift toward those risks that are even deadlier and that genuinely might foreclose any future for humanity. Instead of blindly throwing resources at all manner of challenges, Ord urges us to dedicate our limited resources to the truly high-risk, high-impact exigencies.

But the book and its arguments are not without problems. For example, I am skeptical of Ord’s classification of risks. This is largely because many of the truly existential challenges he discusses still reside mainly in the minds of futurists and science fiction writers. Their risk, therefore, is almost unknowable outside of whatever Bayesian prior you happen to hold. This is a problem for Ord because much of the book’s argument hinges on his probabilistic ranking of existential risks. We already know the base rate of natural risks (how often asteroids strike, the frequency and magnitude of supervolcano eruptions, etc.). We don’t, however, know the probability of Skynet initiating a nuclear holocaust. This uncertainty allows Ord to assign an arbitrarily high risk to techno-dystopian scenarios and thus make them appear far greater than the risks posed by ecological destruction or stellar explosions. It’s not entirely irrational to upgrade the threat of unknown risks relative to known, low-probability scenarios. But some of his estimates are based on sheer supposition, and yet they’re assigned many orders of magnitude more risk than the natural events whose base rates we actually know. The upshot is that Ord views this particular moment in time as uniquely dangerous due to the plethora of new advances in areas such as gene editing and AI that render us especially vulnerable to calamity; we stand, he argues, at “the precipice.” I don’t disagree with Ord’s logic – the unknown technological risks in front of us are genuinely scarier than the natural risks we already more or less understand. But given how subjective his approach to risk classification is, one is forced to wonder just how useful the book’s “objective” ranking of risks really is.

The bigger issue with the book, though, is its conclusion. Given the panoply of dangers identified by Ord, it’s not surprising that he chooses not to spend too much time on specific risk-reduction strategies. But to the extent that he does, the solutions generally seem sensible: raise funding for biosecurity, continue to expand asteroid-tracking infrastructure, etc. His proposed remedies become much less compelling, however, when one moves to his more general admonition to slow down technological progress so that our moral frameworks might catch up with our technological abilities. The problem is that Ord never convincingly establishes that humans must inexorably become more moral creatures. There is reason to believe that humans today are less violent and more intelligent than our ancestors, but it’s not obvious that this trend will continue. Moreover, it’s far from clear that our morality would continue improving if we weren’t faced with novel ethical challenges brought on by technology. For instance, would the laws of war have been codified without the horrors of Crimea and the Western Front? It’s also not obvious that it’s morality that keeps our technological powers in check rather than technology compelling us to be more moral. By way of illustration, it seems eminently plausible that the significant decline in major wars post-1945 derives from the existence of nuclear weapons. In this case, it’s exactly the dangerous technology that makes us more peaceful creatures.

This is probably my biggest frustration with Ord and his fellow travelers (such as John Davis). They seemingly view human civilization as a vast collection of philosophers who, through sheer intellect, can (or maybe can’t) reason and surmise their way to the proper standards for governing technological progress. Thus, the solution is to sit and think for a good while before tentatively moving forward or, if you fall into the more pessimistic camp, simply give up on technological progress altogether. It seems to me that both these views are wrong. Humans can develop better frameworks for mitigating technological risk, but those frameworks can only really emerge through trial and error, not from first principles. Unless one has access to clear, empirical data, it’s simply not possible to devise comprehensive standards by which to govern technology. We have to actually run the experiments and create the technology to understand the risks. Of course we should be cautious and thoughtful about our actions, but we can’t simply eliminate risks by growing in abstract wisdom. This is especially true if it’s the existential pressures of technological change that drive us toward morality and ever greater wisdom. Given that gains in life expectancy, peacefulness, and human rights law all developed in step with industrialization and scientific discovery, one wonders if the retardation of technological progress would not similarly retard moral progress.

The risks of stagnation created by Ord’s approach extend beyond a slowdown in humanity’s moral growth. After all, technology can also be a source of material good, improving human wealth and quality of life. The same CRISPR technology that could be used to engineer bioweapons can also potentially eliminate awful genetic diseases and birth defects. An artificial general intelligence might subjugate humanity, but it could also usher in major breakthroughs in a range of fields. Ord’s argument is that we must focus on the negatives of technology because we only get one existential failure. Thus, the prudent approach is to minimize downside risks even if this limits the upsides. Put differently, the survival of humanity outweighs any one human’s quality of life. It’s not so much that I disagree with this framing as it is that I doubt our ability to minimize downside risks through technological stagnation and deep philosophical conversations. And if we can’t simply reason our way out of technological tragedies, then what purpose does a slower approach to technological progress serve other than the imposition of unnecessary suffering on those who could otherwise be reaping the benefits of innovation? While everyone loses from stagnation, this potential loss affects future humans even more severely. As Tyler Cowen points out in Stubborn Attachments, the power of compound growth means that any extra wealth and innovation we can squeeze out now will redound exponentially to our descendants. By artificially constraining technological progress, therefore, we’d be significantly diminishing the wealth and quality of life available to all future humans. Given that Ord’s own purported reason for preserving human existence is the near-utopian futures our children might create, the constraint his proposals place on our descendants’ wealth and resources strikes me as quite significant.
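
To get a feel for the magnitude of Cowen’s compounding argument, here is a minimal back-of-the-envelope sketch in Python. The growth rates and time horizon are purely illustrative assumptions of mine, not figures from Cowen or Ord; the point is simply that even a small, sustained drag on growth leaves future generations dramatically poorer.

```python
# Illustrative only: the growth rates and horizon below are assumed, not sourced.
# Shows how a small, sustained difference in growth compounds over generations.

def future_wealth(initial: float, annual_growth: float, years: int) -> float:
    """Wealth after `years` of compounding at `annual_growth` (0.02 = 2%)."""
    return initial * (1 + annual_growth) ** years

horizon = 200  # years; an arbitrary multi-generational window
baseline = future_wealth(100.0, 0.025, horizon)     # assumed 2.5% annual growth
constrained = future_wealth(100.0, 0.020, horizon)  # growth slowed to an assumed 2.0%

print(f"Unconstrained path: {baseline:,.0f}")
print(f"Constrained path:   {constrained:,.0f}")
print(f"Descendants end up roughly {baseline / constrained:.1f}x poorer on the slower path")
```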

There’s one additional risk to Ord’s approach: it not only imposes costs on human progress but may actually make an extinction-level event more likely. Leopold Aschenbrenner makes this case particularly clearly when he writes, “Faster economic growth could initially increase risk, as feared. But it will also help us get past this time of perils more quickly.” His observation is that we’re already standing on the precipice, so the more we stagnate, the longer we’re forced to occupy this precarious position. Sprinting for better technology might increase the odds of us tripping and falling off the edge, but it also ensures that we reach firmer ground more quickly. If we languish on the edge, we risk falling to our death, even if we are assiduously careful and risk averse. Aschenbrenner also makes the astute point that wealth generates risk aversion because it gives people a stake in the system. If you own a house and have a steady income, you don’t want anything to emerge that might rock the boat. If you’re barely scraping by, though, you’re more likely to risk drastic actions to improve your lot in life. The more we raise human wealth and wellbeing, the less we’ll have to worry about desperate, reckless individuals taking huge risks to improve their situation. Embracing a philosophy of stagnation would only entrap those at the bottom in a hopeless situation, perhaps precipitating among them the kind of revolutionary attitude that would condone the use of nuclear or biological weapons.

New technologies are rarely completely benign, but neither are they totally malign. The same understanding of radiation that gives us medical x-rays and clean nuclear power grants us the ability to unleash nuclear holocausts onto the planet. The invention of air travel made the world more accessible to more people, but it also raised the risk of rapid pathogen spread. The reality is that technology often serves multiple uses and thus creates new risks and new rewards. But so far, at least, we’ve always managed to invent new technologies to mitigate the problems created by our earlier, cruder inventions. The alternative to technological progress is not comfort and safety but inevitable stagnation and, ultimately, extinction. Eventually one of the low-probability natural risks identified by Ord will materialize – an asteroid will head toward us or a supervolcano will erupt – and we’ll either have the requisite technology to ensure our survival or we will cease to exist. Sitting around thinking deep thoughts won’t save us; only technology will. And perhaps the technological precipice extends out for eternity, trapping us forever in a game of Russian roulette. If this is the case, we’re in trouble. Someone will eventually make a catastrophic error. But there’s simply nothing we can do to prevent this. Consequently, the rational response is still to make the most of the present. No matter what the risks, the only path forward is more research, more technology, and more innovation.