Why longtermism is the world’s most dangerous secular credo | Aeon Essays


Scarecrows keep migratory birds away from the dangers of the tailings ponds created by tar sands exploitation at Fort McMurray, Alberta, Canada. Photo by Larry Towell/Magnum

 

Edited by Sam Dresser

It started as a fringe philosophical theory about humanity’s future. It’s now richly funded and increasingly dangerous

 

There seems to be a growing recognition that humanity might be approaching the ‘end times’. Dire predictions of catastrophe clutter the news. Social media videos of hellish wildfires, devastating floods and hospitals overflowing with COVID-19 patients dominate our timelines. Extinction Rebellion activists are shutting down cities in a desperate attempt to save the world. One survey even found that more than half of the people asked about humanity’s future ‘rated the risk of our way of life ending within the next 100 years at 50 per cent or greater.’

‘Apocalypticism’, or the belief that the end times are imminent, is of course nothing new: people have warned that the end is nigh for millennia, and in fact many New Testament scholars believe that Jesus himself expected the world to end during his own lifetime. But the situation today is fundamentally different from the past. The ‘eschatological’ scenarios now being discussed are based not on the revelations of religious prophets or on secular metanarratives of human history (as in the case of Marxism), but on robust scientific conclusions defended by leading experts in fields such as climatology, ecology and epidemiology.

We know, for example, that climate change poses a dire threat to civilisation. We know that biodiversity loss and the sixth mass extinction could precipitate sudden, irreversible, catastrophic shifts in the global ecosystem. A thermonuclear exchange could blot out the Sun for years or decades, bringing about the collapse of global agriculture. And whether SARS-CoV-2 came from a Wuhan laboratory or was cooked up in the kitchen of nature (the latter seems more probable right now), synthetic biology will soon enable bad actors to design pathogens far more lethal and contagious than anything Darwinian evolution could possibly invent. Some philosophers and scientists have also begun sounding the alarm about ‘emerging threats’ associated with machine superintelligence, molecular nanotechnology and stratospheric geoengineering, which look no less formidable.

Such considerations have led many scholars to acknowledge that, as Stephen Hawking wrote in The Guardian in 2016, ‘we are at the most dangerous moment in the development of humanity.’ Lord Martin Rees, for example, estimates that civilisation has a 50/50 chance of making it to 2100. Noam Chomsky argues that the risk of annihilation is currently ‘unprecedented in the history of Homo sapiens’. And Max Tegmark contends that ‘it’s probably going to be within our lifetimes … that we’re either going to self-destruct or get our act together.’ Consistent with these dismal declarations, the Bulletin of the Atomic Scientists in 2020 set its iconic Doomsday Clock to a mere 100 seconds before midnight (or doom), the closest it’s been since the clock was created in 1947. More than 11,000 scientists from around the world signed an article in 2020 stating ‘clearly and unequivocally that planet Earth is facing a climate emergency’, warning that without ‘an immense increase of scale in endeavours to conserve our biosphere [we risk] untold suffering due to the climate crisis.’ As the young climate activist Xiye Bastida summed up this existential mood in a Teen Vogue interview in 2019, the aim is to ‘make sure that we’re not the last generation’, because this now appears to be a very real possibility.

Given the unprecedented dangers facing humanity today, one might expect philosophers to have spilled a considerable amount of ink on the ethical implications of our extinction, or related scenarios such as the permanent collapse of civilisation. How morally bad (or good) would our disappearance be, and for what reasons? Would it be wrong to prevent future generations from coming into existence? Does the value of past sacrifices, struggles and strivings depend on humanity continuing to exist for as long as Earth, or the Universe more generally, remains habitable?

Yet this is not the case: the topic of our extinction has received little sustained attention from philosophers until recently, and even now remains at the fringe of philosophical discussion and debate. On the whole, they have been preoccupied with other matters. However, there is one notable exception to this rule: over the past two decades, a small group of theorists mostly based in Oxford have been busy working out the details of a new moral worldview called longtermism, which emphasises how our actions affect the very long-term future of the Universe – thousands, millions, billions and even trillions of years from now. This has roots in the work of Nick Bostrom, who founded the grandiosely named Future of Humanity Institute (FHI) in 2005, and Nick Beckstead, a research associate at FHI and a programme officer at Open Philanthropy. It has been defended most publicly by the FHI philosopher Toby Ord, author of The Precipice: Existential Risk and the Future of Humanity (2020). Longtermism is the primary research focus of both the Global Priorities Institute (GPI), an FHI-linked organisation directed by Hilary Greaves, and the Forethought Foundation, run by William MacAskill, who also holds positions at FHI and GPI. Adding to the tangle of titles, names, institutes and acronyms, longtermism is one of the main ‘cause areas’ of the so-called effective altruism (EA) movement, which was introduced by Ord in around 2011 and now boasts of having a mind-boggling $46 billion in committed funding.

It is difficult to overstate how influential longtermism has become. Karl Marx in 1845 declared that the point of philosophy isn’t merely to interpret the world but to change it, and this is exactly what longtermists have been doing, with extraordinary success. Consider that Elon Musk, who has cited and endorsed Bostrom’s work, has donated $1.5 million to FHI through its sister organisation, the even more grandiosely named Future of Life Institute (FLI). This was cofounded by the multimillionaire tech entrepreneur Jaan Tallinn, who, as I recently noted, doesn’t believe that climate change poses an ‘existential risk’ to humanity because of his adherence to the longtermist ideology.

Meanwhile, the billionaire libertarian and Donald Trump supporter Peter Thiel, who once gave the keynote address at an EA conference, has donated large sums of money to the Machine Intelligence Research Institute, whose mission to save humanity from superintelligent machines is deeply intertwined with longtermist values. Other organisations such as GPI and the Forethought Foundation are funding essay contests and scholarships in an effort to draw young people into the community, while it’s an open secret that the Washington, DC-based Center for Security and Emerging Technology (CSET) aims to place longtermists within high-level US government positions to shape national policy. In fact, CSET was established by Jason Matheny, a former research assistant at FHI who’s now the deputy assistant to US President Joe Biden for technology and national security. Ord himself has, astonishingly for a philosopher, ‘advised the World Health Organization, the World Bank, the World Economic Forum, the US National Intelligence Council, the UK Prime Minister’s Office, Cabinet Office, and Government Office for Science’, and he recently contributed to a report from the Secretary-General of the United Nations that specifically mentions ‘long-termism’.

The point is that longtermism might be one of the most influential ideologies that few people outside of elite universities and Silicon Valley have ever heard about. I believe this needs to change because, as a former longtermist who published an entire book four years ago in defence of the general idea, I have come to see this worldview as quite possibly the most dangerous secular belief system in the world today. But to understand the nature of the beast, we need to first dissect it, examining its anatomical features and physiological functions.

The initial thing to notice is that longtermism, as proposed by Bostrom and Beckstead, is not equivalent to ‘caring about the long term’ or ‘valuing the wellbeing of future generations’. It goes way beyond this. At its core is a simple – albeit flawed, in my opinion – analogy between individual persons and humanity as a whole. […]

Continue reading: Why longtermism is the world’s most dangerous secular credo | Aeon Essays


One response to Why longtermism is the world’s most dangerous secular credo | Aeon Essays

  ShiraDest says:

    Ah, this is good to know. My initial impressions, or maybe assumptions, of this movement were exactly as you say, with the good of humanity over the next thousand or so years in mind. Trying to look beyond that, or even to the next hundred years, is nearly impossible. I am currently working on a book that proposes a plan for a Kinder and Safer world for all of us over the next 70 years, if you are interested, Sir?
    Shira

