Emerging from analytic philosophy, especially the work of Nick Bostrom, the study of “existential risk” concerns any risk that might lead to absolute human extinction, from extreme global warming to nuclear winter to malevolent AI. The goal of existential risk analysts is ambitious: to establish IPCC-like institutions to study and mitigate the probabilities of risks that can only happen once; having no precedent, such risks will be known to be possible only when it is too late. Accordingly, for proponents such as Sir Martin Rees, these risk analysts will have earned their funding if the science-fiction-like scenarios they project fail to materialize. In recent years, existential risk has become increasingly popular. The field boasts TED Talks with millions of views, attracts funding from tech billionaires such as Elon Musk, and features in articles published by The Economist, The New Yorker, Wired, and other high-profile venues. With my co-author Joshua Schuster, I have been studying this field in its cultural contexts. Because of its many philosophical and political blind spots, we question it in a short book entitled Three Critiques of Existential Risk. But our largely negative critiques do move in two affirmative directions: 1) existential risk can be a backdoor for understanding the role of science fiction in powerful Silicon Valley worldviews; 2) the critique of existential risk is a compelling interface between the specialist humanities and popular culture, one at which to examine the place of apocalypse and eschatology in contemporary media.