Progress in human capabilities has provided stupefyingly impressive means of self-destruction, and technology’s relation to the ends of survival and wellbeing remains an open question. This question can be framed as a problem of stupidity, which, as thinkers of human capacities since antiquity have suspected, may essentially represent a stunning mismatch of means and ends. In contrast to analyses focused on cognitive biases or systemic constraints on capacities, this presentation rethinks existential risks and their normative consequences in terms of capacity density and contextual thickness, where the relation between means and ends is complicated and stupidity is not a bug but a feature. This framing is illustrated by the rational requirement in brinkmanship models to credibly signal enough stupidity to self-destruct, the need to incorporate artificial stupidity into AI applications to maximize desirable outcomes, and the persistence of sympathetic feelings, expressed even against apparent prudence, in humanitarian, ecological, and future AI rights concerns.