This paper proposes to locate the gradual displacement of biological plausibility by empirical success as artificial intelligence’s site of legitimacy (see Glorot and Bengio 2010; Hinton, Osindero, and Teh 2006) within the institutional and national context in which its current statistical form partially emerged. While symbolic AI and other approaches to machine intelligence are generally traced back to the military-industrial settings of American (Edwards 1996), British (Pickering 2010), and Soviet (Peters 2016) Cold War science, it was Canada’s unique institutional context that provided the most conducive setting for AI’s statistical form to survive and flourish even as it was abandoned virtually everywhere else.
Despite being largely devoid of sophisticated computing capabilities until the mid-1970s, postwar Canada was the unlikely site of a wide range of debates and ideas about the nature and function of computational media. Starting with Pierre Elliott Trudeau’s adoption of computing as a solution to both federalism and the modernisation of the state’s bureaucracy (Trudeau 1965, 1969), the National Research Council and, later, the Canadian Institute for Advanced Research gradually articulated a computing policy anchored in the automation of specific administrative tasks (translation, coordination, resource management, etc.) fulfilled by state institutions. Echoing the work of Jon Agar (2003) and Edward Jones-Imhotep (2017) on the administrative underpinnings of computing, this project will provide a more granular and politically rooted account of AI’s empirical conception of intelligence by revisiting the Canadian history of computer science, and of machine learning more specifically, from the perspective of their financial and institutional support.