Individual Submission Summary

Poster #38 - Engaging Academic and Community Stakeholders in the Responsible Design, Development, and Deployment of AI Technologies

Saturday, November 15, 12:00 to 1:30pm, Hyatt Regency Seattle, 7th Floor, Room 710 (Regency Ballroom)

Abstract

The rapid evolution and deployment of artificial intelligence (AI) technologies have highlighted a fundamental polarity in computing: the drive for innovation and expedited outcomes often overshadows the imperative to address historical neglect, harm, and exploitative practices. While AI has provided numerous benefits, its rapid advancement has often overlooked the potential for adverse impacts on communities (Sweeney, 2013; Adamson and Smith, 2018; Buolamwini and Gebru, 2018; Harwell, 2019; Whittaker et al., 2019; Kasy and Abebe, 2020; McIlwain, 2020). This oversight highlights the need for researchers to carefully consider the use and application of AI in communities and to balance its benefits against potential harms. However, community-engaged research poses unique challenges, such as reconciling the transient nature of projects with significant student involvement against the need for long-term commitment to sustainable and meaningful collaborations (Kotturi and Hui et al., 2024). Other issues include economic challenges stemming from short-term funding and a disconnect between commercial goals and communities' authentic needs (Harrington et al., 2019).

While human-computer interaction (HCI) researchers have begun to document approaches to address such challenges, the field's current trajectory reveals a significant gap in achieving comprehensive solutions. Existing approaches often focus on specific shortcomings of design and research methods and target designers' decisions about single technical systems (Costanza-Chock, 2020; Harrington et al., 2019; Wang et al., 2024); they fail to consider how research is formed in the first place. Dillahunt et al. (2023) argue for more anticipatory approaches, which require larger networks of anticipation and aim to mitigate harm more systematically. Our research addresses a significant gap: the lack of a comprehensive framework for engaging both academic and community stakeholders in the responsible design, development, and deployment of AI technologies.

We aim to address how academics engage with community partners and community members by focusing on the planning phases of research and community engagement, specifically in AI technology development. It is essential that community members and representatives, especially those subject to discrimination and systemic disadvantage, be actively involved in these conversations. We envision outcomes that include novel research paradigms transcending traditional academic boundaries, offering a dual benefit: educating the public about AI's potential and pitfalls, and establishing formal channels for inclusive stakeholder input throughout the research lifecycle. This approach aims to form partnerships among academic researchers, non-profits, and other stakeholders to ensure that ethical, legal, and societal considerations are integral to the research agenda, from project conception to execution and beyond.

We identify the need to educate partnering communities about advanced technologies such as AI and their impacts, and we review prior research collaborations. These efforts will enable us to analyze and explore the multifaceted relationship between research and community. The results will inform future project planning, emphasizing ethical practices, stakeholder involvement, and the broader impacts of our research, and may serve as a model for other stakeholder collaborations.

Authors