Individual Submission Summary

Why Opaque AI Won’t Go Away: A Case for Ethical AI without Complete Transparency

Thu, September 15, 4:00 to 5:30pm, TBA

Abstract

In this paper I argue that complete transparency in AI, even when AI is deployed in areas that have a substantial impact on our welfare, is undesirable as a goal in AI development. Specifically, it is undesirable because completely transparent AI is functionally tantamount to data processing that involves no AI at all. Given the enormous potential benefits that could arise from the proper use of AI in at least some of the commonly identified ethically significant domains (medical diagnostics, global food production, police practices), we would need powerful ethical reasons to support fully eliminating its use in these areas.
I suggest that we refocus our attention on the reasons we desire transparency in AI and consider whether there are alternative ways of achieving those ends. Given that we already routinely employ opaque processes in our ordinary, ethically responsible, and reliable social practices, I suggest that one of the main ethical goals of increased AI transparency is related to trust: we value transparency because it serves as a proxy for the trustworthiness of the AI system. As transparency increases, stakeholders' trust may reasonably increase as well. To the extent that this is correct, a promising method for properly incorporating useful AI into ethically significant domains may be to adapt our current epistemic strategies for deferring to experts in areas that require expertise and have a substantial impact on our welfare. I suggest that we can look to these ordinary epistemic practices as a kind of model for incorporating useful, opaque AI into ethically significant domains.

Author