The principles of transparency and explainability are guiding landmarks of current EU regulatory policy towards artificial intelligence. Both are invoked in policy guidelines as inspiring values that should govern algorithmic decision-making, while also providing rationales for normative provisions – on information duties, access rights and control powers – established under the existing regulatory frameworks. This contribution delves into the debate on transparency and explainability from the EU consumer market perspective. First, the position of consumers relative to algorithmic decision-making is considered, and consumer risks concerning mass surveillance, exploitation, and manipulation are discussed. The concept of algorithmic opacity is analysed, distinguishing in particular technology-based opacity, which is intrinsic to design choices, from relational opacity toward classes of users. The response of EU law to such problems is then considered. The emerging approach to algorithmic transparency (and explainability as a crucial aspect of it) is connected to the broader and persisting regulatory goals concerning transparency in consumer markets. It is argued that EU law focuses on adequate information being provided to lay consumers (exoteric transparency), rather than on understandability to experts (esoteric transparency). A critical discussion follows on the benefits of transparency to consumers, as well as its costs, and on the extent to which transparency can be implemented without affecting performance. Finally, the merits of a transparency-based regulation of algorithms are discussed and some general insights are provided on regulating transparency and explainability within the EU law paradigm.