
[STUDENTS POLICY BRIEF] Explicability in AI

By Janine Ecker, Paul Kleineidam, Claudia Leopardi, Anna Padiasek, Benjamin Saldich


The Digital, Governance and Sovereignty Chair publishes, on a regular basis, the finest essays and papers written by Sciences Po students in the course of their studies.

This Policy Brief has been selected as one of the best works written during the course taught by Prof. Suzanne Vergnolle “Comparative Approach to Big Tech Regulation” in Spring 2024.


In an era where Artificial Intelligence (AI) will play an increasingly important role in our society, it is imperative to maintain a level of human control over AI systems. Explicability—broadly defined as a level of understanding or explanation as to how AI systems function and make decisions—is a core component of this human control. And yet, academics, ethicists, and lawmakers have thus far failed to coalesce around a singular strategy for regulating explicability in the field of AI. This policy brief, produced by our European think tank, synthesizes academic insights and international regulatory approaches to propose implementable recommendations for American policymakers. Our objective is to strike a balance between ethical imperatives and practical considerations, ensuring transparency, accountability, and societal trust in AI technologies.

After examining the current understanding of notions of transparency in “white-box” and “black-box” AI systems, the paper analyzes how organizations and countries have sought to define and regulate AI explicability, with a specific focus on the EU, China, and the United States. Out of this analysis, three main policy strategies emerge, whose strengths and limitations are considered.

Drawing inspiration from recent regulatory efforts in the EU, this paper recommends a balanced approach to AI explicability that seeks to regulate AI governance based on risk levels, acknowledging technical limitations while ensuring accountability and transparency. We propose four key policy strategies that the United States Congress should consider when crafting AI legislation:

  1. Implement a Risk-Based Approach: Adopt a structured framework akin to the EU’s AI Act to ensure consistency, transparency, and proportionality in AI regulation.
  2. Mandate Binding Obligations for High-Risk Systems: Enforce transparency and human-centered approaches for high-risk AI systems, ensuring accountability and mitigating risks.
  3. Establish Clear Liability Rules: Introduce liability rules to facilitate redress for individuals harmed by AI systems, balancing preventive measures with mechanisms for addressing harm.
  4. Form an FTC Task Force: Create a dedicated task force within the FTC to oversee AI governance, ensuring compliance and fostering collaboration among stakeholders.

This paper also notes the complexity and rapidly evolving nature of the AI sector, which pose unique challenges to envisioning and implementing explicability-centric regulation. Achieving reliable explanations for AI decision-making remains a significant challenge, one that future research must address.


Janine Ecker specializes in digital and technology regulation with an academic background in Public Policy, Political Science, and Business Administration. She currently works at the newly established AI Office at the European Commission in Brussels. Janine has extensive experience in digital regulation from her roles in the public policy teams at Amazon and BMW, as well as KPMG’s digital consulting practice.

Master in Public Policy at the School of Public Affairs of Sciences Po. Policy stream: Digital, New Technology and Public Policy.

Paul Kleineidam earned a bachelor’s degree in social sciences from Sciences Po Paris, specializing in politics and public policy. He also studied politics, philosophy and economics at the London School of Economics (LSE). He now works as a consultant for SILAMIR, helping businesses accelerate their digital transformation.

Master in Public Policy at the School of Public Affairs of Sciences Po. Policy stream: Digital, New Technology and Public Policy.

Claudia Leopardi has a background in International Relations and Cybersecurity Governance thanks to her undergraduate studies at Leiden University and her work experiences at the Réseaux IP Européens Network Coordination Centre (RIPE NCC) and the NextGen program for the Internet Corporation for Assigned Names and Numbers (ICANN). She is currently deeply involved in Internet Governance projects both at Sciences Po and in the technical community of the Internet. 

Master in Public Policy at the School of Public Affairs of Sciences Po. Policy stream: Digital, New Technology and Public Policy.

Anna Padiasek has a background in Politics and Economics, having obtained her undergraduate degree from King’s College London. She has previously worked in the Office of the Prime Minister of Poland and in third sector organisations, specialising in digital inclusion, job automation and green finance policies. Currently, she works for the Local Employment and Skills Unit at the OECD where she supports projects related to skills shortages and local job creation.

Master in Public Policy at the School of Public Affairs of Sciences Po. Policy stream: Digital, New Technology and Public Policy.

Benjamin Saldich has an academic background in technology regulation and social media data analysis and visualization. He has a professional background in US political data, having worked on the data teams for US presidential campaigns and as a Principal at Precision Strategies, a political and digital strategy firm in New York City.

Dual Degree Master in Public Policy at the School of Public Affairs of Sciences Po and Master in Public Administration at the Columbia University School of International and Public Affairs (SIPA).