
31.05.2024

[Students Policy Brief] Explicability in AI

(credits: NicoElNino/Shutterstock)

By Janine Ecker, Paul Kleideinam, Claudia Leopardi, Anna Padiasek, Benjamin Saldich

The Digital, Governance and Sovereignty Chair publishes, on a regular basis, the finest essays and papers written by Sciences Po students in the course of their studies.

This Policy Brief was selected as one of the best works produced for the course “Comparative Approach to Big Tech Regulation,” taught by Prof. Suzanne Vergnolle in Spring 2024 as part of the Digital, New Technology and Public Policy stream at the School of Public Affairs.


In an era where Artificial Intelligence (AI) will play an increasingly important role in our society, it is imperative to maintain a level of human control over AI systems. Explicability—broadly defined as a level of understanding or explanation as to how AI systems function and make decisions—is a core component of this human control. And yet, academics, ethicists, and lawmakers have thus far failed to coalesce around a singular strategy for regulating explicability in the field of AI. This policy brief, produced by our European think tank, synthesizes academic insights and international regulatory approaches to propose implementable recommendations for American policymakers. Our objective is to strike a balance between ethical imperatives and practical considerations, ensuring transparency, accountability, and societal trust in AI technologies.

After examining current understandings of transparency in “white-box” and “black-box” AI systems, the paper analyzes how organizations and countries have sought to define and regulate AI explicability, with a specific focus on the EU, China, and the United States. From this analysis, three main policy strategies emerge, whose strengths and limitations are considered.

Drawing inspiration from recent regulatory efforts in the EU, this paper recommends a balanced approach to AI explicability that seeks to regulate AI governance based on risk levels, acknowledging technical limitations while ensuring accountability and transparency. We propose four key policy strategies that the United States Congress should consider when crafting AI legislation:

  1. Implement a Risk-Based Approach: Adopt a structured framework akin to the EU’s AI Act to ensure consistency, transparency, and proportionality in AI regulation.
  2. Mandate Binding Obligations for High-Risk Systems: Enforce transparency and human-centered approaches for high-risk AI systems, ensuring accountability and mitigating risks.
  3. Establish Clear Liability Rules: Introduce liability rules that facilitate redress for individuals harmed by AI systems, balancing preventive measures with mechanisms for addressing harm.
  4. Form an FTC Task Force: Establish a dedicated task force within the FTC to oversee AI governance, ensuring compliance and fostering collaboration among stakeholders.

This paper also notes the complexity and evolving nature of the AI sector, which pose unique challenges to envisioning and implementing explicability-centric regulation. Achieving reliable explanations for AI decision-making remains a significant challenge that must be addressed through future research.
