
22.04.2025

Why Responsible AI Must Go Beyond Algorithms: A Conversation with Steven Vethman

Steven Vethman, AI Action Summit, 2025

As artificial intelligence becomes increasingly embedded in the decisions that shape our lives—what jobs we see, how we access public services, even how we're perceived by institutions—the question of what it means to use AI responsibly has never been more urgent. Yet, discussions about "responsible AI" often focus narrowly on technical fixes, overlooking the deeper social and structural issues at play.

In this interview, Steven Vethman—researcher at the Sciences Po Law School and contributor to the European project DIVERSIFAIR—invites us to take a step back. His work challenges us to rethink not just how we build AI, but why we use it in the first place—and for whose benefit.

Could you tell us about yourself as a researcher, including your academic and professional background?

Steven Vethman

Certainly. My research focuses on Artificial Intelligence (AI), a technology whose promised potential seems to grow by the day—along with its impact. AI already influences many aspects of daily life: it shapes which job offers we see, which products are marketed to us, and how governments monitor citizens’ behavior. In my work, I advocate for critical reflection on these so-called benefits, which are often tied to the efficiency that automation promises. But on what assumptions is this efficiency based? Benefits for whom? And who gets to define those assumptions?

To illustrate what’s at stake: Amnesty International recently exposed how France’s National Family Allowance Fund (CNAF), the social security agency that distributes family benefits, used a discriminatory risk-scoring algorithm to detect overpayments and errors in benefit distribution. The result? People with disabilities, single parents—mostly women—and individuals living in poverty were disproportionately targeted. This raises a pressing question: do we want to automate fraud detection if it amplifies systemic bias and suspicion toward people already facing structural disadvantage?
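To see how such a disparity can arise mechanically, here is a minimal, hypothetical sketch in Python. It is not CNAF’s actual model; the population shares, error rates, and feature weights are all invented. It illustrates how a risk score that leans on a feature correlated with a protected group (here, single parenthood) flags that group far more often, even when the true error rate is identical across groups.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Invented population: 20% single parents.
single_parent = rng.random(n) < 0.20
# Assumed ground truth: overpayment errors occur at the same 5% rate
# in both groups.
error = rng.random(n) < 0.05

# A score that (wrongly) treats single parenthood as a risk signal.
score = 0.3 * single_parent + 0.1 * error + rng.normal(0.0, 0.1, n)
# Investigate the top 10% of scores, as a fraud team might.
flagged = score > np.quantile(score, 0.90)

for name, group in [("single parents", single_parent),
                    ("others", ~single_parent)]:
    print(f"{name}: {flagged[group].mean():.1%} flagged")

# Both groups err at the same rate, yet the flag rates diverge sharply:
# the disparity comes from the feature choice, not from behaviour.
```

Under these invented numbers, roughly half of all single parents are flagged while almost no one else is: the system’s apparent efficiency is achieved by concentrating suspicion on one group.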

My interest in unpacking the assumptions behind such systems stems from my academic background. I studied Economics and Econometrics at Erasmus University Rotterdam, where I developed a strong foundation in statistical reasoning, along with a healthy skepticism about the limits of quantitative methods. This critical perspective has shaped my professional path.

From 2020 to 2024, I worked as a Researcher and Consultant on the responsible use of AI at TNO, the Netherlands Organisation for Applied Scientific Research. There, I collaborated in interdisciplinary teams combining social-scientific, legal, and technical perspectives on issues such as discrimination and accountability in AI—particularly within the domains of labour, healthcare, and the public sector.

I’ve felt fortunate to be valued for my critical perspective and to bring my personal interest in social equity into my professional work. Before entering this field, my engagement with questions of justice was largely personal—rooted in life experience and in the works of thinkers and artists like James Baldwin, Carrie Mae Weems, and Teju Cole. At TNO, as I deepened my technical knowledge and engaged more fully with critical scholarship, I became particularly committed to creating safe and reflective spaces for government institutions to responsibly engage with questions surrounding AI use and its implications. I organized bias workshops, authored several reports on public-sector AI systems—one of which garnered attention in the Dutch House of Representatives—and participated in research groups and panels on AI governance and regulation.

A core theme in my work is the need to shift from a focus on Responsible AI—which often centers on tweaking technology to reduce risks—to Responsible Decision-Making about AI. This shift calls for interdisciplinary teams to ask a more fundamental question: Is AI even the right tool for the problem at hand?

You are working on the DIVERSIFAIR project with Professor Raphaële Xenidis. Can you tell us more about it?

Efforts to make the impact of AI “fairer” are often defined too narrowly, seeking solutions within the technology itself. Take, for example, facial recognition systems that disproportionately fail to recognize Black women as human faces. The common response is to fix the error by adding more data—more images of Black women. But this frames the problem as a technical glitch, rather than questioning the legitimacy of surveillance in the first place. If you ask the people affected—Black women in this case—they may not want the system improved; they may want it dismantled. Yet, AI researchers and data scientists tend to focus on improving performance metrics rather than addressing the lived concerns of those impacted.
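To make that practitioner’s framing concrete, here is a minimal, hypothetical sketch of the disaggregated evaluation that typically motivates the “add more data” response (the groups, test set, and numbers are invented for illustration, not drawn from any real system):

```python
from collections import Counter

# Hypothetical per-group evaluation of a face detector on a tiny,
# made-up test set of (group, face_detected) pairs.
results = [
    ("black_women", False), ("black_women", False), ("black_women", True),
    ("white_men", True), ("white_men", True), ("white_men", True),
]

totals = Counter()
detected = Counter()
for group, ok in results:
    totals[group] += 1
    detected[group] += ok  # True counts as 1

for group, n in totals.items():
    print(f"{group}: detection rate {detected[group] / n:.0%} (n={n})")

# Typical conclusion drawn from this output: collect more images of the
# lowest-scoring group and retrain. The metric improves, but the question
# of whether the surveillance is legitimate never enters the loop.
```

The numbers frame the disparity as a metric gap to close; nothing in this workflow surfaces the question of whether the system should exist at all.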

The disconnect between technical solutions and the lived experience of systemic discrimination is where DIVERSIFAIR steps in. AI practitioners often see critical research as abstract or outside their professional scope. One of my contributions to the project focuses on bridging this gap. I lead an effort to translate critical scholarship—particularly intersectional and anti-discrimination perspectives—into actionable recommendations for AI practitioners, through a series of focus groups.

DIVERSIFAIR is a three-year Erasmus+ project, co-funded by the European Commission, that aims to foster more inclusive AI practices by developing educational resources and tools. It brings together eight partners, combining academic research—including from our law faculty—with practical insights, such as from community-led AI audits. These insights are being transformed into educational content for AI professionals, alongside workshops and toolkits designed for broader audiences.

Importantly, we are not developing these resources solely for our project partners. We actively share our findings and align with stakeholders beyond the consortium to support systemic change in AI development and governance.

Curious about what our work has led to so far? Feel free to reach out. And stay tuned for our paper, 'Fairness Beyond the Algorithmic Frame: Actionable Recommendations for the Intersectional Approach', which has been accepted to the proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT).

You took part in the AI Action Summit. How did it go? Who did you meet?

Yes, on February 11–12, I had the opportunity to attend the AI Action Summit, where I was invited by our partner Women in AI to represent the DIVERSIFAIR project. Our initiative was among 50 selected AI projects showcased by the Paris Peace Forum, emphasizing that AI’s impact reaches far beyond economic productivity. We highlighted its potential to foster global equity and underscored the urgency of establishing guardrails for responsible AI innovation.

It was inspiring to be surrounded by so many sharp, action-oriented minds—especially in the breathtaking setting of the Grand Palais. In conversations with journalists, policymakers, technical professionals, and civil society organizations (CSOs), we exchanged lessons learned, gathered valuable feedback, and reflected on how our message and work-in-progress could align more closely with the concerns of diverse stakeholders.

I particularly valued connecting with representatives from civil society organizations. In a workshop on Participation, Power, and Resistance, I spoke with Caitlin Kraft-Buchman of Human Rights Watch and Women@TheTable about the role of "calling in"—a strategy I’ve learned from Dr. Loretta J. Ross that emphasizes curiosity and dialogue—and how it can sometimes offer different opportunities for change compared to calling out. I also had the chance to engage with Alessandra Sala, Co-Chair of UNESCO’s Women for Ethical AI, whose panel—A Call to Rethink AI: Why Women and Human Rights Matter Now More than Ever—powerfully advocated for immediate, transparent action on gender and human rights in AI governance.

For me, identifying synergies and deepening collaboration with CSOs is essential to ensuring that DIVERSIFAIR has a lasting, meaningful impact.

What is your way forward?

Currently, we are at the midway point of the DIVERSIFAIR project, and I’ve only just started my role as a researcher at the Sciences Po Law School. Raphaële Xenidis and I are continuing our research on AI harms and the development of actionable recommendations. One of our focuses now is finding ways to involve civil society organizations (CSOs) more deeply in our work. When DIVERSIFAIR concludes in May 2026, we’ll be excited to share our findings in both research articles and shorter, more digestible takeaways through platforms like LinkedIn and the DIVERSIFAIR website.

Outside of the project, I’m continually inspired by my colleagues at Sciences Po Law School. First, I’m grateful for the opportunity to collaborate with and learn from Raphaële, whose insights into discrimination law, intersectionality, and their application to AI practice and policy have been invaluable to our work. I also particularly enjoyed Professor Marie Mercat-Bruns' recent talk based on her new book, which explores a shift in legal paradigms from discrimination to inclusion. Additionally, I gained a lot from the Transatlantic AI & Law Initiative (TALI) conference, hosted by Professor Beatriz Botero Arcila, whose recent report addresses accountability and liability in increasingly complex AI supply chains. These experiences push me to keep expanding my understanding of human rights, discrimination law, and AI regulation—not with the goal of becoming a legal expert, but to build a shared understanding and language that makes interdisciplinary collaboration possible and meaningful.

Looking ahead, I aspire to continue collaborating with legal experts and civil society to promote responsible decision-making around AI, framing my research on discrimination and AI within the context of human rights, critical perspectives, and lived experiences. I know I might sound idealistic, but I choose to believe in this because I have witnessed success stories: AI experts who initially felt attacked by critical work now integrate it into their practice, and people in public bodies who were once focused mainly on AI’s potential now share the critical lessons we’ve learned together. I’m eager to connect with others passionate about these issues; feel free to reach out to continue the conversation and explore how we can collaborate to drive meaningful change!