
TRACK AI: Transparency, Regulation, Antitrust, Contracts, Knowledge

Exploring Governance Gaps in AI Firms

Our Mission

The rapid rise of AI technologies in recent years has sparked widespread discussion among academics, policymakers, and the general public. AI-powered chatbots, image generators, and other customer-facing AI tools have become commonplace, prompting questions about their ethical, legal, and societal implications. While much of the conversation has focused on the risks and benefits of these tools, less attention has been given to the governance of AI companies themselves. “Track AI”, a joint collaboration between Sciences Po and the Stanford Center for Legal Informatics (CodeX), specifically addresses this critical but under-explored governance ecosystem of AI firms.

The emergence of AI technologies has long been heralded as a departure from the “Web2” era, dominated by companies like Amazon, Google, and Meta. These so-called Big Tech firms built their empires on proprietary software, user data collection, and network effects that encouraged the centralization of market power. In contrast, AI startups were seen as potential disruptors, offering not only breakthrough technologies but also new approaches to governance centered on putting the interests of society first and developing open-source models.
However, a closer look at the AI landscape reveals that many firms are increasingly intertwined with Web2 giants. From OpenAI to Anthropic and Mistral AI, it is hard to find an AI player that has not entered into a deal with Big Tech. Through partnerships with companies such as Microsoft and Amazon, AI firms gain access to resources like secure cloud computing, sizable customer bases, and data. But these “AI partnerships” are not just technical collaborations; they represent a new kind of inter-firm cooperation with potential long-term implications for the entire technology ecosystem.

Recognizing the complexity of these relationships, the Track AI project seeks to explore the underlying reasons for, and consequences of, AI partnerships across several critical dimensions:

AI companies often justify their partnerships with Big Tech players as a means to access essential resources such as computing power and large datasets. These collaborations can accelerate the development and deployment of AI models and thus benefit the public at large, but they also create dependencies that could challenge the image of AI firms as disruptive innovators.

From a consumer perspective, partnerships between AI and Web2 firms often result in smoother product integration. However, the benefits for users must be weighed against the potential risks, including concerns about data privacy, transparency, and competition in the marketplace.

Collaborations between AI firms and Big Tech raise significant antitrust concerns. An obvious question is whether these partnerships give Big Tech players control over the comparatively smaller AI firms and should therefore be analyzed under merger rules. At the same time, these partnerships may prove to be the only way for AI players to reach customers and support their research efforts, both infrastructurally (in terms of access to computing power) and financially. The Track AI project will examine how these collaborations should be assessed under US and EU antitrust frameworks.

Another critical area of investigation is the corporate governance impact of these partnerships. When AI firms with a public mission or open-source philosophy collaborate with profit-driven Web2 firms, conflicts of interest may arise. Can AI companies maintain their commitment to openness and transparency while working closely with corporations focused on maximizing shareholder value?

The Track AI project aims to trigger a deeper discussion around what it means for AI to be “open-source.” With many AI firms pledging to develop open-source models, partnerships with proprietary Web2 platforms might complicate the realization of this goal. The project investigates how AI firms can uphold their commitment to openness in this evolving landscape.

Our Deliverables

To address these critical questions, the Track AI project aims to deliver four concrete tools:

  1. Guidelines for transparency in AI partnerships. One of the main outputs will be a set of guidelines for AI firms participating in such partnerships. These guidelines will focus on ensuring transparency in joint operations, both from a consumer and regulatory perspective. By setting clear standards, the project hopes to foster trust and accountability in the growing field of AI partnerships.
  2. AI-Powered antitrust screening tool. Additionally, the project will develop an AI-powered tool designed to assess whether AI partnerships comply with antitrust regulations. This tool will help regulators and industry stakeholders monitor potential anti-competitive behaviors, ensuring that AI partnerships contribute to a competitive and innovative market environment.
  3. An open-access edited volume exploring the potential of new governance models facilitated by AI technologies to promote the public interest.
  4. A law review article making the case that we are at an inflection point at which existing legal frameworks need to be adjusted to the current technological affordances of AI technologies. Our core argument is that, given the transformative potential of these technologies, the goal is not just to create adequate legal frameworks, but rather to design legal governance mechanisms that would have a clear public interest orientation.

Our Events

Our Bios

Dina Waked holds a Doctor of Judicial Science (S.J.D’12) and a Master of Laws (LL.M’06) from Harvard Law School, a Bachelor of Law (LL.B) from Cairo University Law Faculty, and a Bachelor of Arts (B.A.) in Economics from the American University in Cairo. She began teaching at Sciences Po’s University College in 2009 and joined the Law School’s permanent faculty in 2013. She was tenured after defending her HDR at Sciences Po in 2018 and then served as Director of the Law School’s doctoral program (2018-2022). In 2022 she was elected President of Sciences Po’s Conseil de l’Institut (University Board), and in 2024 she became Dean of Sciences Po’s School of Research. She works at the intersection of several disciplines, particularly law and economics, and has been involved in interdisciplinary projects such as co-founding Sciences Po’s Law and Economics Policy Initiative (LEPI).

Dr Megan Ma is a Research Fellow and the Associate Director of the Stanford Program in Law, Science, and Technology and the Stanford Center for Legal Informatics (CodeX). Her research focuses on the use and integration of generative AI in legal applications and the translation of legal knowledge to code, considering their implications in contexts of human-machine collaboration. She also teaches courses in computational law and insurance tech at the Law School.

Dr. Ma is also currently an Advisor to the PearX for AI program, Editor-in-Chief of the Cambridge Forum on AI, Law, and Governance, Managing Editor of the MIT Computational Law Report, and a Research Affiliate at the Centre for Computational Law at Singapore Management University. Megan received her PhD in Law at Sciences Po and was a lecturer there, having taught courses in Artificial Intelligence and Legal Reasoning, Legal Semantics, and Public Health Law and Policy. She was previously a Visiting PhD researcher at the University of Cambridge and at Harvard Law School.

Teodora Groza is a Ph.D. student and lecturer in antitrust law at Sciences Po Paris. She holds a Bachelor of Law (LL.B.) from the University of Groningen, a Bachelor of Arts (BA) in Philosophy from Babes-Bolyai University of Cluj-Napoca, and an LL.M. in Economic Law from Sciences Po Paris. Her research focuses on the impact of technological changes on the organization of industry, as well as on the relationship between innovation and regulation. She is the Editor-in-Chief of the Stanford Computational Antitrust Journal.