
[INTERVIEW] On the ongoing EU AI Act trilogues and specific provisions covering the regulation of Foundation Models (FMs): questions to Professor Hacker

By Giulia Geneletti

This interview explores the evolution of the discussion on regulating Generative AI models like GPT-4, the three-layered FM governance approach proposed by Prof. Hacker, the recent Franco-German-Italian proposal on “mandatory self-regulation”, and other regulatory aspects such as AI liability, alignment with the DSA, and how to harness Europe’s innovative potential.

1- Setting the context: the European Union Artificial Intelligence (AI) Act

The EU AI Act is a first-of-its-kind proposal to regulate the development and deployment of Artificial Intelligence in the European Union. Its primary goal is to create a comprehensive and uniform legal framework designed to safeguard fundamental rights, ensure user safety, and foster trust in AI development and adoption.

The Act categorizes AI applications into four distinct risk levels, each subject to tailored regulatory provisions:

  1. Unacceptable Risk AI: This category includes AI applications that pose a severe threat to EU values, such as social scoring systems implemented by governments. These applications will be outright banned due to the extreme risks they present.
  2. High-Risk AI: AI systems applied to critical areas – such as employment, education, law enforcement, migration management… – with significant potential to affect people’s safety or fundamental rights. They are deemed high-risk and are subject to stringent requirements, including risk assessment and mitigation, data governance and mandatory conformity assessments, among other provisions.
  3. Limited Risk AI: This category covers AI applications such as chatbots and emotion recognition systems, which primarily face transparency obligations.
  4. Minimal Risk AI: This category encompasses AI systems perceived to pose negligible risk, which face no additional obligations beyond those of existing EU and national legal frameworks.

The proposal for the EU AI Act was initially put forward by the European Commission in April 2021. Since then, the Council and the Parliament adopted their negotiating positions in December 2022 and June 2023, respectively. The ongoing trilogue negotiations between these three key EU institutions aim to finalize and agree upon a text that balances the various interests and perspectives, and are expected to conclude by the end of this year.

2- The advent of Foundation Models (FMs)

The advent of Foundation Models [1] – like OpenAI’s GPT-4, Meta’s LLaMA, Google’s LaMDA, Anthropic’s Claude, … – has significantly influenced the policy discussions surrounding the EU AI Act, particularly in understanding and regulating the broad spectrum of AI risks and opportunities.

Legislators within the EU have been keenly focused on how such models fit into the proposed EU AI Act regulatory framework, especially considering their potential impact on fundamental rights, privacy, and data security. The discussion around the classification of Foundation Models has been particularly challenging, given that their adaptability spans different domains and multiple risk categories. The emphasis has been on developing a balanced approach that fosters innovation and harnesses the potential benefits of AI, while simultaneously instituting robust safeguards against potential harms. However, questions such as the feasibility of implementing the risk framework, the allocation of responsibilities between AI developers and deployers, the release strategy of open-source models, and the stringency of obligations to assess and mitigate risks remain open and highly contested.

  • Professor Hacker’s proposal for a three-layered Foundation Model governance approach

Against this backdrop, Professor Hacker has conducted extensive research on AI governance, the ongoing AI Act trilogue and, most notably, the regulation of Large Generative AI Models (LGAIMs). In his paper “Regulating ChatGPT and other Large Generative AI Models” [2], he advocates three layers of obligations for LGAIMs:

  1. Minimum standards for all LGAIMs (horizontal obligations for all FMs)
  2. High-risk obligations for high-risk use cases (vertical obligations for the specific application)
  3. Collaboration along the AI value chain – to enable effective compliance with the first two sets of rules

On layer 1 – the minimum standards for all LGAIMs would require developers to:

  • Broadly assess the outstanding risks of their Foundation Models.
  • Meet transparency obligations: disclose the provenance and curation of the training data, the model’s performance metrics, any incidents and mitigation strategies concerning harmful content, and the model’s greenhouse gas (GHG) emissions.
  • Ensure data governance and auditing: developers would be responsible for curating training data to ensure the representativeness of protected groups and prevent discrimination.
  • Ensure minimum cybersecurity requirements (Art. 15 AI Act).
  • Assess and disclose information on the sustainability impacts and harms of the models.
  • Implement content moderation: introduce mandatory notice-and-action mechanisms, trusted flaggers and comprehensive audits (following the Digital Services Act – DSA – approach).

On layer 2 – deployers should be liable for complying with the specific high-risk obligations (namely on risk management) contained in the EU AI Act when deploying Foundation Models for the high-risk applications listed in the regulation.

On layer 3 – mandatory collaboration and information-sharing practices should be clearly and effectively introduced in the EU AI Act to ensure compliance among the key actors involved in the development, deployment and use of Foundation Models. This layer would enable effective compliance with the first two layers mentioned above. The problem of balancing collaboration and disclosure with the protection of sensitive information (trade secrets or IP rights) is a trade-off already addressed in other legal frameworks, such as the GDPR and the AI/Product Liability Directives, from which the EU AI Act could take inspiration to establish effective and safe mechanisms.

According to Prof. Hacker, the rationale for imposing general requirements on Foundation Models is to address problems at the root instead of managing them in downstream applications (the cost-avoider argument). The technical reasoning for having developers bear trust and safety obligations is that it is very hard to establish adequate safety layers once the models have been released.

These minimum requirements should also apply to open-source models. Asked specifically about open-source release strategies, Prof. Hacker acknowledged that these models sit on a spectrum between openness and closedness, which can help address the much-debated safety vs. innovation trade-off. However, he believes that highly capable models should not be fully open-sourced to the public, because of the significant risk that they could be reverse-engineered and deployed for harmful purposes by malicious actors. Moderated access, and full access for vetted researchers and actors, should on the other hand be permitted to allow external auditing processes.

Aware of the complexities of allocating balanced and effective responsibilities across developers and deployers, Prof. Hacker stressed that, as deployers are likely to be smaller, less capable and less technologically sophisticated than developers, there should be a feasible allocation of responsibilities along the AI value chain. There is indeed a regulatory dilemma: on one hand, focusing exclusively on developers entails potentially excessive and inefficient compliance obligations; on the other hand, focusing solely on the liability of deployers and users could place a heavy compliance burden on actors with limited insight or resources. In short, Prof. Hacker’s core argument is that shared and overlapping responsibilities, facilitated by legally mandated collaboration mechanisms, are needed.

  • A step back toward self-regulation: the Franco-German-Italian proposal

Just recently (on November 18, 2023), France, Germany and Italy brought the trilogues to a stalemate by publishing a non-paper [3] advocating against strict regulation of Foundation Models, allegedly to avoid hindering Europe’s AI development. Emphasizing “mandatory self-regulation” and codes of conduct, the proposal suggests a much lighter, voluntary governance approach to Foundation Models that would allegedly balance innovation, fundamentally going against the core positions of the Parliament and the Commission.

The Franco-German-Italian proposal has put the EU AI Act at an impasse that the institutions were trying to overcome as this interview was being conducted, especially looking ahead to the crucial trilogue of December 6th.

Prof. Hacker has raised several concerns regarding the proposal. 

He points out a fundamental contradiction in the concept of “mandatory self-regulation,” which mixes voluntary and enforced elements, resulting in a confusing and ineffective regulatory approach, most notably when it comes to implementation. He warns that the proposal might incentivize non-compliance, as entities could find it cheaper and easier to operate with less oversight, and thus potentially choose not to adhere to, or to withdraw from, voluntary commitments (as seen, for example, with Twitter’s withdrawal from the EU Code of Practice on Disinformation in May 2023). In short, according to Prof. Hacker, not only would these voluntary commitments run counter to the general approach taken by the EU in recent years toward mandatory due diligence (namely with regulations like the DSA and the DMA), but above all they would not be sufficient for the type of public safety risks these models bring.

Finally, Prof. Hacker argues that the proposal could penalize ethical organizations and downstream providers. Responsible developers and deployers of AI may face increased costs and bureaucratic challenges, while those ignoring the rules could gain competitive advantages.

3- It once again goes back to “Regulation vs Innovation”: what about the Product Liability Directive? 

The process of producing disclosures, which primarily consists of documenting already known information about products, lends itself well to automation, especially when no product or business alterations are required. However, according to Prof. Hacker, the crux of the matter lies more in the New Product Liability Directive [4] and the AI Liability Directive [5] than in the EU AI Act. These directives pose significant challenges for small and medium-sized enterprises (SMEs), notably due to the rebuttable “presumption of causality” [6], the “logging requirement” [7] and the possibility of the Directives covering software in addition to AI models. These provisions would place a substantial compliance burden on companies, and the considerable legal uncertainty surrounding their concrete implementation has not yet been addressed by policymakers. Given the close ties between the two pieces of legislation, the New Product Liability Directive will likely be passed alongside the AI Act, while the policy process for the AI Liability Directive will likely continue only after the adoption of a final text of the AI Act.

Professor Hacker emphasizes that regulatory requirements must be manageable for AI developers and deployers of all sizes. This approach is vital to prevent market monopolization, thereby safeguarding innovation and consumer welfare. However, since it would be counterproductive to exempt SMEs from the EU AI Act and the Product Liability Directive, according to Prof. Hacker the priority lies in reshaping the EU’s narrative and approach to AI. There is a pressing need for substantial investment (talking numbers – billions of euros), akin to efforts seen in the US, the UK and China – and even Norway. Such funding could propel AI research into the production phase, fostering the development of globally competitive AI products and services.

Another point of reflection is the integration of civil and military research. Some promising AI products, including in Europe, have ties to defense and are increasingly relevant in the current geopolitical climate, as observed in the Russia/Ukraine and Israel/Hamas conflicts. A concerted effort is needed to balance addressing key risks with nurturing this ecosystem for AI development and deployment.

4- Closer scenarios of implementation – Alignment of the AI Act and the Digital Services Act

A milestone and key field of analysis in the EU tech policy landscape is the Digital Services Act (DSA). The DSA is an EU Regulation aimed at supporting a safer and more accountable online environment, applying to all digital services. Having entered into force in November 2022, the DSA sets comprehensive content moderation and due diligence obligations for online platforms (according to a proportionality criterion) to limit the spread of illegal content and illegal products online, increase the protection of minors, and give users more choice and better information. As innovative and full of potential as this regulation is, most of its concrete efficacy and impact will come from its implementation and enforcement, which is currently ongoing.

The interview briefly touched upon the potential areas of convergence between the two regulations, especially considering how the ongoing enforcement of the DSA could already set a legal framework for Foundation Models, given that the AI Act is not expected to enter into force before 2025/2026.

Prof. Hacker identified several key areas of convergence:

  1. Risk Assessment Alignment: For FMs operating on Very Large Online Platforms (VLOPs), it is crucial to ensure that the AI model’s risk assessment under the DSA is sufficiently broad, encompassing the specific risks posed by FMs.
  2. Decentralized Content Moderation: DSA’s mechanisms like notice and action, and trusted flaggers, should extend to Large-Scale AI Models.
  3. Application to Synthetic Media Creation: The DSA’s rules may be relevant to platforms utilizing synthetic media creation, particularly in addressing the challenges posed by LGAIMs in generating hate speech and fake news. These models complicate content moderation due to their sophistication in evading filters, raising concerns about election integrity, civic participation, and overall online safety.

Despite the fact that the DSA was not drafted with LGAIMs in mind, its scope arguably extends to them, especially when used in search engines and social media platforms for content generation.

For instance, the Delegated Regulation on Independent audits under the DSA [8] contains numerous references to Generative AI models. The text states: “Important elements to analyze when assessing compliance with risk assessment and risk mitigation obligations are, in particular, algorithmic systems such as advertising systems, content moderation technologies, recommender systems and other functionalities used by online platforms and search engines relying on novel technologies such as generative models.”

A final text by the end of the year? 

The political landscape surrounding the finalization of the EU AI Act is marked by strong momentum, with several declarations signaling the intention to approve the final text by the end of the year. This ambition aligns with Spain’s priority to conclude the text during its presidency of the Council of the EU. However, the process has experienced some delays, notably due to the complexities brought by the legal integration of Foundation Models and by the recent Franco-German-Italian proposal. The outcome of these developments is still uncertain, and the trilogue meeting scheduled for December 6th is expected to be a pivotal moment, potentially clarifying many of the uncertainties highlighted in this article.

If the text is not approved by the end of 2023, another open question is whether the upcoming Belgian Presidency can exert sufficient influence to push for the approval of the text before the hard deadline of the European Parliament elections in June 2024. Adding to this complexity, Professor Hacker has offered a general assessment, suggesting that the final text is likely to include broad provisions mandating secondary legislation. This approach would delegate significant responsibility to the European Commission to develop Implementing Acts, Delegated Acts and Codes of Conduct, thereby gaining further time to concretely address some of the most technical and contentious issues that the trilogue negotiations have yet to resolve.


Footnotes:

[1] In the context of the EU AI Act, a “foundation model” is specifically defined as an AI model that is trained on a broad set of data at scale, designed for generality of output, and can be adapted to a wide range of distinctive tasks. These models are capable of accomplishing a variety of downstream tasks, even those for which they were not specifically developed or trained. This adaptability implies that each foundation model can be reused in numerous downstream AI or general-purpose AI systems.

[2] Philipp Hacker, Andreas Engel, and Marco Mauer. 2023. Regulating ChatGPT and other Large Generative AI Models. In 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’23), June 12–15, 2023, Chicago, IL, USA. ACM, New York, NY, USA, 12 pages.

[3] Reuters, “Exclusive: Germany, France and Italy reach agreement on future AI regulation”, (November 2023)

[4] The New Product Liability Directive refers to the proposed updates to the existing European Union Product Liability Directive (85/374/EEC), which was originally enacted in 1985. This directive established the principle that producers are liable for damage caused by defects in their products. The updates aim to modernize the directive to address challenges posed by emerging technologies, particularly digital products and services, including those involving artificial intelligence (AI).

[5] The EU AI Liability Directive, proposed by the European Commission in September 2022, seeks to adapt non-contractual civil liability rules specifically for artificial intelligence. The Directive aims to modernize the EU liability framework by introducing new rules that address damages caused by AI systems. This ensures that victims harmed by AI systems receive the same level of protection as those affected by other technologies within the EU.

[6] The presumption of causality provision of the Directives is designed to ease the burden of proof on victims trying to establish that their damage was caused by an AI system or software. The presumption applies under certain conditions. Essentially, this shift in the burden of proof means that, instead of the claimant having to prove that fault or defectiveness led to harm, the onus is on the AI system provider or user to prove that their system did not cause the harm.

[7] Under the Directives, providers and users of high-risk AI systems are required to maintain detailed records about the functioning and deployment of these systems. This includes logging operational data, incidents, and any other relevant information that could be crucial in the event of a malfunction or when harm is caused. The purpose of these records is to facilitate the identification of faults or failures in AI systems and to assist in any subsequent investigations or liability claims. In case of an incident, these logs can be critical in providing sufficient evidence, establishing the cause and assessing liability.

[8] European Commission, Delegated Regulation on independent audits under the Digital Services Act (October 2023).


Prof. Dr. Philipp Hacker is Research Chair for Law and Ethics of the Digital Society at European University Viadrina, Co-Lead of the RECSAI Expert Consortium and General Editor of the “AI in Society” series by Oxford University Press.

Giulia Geneletti is Research Assistant at Sciences Po Chair Digital Governance and Sovereignty


Philipp Hacker ©Heide Fest