
[ARTICLE] California’s SB1047 vs EU AI Act: A Comparative Analysis of AI Regulation


By Florence G’sell, Ashok Ayar and Zeke Gillman

On September 29, 2024, Governor Gavin Newsom vetoed the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (S.B. 1047).[1] The bill would have become the first law in the United States to regulate advanced AI models and impose obligations on companies that develop, fine-tune, or provide compute resources to train such models. Newsom's decision was an anticlimactic end to a legislative effort that saw odd bedfellows unite both in support of and in opposition to the bill. Though the bill sailed through both houses of California's legislature, outside Sacramento S.B. 1047 exposed surprising cleavages over potential approaches to U.S. technology policy. Governor Newsom ultimately returned the bill to the legislature without his signature.[2]

S.B. 1047 represented a significant departure from prior federal and state-level attempts to regulate AI in the United States.[3] It would have introduced several novel requirements for developers of advanced AI models, including mandatory third-party compliance audits, the implementation of a full shutdown capability (a "kill switch"), and the extension of regulatory standards to technology companies of all kinds, including open-source developers. Key technology stakeholders pushed back against these new requirements; others cheered what they saw as urgently needed restrictions. Yet the fight over S.B. 1047 was not a simple divide between industry and lawmakers: within each camp there were considerably different views. Points of disagreement included the effect of regulation on innovation; the appropriate locus for regulation (at the technology or application level, and whether some areas are higher-risk than others); the obligations and liability of developers for released software; and the empirical evidence required to regulate a rapidly evolving technology whose future impact remains unclear.

The supporters of the law included over 100 current and former employees of OpenAI, Google DeepMind, Anthropic, Meta, and other companies. Prominent computer scientists and AI "doomsayers" Yoshua Bengio and the newly minted Nobel Laureate Geoffrey Hinton, who have prominently sounded the alarm about AI's existential or catastrophic risks, also publicly argued for the bill's passage. Anthropic, the AI research lab most associated with AI safety, did not support initial drafts of the bill because it had "substantial drawbacks that harm its [AI technology] safety aspects and could blunt America's competitive edge in AI development." But after further revisions, Anthropic concluded that the final bill's benefits outweighed its costs. Even Hollywood weighed in to support the bill – specifically, the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA). And perhaps most surprising of all, Elon Musk (CEO of xAI, among other ventures) – not known for his love of government – tweeted support for the bill, describing it as a "tough call."

On the opposing side of the battlefield stood equally prominent AI researchers such as Yann LeCun, Fei-Fei Li, and Andrew Ng, who argued that the bill would stifle innovation. An open letter from leading California university AI faculty and graduate researchers argued that the bill would have "broad negative consequences, hamper economic dynamism and weaken California's position as a global AI hub, in the service of questionable, unscientific and hypothetical public benefits." Naturally, most industry participants were staunchly opposed. OpenAI and Meta executives issued public statements in opposition, and the venture capital firm Andreessen Horowitz (a16z) ardently criticized the legislation. Perhaps the biggest surprise in this camp were several notable California federal lawmakers, including Bay Area members of Congress Nancy Pelosi and Zoe Lofgren.

The debate over S.B. 1047 parallels the public discourse surrounding the implementation of the European Union's AI Act. Meta, for example, has mounted an extensive campaign demanding "regulatory certainty" and opposing the EU's regulatory strategy, arguing that the EU will "miss out" on the benefits of AI innovation because of a lack of clarity about the types of data that can be used to train AI models. On both fronts, the common refrain is that these regulatory measures come too early in the technology's life cycle and would accordingly impede innovation on the cusp of a technological revolution.

Despite these shared criticisms, the EU AI Act and S.B. 1047 reflect different regulatory approaches. This article analyzes the similarities and differences between the two legislative frameworks and evaluates the criticisms directed at them.

I. Key features of S.B. 1047 and the EU AI Act

A. Scope

S.B. 1047 aimed to regulate only the most powerful AI models. Specifically, it targeted models trained with computational power exceeding 10^26 integer or floating-point operations (FLOPs) at a cost exceeding $100 million. It also applied to models created by fine-tuning a model meeting that definition, where the fine-tuning used computational power of at least 3 x 10^25 integer or floating-point operations and cost more than $10 million.

The EU AI Act's scope is much broader. The Act includes specific provisions targeting developers of highly capable models, introducing stringent obligations for general-purpose models deemed to pose systemic risks. Systemic-risk general-purpose models are defined, provisionally, as those trained with computational power exceeding 10^25 FLOPs, based on the EU drafters' assumption that such models possess "high-impact capabilities." However, the AI Act's scope extends far beyond this focus. It also governs the activities of developers and deployers of AI applications classified as high risk, a classification based on the sector of use: applications are considered high risk when used in sensitive sectors such as healthcare or hiring.
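
To make the numerical comparison concrete, the following is a minimal Python sketch based only on the thresholds cited above; the function names and the example model's figures are hypothetical illustrations, not part of either text.

# Illustrative sketch only: the thresholds come from the article; the function
# names and the example model's numbers are hypothetical.

SB1047_TRAIN_FLOP = 10**26          # covered model: more than 10^26 operations
SB1047_TRAIN_COST = 100_000_000     # and more than $100 million in training cost
SB1047_FT_FLOP = 3 * 10**25         # covered fine-tune: at least 3 x 10^25 operations
SB1047_FT_COST = 10_000_000         # and more than $10 million in fine-tuning cost
EU_SYSTEMIC_FLOP = 10**25           # EU AI Act: provisional systemic-risk threshold

def covered_by_sb1047(train_flop: float, train_cost: float) -> bool:
    """Would the model have been a 'covered model' under S.B. 1047?"""
    return train_flop > SB1047_TRAIN_FLOP and train_cost > SB1047_TRAIN_COST

def covered_fine_tune_sb1047(ft_flop: float, ft_cost: float) -> bool:
    """Would a fine-tune of a covered model itself have been covered?"""
    return ft_flop >= SB1047_FT_FLOP and ft_cost > SB1047_FT_COST

def systemic_risk_eu(train_flop: float) -> bool:
    """Is the model presumed to have 'high-impact capabilities' under the EU AI Act?"""
    return train_flop > EU_SYSTEMIC_FLOP

# Hypothetical model trained with 2 x 10^25 FLOPs at a cost of $40 million:
print(covered_by_sb1047(2e25, 40_000_000))        # False: below both S.B. 1047 thresholds
print(systemic_risk_eu(2e25))                     # True: above the EU's 10^25 FLOP threshold
print(covered_fine_tune_sb1047(3e25, 12_000_000)) # True: meets the fine-tuning thresholds

As the example suggests, the EU's lower compute threshold sweeps in models that S.B. 1047, with its conjunctive compute-and-cost test, would have left uncovered.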

S.B. 1047 focused exclusively on developers—companies responsible for initially training or fine-tuning a covered model—whereas the AI Act encompasses both developers ("providers") and "deployers" of AI systems. Deployers are defined as entities that operate an AI system under their authority. More specifically, the AI Act applies to providers who market or make AI systems or general-purpose AI models available within the EU, regardless of whether they are established or physically located in the EU. It also covers deployers of AI systems if they are established or located in the EU or if the AI system's outputs are used within the EU.

Like the EU AI Act, S.B. 1047 had an extensive territorial scope. It was intended to apply to any developer making covered models available for use in California, regardless of whether the developer was based in California or whether the models were developed, trained, or offered there. This approach contrasted with the typical presumption that state laws do not apply beyond state borders. Consequently, the bill would have regulated all major AI companies that make powerful models accessible in California.

B. Developers' obligations

The EU AI Act establishes a range of obligations for AI providers and deployers, with requirements tailored to the type of AI models or systems they offer.[4] Providers of “general-purpose AI” (GPAI) models must comply with specific obligations, while those offering GPAI models classified as “systemic risk” face even stricter requirements. Further obligations target AI systems and depend on the AI system’s risk classification: high-risk, “transparency” risk, or low-risk. “Transparency” risk AI systems predominantly refer to generative AI tools, whereas high-risk AI systems are identified based on their application in sensitive sectors. The AI Act also specifies a list of AI systems considered excessively risky and therefore prohibited outright. Examples include social scoring AI systems and systems designed to infer a person’s emotions in workplace or educational contexts. 

Although S.B. 1047 imposed certain obligations on developers of powerful AI models that occasionally mirrored those in the EU AI Act, its underlying rationale was distinct, focusing primarily on the prevention of "critical harms."

1. Duty to exercise reasonable care before training, and prohibition against using or making available unsafe models

S.B. 1047 introduced an obligation requiring developers of covered models to exercise reasonable care to reduce the risk of "critical harms" that their models might cause or "materially enable." "Critical harms" were defined as cases where the model enables the creation or use of weapons of mass destruction—such as chemical, biological, radiological, or nuclear weapons—resulting in mass casualties; where the model is used to carry out a cyberattack on critical infrastructure resulting in $500 million in damages or mass casualties; or where the model acts with limited human oversight and causes death, bodily injury, or property damage in a manner that would be a crime if committed by a human.[5] The bill explicitly prohibited deploying a model if there was an "unreasonable risk" that it could cause or materially enable a "critical harm."[6] By explicitly establishing a duty of care to prevent causing or "materially enabling" critical harms, S.B. 1047 appeared to increase developer liability. In particular, it facilitated the attribution of causality by requiring developers to ensure, as much as possible, that the actions and harms of their models could be accurately and reliably attributed to them.[7]

In contrast, the AI Act introduces new obligations that may result in sanctions for non-compliance but does not specifically modify the general principles of civil liability law, which remain primarily under the authority of member states. The recently revised Directive on Liability for Defective Products, however, explicitly categorizes AI systems as products that can be considered defective, affecting the liability of AI providers but only for a limited set of harms experienced by private individuals: death or personal injury, medically recognized psychological harm, data destruction or corruption, or damage to any property. 

The primary reform of AI providers' liability at the EU level lies in the forthcoming AI Liability Directive, which is currently under intense negotiation. The original proposal for this Directive provides that AI developers or deployers could be liable in cases of non-compliance with "a duty of care laid down in Union or national law" that is "directly intended to protect against the damage that occurred." The proposal further provides that national courts should presume a causal link between the defendant's fault and any harmful output (or failure to produce output) by an AI system under certain conditions, such as when the defendant's fault has been established or when it is reasonably likely that this fault influenced the AI system's output or lack thereof. Though drafted in relatively broad terms, these provisions limit liability to instances where a "duty of care" to prevent specific harm has been breached. Additionally, unlike S.B. 1047, the presumption of causality does not appear to extend developer liability to cases where their models merely enable harm. Consequently, developers should not be held liable for damage caused by users who fine-tune a model for harmful purposes.

2. Implementation of controls designed to prevent covered models from causing "critical harms"

S.B. 1047 provided that developers should implement appropriate safeguards.[8] In particular, before beginning to initially train a covered model, a developer must:

  • Implement a "kill switch" or shutdown capability. S.B. 1047 required developers to implement a mechanism enabling them to "promptly enact a full shutdown" of all covered models and covered model derivatives in their control.[9] Although the scope of this requirement differs, Article 14(4)(e) of the EU AI Act similarly mandates that high-risk AI systems be delivered to the deployer in a manner that allows a human to intervene in their operation, including the ability to interrupt the system using a "stop" button or a similar mechanism that brings the system safely to a halt.
  • Implement cybersecurity protections. S.B. 1047 required developers to implement comprehensive safety and security measures to protect the model's training process: specifically, "reasonable administrative, technical, and physical cybersecurity protections" to prevent unauthorized access, misuse, or "unsafe post-training modifications" of models in their control. These security measures, which must be documented, can be tiered to the risk presented by the type of model.[10] Similarly, Article 15 of the AI Act provides that high-risk AI systems "shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle". Moreover, Article 55 of the AI Act mandates that providers of GPAI models with systemic risk ensure an adequate level of cybersecurity protection for the model and its physical infrastructure.
  • Develop a safety and security protocol (SSP). Under S.B. 1047, this protocol would detail the measures the developer implements to fulfill its duty to exercise reasonable care, such as testing procedures or the circumstances under which the developer would initiate a full shutdown of the model.[11] The AI Act likewise provides that providers of high-risk AI systems must create and document a quality management system in the form of written policies, procedures, and instructions. It may include, among other elements, a strategy for regulatory compliance, technical specifications, systems and procedures for data management, and the details of a post-market monitoring system.

3. Testing, assessment, reporting, and audit obligations

  • Testing and assessment. S.B. 1047 provided that, before using a model or making it publicly available, a developer must assess whether there is a possibility that the model could cause critical harm and must record and retain test results from these assessments in sufficient detail for third parties to replicate the testing. Under the AI Act, providers of high-risk AI systems must establish a risk management system, as provided by Article 9. This system must identify and analyze known and reasonably foreseeable risks that the high-risk AI system may pose to health, safety, or fundamental rights, and estimate and evaluate risks that could arise, particularly if the system is used under reasonably foreseeable conditions of misuse. Additionally, Article 55 requires providers of general-purpose AI (GPAI) models with systemic risks to assess and mitigate potential systemic risks and conduct model evaluations.
  • Compliance statements and third-party audits. S.B. 1047 required developers to submit annual compliance statements to the Attorney General.[12] In addition, beginning in 2026, all developers of covered models would have been required to undergo annual independent third-party audits to ensure compliance with S.B. 1047's provisions. Audit reports would have to be retained for the duration of the model's commercial or public use, plus an additional five years; developers would also have to publish redacted copies of their audit reports and provide unredacted copies to the Attorney General on request. Although the EU AI Act's implementing acts may require third-party participation, the Act does not generally mandate third-party audits, except where a third-party conformity assessment is specifically required. Before deploying a high-risk AI system, providers must conduct a conformity assessment, which can be carried out either by the provider itself or by an accredited independent assessor, known as a "notified body." A third-party assessment is compulsory only in specific instances, such as for biometric systems; otherwise, providers are permitted to self-certify conformity, making the use of an independent third party optional.
  • Incident reporting. S.B. 1047 provided that developers must report safety incidents to the Attorney General within 72 hours of discovery.[13] An AI safety incident is defined as an incident that "demonstrably increases the risk of a critical harm occurring." It must be reported no later than 72 hours after the developer learns that the incident has occurred, or learns facts "sufficient to establish a reasonable belief" that it has occurred. AI safety incidents would include: (1) a model autonomously engaging in behavior other than at the request of a user; (2) theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights; (3) the critical failure of technical or administrative controls, including controls limiting the ability to modify a model; and (4) unauthorized use of a model to cause or materially enable critical harm.[14] Similarly, the EU AI Act mandates that providers of high-risk AI systems implement a system for reporting serious incidents. Providers of GPAI models with systemic risk must also report serious incidents and list possible corrective measures to address them.

Overall, many of the provisions in S.B. 1047 closely resembled those in the AI Act. However, S.B. 1047 lacked the extensive transparency and disclosure requirements present in the AI Act, as its primary focus was on mitigating the risk of critical harm.

C. Sanctions

S.B. 1047 provided that enforcement would be conducted exclusively by the Attorney General and did not include a private right of action. Penalties could be severe: developers could face suspension of their AI model's commercial or public use until they demonstrated full compliance with the bill's safety and security protocols. Civil penalties could amount to up to 10% of the cost of the computing power used to train a covered model, rising to 30% for subsequent violations. Finally, the Attorney General could bring a civil action for violations of the bill that cause death or bodily harm; damage, theft, or misappropriation of property; or imminent public safety risks, and could seek civil penalties, monetary damages (including punitive damages), and injunctive or declaratory relief.

The AI Act also imposes substantial fines for non-compliance, up to 15 million euros or 3% of the preceding year’s total worldwide annual revenue. However, it does not explicitly authorize the suspension of an AI system’s use, though national laws could potentially introduce such measures.
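
To illustrate the arithmetic behind these caps, here is a minimal Python sketch based on the percentages cited above; the helper names, the example training cost, and the revenue figure are hypothetical, and the sketch assumes the higher of the two EU ceilings would apply.

# Illustrative sketch only: the percentages come from the article; the example
# training cost and revenue figures are hypothetical.

def sb1047_max_penalty(training_compute_cost: float, subsequent: bool = False) -> float:
    """Penalty cap under S.B. 1047: 10% of the cost of training compute,
    rising to 30% for subsequent violations."""
    rate = 0.30 if subsequent else 0.10
    return rate * training_compute_cost

def eu_ai_act_max_fine(worldwide_annual_revenue: float) -> float:
    """Fine ceiling cited in the article: up to 15 million euros or 3% of the
    preceding year's total worldwide annual revenue (assumed: whichever is higher)."""
    return max(15_000_000, 0.03 * worldwide_annual_revenue)

# Hypothetical developer: $150 million spent on training compute, 2 billion euros in revenue.
print(sb1047_max_penalty(150_000_000))        # 15,000,000 dollars (first violation)
print(sb1047_max_penalty(150_000_000, True))  # 45,000,000 dollars (subsequent violation)
print(eu_ai_act_max_fine(2_000_000_000))      # 60,000,000 euros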

II. Criticisms

S.B. 1047 drew a number of criticisms, perhaps best captured in letters to Senator Wiener from Y Combinator (YC), Andreessen Horowitz (a16z), and OpenAI. These criticisms can be categorized in several ways, and some mirror those aimed at the AI Act.

  • S.B. 1047 would be based on hypothetical, speculative risks

Opponents argued that S.B. 1047 was based on unproven assumptions about the risks associated with AI, pointing out that there is no scientific consensus on whether, and in what ways, language models or "frontier" AI systems pose a threat to public safety. They questioned whether language models truly provide more assistance to malicious actors than existing resources like search engines. Furthermore, the potential "existential" or "catastrophic" risks from AI remain the subject of ongoing debate within the scientific community, with no definitive conclusion about whether these are the most salient risks at all. Meanwhile, other known harms of AI have actually materialized – deepfakes, misinformation, and discrimination, for example – yet the bill focused on the most hypothetical, speculative risks.

The AI Act does not directly attract the same criticisms, as it does not specifically address “existential” risks, even though it provides a list of sensitive sectors in which AI use is classified as “high risk” and requires the assessment and mitigation of “systemic risks”. However, Mark Zuckerberg and Daniel Ek criticized the European approach to AI regulation, stating: “regulating against known harms is necessary, but pre-emptive regulation of theoretical harms for nascent technologies such as open-source AI will stifle innovation.”

  • S.B. 1047 could increase developer liability

Critics argued that S.B. 1047 introduced a liability risk for model development that significantly deviated from established legal doctrine. However, Senator Wiener disputed this claim, noting that in most states AI developers can already face lawsuits under existing tort law if their models cause or contribute to harm.[15] Wiener even emphasized that S.B. 1047's liability provisions were narrower than existing law: the bill did not create a private right of action, so only the Attorney General could file claims for violations, and even then only if a developer of a covered model failed to conduct a safety evaluation or mitigate catastrophic risks, and a catastrophe subsequently occurred.

Nevertheless, it is important to note that a duty of care for AI developers – one that could render them liable for negligence associated with the use of their models – is not currently recognized in positive law, nor has any court found that such a duty exists. Even if such a duty existed, there is no established industry standard for safety practices that would give it content in the absence of the bill. By codifying such a duty, S.B. 1047 would therefore have, at a minimum, affirmed a legal responsibility not yet recognized in any statute or case, if not created an entirely new one. Moreover, the bill intended to hold developers liable not only for critical harms caused by their models, but also for harms their models "materially enable," expanding the causation requirement beyond that of traditional negligence law. Additionally, the bill's imposition of a tort duty on developers would arguably conflict with the First Amendment protection that code enjoys as speech – though that protection is not absolute.

In Europe, the AI Liability Directive proposal is facing criticism and may not be adopted. Several EU countries, especially France, have raised concerns about conflicts with existing national laws, overlaps with the Product Liability Directive, and the directive’s complexity and misalignment with the AI Act, all of which could lead to legal uncertainty. 

  • S.B. 1047 would adversely affect innovation, particularly hindering the development of open models

Critics of S.B. 1047 argued that the bill's compliance requirements – safety auditing, reporting, and cybersecurity measures – would be prohibitively expensive and could drive developers of all stripes to leave California for a more hospitable environment. They also charged that these requirements would be particularly costly for open model developers. While commercial entities developing models may be able to offset such costs against revenues, the burden is far more onerous for hobbyists, non-profits, small businesses, and academic institutions, which frequently tinker with and release free, open-source software. Consequently, the bill's provisions would likely have discouraged the release of open-source models and open weights, even without doing so expressly. By offering no open model exemption, the bill risked placing substantial financial burdens on parties who could not bear them.

In his recent report for the EU Commission, Mario Draghi, former Prime Minister of Italy and European Central Bank chief, highlights that "the EU's regulatory stance towards tech companies hampers innovation," pointing to an excessively fragmented and complex regulatory landscape with "around 100 tech-focused laws and over 270 regulators active in digital networks across all Member States." He criticizes the EU's "precautionary approach," which mandates "specific business practices ex ante to avert potential risks ex post." For example, "the AI Act imposes additional regulatory requirements on general purpose AI models that exceed a predefined threshold of computational power – a threshold which some state-of-the-art models already exceed." Draghi further notes that regulatory red tape imposes substantial compliance costs. For instance, "limitations on data storing and processing create high compliance costs and hinder the creation of large, integrated data sets for training AI models," thereby placing EU companies at a disadvantage. Overall, Draghi suggests that regulatory simplification is the answer, as he considers the EU's fragmented legal landscape a significant barrier to technological innovation. He concludes that it is "crucial to reduce the regulatory burden on companies. Regulation is seen by more than 60% of EU companies as an obstacle to investment, with 55% of SMEs flagging regulatory obstacles and the administrative burden as their greatest challenge."

III. Will California finally regulate AI?

Governor Newsom’s veto of S.B. 1047 did not stop him from signing several AI bills into law. They address issues related to deepfake nudes, celebrity AI clones, political deepfakes, and AI watermarking. For instance, three related laws targeting deepfakes, particularly those used in an election context, were adopted in September. One of these laws, A.B. 2655 (Defending Democracy from Deepfake Deception Act), requires online platforms with more than 1 million users in California to either remove or label deceptive and digitally altered deepfake content related to elections. A.B. 2355 mandates transparency for AI-generated political advertisements. A.B. 2839 targets social media users who post or share AI deepfakes that could mislead voters about upcoming elections. Other laws require watermarking and labeling of AI-generated content. For instance, S.B. 942 requires widely used generative AI systems to add watermarks to AI-generated content. And S.B. 926 makes it illegal to create and distribute sexually explicit images of a real person that appear authentic when the intent is to cause serious emotional distress to that individual. 

More importantly, Governor Newsom signed into law the Artificial Intelligence Training Data Transparency Act (A.B. 2013), which requires generative AI developers to publicly release documentation about the data used to train their generative AI systems or services. Starting January 1, 2026, generative AI developers must publish this documentation on their public websites for any new or significantly updated generative AI system released after January 1, 2022. The documentation must include details such as the sources or owners of the training data sets; whether the data sets contain any copyrighted, trademarked, or patented data or are entirely in the public domain; and whether the generative AI system or service uses synthetic data generation during its development. In this respect, California is in line with the AI Act, which requires similar transparency from providers of general-purpose AI models.
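
As a rough illustration of the kind of disclosure A.B. 2013 contemplates, the following Python sketch shows what such a documentation record might look like; the field names, system name, and values are hypothetical and do not track the statute's wording.

# Illustrative sketch only: the categories of information come from the article's
# description of A.B. 2013; the field names and example values are hypothetical.
import json

training_data_disclosure = {
    "system_name": "ExampleGen-1",            # hypothetical generative AI system
    "release_date": "2026-03-01",
    "datasets": [
        {
            "source_or_owner": "Example Web Crawl Consortium",  # hypothetical source
            "contains_copyrighted_data": True,
            "contains_trademarked_or_patented_data": False,
            "entirely_public_domain": False,
        },
    ],
    "uses_synthetic_data_generation": True,
}

# Publish-ready JSON rendering of the disclosure record.
print(json.dumps(training_data_disclosure, indent=2))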

Finally, Newsom has not dismissed the possibility of implementing broader regulations aimed at advanced models. In his veto statement, he acknowledged that California "cannot afford to wait for a major catastrophe to occur before taking action to protect the public." The Governor committed to working with legislators, academics, and other partners to "find the appropriate path forward, including legislation and regulation," stating: "safety protocols must be adopted. Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable." Senator Scott Wiener, the author of S.B. 1047, after accusing critics of spreading misinformation, expressed his willingness to collaborate on drafting a new bill.


Florence G’sell is Visiting Professor at the Stanford Cyber Policy Center, where she leads the program on Governance of Emerging Technologies.

Ashok Ayar is a Research Fellow with the Stanford Cyber Policy Center’s Program on Governance of Emerging Technologies.

Zeke Gillman is a Research Associate with the Stanford Cyber Policy Center’s Program on Governance of Emerging Technologies.


[1] Cal. S.B. 1047 (2024), https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047

[2] He explained that S.B. 1047 regulated AI models based only on their cost and size, rather than their function, and failed to "take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data." See: Veto message of Gavin Newsom, September 29, 2024.

[3] See Florence G'sell, Regulating Under Uncertainty: Governance Options for Generative AI, Stanford Cyber Policy Center (2024), section 5.3.3, https://cyber.fsi.stanford.edu/content/regulating-under-uncertainty-governance-options-generative-ai.

[4] G’sell, Regulating Under Uncertainty: Governance Options for Generative AI,, at 5.1.2.

[5] Sec. 22602(g)(1). 

[6] Sec. 22603(c). 

[7] Sec. 22603(b)(4). 

[8] Sec. 22603(b)(3).

[9] Sec. 22603(a)(2).

[10] Sec. 22603(a)(1).

[11] Sec. 22603(a)(3).

[12] Sec. 22603(f).

[13] Sec. 22603(g).

[14] Sec. 22602(c).

[15] This is debatable, but the view has currency with many legal scholars, such as Lawrence Lessig, who holds that S.B. 1047 merely codifies pre-existing negligence theories of liability. See also Ketan Ramakrishnan et al., US Tort Liability for Large-Scale Artificial Intelligence Damages, RAND Corp., pp. 14-19 (Aug. 21, 2024), https://www.rand.org/pubs/research_reports/RRA3084-1.html