Published by Convergence Analysis, this series is designed to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current state of AI regulation.
Structure of AI Regulations
In this section, we’ll discuss a multifaceted, high-level topic: How are current AI regulatory policies structured, and what are the advantages and disadvantages of their choices? By focusing on the existing regulatory choices of the EU, US, and China, we’ll compare and contrast key decisions in how each government classifies AI models and how it organizes its AI governance structures.
What are possible approaches to classifying AI systems for governance?
Before passing any regulations, governments must answer for themselves several challenging, interrelated questions to lay the groundwork for their regulatory strategy:
Complicating the matter, even defining precisely what constitutes an AI system is challenging: as a field, AI today encompasses many different forms of algorithms and structures. You’ll find overlapping and occasionally conflicting definitions of what constitutes “models”, “algorithms”, “AI”, “ML”, and more. In particular, the latest wave of foundational large language models (LLMs such as ChatGPT) goes by varying names under different governance structures and contexts, such as “general-purpose AI (GPAI)”, “dual-use foundation models”, “frontier AI models”, or simply “generative AI”.
For the purposes of this review, we’ll rely on an extremely broad definition of AI systems from IBM: “A program that has been trained on a set of data to recognize certain patterns or make certain decisions without further human intervention.”
There are various viable approaches to sorting AI models or algorithms into “regulatory boxes”. Many of these approaches may overlap with one another, or be layered to form a comprehensive, effective governance strategy. We’ll discuss some of them below:
Certain regulatory approaches may combine two or more of these classifications. For example, the US Executive Order sets a lower compute threshold for mandatory reporting for models trained primarily on biological sequence data, combining compute-level and application-level classifications.
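To make this layering concrete, here is a minimal sketch in Python of how the Executive Order’s dual reporting thresholds combine: roughly 10^26 training operations for general models, and 10^23 for models trained primarily on biological sequence data. The function and constant names are our own illustrative inventions, not anything defined by the order itself.

```python
# Illustrative sketch of layered classification rules, loosely based on the
# reporting thresholds in the 2023 US Executive Order on AI. Names and
# structure are hypothetical; consult the order's text for exact criteria.

GENERAL_THRESHOLD_OPS = 1e26  # total training operations, general models
BIO_THRESHOLD_OPS = 1e23      # lower threshold for biological-sequence models

def requires_reporting(training_ops: float, primarily_biological_data: bool) -> bool:
    """Combine a compute-level rule with an application-level rule."""
    threshold = BIO_THRESHOLD_OPS if primarily_biological_data else GENERAL_THRESHOLD_OPS
    return training_ops >= threshold

# Example: a model trained with 5e23 operations on biological sequence data
# must report, while the same compute budget on general data falls below the bar.
assert requires_reporting(5e23, primarily_biological_data=True)
assert not requires_reporting(5e23, primarily_biological_data=False)
```

The point is structural: the application-level attribute (biological data) selects which compute-level rule applies, so the two classifications compose rather than compete.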
Point of Regulation
Closely tied to this set of considerations is the concept of point of regulation – where in the supply chain governments decide to target their policies and requirements. Governments must identify the most effective regulatory approaches to achieve their objectives, considering factors such as their level of influence and the ease of enforcement at the selected point.
The way AI systems are classified under a government’s regulatory framework directly informs the methods it employs for regulation. That is, the classification strategy and the point of regulation are interdependent decisions that shape a government’s overall regulatory strategy for AI.
As an example:
Two important dimensions in designing regulatory structures for AI governance
How should a government structure its AI governance, and what factors might that depend on? We’ll outline several relevant considerations here, and return to them when discussing specific governments’ approaches to legislation.
Centralized vs. Decentralized Enforcement
In a centralized AI governance system, a single agency or regulatory body may be responsible for implementing, monitoring, and enforcing legislation. Such a body may be able to operate more efficiently by consolidating technical expertise, resources, and jurisdiction. For example, a single agency could coordinate more easily with AI labs to design a single framework for regulating multi-functional LLMs, or better fund technically complex safety evaluations by hiring leading safety researchers.
However, such an agency may fail to effectively account for the varied uses of AI technology, or lean too far towards “one-size-fits-all” regulatory strategies. For example, a single agency may be unable to simultaneously and effectively regulate use cases of LLMs in healthcare (e.g. complying with HIPAA regulations), content creation (e.g. preventing deepfakes), and employment (e.g. preventing discriminatory hiring practices), as it may become resource-constrained and lack domain expertise. A single agency may also be more susceptible to regulatory capture by AI labs.
In contrast, decentralized enforcement may spread ownership of AI regulation across a variety of agencies or organizations focused on different concerns, such as the domain of application or method of oversight. This approach might significantly improve the application of governance to specific AI use-cases, but risks stretching agencies thin as they struggle to independently evaluate and regulate rapidly-developing technologies.
Decentralized governmental bodies may not take ownership of novel AI technologies without clear precedent (such as deepfakes), and key issues may “slip between the gaps” of different regulatory agencies. Alternatively, they might attempt to overfit existing regulatory structures onto novel technologies, with disastrous outcomes for innovation. For example, the SEC’s attempt to map emerging cryptocurrencies onto its existing definition of securities has led it to declare that the majority of cryptocurrency projects are unlicensed securities subject to shutdown.
Vertical vs. Horizontal Regulations
A very similar set of arguments can be applied to the regulations themselves. A horizontally integrated AI governance effort (such as the EU AI Act) applies new legislation to all use cases of AI, effectively forcing all existing AI models to comply with a wide-ranging, non-specific set of regulations. Such an approach can provide a comprehensive, clearly defined structure for new AI development, simplifying compliance. However, horizontally integrated policies can also be criticized for “overreaching” in scope by applying regulations too broadly before legislators have developed expertise in managing a new field, potentially stifling innovation as a result.
In contrast, vertical regulations may be able to target a single domain of interest precisely, focusing on a narrow domain like “recommendation algorithms”, “deepfakes”, or “text generation” as demonstrated by China’s recent AI regulatory policies. Such vertical regulations can be more straightforward to implement and enforce than a broad set of horizontal regulations, and can allow legislators to concentrate on effectively managing a narrow set of use cases and considerations. However, they may not account effectively for AI technologies that span multiple domains, and could eventually lead to piecemeal, conflicting results as different vertical “slices” take disjointed approaches to regulating AI technologies.
How are leading governments approaching AI Governance?
China
Over the past three years, China has passed a series of vertical regulations targeting specific domains of AI applications, led by the Cyberspace Administration of China (CAC). The three most relevant pieces of legislation are:
The language used by these AI regulations is typically broad, high-level, and non-specific. For example, Article 5 of the Interim Generative AI Measures states that providers should “Encourage the innovative application of generative AI technology in each industry and field [and] generate exceptional content that is positive, healthy, and uplifting”. In practice, this wording extends greater control to the CAC, allowing it to interpret its regulations as necessary to enforce its desired outcomes.
Notably, China created the first national algorithm registry in its 2021 Algorithmic Recommendation Provisions, focusing initially on capturing all recommendation algorithms used by consumers in China. Because the registry defines “algorithm” quite broadly, it often requires organizations to submit many separate, detailed reports for the various algorithms in use across their systems. In subsequent legislation, the CAC has continually expanded the scope of this algorithm registry to include updated forms of AI, including all LLMs and AI models capable of generating content.
What are the key traits of China’s AI governance strategy?
China’s governance strategy is focused on tracking and managing algorithms by their domain of use:
China is taking a vertical, iterative approach to developing progressively more comprehensive legislation, by passing targeted regulations concentrating on a specific category of algorithms at a time:
China strongly prioritizes social control and alignment in its AI regulations:
China has demonstrated an inward focus on regulating Chinese organizations and citizens:
The EU
The European Union (EU) has concentrated almost all of its AI governance initiatives in a single piece of legislation: the EU AI Act, formally adopted in March 2024. Initially proposed in 2021, this comprehensive legislation aims to regulate AI systems based on their potential risks and to safeguard the rights of EU citizens.
At the core of the EU AI Act is a risk-based approach to AI regulation. The act classifies AI systems into four categories: unacceptable risk, high risk, limited risk, and minimal risk. Unacceptable risk AI systems, such as those that manipulate human behavior or exploit vulnerabilities, are banned outright. High-risk AI systems, including those used in critical infrastructure, education, and employment, are subject to strict requirements and oversight. Limited risk AI systems require transparency measures, while minimal risk AI systems are largely unregulated.
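As a rough illustration of this tiered structure, consider the sketch below. It is a simplified summary in Python, not the Act’s actual legal test; the example systems and the obligations attached to each tier are condensed from the description above.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk categories, with simplified obligations."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict requirements and oversight"
    LIMITED = "transparency measures"
    MINIMAL = "largely unregulated"

# Hypothetical example systems mapped to tiers, condensed from the summary above.
EXAMPLE_SYSTEMS = {
    "behavioral manipulation tool": RiskTier.UNACCEPTABLE,
    "CV-screening software for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```

In practice, assigning a real system to a tier requires legal analysis of its intended use and context; the value of the tiered design is that obligations scale with assessed risk rather than applying uniformly.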
In direct response to the public emergence of foundation models, beginning with the launch of ChatGPT in late 2022, the Act includes clauses specifically addressing the challenges posed by general-purpose AI (GPAI). GPAI systems, which can be adapted for a wide range of tasks, are subject to additional requirements, including being categorized as high-risk systems depending on their intended domain of use.
What are the key traits of the EU’s AI governance strategy?
The EU AI Act is a horizontally integrated, comprehensive piece of legislation implemented by a centralized body:
The EU has demonstrated a clear prioritization of the protection of citizens’ rights:
The EU AI Act implements strict and binding requirements for high-risk AI systems:
The US
In large part due to legislative gridlock in the US Congress, the United States has taken an approach to AI governance centered on executive orders and non-binding declarations by the Biden administration. Though this approach has key limitations, such as the inability to appropriate funding for new programs, it has resulted in a significant amount of executive action over the past year.
Three key executive actions stand out in shaping the US approach:
What are the key traits of the US’ AI governance strategy?
The US’ initial binding regulations focus on classifying AI models by training compute and on regulating hardware:
Beyond export controls, the US appears to be pursuing a decentralized, largely non-binding approach relying on executive action:
US AI policy is strongly prioritizing its geopolitical AI arms race with China: