Governance research program
Effective AI Governance to mitigate existential risk
A research program by Convergence evaluating critical & neglected policy recommendations in AI governance
What is AI governance?
Why is it important?
AI Governance
Policies, guidelines, and practices to guide and regulate the use of cutting-edge AI technologies.
These policies most often take the form of governmental legislation, but can be implemented via international coordination, the formation of standalone agencies, universal guidelines, or even industry agreements.
Unlike more established industries, where safeguards and regulations have evolved over time, the breakneck development of frontier AI models unfolds in a near-total regulatory void. These AI systems possess staggering capabilities, fundamentally altering the world we know.
Examples of AI model capabilities
In all of these cases, there are practically no regulations governing who can develop this technology or how it can be used. In this power vacuum, it is essentially up to the AI labs to decide how these tremendous capabilities are distributed and to whom, leaving the safety of society in private hands.
As a result, AI governance is a critical and rapidly growing field intended to manage the potential negative externalities of AI development.
Examples of current AI governance initiatives
We believe that effective AI governance is the most important approach to mitigate the likelihood of a catastrophic global outcome in the next ten years.
Governance Research
What are Governance Recommendations?
For the vast majority of proposed AI governance recommendations, there is no detailed analysis of their feasibility, implementation, effectiveness at reducing risk, or negative externalities.
As a result, we're launching a research agenda focused on producing comprehensive reports that analyze specific recommendations we've identified as critical & neglected.
Governance Policies we are researching:
AI Chip Registration Policies
What does the political landscape look like for the U.S. to require the registration and transfer reporting of key AI chips?
Are there precedent policies that exist?
Read the Paper
Emergency Powers
What types of key levers could governments be given control of to stop the distribution or training of a dangerous AI model?
Should these powers rest in the hands of the government or the private AI labs?
To Be Released
Pre-Deployment Safety Assessments
What domain-specific safety assessments are critical to identify dangerous capabilities of AI models?
How can an organization or agency systematically apply these safety assessments to different models?
To Be Released
KEY CRITERIA WE WILL CONSIDER:
Feasibility
Is this recommendation difficult to implement?
What organization or group of organizations needs to have buy-in for this intervention to work? Is it easy to build consensus?
What is the cost and overhead of creating and maintaining such a policy?
Effectiveness
Does this recommendation meaningfully reduce existential risk or provide guardrails against negative existential outcomes?
Does it successfully target any alternative governance goals other than reducing existential risk?
Are there examples of similar recommendations being effective in the past?
Negative Externalities
Does this recommendation stifle innovation or slow the pace of AI capabilities research?
Does it have adverse geopolitical impacts?
Is there likely to be significant pushback against this intervention based on these externalities?
In our Governance Recommendations program, we're creating a foundational corpus of reports on key governance recommendations we believe are feasible and worth advocating for, starting with the most critical and neglected.
For each of these recommendations, we are evaluating the current societal landscape, providing a detailed proposal for the best course of implementation, and considering key evaluative criteria on the overall viability of the proposal.
We'll be launching a State of the AI Regulatory Landscape Report and a series of technical analyses on specific AI policies in early 2024.