Publications

Recent Publications

Scenario Research

A research program by Convergence that explores potential scenarios and evaluates strategies for controlling the trajectory of AI.

Governance Research

A research program by Convergence evaluating critical and neglected policy recommendations in AI governance.

Advocacy & Education

As one of our key pillars of output, we conduct public education and advocacy to broaden awareness of AI risks and to propose realistic recommendations for mitigating them.

Existential Risk Strategy Research

Given the goal of mitigating existential risk from AI development, what is the best approach to achieving it? Strategy research emphasizes impartially evaluating possible high-level courses of action, considering factors such as resource allocation, domain of focus, and means of implementation.

Information Hazards & Downside Risks

How can we understand the risks of disseminating crucial information, such as the blueprint for a thermonuclear weapon or the genetic sequence of a lethal pathogen? What are the potential negative externalities of well-intentioned actions? We’ve explored these topics in depth at Convergence.

AI Safety

What are the most effective strategies for achieving safe and aligned AI in the near future? We evaluate approaches to measuring and improving AI safety, such as value modeling, alignment techniques, and governance strategies.

Newsletter

Get research updates from Convergence

Leave us your contact info and we’ll share our latest research, partnerships, and projects as they're released.
