In 2024, Convergence Analysis shaped AI regulations, hosted the expert field-building Threshold 2030 conference, published 20 articles and reports on AI governance and AI futures, and raised public awareness of AI risks.
Published March 24, 2025
Impact overview
2024 marked the first full year with the new Convergence Analysis 9-person team. This year we published 20 articles on understanding and governing transformative AI, and our research shaped regulatory frameworks internationally. In the US we provided consultation to the Bureau of Industry and Security that directly informed their proposed rule on reporting requirements for dual-use AI, while in the EU we saw specific recommendations incorporated into the EU AI Act GPAI Code of Practice. We led expert field-building around AI's economic impacts through the Threshold 2030 conference, and around AI scenario modeling via the AI Scenarios Network. Our work reached mainstream media, universities, and over 184,000 views on social platforms.
We organized our activities into three programs: AI Clarity, AI Governance, and AI Awareness.
AI Clarity: performing AI Scenario Planning
AI Governance: producing concrete AI policy recommendations
AI Awareness: raising public awareness of AI risks
Convergence’s mission
Our mission is to design a safe and flourishing future for humanity in a world with transformative AI. We consider this a sociotechnical problem: in addition to addressing AI's technical challenges, governing institutions and the public must be involved in solving it. Our work, following our theory of change, cuts across three interrelated programs:
AI Clarity
We research potential AI development scenarios and their implications to guide AI safety reasoning and discourse. We work to create new fields of inquiry, such as AI Scenario Planning and AGI Economics, by publishing guiding research, coordinating experts, and helping launch new researchers.
AI Governance
We conduct rapid-response research on emerging developments and neglected areas of AI governance, generating actionable recommendations for reducing harms from AI.
AI Awareness
We build public understanding around AI risk through strategic initiatives, including a book on AI futures and a podcast featuring discussions with thought leaders.
List of outputs
AI Clarity
AI SCENARIO PLANNING METHODS
Theories of Victory
AGI ECONOMICS
AI TIMELINES
AI Governance
TECHNICAL CONTROLS AND INFRASTRUCTURE
NATIONAL POLICY FRAMEWORKS
STRATEGIC GOVERNANCE RESEARCH
AI Awareness
Outcomes and impacts in more detail
The AI Clarity program explores future scenarios and evaluates strategies for mitigating AI risks. In 2024, AI Clarity projects (1) addressed gaps in foundational knowledge around AI scenario modeling and gathered practitioners in the field, (2) formalized theories of victory for AI safety work, (3) analyzed consensus on timelines to AGI, and (4) hosted the AGI economics field-building conference Threshold 2030, building on our prior work in AI scenarios. Beyond general field-building in AI safety and governance, we are seeding and coordinating specific high-value fields of inquiry, especially through our work on AI Scenario Planning and AGI Economics.
AI Scenario Planning
Our Scenario Planning work addressed neglected challenges that traditional AI forecasting methods struggle with. We present a complementary approach to forecasting that supports decision-makers preparing for an uncertain future. Our field-building work established the AI Scenarios Network of 30+ researchers across organizations and produced several publications, listed below. This research also directly formed the basis for our Theories of Victory work and our broader research agenda, including the paper AI Emergency Preparedness with external collaborators, and the Threshold 2030 conference.
KEY OUTPUTS
Theories of Victory
The lack of clearly defined success criteria for AI governance makes long-term strategic planning difficult. In 2024 we highlighted the absence of stated theories of victory in AI governance and examined practical preparedness for best- and worst-case scenarios globally. Our work on Theories of Victory and Emergency Preparedness was well received in the research community, drawing strong positive feedback from peers and good engagement on the EA Forum and SSRN. This reception led to a follow-up post, Analysis of Global AI Governance Strategies, in collaboration with Sammy Martin (Polaris Ventures). AI Emergency Preparedness was also presented at AAAI's 39th Conference.
KEY OUTPUTS
AGI Economics
Together with Metaculus and FLI, we hosted the Threshold 2030 conference in Boston (October 2024) to study the economic impacts of near-term TAI. The conference developed practical AI impact forecasting methods and mapped areas of expert consensus and disagreement. This work established new research priorities and cross-organizational collaborations that are now informing new projects at Convergence and partner organizations. A 200-page conference report on the findings was published in February 2025.
KEY OUTPUTS
AI Timelines
We evaluated forecasts, models, and arguments for and against various TAI timelines, and made technical approaches more accessible to researchers and policymakers, concluding that there is further basis for taking short TAI timelines seriously.
KEY OUTPUTS
The AI Governance program develops critical and neglected policy recommendations in AI governance. Our governance work in 2024 produced foundational research into AI governance frameworks as well as specific policy recommendations.
Technical Controls & Infrastructure
We developed foundational regulatory tools for frontier AI oversight using registration systems and technical attribution mechanisms. Our technical control frameworks directly influenced policy development in multiple jurisdictions:
The Training Data Attribution report was based on research originally commissioned by FLF, who gave highly positive feedback on the commissioned work and expressed strong interest in future partnerships.
KEY OUTPUTS
National Policy Frameworks
Our 2024 research examined emerging approaches to national AI governance in the US, China and the EU. Our national policy frameworks had good traction in both academic and policy spheres in 2024. Soft Nationalization saw use from researchers at the US AI Safety Institute, Harvard AI Student Team, and LawAI, and led to us giving a presentation on the topic at Harvard. The State of the AI Regulatory Landscape report had the highest readership of our publications in 2024 and was integrated into BlueDot Impact's AI governance curriculum. This analysis also identified model registration as a neglected area of research, directly informing our subsequent report on the topic.
KEY OUTPUTS
Strategic Governance Research
Our publications in this area explored international coordination, public administration, and the power dynamics between public and private actors. We also led the publication of The Oxford Handbook of AI Governance, totaling 50 chapters by 75 leading contributors, including Anthony Aguirre, Anton Korinek, Allan Dafoe, Ben Garfinkel, and Jack Clark. The handbook, which began development in 2020, has shaped a number of early conversations about AI governance.
KEY OUTPUTS
The AI Awareness program works to increase public understanding of AI risks and how to address them, through books, teaching, and media engagement. In 2024, our work to raise public awareness of AI safety reached major platforms, with coverage across 10 leading media outlets including Politico, Forbes, and CBS. Building a God received early feature coverage from Forbes Books, with additional major-outlet features confirmed. We produced 23 episodes of the podcast All Thinks Considered, featuring leading thinkers in AI and societal betterment, with content generating over 184,000 views on TikTok. We also led two courses at Toronto Metropolitan University, delivered multiple lectures on 'AI and the Future of Humanity,' and received 200+ subscriptions to our newsletter.
KEY OUTPUTS
Operations
Convergence is an international AI x-risk strategy think tank, with team members across the UK, US, Canada, and Portugal. In 2024, we expanded our team from 8 to 9 members: one staff member left and two joined. Harry Day, our first COO, departed, and Michael Keough took up the mantle to lead Operations; Gwyn Glasser joined as a new Researcher. Convergence is funded by individual philanthropists and granting bodies concerned about x-risk, such as FLI and SFF.
2024 BUDGET: $950k
Salaries and associated costs: $768k
Travel: $25k
Threshold 2030 conference: $77k
Other: $77k (including $40k for Building a God publicity and $15k for offices)

FUNDS RAISED IN 2024: $800k
Fulfilled commitments from earlier funders: $300k
Survival and Flourishing Fund: $87k
FLI Power Concentration RFP: $280k
FLI funding for Threshold 2030: $77k
New individual donations: $50k

2025 BUDGET PROJECTION: $875k
Salaries and associated costs: $825k
Travel: $25k
Other: $25k

FUNDS RAISED IN JAN-FEB 2025: $200k
Fulfilled commitments from earlier funders: $200k
2025: January and February outcomes
As this impact review is being published in March 2025, we also outline here the major works we released in January and February 2025:
January 2025
This paper argues that the same assumptions that motivate the US' race to develop ASI also imply that such a race is extremely dangerous. The paper concludes that international cooperation is preferable, strategically sound, and achievable. The project has already elicited strongly positive written feedback and achieved over 400 PDF downloads within a month of publication.

January 2025
In December 2024 we published the largest initiative of the AI Awareness program so far: Building a God. In January 2025 we began the launch of the book, including a set of interviews with major media outlets. We are planning a book tour starting in May, including speaking appointments, interviews, and town hall discussions across the US and Canada.
January 2025
This report proposes a framework for international AGI governance. The study outlines mechanisms for democratic accountability, and notes challenges to international cooperation and risks of power concentration.
February 2025
This 200-page report showcases the outcomes of the conference in detail, highlighting insights from 25 leading economists, AI policy experts, and forecasters who explored three AI advancement scenarios for 2030. The report covers three main components: worldbuilding exercises examining AI's potential economic impacts, economic causal modeling, and forecasting exercises.
February 2025
This 171-page report outlines seven plausible paths to TAI by 2035 through the two mechanisms of compute scaling and recursive improvement. The report argues that the evidence presented for several plausible TAI scenarios motivates preparing for short timelines.
2025: Ongoing initiatives
Continuing into 2025, our largest current initiative is The AGI Social Contract, and other ongoing initiatives of ours include AI and International Security, AGI is Near, AI Scenarios Network, and AI Awareness:
Building a God promotion – Engage with the media and hold a book tour starting in May 2025, with speaking appointments, interviews, and town hall discussions across major cities in the US and Canada. We will film events to create additional content for our podcast and social media platforms, and may collaborate with recognized science communicators.
All Thinks Considered podcast – Produce new episodes that explain complex AI concepts and developments for general audiences, through conversations with a diverse set of experts.
Funding Gaps and opportunities
Convergence's 2025 budget is $875,000, with funding set to run out in June 2025. We need $440,000 to continue operations through year-end and another $440,000 to build a six-month reserve. Beyond these immediate needs, we see some strong opportunities for growing our impact with additional team members:
Three funding scenarios
Below are three simplified funding scenarios: (1) Maintenance, sustaining current operations; (2) Moderate Growth, growing the team by 50%; and (3) Strategic Growth, more than doubling the team size.
Maintenance: $880,000
This baseline funding would enable Convergence to maintain our current team of 9 members for an additional 12 months beyond our current runway, into July 2026.
Moderate Growth: $1,850,000
With this increased funding, Convergence would add 5 team members and extend our runway into January 2027. This scenario represents a balanced near-term growth trajectory.
Strategic Growth: $4,150,000
This scenario would position Convergence to scale sustainably, adding 11 team members to more than double our current size and extending our runway into January 2028.
The case for funding Convergence
This year our small team achieved a significant impact on AI safety through field-defining work, regulatory influence, and cross-sector engagement, on an annual budget of $950k. In 2025 we are improving project prioritization, outreach, and efficiency to further boost our impact, and as of March 2025 we are off to a strong start, with five major publications released. With additional resources to address our funding gaps and opportunities, we believe we can scale up our impact significantly. If you would like to support our projects, please get in touch at funding@convergenceanalysis.org. We accept major credit cards, PayPal, Venmo, bank transfers, cryptocurrency donations, and stock transfers.
Conclusion
In 2024 we launched a new research institute, starting the year with a new team of 8. As a research institute for x-risk reduction and future flourishing, we were attempting to solve some hard challenges. Can we combine efficiency with deep intellectual research? Big-picture research with actionable research? Open academic inquiry with the focus of a startup? And in the end, how do we have a positive impact on x-risk? With the outcomes of the past year, we think we've had some very promising successes.
In 2025, we are continuing our work with The AGI Social Contract and other initiatives, orienting ourselves for a world rapidly approaching transformative AI, and building on our proven research model further to make a greater positive impact.
Thank you to all our collaborators and supporters; we wouldn't be where we are without your help!
Get research updates from Convergence
Leave us your contact info and we’ll share our latest research, partnerships, and projects as they're released.
You may opt out at any time. View our Privacy Policy.