About us

Working towards a safe and flourishing future

Our mission, our history, and the existential risk reduction projects we support

How can we ensure that human civilization survives and thrives through the AI transformation?

The development of frontier artificial intelligence presents both great opportunities and grave perils for humanity.

Already, we’re seeing the massive benefits of these technologies: advancements in research and development, automation of daily tasks, powerful visual tools for creatives, and many more.

However, we have yet to confront the most treacherous downsides of this nascent technology: capabilities dangerous enough to precipitate existential catastrophes such as civilizational collapse, nuclear warfare, or pandemics.

At Convergence, we believe the most critical task on Earth today is to steer the evolution of AI in a direction that continues to advance human productivity and well-being while minimizing the likelihood of existential catastrophe.

Convergence exists to develop and promote insights on how to create a thriving future by minimizing the existential risk from AI.

Read our Theory of Change

We’re building a cutting-edge research institution focused on the following questions:

Which scenarios for the trajectory of AI are most likely and most neglected, and how can we model them?

Which governance strategies are most important for humanity to focus on to reach our desired scenario?

How can we raise awareness of critical governance strategies to create change, in particular by advocating to the AI safety community, policymakers, and the general public?

Our research is deeply interdisciplinary in nature and draws upon insights and methods from philosophy, computer science, mathematics, sociology, cognitive science, and psychology.

To learn about our philosophy around AI safety and how we intend to structure our work to answer these questions, you can take a look at our Theory of Change.

Read our Theory of Change

History

Convergence originally emerged as a research collaboration in existential risk strategy between David Kristoffersson and Justin Shovelain from 2017 to 2021, engaging a diverse group of collaborators. Throughout this period, they worked steadily to build a body of foundational research on reducing existential risk, publishing findings on the EA Forum and LessWrong, and advising individuals and organizations such as Lionheart Ventures.

From 2021 to 2023, we laid the foundation for a research institution and built a larger team. In 2024, Convergence relaunched as a strong team of 10 academics and professionals with a revamped research and impact vision. Timelines to advanced AI have shortened, and our society urgently needs clarity on the paths ahead and on the right courses of action to take.

Advising

Convergence advises organizations and individuals on the ethics of advanced AI. We address questions such as how to ethically invest in AI-related companies, how to design an AI company to minimize harms, and how to assess the safety of AI-related research.

Organizations that we advise directly

Fiscal Sponsorship

We provide fiscal sponsorship to promising existential risk reduction projects. This includes helping non-profit projects receive grants and providing light financial and legal management assistance, enabling these teams to conduct their research more effectively. If you are interested in receiving fiscal sponsorship from us, feel free to contact us here.

Organizations that we provide fiscal sponsorship to

Newsletter


Get research updates from Convergence

Leave us your contact info and we’ll share our latest research, partnerships, and projects as they're released.

You may opt out at any time. View our Privacy Policy.