How we work

Building A World-Class Research Institution

The core values, working style, and hiring practices we follow at Convergence

Working at Convergence

We’re an international team spread across five countries: the US, UK, Portugal, Estonia, and Canada.

We typically meet in person two or three times per year at conferences, as well as at an annual group retreat. We meet primarily during GMT afternoon hours, with varying hours outside of that depending on our time zones.

We tend to hire self-driven contributors who are capable of leading projects, kicking off new initiatives, and functioning well in an open-ended environment. We look for empathetic, decisive individuals who have a unique background and are able to produce quality output effectively.

Because of the strategic and ever-changing nature of the AI safety landscape, we heavily prioritize flexibility and dynamism in our roles: we recognize that the projects we’re working on now may need to change within the next six months.

We are looking for new team members who strongly exhibit our core values, as we believe these give us a common base from which to execute and collaborate effectively.

Our Core Values

Impact Focus

Our number one priority is creating tangible outcomes that significantly contribute to mitigating risks associated with AI development. This value drives us to prioritize actions and strategies that have the highest potential for real-world impact. We constantly re-evaluate the effectiveness of our efforts, ensuring that our work translates into practical interventions.

Transparency & Empathy

We engage in open, honest, and straightforward communication at all levels. We believe that effective output requires clear, unambiguous dialogue combined with genuine care and empathy. Communicating directly but empathetically enables us to tackle difficult questions head-on, encourage constructive feedback, and foster a culture of transparency and trust.

Adaptivity & Creativity

AI safety is a nascent field of research, with massive uncertainty around likely scenarios and effective strategies to steer the maturation of AI. Given this landscape, we emphasize regularly reassessing our strategic approach and thinking creatively to overcome challenges. Internally, we encourage a culture where unconventional ideas are welcomed and explored.

Interdisciplinary Thinking

We recognize that the complexities of AI safety cannot be fully addressed through a single lens. Good analysis draws on various disciplines: hardware expertise, ethics, governance policy, and technical alignment. By embracing diverse perspectives and hiring across a variety of fields, we can develop more effective strategies.

Our Key Audiences

The AI Safety Community

Within the AI safety community, our goal is to build consensus on the most critical AI scenarios, and the optimal governance interventions necessary to improve them. Though we’re focused primarily on reducing existential risk, we’re also identifying ways in which we can most effectively support scenarios that result in humanity flourishing.

To achieve this goal, we’re actively coordinating working groups, seminars, and conferences aimed at bringing together top AI safety researchers, aligning on key research, and presenting those topics externally.

Policymakers & Thought Leaders

For key individuals who will have a massive impact on the success of AI safety, our work is intended to accessibly summarize complex domains of information. In particular, we provide the critical reference hub of key information necessary to bring parties up to speed rapidly. Our work allows policymakers to compare and contrast scenarios and interventions, and learn about the AI safety consensus on important issues.

To achieve this goal, we’re actively writing policy briefs for key governance proposals, reaching out to leading governance bodies, and advising several private organizations. In particular, we’re advocating for a small number of critical and neglected interventions.

The General Public

Meaningful transformation relies on widespread awareness and support. We recognize that to effectively enact governance interventions, we need the endorsement and concern of the constituents behind decision-makers in government. As a result, a high priority of ours is determining how best to distribute and popularize the lessons from our research. Our focus is building awareness of, and a sense of urgency about, the existential risk posed by AI.

To achieve this goal, we’re currently launching a number of initiatives, including a book on AI futures and a podcast with key public individuals on AI outcomes.

Newsletter

Get research updates from Convergence

Leave us your contact info and we’ll share our latest research, partnerships, and projects as they’re released.

You may opt out at any time. View our Privacy Policy.