How we work

Building A World-Class Research Institution

The core values, working style, and hiring practices we follow at Convergence

Working at Convergence

We’re an international team spread across five countries: the US, UK, Portugal, Estonia, and Canada.

We typically meet in person 2-3 times per year at conferences, as well as at an annual group retreat. We work primarily in GMT afternoon hours, with schedules outside of that varying by time zone.

We tend to hire self-driven contributors who are capable of leading projects, kicking off new initiatives, and functioning well in an open-ended environment. We look for empathetic, decisive individuals with unique backgrounds who can produce quality output effectively.

Because of the strategic and ever-changing nature of the AI safety landscape, we heavily prioritize flexibility and dynamism in our roles: we recognize that the projects we’re working on now may need to change within the next six months.

We are looking for new team members who strongly exhibit our core values, as we believe these give us a common base for executing and collaborating effectively.

Our Core Values

Impact Focus

Our number one priority is creating tangible outcomes that significantly contribute to mitigating the risks associated with AI development. This value drives us to prioritize actions and strategies with the highest potential for real-world impact. We constantly re-evaluate the effectiveness of our efforts, ensuring that our work translates into practical interventions.

Transparency & Empathy

We engage in open, honest, and straightforward communication at all levels. We believe that effective output requires clear, unambiguous dialogue combined with genuine care and empathy. Communicating directly but empathetically enables us to tackle difficult questions head-on, encourage constructive feedback, and foster a culture of transparency and trust.

Adaptivity & Creativity

AI safety is a nascent field of research, with massive uncertainty around likely scenarios and effective strategies to steer the maturation of AI. Given this landscape, we emphasize regularly reassessing our strategic approach and thinking creatively to overcome challenges. Internally, we encourage a culture where unconventional ideas are welcomed and explored.

Interdisciplinary Thinking

We recognize that the complexities of AI safety cannot be fully addressed through a single lens. Good analysis intersects various disciplines – from hardware expertise, to ethics, to governance policy, to technical alignment. By embracing diverse perspectives and hiring across a variety of fields, we can develop more effective strategies.

How We Hire

Our hiring process consists of three stages: Entry, Core, and Extended.

The Entry stage comprises:

  • a Written Interview, consisting of multiple short questions (1-2hrs);

  • a preliminary role-specific Work Task (1-2hrs);

  • and, should these two written exercises indicate you are a strong fit for the role, a General Interview with our CEO David Kristoffersson to discuss your responses and application generally (1hr).

The Core stage comprises:

  • a two-day Work Test — completing a real-world task, giving you a feel for the role and helping us to understand your skills and attitude — completed remotely, and for which you will be compensated (16hrs);

  • and, should the task submission demonstrate a good fit for the role, three interviews, specifically:

    • a History Interview, covering your background and general experience to date (2hrs);

    • a Focused Interview, covering specific aspects of your experience relevant to the role in more detail (2hrs);

    • and an Alignment Interview, covering compatibility in attitude, philosophy, culture, and so on (1-2hrs).

The Extended stage comprises:

  • a one- to two-day collaborative Work Test — completing work together as a team, and in-person where this is possible — for which you will be compensated, and with any expenses covered (8-16hrs);

  • and discussions with several of your References.

We believe that dedicating considerable time to our hiring process is essential for assembling the most robust team possible. Acknowledging that our recruitment approach is more rigorous than average, we provide financial compensation for the completion of our work tests.

This practice serves a dual purpose: it removes the economic barriers that might deter prospective candidates, and it conveys our respect for the effort and time invested by applicants in these evaluations.

Interested in working with us?

Contact us at jobs@convergenceanalysis.org and we'll be in touch shortly.

Our Research Principles

Pragmatic & Action-Oriented

Our research is grounded in impact and aimed at real-world application. We focus on generating actionable insights that can be readily implemented, such as specific governance policies, detailed threat models, and tangible recommendations.

Systematic & Comprehensive

We conduct our scenario and strategy research according to a systematic framework that details key considerations. When publishing, we highlight and summarize key attributes such as the feasibility, effectiveness, and negative externalities of governance policies, such that policies can be compared in an unbiased and comprehensive manner.

Critical & Neglected

In selecting scenarios and governance interventions, we prioritize critical & neglected topics, doing research where we see important gaps in the current literature. By focusing on these neglected areas, our research is more likely to shift governance practices and the consensus within the AI safety community.

Our Key Audiences

The AI Safety Community

Within the AI safety community, our goal is to build consensus on the most critical AI scenarios, and the optimal governance interventions necessary to improve them. Though we’re focused primarily on reducing existential risk, we’re also identifying ways in which we can most effectively support scenarios that result in humanity flourishing.

To achieve this goal, we’re actively coordinating working groups, seminars, and conferences aimed at bringing together top AI safety researchers, aligning on key research, and presenting those topics externally.

Policymakers & Thought Leaders

For key individuals who will have a massive impact on the success of AI safety, our work aims to summarize complex domains of information accessibly. In particular, we provide a critical reference hub of the key information needed to bring parties up to speed rapidly. Our work allows policymakers to compare and contrast scenarios and interventions, and to learn about the AI safety consensus on important issues.

To achieve this goal, we’re actively writing policy briefs for key governance proposals, reaching out to leading governance bodies, and advising several private organizations. In particular, we’re advocating for a small number of critical & neglected interventions.

The General Public

Meaningful transformation relies on widespread awareness and support. We recognize that to effectively enact governance interventions, we need the endorsement and concern of the constituents behind decision-makers in government. As a result, a high priority of ours is determining how best to distribute and popularize the lessons from our research. Our focus is building awareness of, and a sense of urgency about, the existential risk posed by AI.

To achieve this goal, we’re currently launching a number of initiatives, including a book on AI futures and a podcast with key public individuals on AI outcomes.
