Contact Us
A core tenet of ours is effective collaboration. We recognize that in the AI safety community, our work overlaps with and supports that of other organizations as we pursue our collective goal of mitigating risk.
For general inquiries: contact@convergenceanalysis.org
Researchers & Research Organizations
At Convergence, we’re developing systematic, comprehensive research on AI scenarios (link here) and governance interventions (link here).
If you’re working on similar or overlapping research projects, we hope to find synergistic ways for our research to consider, support, and integrate the work of fellow researchers.
Contact us at research@convergenceanalysis.org
Lobbying & Policy Organizations
One of the primary outputs of our research on governance interventions (link here) is a set of improved governance recommendations for policymaking bodies.
If your organization is interested in translating research into legislation, lobbying policymakers to adopt AI governance policies, or developing policies whose implications require in-depth research, reach out and we can find ways for our team to support your work.
Contact us at policy@convergenceanalysis.org
Partnerships on Workshops & Conferences
Though the AI safety community is still small, there is relatively little consensus on the best strategies for mitigating risks from AI. Furthermore, the relevant parties are now spread widely across non-profits, industry, academia, and government.
We’re currently planning more coordination efforts such as workshops, seminars, and conferences for 2024. If you’re looking to collaborate, we’d love to find a time to chat.
Contact us at partnerships@convergenceanalysis.org
Funding Partners
We’re currently entering a new funding round to continue scaling our organization and achieve our stated goals. In 2024, we’re producing a foundational corpus of research on the most critical and neglected AI scenarios (link) and the most feasible governance strategies in response to these scenarios (link).
If you’re an individual or represent an organization funding Effective Altruism or AI safety initiatives, reach out below and we’d love to share more information about our future plans.
In particular, we’re looking for funders who are focused on accelerating the production of high-quality, detailed research on AI scenarios and specific policy recommendations to better guide the direction of AI governance.
Contact us at funding@convergenceanalysis.org