AI Awareness program

Informing the general public, policymakers, and AI researchers

The public is becoming increasingly aware of the potential risks of AI, but there is limited understanding of how these dangers may manifest in the near future and what society can do to prevent them. Notably, practical solutions for governing AI remain largely unknown to the broader public.

At Convergence, we are working to help bridge this gap by informing the public and policymakers about realistic AI scenarios and governance solutions.

Our Current Projects

The Oxford Handbook of AI Governance

Lead Editor

Justin Bullock

As the capabilities of Artificial Intelligence (AI) have increased over recent years, so have the challenges of how to govern its usage. Developing a robust AI governance system requires extensive collective efforts across prominent stakeholders in academia, government, industry, and civil society.

The Oxford Handbook of AI Governance brings together experts from a wide range of disciplines, areas of study, and cultural backgrounds to provide a global perspective on AI governance.

The handbook delineates the scope of AI governance, covering its theoretical and ethical foundations, frameworks for developing governance structures, practical perspectives across different policy domains, economic analyses, and concrete lessons about the impact of AI governance both domestically and internationally.

It was produced and edited by Justin Bullock, our project lead for Scenario Research.

Building a God

Project Lead

Christopher DiCarlo

We now stand at perhaps the most singular moment in our short history as a species. For we now possess the curiosity, the capacity, and the greed to create an artificially intelligent being (or beings) so intelligent and so powerful that its construction may well bring about the end of our own existence.

Given enough computing power, plus data, plus time, it is inevitable that at some point in our not-too-distant future, we will succeed in creating a form of intelligence that far surpasses our own. And when that time comes – and it really appears to be a matter of when, not if – how will such a being respond to us?

Christopher DiCarlo’s upcoming book “Building a God” explores the consequences of humanity’s progress toward developing an agentic, superintelligent being via machine learning. The book aims to educate and raise awareness about the future benefits, harms, and governance of AI through sound critical thinking and ethical reasoning.

Building a God is scheduled to be published in mid-2024.

All Thinks Considered

Project Lead

Christopher DiCarlo

This podcast explores ideas and issues through the lens of critical thinking. But what is critical thinking? Critical thinking involves the careful analysis of information – ALL information – in an effort to determine the soundness of arguments, the influence of biases, the relevance of context, the need for evidence, and the capacity to identify errors in reasoning known as fallacies.

But critical thinking also instills in us a sense of humility and a genuine commitment to fairness. This allows us to approach topics with heightened curiosity and to listen to all sides of an issue – no matter how controversial or unsavory they may seem to us. When we do this through the lens of critical thinking, we attain a more informed and responsible understanding of the ideas and issues at hand.

We feature some very interesting guests on this podcast, from scientists discussing the future of agriculture and cancer research, to the most powerful leader of the Christian Right in Canada, to some of the leading utilitarian thinkers in the world today. You’ll hear from thought leaders such as Peter Singer, Dr. Lloyd Hawkeye Robertson, Dr. Lenore Newman, and Mick West.

All Thinks Considered is publishing new episodes regularly as of early 2024.