Conference Report

Threshold 2030

Modeling AI Economic Futures

A two-day conference held in October 2024 that brought together 30 leading economists, AI policy experts, and professional forecasters to rapidly evaluate the economic impacts of frontier AI technologies by 2030.

Part 2: Economic Causal Models

On the second day of the conference, we conducted a series of Economic Causal Modeling exercises. These exercises were intended to be a lightweight method for attendees to model the key variables, relationships, and cruxes involved in their economic worldviews.

We conducted these exercises with two primary goals in mind:

To understand the priorities and mental models of participants when evaluating economic impacts of AI systems.

To identify useful economic metrics and observables that effectively represent economic impacts from AI systems.

Through this process, we intend to lay the groundwork for future related projects that would approximate, track, and forecast such metrics and observables. We hypothesize that more visibility into these variables will allow researchers to better measure and communicate the increasing economic impacts of AI systems.

Methodology

For the economic causal modeling exercises, we asked attendees to self-select into groups based on the economic category they were most interested in: Growth, Inequality, or Quality of Life. This resulted in attendees dividing into five groups: 2 groups evaluating Growth, 2 groups evaluating Inequality (1 inter-country group, and 1 intra-country group), and 1 group evaluating Quality of Life.

Attendees stayed in these groups for the remainder of the exercises, and all work was conducted in a collaborative group format, in contrast to the individual outputs during the Worldbuilding exercises.

We conducted the following exercises in three parts, with each part building directly from the previous one. In the following sections, we’ll describe the methodology used to produce these diagrams. Then, we’ll share a summary and breakdown of the diagrams created by each of the five groups in the Results section.

Part 1: Building a High-Level Model

This is an example we shared with attendees of a simple Economic Causal Model created for Part 1 of our exercises.

Once each group was formed, group members jointly identified a relevant top-level metric that described their category most effectively. For example, we suggested the following:

Growth: GDP, GNI, Growth Rate

Inequality: Gini coefficient, Palma Ratio

Quality of Life: Human Development Index (HDI)

Next, groups were asked to identify a wide range of variables that would be relevant for impacting these top-level metrics. We suggested the following examples, but emphasized for attendees to incorporate any form of variable that they thought was relevant, not just traditional economic metrics:

Labor Automation

New Jobs from AI

Total Factor Productivity

Capital Productivity

Life Expectancy

Consumer Demand

Finally, we asked the groups to jointly create a directed graph that would represent the relationships between the variables and the top-level metric they selected, following the approximate shape of the diagram above. We encouraged groups to:

Use multiple levels of depth regarding variables

Think outside the box and use unconventional approaches

Identify existing economic models that could correspond to economic variables or the relationships between variables (e.g. the Solow–Swan growth model, or the Cobb-Douglas model of production).
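As a reference point for the last suggestion, the Cobb-Douglas production function (which also underlies the Solow–Swan growth model) is the standard textbook way to express output in terms of total factor productivity, capital, and labor; any of these terms could map onto a node in a group's diagram. The equations below are the generic textbook forms, not a specification used by any particular group.

```latex
% Cobb-Douglas production: output Y from TFP A, capital K, and labor L
Y = A\,K^{\alpha}L^{1-\alpha}, \qquad 0 < \alpha < 1

% Solow-Swan capital accumulation with savings rate s and depreciation rate \delta
\dot{K} = sY - \delta K
```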

Part 2: Deep-Dive Into Variables & Observables

This is an example we shared with attendees of a simple Economic Causal Model created for Part 2 of our exercises.

In Part 2, we asked groups to deep-dive into one of the most relevant intermediate variables that impacted their top-level metric. Specifically, we asked them to create a new, more detailed economic model on an intermediate variable that met the following criteria:

It would be a critical input to your economic model in Part 1. For example, labor productivity is a critical variable that leads directly to growth.

It would be a variable with a lot of uncertainty in the current literature.

It would be heavily influenced by upcoming AI systems.

When creating the new model, we suggested that groups could do some or all of the following to create a more detailed, useful model:

Specify the intensity of the relationship between two variables (+, +++, - - -)

Break a variable down into its subcomponents, such as:

Skill level (high vs. low skill levels)

Sector and occupation

Region (developing vs. developed countries)

Focus on their domains of expertise, and not worry about making the models 100% comprehensive

Finally, we introduced the concept of observables: measurable variables that can be observed in the real world. As opposed to latent variables which can only be inferred through proxy measures, observables provide a direct, measurable approach to incorporate real-world changes into economic causal models. For example, “whether the accuracy of a radiological AI system is greater than an average radiologist” would be an observable that may directly impact the “number of radiologists”.

We asked that groups incorporate observables around AI capabilities or AI outcomes as the lowest layer of their economic causal models in Part 2. That is, they should suggest observables that would allow us to infer or predict changes to the economic variables higher up in their models. In the models displayed in this report, observable variables will be colored blue.
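To make the structure of these exercises concrete, the sketch below shows one minimal way such a model could be represented as a directed graph, with observables flagged as the lowest layer and edges annotated using the workshop's intensity notation. All node names and intensities are illustrative placeholders, not values taken from any group's diagram.

```python
# Minimal sketch of an economic causal model as a directed graph.
# Node names, edge intensities, and observable flags are illustrative
# placeholders, not values taken from any group's actual diagram.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    observable: bool = False  # observables form the lowest layer of the model

@dataclass
class CausalModel:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (source, target, intensity)

    def add_node(self, name: str, observable: bool = False) -> None:
        self.nodes[name] = Node(name, observable)

    def add_edge(self, source: str, target: str, intensity: str) -> None:
        # intensity uses the workshop notation: "+", "+++", "- - -", etc.
        self.edges.append((source, target, intensity))

model = CausalModel()
model.add_node("GDP growth")                                    # top-level metric
model.add_node("Labor productivity")                            # intermediate variable
model.add_node("Radiology AI exceeds average radiologist", observable=True)

model.add_edge("Labor productivity", "GDP growth", "+++")
model.add_edge("Radiology AI exceeds average radiologist", "Labor productivity", "+")
```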

Part 3: Model Predictions

This is an example we shared with attendees of a simple Economic Causal Model created for Part 3 of our exercises.

In Part 3, we asked groups to create some quantitative predictions for variables in their models from Part 2, assuming that they were in Scenario 3 (the scenario with the most rapid timelines to powerful AI systems).

Specifically, we invited groups to make a copy of the model they developed in Part 2, and add in red several of the following components:

Specifying the dates (e.g. 2027 or 2030) when a certain observable or variable would be measured

Specifying the rate of diffusion (e.g. 3 years) between layers of their model

Specifying the intensity of the relationship between two variables (+, +++, - - -)

Specifying a prediction in Scenario 3 for the value (or percentage change) of specific observables and variables by 2030, when applicable
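Continuing the illustrative sketch from Part 2, these Part 3 annotations could be attached to nodes as simple records; every date, diffusion lag, and predicted change below is a hypothetical placeholder rather than an attendee prediction.

```python
# Illustrative Part 3 annotations for the Part 2 sketch. All dates, lags, and
# predicted changes are hypothetical placeholders, not attendee predictions.
scenario_3_annotations = {
    "Radiology AI exceeds average radiologist": {"measured_by": 2027},
    "Labor productivity": {"diffusion_lag_years": 3, "predicted_change_by_2030": "+10%"},
}
```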

Finally, we invited attendees to write a few paragraphs about key takeaways, lessons, and interesting observations from conducting all of these exercises. We will share some of these observations and thoughts below in the Results section.

Results

Attendees created over 15 different economic models across 5 groups on a variety of top-level metrics of economic health. We summarized this work into a single primary model per group and 1-2 supporting submodels, where the primary models described the following top-level metrics:

Group 1: Total Factor Productivity

Group 2: Economic Diffusion of AI

Group 3: Palma (Income) Inequality

Group 4: GDP of Developing Countries

Group 5: OECD Better Life Index

You can read the original texts created by attendees explaining these models in our Appendix (All Economic Causal Model Notes).

Group 1: Total Factor Productivity Model

Figure: Total Factor Productivity Model

Model Overview

Neoclassical economics has long served as the dominant framework for understanding how markets utilize technology, capital, labor and resources to produce goods and services. Building on this foundation, Group 1 developed a comprehensive Total Factor Productivity (TFP) model that examines how AI technologies influence economic productivity through various market factors, knowledge accumulation, and human capital development. The model illustrates how AI-integrated economic productivity is shaped by both macro-level factors (like market structure and resource allocation) and micro-level factors (such as human labor effectiveness and AI capabilities). Two key submodels, which will be explored in detail in subsequent sections, provide deeper analysis of specific productivity drivers.

Notably, the TFP model suggests that AI's impact on market efficiency is not uniformly positive. The authors observe that AI could either enhance or impair factors like transaction costs and economies of scale depending on implementation. The model emphasizes regulation's potential role in promoting efficient markets by fostering competition and innovation while preventing monopolistic control. It also indicates that more competitive, open industries tend to adopt AI technologies faster, leading to earlier productivity gains. These elements underscore the importance of thoughtful regulation and strategic policy initiatives.

Summary of Authors' Notes

The authors focus their analysis on two critical drivers of Total Factor Productivity: Knowledge Accumulation and Human Capital Quality. The Human Capital Quality submodel focuses on the degree to which humans are equipped with the skills, education and management quality to effectively use AI systems. A particular focus is the evolution of AI supervisory roles - the authors note that success will depend heavily on social acceptance and effective monitoring systems, with varying levels of human oversight needed based on AI reliability. They propose that AI task management capabilities could be evaluated through specific performance benchmarks.

The authors identify several key uncertainties in their model. A significant challenge is the gap between rapid technological advancement and slower real-world adoption in production and R&D, influenced by partial automation capabilities and societal constraints. They note that AI's application in R&D could create powerful feedback loops, potentially leading to recursive improvement and even technological breakthroughs. The model also grapples with complex nonlinear relationships, particularly in how AI quality interacts with market structure and capital allocation dynamics.

Model Mechanics

The parent TFP model focuses on five key areas that will influence economic productivity:

1. Human Capital Quality, which will be predominantly determined by the extent to which workers develop the necessary skills to effectively utilize AI. Key factors include education, skill matching, management quality, and institutional capital.

2. AI Capital Quality and Reliability, which will be directly enabled by advancements in robotics, sensors, training data, compute, and algorithmic progress.

3. Knowledge Accumulation, which will be affected by the adoption rate of new technologies, innovation and collaboration structures, and the intensity & efficiency of R&D efforts.

4. Market Structure, which includes factors such as the general openness of trade, direct competition, and industry concentration.

5. Resource Allocation, which impacts productivity based on the mobility of human labor and the effective allocation of capital. AI labor mobility, which could be practically infinite, will reshape current resource allocation practices.

These five fundamental factors ultimately drive Total Factor Productivity, with AI capabilities playing a pivotal role through their impact on innovation, knowledge and market efficiency. However, the authors emphasize that realizing these potential gains depends heavily on government policy fostering competitive, open markets that encourage innovation and broad-based productivity growth.
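Because the group's top-level metric is TFP itself, it may help readers to recall the conventional measurement of TFP as the Solow residual: the portion of output growth not explained by measured capital and labor inputs (the same "indirect measurement" the authors describe in the Knowledge Accumulation submodel below). This is the standard textbook definition, not a formula proposed by the group.

```latex
% TFP as the Solow residual of a Cobb-Douglas economy
A = \frac{Y}{K^{\alpha}L^{1-\alpha}}
\qquad\Longrightarrow\qquad
\frac{\dot{A}}{A} = \frac{\dot{Y}}{Y} - \alpha\frac{\dot{K}}{K} - (1-\alpha)\frac{\dot{L}}{L}
```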

Submodel: Knowledge Accumulation

Figure: Knowledge Accumulation Submodel

Knowledge Accumulation is identified as one of the most influential of the five TFP factors. The accumulation of general, scientific, and technological knowledge is naturally a key driver of economic growth, as it enables humanity to better develop and use new technologies, products, services, and research methods. This creates powerful feedback loops that compound knowledge, boost productivity, and accelerate innovation: essential elements for sustained economic growth.

The Knowledge Accumulation submodel identifies three primary channels for knowledge accumulation: technological adoption and diffusion, innovation networks, and R&D.  For example, it describes how technology adoption and diffusion are shaped by both the portability of AI solutions and AI transferability. This suggests that the easier it is to implement and adapt AI technologies across different contexts, the more widespread their adoption becomes.

The authors emphasize the central role of intellectual property (IP) frameworks, highlighting the need for balanced policies that protect innovation while enabling knowledge diffusion. The model also underscores the importance of capital funding in driving AI development and adoption across research disciplines. Additionally, the model explores secondary factors such as cross-domain research impacts and breakthrough discoveries that can spawn entirely new industries or capabilities.

The authors prefaced their model by establishing two primary methodologies for measuring technological and knowledge advancement:

Direct Measurements are tangible measurable indicators of innovation. Examples would include patents and IP that protect new inventions. In some cases this would also include peer-reviewed published research articles.

Indirect Measurements correspond to aggregate economic productivity, using Total Factor Productivity (TFP) as a general measure of how efficiently we use economic resources such as labor, capital, and technology to produce goods and services.

The diagram reveals several important interdependencies:

Innovation networks respond dynamically to IP constraints and incentives, underlining the importance of well-designed intellectual property frameworks.

The fraction of labs/teams using AI and the fraction of disciplines using AI contribute positively to research workflow integration, indicating that broader adoption creates compounding benefits across domains.

Cross-domain research capability and breakthrough research enhance R&D efficiency, suggesting that AI's capacity for novel interdisciplinary insights will significantly drive productivity gains.

Research communities require enhanced methodologies, metrics, and information-sharing capabilities to better quantify technological advancement and AI's impact on innovation. Open-access research publications and open-source software platforms can facilitate both AI system development and the dissemination of AI-enabled discoveries. While patent protection remains crucial for incentivizing R&D investment, policymakers must address the risk of AI systems concentrating critical intellectual property among a small number of entities.

Submodel: Human Capital Quality

Figure: Human Capital Quality Submodel

This submodel identifies key elements that contribute to the overall quality of human capital and its impact on economic productivity. Rather than viewing AI solely as a labor replacement, the model emphasizes the importance of human-AI complementarity. It maps the dynamic interactions between institutional capital, skill differentiation, and AI system reliability, showing how these factors collectively influence economic outcomes. For example, as AI systems become more reliable and tasks can be better decomposed, the need for direct human supervision decreases, allowing workers to focus on higher-value activities. These relationships suggest that productivity gains in an AI-enabled economy extend beyond pure technical efficiency, depending heavily on how human capabilities and organizational structures adapt to new operational paradigms.

The model centers on Human Capital Quality as the top-level metric, shaped by interactions across four main variables: Institutional Capital, Skill Matching, Education, and Management Quality. It demonstrates how human capital in an AI economy emerges from deeply interdependent systems, where institutional frameworks, management approaches, and educational structures create reinforcing feedback loops to sustain productivity. In detail:

Institutional Capital serves as a foundational enabler, shaping incentive structures and trust-building mechanisms while potentially reducing inefficiencies in supervision and task execution. It creates the conditions necessary for effective skill-AI matching and efficient oversight.

Skill Matching reflects the alignment between human capabilities and emerging AI-oriented roles. Variables like AI reliability and skill heterogeneity demonstrate how task complexity and variability influence this alignment, with more reliable AI reducing human intervention requirements while improving the performance of complex multi-agent systems.

Education functions as an adaptability mechanism, equipping workers with knowledge and competencies that complement AI workflows. Elements like AI-enabled teaching and adaptive testing illustrate how educational interventions can both reduce costs and accelerate integration.

Management Quality links human capital to workplace outcomes through structures like task decomposition, workplace monitoring, and AI supervision.

The model identifies four key observable inputs that could be measured in order to forecast future dynamics. These include:

1. The length of task chains before AI derails and becomes unusable

2. The tested variability of humans when using AI productively

3. AI social skills, abilities and aptitudes

4. Human acceptance of AI management
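As a simple illustration of how the first observable might be operationalized: if each step of a task chain is assumed to succeed independently with probability p, then an n-step chain completes with probability p^n and the expected number of successful steps before the first failure is p/(1-p). Both the independence assumption and the example reliability below are illustrative, not figures from the group.

```python
# Toy operationalization of observable 1 ("length of task chains before AI derails").
# Assumes each step succeeds independently with probability p; the assumption and
# the example value of p are illustrative only.

def chain_completion_prob(p: float, n: int) -> float:
    """Probability that an n-step task chain completes without a failure."""
    return p ** n

def expected_chain_length(p: float) -> float:
    """Expected number of successful steps before the first failure."""
    return p / (1 - p)

print(chain_completion_prob(0.95, 20))   # ~0.36
print(expected_chain_length(0.95))       # 19.0
```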

This submodel demonstrates that effective human decision-making and adaptability, bolstered by strong institutional frameworks, are fundamental to AI-driven economic growth. This framework offers organizations and policymakers a structured approach to key actionable variables, from AI-aligned education investments to inclusive management practices. By systematically addressing these factors, stakeholders can build resilient human capital systems capable of thriving alongside advancing AI technologies.

Group 2: Economic Diffusion of AI Model

Figure: Economic Diffusion of AI

Model Overview

Group 2’s model presents a framework analyzing how AI technology will be adopted throughout the U.S. economy, and how that diffusion will impact key economic factors. The authors describe the complex interplay between technological access, regulatory environments, implementation challenges, and cost considerations that will ultimately determine the proportion of cognitive work likely to be fulfilled by AI systems.

The model's holistic approach integrates both technical and socioeconomic factors, examining how differences between open-source and closed-source AI, regulatory frameworks such as the FDA's, and human trust in AI will influence automation trajectories. It highlights that effective AI diffusion will depend on managing implementation challenges (e.g. workforce adaptation, data digitization) alongside technological advancements.

Summary of Authors' Notes

Initially, the authors focused strictly on growth variables, but soon recognized that the most significant driver of GDP would likely be the general diffusion of AI throughout the economy. The authors realized that even if all cognitive tasks were potentially automatable, there would still be important barriers that restrict adoption. As a result, they focused on measuring the proportion of cognitive work that would effectively be automated via AI by 2030.

The authors found there would likely be many factors slowing AI diffusion. The group focused on both India and the United States, selecting India for its growing young population, less developed AI industry, and variation in socio-economic circumstances. The group further theorized that diffusion could be more rapid in India due to the size of its service industry.

Model Mechanics

The authors believe that the share of AI-automated cognitive work by 2030 will be most directly impacted by four key factors:

1. Access, which will be driven by factors like global geopolitics or the gap between open and closed AI models

2. Regulation, which could include issues such as new labor protection laws or relaxed restrictions around clinical trials

3. Implementation, which could be impacted by human trust in AI systems or the amount of digitized job-relevant data

4. Cost, which is determined by energy costs, the algorithmic efficiency of AI models, and the ratio of human to AI costs.

The group suggested that two contributing factors to AI Access would be the accessibility of at least one closed frontier model and the capabilities gap between closed frontier models and minimally comparable open-source models. Accessible frontier models or small capabilities gaps would lead to more AI diffusion.

The group identified Regulation as a key driver of AI diffusion, focusing on the percent of jobs that are illegal to automate and the potential relaxation of clinical trial regulation by the FDA. Increasing the number of jobs with legal protections against AI automation would decrease the share of AI-automated cognitive work in the private sector.

Unlike other categories, Implementation was identified as a top-level category that would directly impact all other top-level categorical factors. The group identified three key factors that specifically feed into Implementation: the percent of digitized job-relevant data, the percent of AI specialist jobs filled within 2 months, and the percent of tasks that humans trust AI to perform.

Lastly, the authors noted that cost-parity of AI with equivalent human labor would be a critical crux for AI diffusion, asserting that cheaper human labor would directly inhibit AI adoption. Relevant factors include the energy costs of AI systems and the cost-per-token of output.
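To illustrate the cost-parity crux, the toy comparison below computes the cost of completing a single task with an AI system (via cost-per-token) versus a human worker (via hourly wage). Every figure, including token counts, prices, wages, and task time, is a hypothetical placeholder rather than a number from the group's model.

```python
# Toy cost-parity check between AI and human completion of a single task.
# All figures (tokens, prices, wages, task time) are hypothetical placeholders.

def ai_cost_per_task(tokens_per_task: float, price_per_million_tokens: float) -> float:
    return tokens_per_task * price_per_million_tokens / 1_000_000

def human_cost_per_task(hours_per_task: float, hourly_wage: float) -> float:
    return hours_per_task * hourly_wage

ai_cost = ai_cost_per_task(tokens_per_task=50_000, price_per_million_tokens=10.0)  # $0.50
human_cost = human_cost_per_task(hours_per_task=0.5, hourly_wage=30.0)             # $15.00
print(f"AI/human cost ratio: {ai_cost / human_cost:.3f}")  # lower values favor AI adoption
```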

Critical Success Factors

According to the authors, three critical cruxes for the rate of diffusion of AI include:

1. Capabilities: What types of tasks can AI systems practically replace?

2. Trust & Reliability: Do people trust AI to do tasks correctly? Are AI systems reliable enough to earn the trust of human parties?

3. Speed of Retooling: How quickly can businesses and industry restructure their workflows to include AI systems? What changes need to be made to leverage AI technologies?

   a. The authors note that inertial factors are strong and the cost of doing nothing is typically low, other than competitive pressures.

Submodel: The Cost of Using AI Systems

Figure: The Cost of Using AI Systems

This model discusses various factors that impact the socioeconomic costs of using AI. It breaks the topic down into four main submodels: Economics, Geopolitics, Culture, and Regulation.

Economic Submodel

This submodel shows how labor dynamics, market factors, macroeconomic factors, and capabilities affect the economic cost and value of integrating AI.

Key factors impacting the economic integration of AI include access to infrastructure (e.g. power and internet access), existing technological maturity and access to credit.

The authors suggest that openness and competitive pressures can create positive feedback loops within the economic system. They indicate that market structure and overall economic health play crucial roles in improving AI adoption and minimizing costs.

Geopolitical Submodel

This model discusses how national security interests directly impact openness and competitive pressures – for instance, by limiting the free global trade of cutting-edge AI semiconductors. The focus on access/import barriers highlights how geopolitical tensions can significantly impact technology transfer and adoption rates. AI diffusion will depend significantly on international relations alongside technical and economic factors.

Cultural and Demographic Submodels

The Culture submodel emphasizes digital literacy as a key factor in the public's trust in AI, while the Demographics submodel suggests that talent quality, especially among younger workers, will shape a country's level of AI adoption.

Regulatory Submodel

This model suggests that legal barriers, ease of doing business, and industry policies will be key factors shaping the regulatory environment for AI. A regulatory environment friendly to AI technologies will lead to significantly lower costs of adoption, but may have ancillary societal consequences such as increased labor displacement or wealth inequality.

Group 3: Palma Inequality Model

Figure: Palma Inequality Model

Model Overview

The Palma ratio was chosen as the primary metric by Group 3 because it provides a useful indicator of income inequality, particularly in societies where high-income households drive income polarization.

Information

The Palma ratio compares the income share of a society's wealthiest 10% to that of the poorest 40%. This metric provides valuable context for economic inequality: developed countries average approximately 1.5, with more egalitarian societies like Scandinavia achieving ratios around 1.0, while highly unequal regions such as Southern Africa exceed 3.0. By focusing on the extremes of income distribution, the Palma ratio effectively highlights disparities that can significantly impact social cohesion and economic stability.

In the context of AI's economic impact, the Palma ratio serves as an aggregate measure of how technological advancement affects income dynamics. The model  describes how the interactions between capital income, labor income, and technological advancements directly influence inequality, and shows that these factors will be shaped by income tax structures and the income-based distribution of labor participation.
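Given the definition above, the Palma ratio can be computed directly from decile income shares, as in the short sketch below; the decile shares used are illustrative placeholders, not data for any real country.

```python
# Palma ratio from decile income shares (index 0 = poorest 10%, index 9 = richest 10%).
# The shares below are illustrative placeholders, not data for any real country.

def palma_ratio(decile_shares: list[float]) -> float:
    """Income share of the richest 10% divided by the share of the poorest 40%."""
    assert len(decile_shares) == 10
    return decile_shares[-1] / sum(decile_shares[:4])

shares = [0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.09, 0.11, 0.16, 0.37]
print(palma_ratio(shares))  # 0.37 / 0.14 ≈ 2.6
```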

Summary of Authors’ Description and Notes

After evaluating various inequality metrics, the group identified several critical intermediate variables affecting income disparities: elasticity of labor, labor participation, social mobility, tax rates, access to AI, changes in equity markets, capital-labor income gaps and general growth.

Their analysis of potential scenarios highlighted two primary factors driving changes in inequality: the capital-labor income gap and the elasticity of substitution between human and AI labor. As AI represents a form of capital, its capabilities and applications directly influence capital's income-generating potential, particularly through stock market valuations. The authors noted that returns on non-AI capital assets, including real estate, would also significantly impact overall capital returns.

The authors suggested that changes in labor income will largely depend on substitution elasticity between human and AI workers, which can be determined by:

The proportion of automatable job tasks

The cost savings from automation

Non-financial integration costs, including losses in tacit knowledge and increased coordination requirements

The group emphasized that unemployment and labor participation rates will be significantly influenced by workers' ability to reskill and adapt to evolving job markets. Additionally, they discussed how changes in labor supply and progressive taxation would affect the distribution of labor income across tax brackets, ultimately impacting overall inequality measures.

The authors specifically cite the potential vulnerability of the ‘gig’ services economy, as well as service-oriented job functions with narrow focus such as therapists, call center service representatives and professional drivers.

Model Mechanics

Group 3's Palma model examines how AI technologies affect economic inequality through their competition with human labor. The model analyzes the redistribution of economic value between labor and capital, demonstrating how AI adoption could exacerbate existing wealth disparities. The authors project that AI integration will likely suppress wages and reduce labor's share of income, while simultaneously driving capital income growth through reduced labor costs and increased productivity.

The model identifies social mobility as both a direct contributor to the Palma ratio and a key driver of total labor income, working in tandem with labor mobility. The Social Mobility Index (SMI) correlates strongly with changes in labor income distribution. When labor income becomes more concentrated among high-income workers, it can create a negative feedback loop: reduced social mobility for low-wage workers who face increasing barriers to accessing better-paying jobs, education, and advancement opportunities.

Note

The Social Mobility Index (SMI) measures individuals' ability to improve their socioeconomic status within a country over time. While its relationship with labor income and inequality measures extends beyond this model's scope, the SMI aggregates multiple factors including education, income, and occupation to provide a comprehensive assessment of social mobility. Higher SMI scores indicate greater ease of socioeconomic advancement, making it a valuable tool for identifying barriers to opportunity and informing policy interventions.

The model identifies three categories of factors influencing the Palma ratio:

1. Labor Factors, including access to AI technologies, social mobility indicators, and labor mobility measures

2. Capital Factors, including the marginal costs of AI implementation and stock market dynamics

3. Substitutability of Labor, including AI-human labor substitution rates, net labor displacement (accounting for both automation and augmentation), and changes in labor force participation

The model demonstrates how AI could fundamentally reshape both organizational economics and broader market dynamics. At the microeconomic level, it shows how businesses may shift investment from human labor to AI capital. At the macroeconomic level, it suggests that market forces and cash flows will increasingly favor capital owners, potentially widening the divide between wealthy and working-class populations.

Critical Success Factors

The model suggests that increasing access to AI technologies and reducing implementation 'friction' for the lower and middle classes may be instrumental to improving equality and social mobility. The democratization of access to AI technologies may become central to promoting equality in future tech-enabled industries. By making AI more accessible and user-friendly for the lower and middle classes, individuals would be able to leverage AI to improve their economic prospects.

In their discussions, the authors openly question the complex dynamics of inequality. They mention that incentives that contribute to overall growth oftentimes lead to greater inequality. However, such incentives may simultaneously improve economic outcomes for the poor. The authors raise the philosophical question of whether it is acceptable for inequality to increase if the lower classes also benefit economically.  The ideal but most challenging scenario is one in which economic growth is accompanied by policies that promote greater equality and social mobility.

Submodel: Capital-Labor Income Gap

Figure: Capital-Labor Income Gap

The Capital-Labor Income Gap Submodel analyzes the growing divide between returns on capital and labor income in an AI-driven economy. This widening gap reveals fundamental structural imbalances in economic value distribution as automation reshapes global markets. This analysis examines the diverging dynamics of capital income and labor income, which together define the top-level metric: the capital-labor income gap.

On the capital side, the rising value of assets and improved productivity from automation increase returns to financial and physical capital. These gains disproportionately benefit capital owners, widening the income gap. On the labor side, income dynamics are influenced by displacement and polarization caused by automation. As low and middle-skill jobs are replaced or devalued, labor income declines, especially when firms favor AI due to its decreasing marginal cost relative to human labor.

If cognitive work becomes highly substitutable with automation, labor-market conditions will fundamentally shift, tilting economic benefits toward capital. These economic shifts create cascading effects: wage suppression, diminished labor mobility, and stagnant job creation in middle-skill sectors. However, the model also identifies potential mitigating factors through active policy interventions. For example, strategic investments in worker reskilling programs could help workers adapt to new roles and reduce the income gap.

The model incorporates several quantifiable metrics drawn from public data sources. Asset indices, such as those for real estate and stock markets (e.g. the S&P 500), provide standardized measures of capital returns. Productivity gains could be assessed through corporate financial reports. Observable measures of labor income and job displacement could be obtained through public federal agencies. For example, unemployment rates are published monthly by the Bureau of Labor Statistics. Likewise, per-sector changes of contract vs. salaried employees can be derived from public company reports as well as federally aggregated employment statistics.

The authors identify average income per employee as well as proportional changes in employment (e.g. call-center representatives) as a potential approximation of the proportion of job tasks that are substituted with AI systems. Lastly, the authors suggest tracking the marginal price of AI systems relative to human labor as a key metric for predicting automation adoption rates. These measurable inputs would enable researchers and policymakers to monitor and forecast changes in the capital-labor income gap.
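As one simple way to track the submodel's headline metric from public aggregates, the sketch below computes the labor share of income and a capital-labor gap from total labor and capital income; the dollar figures are placeholders, not published statistics.

```python
# Toy calculation of the labor share and a capital-labor income gap from aggregate
# national-income figures. The example amounts are illustrative placeholders.

def labor_share(labor_income: float, capital_income: float) -> float:
    return labor_income / (labor_income + capital_income)

def capital_labor_gap(labor_income: float, capital_income: float) -> float:
    """Capital income minus labor income, expressed as a share of total income."""
    return (capital_income - labor_income) / (labor_income + capital_income)

print(labor_share(12.0, 8.0))        # 0.6  (e.g., trillions of dollars)
print(capital_labor_gap(12.0, 8.0))  # -0.2 (negative: labor still earns the larger share)
```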

Applying the Capital-Labor Income Gap to the Transportation Industry

Figure: Applying the Capital-Labor Income Gap to the Transportation Industry

This submodel examines how AI may impact the capital-labor income gap within a specific use-case: the transportation industry. It highlights a network of automation-driven economic shifts, where AI empowers capital owners (e.g.,  owners of autonomous vehicle companies) to extract greater income, while labor-intensive roles (e.g. independent vehicle drivers, dispatchers, and some traditional mechanics) may face displacement and wage suppression. The transition to AI-driven services, such as self-driving cars, automated dispatch systems, and robotic car repairs, skews economic rewards toward capital at the expense of labor.

The model suggests that gains in automated sectors, coupled with rising real estate prices, amplify capital returns and consolidate wealth among autonomous vehicle companies. Meanwhile workers face disruption as tasks tied to driving, dispatching, and car maintenance become increasingly replaceable by AI.

This transformation fundamentally changes the industry's reliance on human workers. As an example, the emergence of robotic systems (e.g. in vehicle maintenance and repair) will create new markets while enabling large, vertically integrated companies to disrupt traditionally fragmented service sectors. This consolidation threatens to further suppress wages and deepen the divide between capital and labor.

Two key factors influence the rate of automation: relative cost of AI vs. human labor and rider preferences for human-driven versus automated experiences. Lower AI costs incentivize firms to adopt automation, further reducing reliance on human workers and shrinking labor’s share of income in the sector.

Addressing the widening income gap will likely require significant interventions, from worker retraining programs to mechanisms for more equitable distribution of automation-driven gains. The transportation industry thus serves as a microcosm for broader economic shifts driven by AI advancement.

Group 4: GDP of Developing Countries Model

Figure: Model of the GDP of Developing Countries

Model Overview

The economic model created by Group 4 briefly describes the effect of AI systems on the GDP of developing countries. It discusses how developing countries may receive new revenue streams by providing many of the essential inputs required to build AI systems, such as natural energy resources, rare earth minerals, and data. It also discusses AI's benefits to developing countries through productivity gains and long-term tourism by remote workers. However, the authors acknowledge the likelihood of negative economic outcomes as well, such as reduced demand for goods and services traditionally sourced from developing countries.

For both policymakers and the general public, one of the authors' most applicable insights is that developing countries will not merely be passive recipients of AI technology, but will also play central roles in the global AI supply chain. The authors posit that developing countries would benefit from preemptive interventions such as policy incentives to stimulate activity in particular sectors. They suggest that the net impact on the GDP of developing countries will depend on how well they can balance new AI opportunities against the potential automation of their established economies.

Model Mechanics

The model identifies various complex factors that could both increase or inhibit GDP growth. For example, the cost of AI could inhibit growth and increase downstream costs. Conversely, increased AI integration may improve productivity domestically.

The model also incorporates the possibility that productivity tools in developed countries may serve as a substitute for services commonly outsourced to emerging economies. This could lead to lower international demand, posing a risk to traditional service and manufacturing sectors. Thus, it may prove crucial for developing countries to find methods to maintain a competitive advantage for their outsourced services.

The authors predict that developing countries which embrace the sale of AI inputs and leverage remote work opportunities will see superior GDP growth. However, the model cautions about the possible downsides if these countries fail to adapt to shifting demand patterns. Developing countries must balance a complex mix of factors to leverage AI and embrace productivity enhancements while mitigating disruptions to exports and diversifying their economic activity.

Critical Success Factors

The authors identify several pivotal elements and key cruxes that warrant further research. A primary consideration is the risk of increasing in-country inequality, as AI enhances productivity unevenly across sectors. While public health improvements and educational enhancements are expected, the benefits might be disproportionately accessible to urban populations, exacerbating the rural-urban divide. Another key crux is the anticipated decline in business process outsourcing (BPO) and offshoring, which are traditionally major economic contributors in many developing countries. With AI reducing outsourcing in these areas, demand could decrease, necessitating strategic adjustments in the service sector.

To create more equitable benefits from AI, developing countries should consider investing in digital infrastructure, particularly in rural areas, to bridge the digital divide. They should also prioritize educational programs for digital literacy and AI-related skills, especially for demographics that would otherwise fall behind.

Furthermore, developing nations stand to benefit from public-private partnerships that encourage collaboration to both stimulate employment and facilitate AI adoption in sectors that might not otherwise adopt it. Joint partnerships such as these can drive R&D, fund innovation, and ensure AI projects align with public goals.

Submodel: India’s Upcoming AI Economy

Figure: India’s Upcoming AI Economy

The authors chose to use India as a test case to develop opinions on the implications of AI on inter-country inequality. They drafted extensive notes totalling 18 pages on how AI systems may impact India’s economy. The group gave particular attention to the interplay between India’s three major economic sectors - services, industries, and agriculture - which respectively contribute roughly 50%, 30%, and 20% to the country's current GDP.

The authors note the substantial size of India’s agricultural and informal sectors, as well as the significant role of general services, which constitute half of India’s total GDP.  They theorize that this position may render India more susceptible to AI disruption as compared to countries with manufacturing-dominant economies. Despite India's substantial tech-literate workforce, current service sector productivity remains below expected levels. They suggest that AI could be the catalyst needed to overcome structural barriers currently limiting India's service economy performance.

Challenges such as limited internet access and the urban-rural divide (with 70% of the population in rural areas) could restrict the adoption of AI technologies. Political and infrastructure barriers were also identified as factors limiting India's potential technological advancement, despite the country offering a considerable talent pool of chip designers and software developers.

The primary uncertainty within this model is in how global competitive pressures will influence India’s economic reforms and AI adoption rates. The degree to which international competition will compel India to embrace technology-driven change remains unclear, influenced by political dynamics, infrastructure development, and protectionist policies. Access to more comprehensive data on internet penetration, literacy, and regional policy impacts would help clarify this model’s predictions.

Group 5: Quality of Life Model

Figure: Quantifying the OECD Better Life Index

Model Overview

AI systems hold the potential to significantly influence nearly all factors relevant to our general quality of life (QOL). Covering issues such as mental health, physical health, environmental quality, and economic stability, quality of life is perhaps the single most important measure for understanding how AI can affect the human condition. Group 5 oriented their model specifically around the OECD Better Life Index, which was originally developed to provide a comprehensive understanding of quality of life beyond traditional economic indicators like GDP.

Context

The OECD Better Life Index uses data from national and international sources to provide evidence-based insights into how countries perform on various aspects of life. It covers various life dimensions, including housing, income, jobs, community, education, environment, governance, health, life satisfaction, safety, and work-life balance. The index is flexible and can accommodate subjective preferences by adjusting the importance of different dimensions.  The index not only allows for the comparison of well-being across OECD member countries, but also changes to the well-being of a country over time. It aims to encourage policy-makers to consider a wider range of indicators of well-being when designing policies.
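To illustrate the adjustable weighting described above, the sketch below combines normalized dimension scores with user-chosen weights; the scores and weights are made-up placeholders rather than actual OECD data, and the real index involves additional normalization steps.

```python
# Minimal sketch of a Better-Life-style composite index: normalized dimension
# scores (0-10) combined with user-chosen weights. Scores and weights are
# made-up placeholders, not actual OECD data.

def composite_index(scores: dict[str, float], weights: dict[str, float]) -> float:
    total_weight = sum(weights[d] for d in scores)
    return sum(scores[d] * weights[d] for d in scores) / total_weight

scores = {"housing": 6.5, "income": 5.0, "jobs": 7.0, "health": 8.0, "life_satisfaction": 6.0}
equal_weights = {d: 1.0 for d in scores}
health_focused = dict(equal_weights, health=3.0)

print(composite_index(scores, equal_weights))   # simple average: 6.5
print(composite_index(scores, health_focused))  # shifts toward the health score: ~6.9
```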

Group 5’s model describes how advanced AI technologies could reshape fundamental aspects of human well-being over the next five years, using the OECD Better Life Index as the top-level metric. It identifies several crucial mechanisms by which AI systems could negatively impact QOL.

First, it highlights potential disruptions to human connection and mental well-being, suggesting that AI could fundamentally alter how we interact and find purpose in work and relationships. Second, it points to emerging physical safety concerns, including novel health threats and changes in law enforcement. Third, it shows how AI might transform labor markets not just through automation, but by shifting the nature of human-oriented work itself.

For the general public, perhaps the most important takeaway is that AI's impact extends far beyond simple job displacement - it could reshape nearly every aspect of life, from social support networks to our sense of purpose and agency, in ways that demand careful consideration and proactive policy responses.

Author’s Notes

The authors readily acknowledged that they anticipate major long-term harms from AI, including mass unemployment, the spread of pathogens, nanotechnology risks, and rising rates of depression and anxiety. They suggest that anxiety and other mental health issues may become normalized. The authors also assumed a rise in people declaring mental health disabilities, particularly among younger generations, who are more likely to remain living at home with their parents due to economic instability. They acknowledge AI's potential to support mental health through tools like cognitive behavioral therapy, but argue that such efforts only mitigate existing conditions rather than addressing underlying causes.

One author emphasized an interest in measures of self-reported agency and sense of purpose, noting that individuals no longer participating in the workforce might experience a feeling of uselessness. The authors noted that a lack of meaningful civic participation and social mobility will likely contribute to feelings of isolation and detachment. Finally, while many focus on unemployment and mental health, the authors make the distinct point that more work should be done to investigate the physical public health impacts of AI, such as those from nuclear conflict or nanotechnology.

The authors noted that the rate of AI diffusion in research was a topic of disagreement among team members, and identified several key uncertainties and areas for further exploration, including:

The likelihood of war and its impact on their model

The evolution of international conflict and its exacerbation by AI

The longitudinal aspect of their metrics

In terms of measurables that would improve their predictions, the authors suggested:

Labor participation and unemployment rates as indicators of quality of life

Civil unrest and regime stability as indicators of physical safety

Model Mechanics

The authors identified five key metrics impacting quality of life:

1. Physical Safety, which includes both direct risks such as lethal autonomous weapons or war / conflict, and also indirect systemic consequences such as social manipulation, geopolitical instability, and civil unrest

2. Physical Health, which may be impacted by novel health threats and/or a lack of physical movement

3. Mental Health, which may be impacted by a reduced sense of purpose, increased social safety nets, or an improved work-life balance

4. Economic Health, which will be driven by job security, economic stability, earnings, and participation in the labor economy

5. Environmental Health, driven by biosphere stability, access to green spaces, and clean air and water

Beyond these five key areas, the model also cites eight additional factors which augment these categories. These include:

Housing and real estate prices

Income and net financial wealth

Community and social support

Education and its actual value

Environmental quality and health

Governance and involvement

Life satisfaction and happiness

Work-life balance

The model visually connects these factors to the potential effects of AI, highlighting cause-and-effect relations. For example, it shows economic stability as largely driven by labor dynamics, with AI automation directly associated with reduced job security, lower earnings, and declines in labor participation. The authors suggest that these labor market dynamics will be closely tied to mental health, since work is deeply intertwined with our sense of purpose, human connections, and self-reported agency. The availability of green spaces, clean air and water, and biosphere stability will likewise be highly conducive to general mental and physical health.
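To illustrate the kind of cause-and-effect structure this paragraph describes, the following Python sketch encodes a simplified, signed causal graph. The nodes and edge signs are a loose paraphrase of the relations mentioned above, not a transcription of Group 5's actual diagram.

```python
# Illustrative signed causal graph: each driver maps to the factors it affects,
# with "+" for reinforcing and "-" for diminishing influence. Structure is a
# simplified paraphrase of the relations described in the text.

causal_edges = {
    "AI automation":        [("job security", "-"), ("earnings", "-"),
                             ("labor participation", "-")],
    "job security":         [("economic stability", "+")],
    "earnings":             [("economic stability", "+")],
    "labor participation":  [("sense of purpose", "+"), ("economic stability", "+")],
    "sense of purpose":     [("mental health", "+")],
    "human connection":     [("mental health", "+")],
    "green spaces":         [("mental health", "+"), ("physical health", "+")],
    "clean air and water":  [("physical health", "+")],
    "biosphere stability":  [("physical health", "+")],
    "mental health":        [("quality of life (OECD BLI)", "+")],
    "physical health":      [("quality of life (OECD BLI)", "+")],
    "economic stability":   [("quality of life (OECD BLI)", "+")],
}

def downstream(node, edges, seen=None):
    """Return every factor reachable from `node` in the causal graph."""
    seen = set() if seen is None else seen
    for target, _sign in edges.get(node, []):
        if target not in seen:
            seen.add(target)
            downstream(target, edges, seen)
    return seen

# Everything ultimately influenced by AI automation in this sketch:
print(sorted(downstream("AI automation", causal_edges)))
```

Encoding a diagram this way makes it straightforward to trace which top-level outcomes a given intervention or AI capability could ultimately touch.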

Submodel: Physical Safety

Group 5 conducted a deep dive into Physical Safety as a key submodel to investigate, identifying many threats to human safety from AI systems.

Physical safety is a fundamental component of general well-being. Without it, individuals cannot meaningfully pursue higher-order goals such as education or professional and personal development. The threat of physical harm can create persistent stress and anxiety that severely impacts mental health, economic productivity, and social cohesion. Regions perceived as physically unsafe typically struggle to attract investment, retain talent, or maintain stable markets, creating a negative feedback loop that can depress overall quality of life.

The model cites two key areas of physical safety risks. The first area involves acute, direct risks that threaten the public’s general safety via malicious or dangerous actors. The second area discusses the indirect social and systemic consequences of living in an unsafe environment.

On the topic of acute, direct risks, the authors discuss two specific sources: technology-enabled novel health threats (e.g. nanotechnology) and bioterrorism. They argue that governments must carefully minimize the risks posed by biotechnologies while still allowing novel research that benefits health outcomes. They suggest that laboratory automation, research automation, and synthetic biology capabilities are all useful measures of the potential risks posed by AI-enabled research, such as deliberate bioweapon development or accidental pathogen release.

On the topic of indirect social and systemic consequences, the authors note that AI may be used to manipulate and undermine state sovereignty, creating new vectors for state-sponsored psychological operations that lead to conflict and geopolitical tensions. They suggest that evals and benchmarks for deceptive capabilities could serve as an early warning indicator on the ability of AI systems to deliberately undermine social cohesion.

On the topic of lethal autonomous weapons (LAWS), the authors anticipate both direct risks and indirect systemic consequences. For example, they suggest that war, conflict, and increases in domestic violent crime will serve to encourage the development of LAWS. If deployed in civilian environments to combat violent crime or to suppress political opposition, such systems would likely contribute significantly to civil unrest and the erosion of personal freedoms. The normalization of lethal autonomous systems could foster a dangerous precedent for how force is used in civil society. These weapons could enable authoritarian regimes to subjugate their citizens and expand their geopolitical power through force.

The normalization of militarized LAWS in combat could have even more far-reaching consequences. LAWS could lower the threshold for conflict initiation, increasing the likelihood and pace of escalation. Their development may destabilize geopolitical relations, and potentially trigger an AI arms race. Misuse or proliferation of LAWS in the hands of non-state actors would pose significant risks of terrorism or black-market resale. The integration of AI into military command structures and decision-support systems could lead to unintended escalation through automated decision-making, or conflict acceleration too rapid for human intervention.

Applying Scenario 3 to the Physical Safety Model

Group 5 applied Scenario 3 to their Physical Safety submodel, detailing projections for a variety of measurable outcomes circa 2030. The group posits that the capability for AI to facilitate bioterrorism will exist by 2030, per the definition in the OpenAI Preparedness Framework. They claim that low-cost micro UAV drone systems might be produced for $100 USD or less, and that such systems could be responsible for 60,000 deaths by 2030: an estimate that reflects the existing active use of semi-autonomous lethal drones in the Russia-Ukraine war. Finally, the group forecast a 3x (200%) increase in politically-motivated violence by 2030, as an indirect result of AI-related social dysfunction and manipulation.
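As a rough illustration of how these forecasts could be tracked as concrete indicators, the sketch below encodes the group's stated figures as structured data and shows the arithmetic behind reading "3x" as a 200% increase. The field names and layout are hypothetical, chosen only for illustration.

```python
# Sketch of Group 5's Scenario 3 projections encoded as trackable indicators.
# The 2030 values are the figures stated above; the structure is hypothetical.

scenario_3_projections = {
    "ai_bioterrorism_capability": {"exists_by_2030": True},   # per the OpenAI Preparedness Framework definition
    "micro_uav_unit_cost_usd":    {"forecast_2030": 100},     # low-cost lethal drone systems
    "cumulative_uav_deaths":      {"forecast_2030": 60_000},
    "political_violence_level":   {"multiplier_vs_today": 3.0},
}

def percent_increase(multiplier: float) -> float:
    """Convert a 'times baseline' multiplier into a percent increase."""
    return (multiplier - 1.0) * 100

m = scenario_3_projections["political_violence_level"]["multiplier_vs_today"]
print(f"A {m:.0f}x level corresponds to a {percent_increase(m):.0f}% increase.")
```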