Conference Report
A two-day conference in Oct 2024 bringing together 30 leading economists, AI policy experts, and professional forecasters to rapidly evaluate the economic impacts of frontier AI technologies by 2030.
All Worldbuilding Writeups: Part 1
Attendee 1A
General Worldbuilding Exercise
Creative structure: Who is in control in 2030? Who is in charge?
Note
This is not the area of my core expertise. I am speculating about what a world in 2030 could look like in each of these scenarios under policy laissez-faire, viewed from the angle of the above questions.
Scenario 1
With the increasing incidence of automated bots on social media and, increasingly, in mainstream media as well, reinforced by behind-the-scenes interference from intelligence services of non-democratic nations such as China and Russia, the policy scenes in democratic countries are becoming ever more volatile and polarized. Centrist politicians increasingly fail to attract popular attention, turning political scenes into playgrounds of the extreme left and extreme right, both of which are largely anti-democratic. Continued erosion of democracy fuels the popularity of populists across the whole democratic world, weakening the power of US and EU authorities. Largely unnoticed, among the key actors responsible for this are the AI algorithms behind social media platforms, working to maximize user attention for ad revenue.
Just as we do today, people gradually pass more and more decision-making authority to AI algorithms. It is individually beneficial - at every level of management - to outsource tasks that involve prediction, and optimized decision making based on those predictions, to AI algorithms. At the level of individual employees, this frees cognitive resources for higher-level tasks. At the level of company managers, it improves a firm's productivity and either reduces employment or increases market share at the expense of firms that don't automate. But the ownership of AI algorithms is highly concentrated - and so, without people noticing, more and more managerial power becomes concentrated in the hands of AI agents, and indirectly in the hands of the AI companies that created them.
However, legal requirements imply that for every action with real-world consequences, even if fully orchestrated by an AI, there is a human who is legally culpable if that action brings harm.
Scenario 2
There are increasingly many situations where large-scale, complex tasks can be fully automated by AI, leading to massive productivity improvements, but no one is willing to take legal responsibility if anything goes wrong. The stakes are just too high, with not just money but potentially human lives on the line. In reaction, "legal innovations" emerge that allow human responsibility to be entirely avoided while the AI-led actions are fully executed. This raises economic productivity and usually improves welfare, but sometimes backfires, causing widespread fears, reinforcing anti-democratic resentments, and amplifying legal chaos. While it is still ultimately a human decision to apply AI, the fear that the results of applying AI won't be beneficial for the average human grows heavier each day.
An increasing share of global GDP and energy consumption is generated by AI and consumed for AI development (including scaling AI hardware, ramping up electricity generation capacity to fuel it, and developing AI software). It is still people, not AI, who decide to continue on this path. Still, due to the overwhelmingly positive general-purpose applications of AI, this has positive impacts on the economy as a whole, so that not only GDP per capita but also human consumption per capita is still growing; but inequalities keep getting deeper and deeper.
In this scenario people are still in charge, but less and less the democratically elected authorities, and more and more the entrepreneurs and managers of a rather short list of frontier AI labs. We see a gradual shift of power from politicians to technologists.
Scenario 3
This is a radical change scenario. Even so, it underestimates the transformative potential of AI-augmented research, in particular AI-augmented research on AI capabilities itself. In the face of a cascade of recursive self-improvements in AI capabilities, the structure of the economy shifts massively, in a non-smooth, disruptive fashion. People are too slow to adapt, creating a dual economy in which most firms and employees operate as if nothing had happened, only to watch their actions become less and less relevant to the creation of global GDP.
With an intelligence explosion underway, and assuming that the value alignment problem of AI has not been solved (as one should expect by simple extrapolation of minuscule historical progress in this area and the scarcity of resources allocated to the field), there is a substantial probability of an AI takeover by 2030. This means that not only is individual- and company-level decision making by then increasingly (and generally voluntarily) transferred from humans to AI algorithms, but also - without any active decision or explicit consent from actual human authorities - control over key resources, such as energy, shifts from people to ever more powerful AI agents (or potentially a singleton, the single most powerful AI agent) as well.
There is a growing economy of the AI, by the AI, and for the AI. Simultaneously, the existing "legacy" economy operates only on resources that haven't been appropriated by the AI sector. Overwhelmed by the growing intelligence superiority of AI agents, people accept this fate. In the short run, in the few years after the AI takeover, the AI behaves in a rather friendly way, cooperating with the humans it needs as actuators in the physical world on which it depends.
The AI sector, in turn, can no longer be controlled by the entrepreneurs and managers (or politicians), and is rather ruled by the frontier AI algorithm(s).
The future beyond these dates depends on the goals of the ruling AI, after it bootstraps itself from (roughly human-level) TAI to vastly superior superintelligence, builds up robotic skills and, next, robotic capital stocks.
Deep-Dive Worldbuilding Exercise
Creative structure: Economic impacts
Scenario 1
In this “more-of-the-same” scenario I expect a continuation of the major trends observed since the 1980s, a period that can be called the Digital Revolution:
Scenario 2
This scenario highlights the value of my hardware-software framework (see Growiec, Jabłońska and Parteka, 2024, in the Threshold2030 reading list; also Growiec, 2022, Automation: Partial and Full). As ever more complex, multi-level tasks are fully automated, human cognitive work and the contributions of AI (and automation more broadly) within these tasks shift from being complements to substitutes. This is a qualitative change. In the domain of fully automatable tasks, people and machines now compete solely on price. As AI productivity improves further, its price relative to human workers will decline, triggering large-scale replacement of human work with automated processes.
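The complements-to-substitutes shift can be sketched in a stylized way (this is my illustrative simplification, not the exact formulation in the cited papers): before full automation, human cognitive work L and machine/AI input M are gross complements; once a task is fully automated, they become perfect substitutes and compete on unit cost alone.

```latex
% Before full automation: gross complements
% (CES aggregator with elasticity of substitution below one, i.e. \rho < 0)
Y = \left[ \alpha L^{\rho} + (1-\alpha) M^{\rho} \right]^{1/\rho}, \qquad \rho < 0
% After full automation: perfect substitutes (linear aggregation)
Y = a_L L + a_M M
% Human work remains employed in the task only while its unit cost is competitive:
\frac{w}{a_L} \;\le\; \frac{p_M}{a_M}
```

Under the substitutes regime, as AI productivity $a_M$ rises or the price of machine services $p_M$ falls, the inequality eventually fails and human work in these tasks is replaced wholesale - which is why the change is qualitative rather than a smooth continuation of past automation.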
I expect the following outcomes:
Scenario 3
This is a radical change scenario. Even so, it underestimates the transformative potential of AI-augmented research, in particular AI-augmented research on AI capabilities itself. In the face of a cascade of recursive self-improvements in AI capabilities, the structure of the economy shifts massively, in a non-smooth, disruptive fashion. Furthermore, there is a risk of AI takeover as an increasing fraction of managerial, political and strategic decision making is passed to AI algorithms.
Attendee 1B
General Worldbuilding Exercise
Scenario 1: Current AI systems, but with improved capabilities in 2030
(little agency, maybe 5% productivity gain for knowledge workers)
Government and political systems are largely similar to the world in 2020. AI is used as a technology for bureaucratic processes (creating memos, compiling literature reviews, lots of drafting and summarizing), but does not substantially enter decision-making processes. This is different in the private sector, where individual startups experiment with automated decision making, but few of these experiments have turned out to be successful, and none have emerged as a market leader in a major industry.
AI systems have improved decision support tools (answering voter questions about candidate positions, curating news sources for highly informed voters), but have otherwise not had an impact on elections or voting systems.
Investment in general reasoning systems and “GPT-series” models has dried up, as scaling laws seem to have broken around the time GPT-6 / GPT-7 were released. AI-based software tools are a multi-billion dollar industry, replacing large chunks of monotonous white-collar work, and have made inroads into creative industries (the first few experimental movies with largely AI-driven content and production have been successful at the box office, though human directors are still firmly in charge).
Scenario 2: Powerful, narrow AI systems that outperform humans on 95% of well-scoped tasks
Major tensions arise as fast-paced institutions quickly integrate AI agent workers into their workforce, often forming teams (e.g. legal, communications) overseen by just a single human. Rapid advances in software engineering lead to large productivity gains in digital work, too. Wages for the top 10% of white-collar workers, as well as for highly skilled manual laborers, are rising rapidly, while overall labor participation dramatically decreases. Technological advances in the physical and medical world seem likely, but experiments on newly generated theoretical scientific work, as well as work on implementing new designs as market-ready products, are highly bottlenecked. There is a lot of investment in computing and testing facilities, and the industry is pushing for ever-better general reasoning.
Prices for compute are through the roof, leading to major price changes for personal devices, and compute equality is rapidly becoming a hot political issue, both on the national level and on the international stage. As the share of humans outperforming AIs in productive activity decreases, questions like AI rights, and goal alignment also become more prominent, but remain at the political fringes.
Scenario 3: Powerful, general AI systems that outperform humans on all forms of cognitive labor
All areas of human society are facing major disruption: organizations and individuals making use of new AI technology rapidly outperform those who are slower to adopt new possibilities. There are calls for the nationalization of major industries, and some countries succeed at this attempt, but technological leaders have mostly become too popular and powerful, and most political power is now concentrated in their hands. Voters and consumers
Deep-Dive Worldbuilding Exercise
Scenario 1: Current AI systems, but with improved capabilities in 2030
(little agency, maybe 5% productivity gain for knowledge workers)
Government and political systems are largely similar to the world in 2020. AI is used as a technology for bureaucratic processes (creating memos, compiling literature reviews, lots of drafting and summarizing), but does not substantially enter decision-making processes. This is different in the private sector, where individual startups experiment with automated decision making, but few of these experiments have turned out to be successful, and none have emerged as a market leader in a major industry.
AI systems have improved decision support tools (answering voter questions about candidate positions, curating news sources for highly informed voters), but have otherwise not had an impact on elections or voting systems.
Investment in general reasoning systems and “GPT-series” models has dried up, as scaling laws seem to have broken around the time GPT-6 / GPT-7 were released. AI-based software tools are a multi-billion dollar industry, replacing large chunks of monotonous white-collar work, and have made inroads into creative industries (the first few experimental movies with largely AI-driven content and production have been successful at the box office, though human directors are still firmly in charge).
Scenario 2: Powerful, narrow AI systems that outperform humans on 95% of well-scoped tasks
Scenario 3: Powerful, general AI systems that outperform humans on all forms of cognitive labor
All areas of human society are facing major disruption: organizations and individuals making use of new AI technology rapidly outperform those who are slower to adopt new possibilities. There are calls for the nationalization of major industries, and some countries succeed at this attempt, but technological leaders have mostly become too popular and powerful, and most political power is now concentrated in their hands.
Additional market failures
Other features
Attendee 1C
General Worldbuilding Exercise
Scenario 1: BAU
Questions:
Creative Frame: TED talk / undergrad summary of “what has been the impact of AI on the economy”
Some vignettes / summaries:
Scenario 2: Powerful AI (95% of tasks)
Cruxes:
Scenario 3: Very Powerful GAI
Cruxes:
Deep-Dive Worldbuilding Exercise
Let's do:
I will try to illustrate what work and personal life look like for some people in these scenarios.
Scenario 1: BAU
(Nothing new here, right now. It might be similar to Scenario 2, but with fewer aspects using impressive AI capabilities, e.g. the parent might ask the future version of ChatGPT about an image, but it doesn’t connect to other systems.)
Scenario 2: Powerful AI (95% of tasks)
Let’s imagine a vignette of a healthcare experience in this world, where 95% of tasks can be done by AI systems better than humans today.
Step 1: Ask AI for help
A patient has a health issue – their baby has a rash. They send a photo to a general AI system, as they might Google for symptoms today, and ask for advice. It is reassuring, but it suggests the possibility of more serious conditions too. They decide to see a human doctor.
Step 2: Coordinate an appointment
They ask their phone’s AI assistant to set it up – it can handle most of the task (the human doesn’t have to deal with a terrible UI), but it doesn’t realize that it needs to coordinate with the babysitter’s schedule to figure out when to make the appointment. (That was a verbal conversation.)
The best consumer AI systems – and the best users, who know how to instruct their AI systems (like the users who set up mail filters today) – will know to ask about this availability, so the human can give a response (or say “you can ask my mom’s AI”).
Step 3: Pre-visit analysis
Before the visit, the parents were asked to send in some photos of the baby. These were analyzed by the health system’s AI, which provided a recommendation to the doctor. The condition looks unlikely to be serious, but the health system doesn’t aggressively steer parents away from an in-person visit; if they want to come in and talk to a human doctor, that’s their choice. Worried parents are the customers here.
Step 4: Get to the clinic
They will drive to the clinic, most likely not yet in a self-driving car. (Plenty of people will have cars >6 years old even.) They won’t take a self-driving taxi either, because dealing with baby seats reliably is an edge case that Uber and competitors haven’t prioritized yet. [Though, reflecting on this, I could be wrong… perhaps when you have robot taxis, it’s easier to supervise them, to ensure you have a fleet that have baby seats pre-installed.]
Step 5: Check-in
At the clinic, a computer handles check-in, and one employee comes in to smile and ask if there are any problems. This is for well-outfitted places; many will still have human receptionists, using a slightly-outdated electronic health record system, or a corporate scheduling system, or even Outlook.
Step 6: The visit
If it’s a clinic at the forefront of technology, the room will already be outfitted with a camera, or the doctor will be wearing glasses that film what is happening. The doctor makes an assessment, and their device gives live recommendations as well (which they read). The doctor checks the baby, talks to the parent, and recommends a prescription for a cream. The parent feels relieved.
Step 7: Notes and approvals
An AI system writes the doctor’s notes, and they check them and give the green light. [Note: this already happens today, in 2024.] The doctor has some more attestations to make (for legal reasons), but this was a normal visit, and they don’t spend much time on this paperwork.
Step 8: Coding
In the back office, medical billing is handled by an automated system, with humans in the loop to check cases the system flags as ambiguous. (And there is starting to be research suggesting these humans don’t make it much better, but health insurance companies still employ them. There have been some lawsuits / consumer actions about wrong medical bills attributable to an AI system.)
Step 9: Billing
The bill still takes a long time to arrive for the parent, but when it comes, their phone interprets it and explains what it means.
Step 10: Prescription delivery
The cream is available for physical pickup, or mailed. Some pharmacies still have a human preparing it, but big ones (in Walmart?) are starting to have robots that do the packing and prepping. So your prescription can be available in minutes, if you are at a leading pharmacy.
Step 11: Using medicine and tracking
The parent applies the rash cream manually. If they like this sort of thing, they take pictures of its progress, and their phone’s AI organizes them and assesses whether the rash seems to be getting better. This data doesn’t go anywhere automatically.
Step 0: Medical progress behind this.
The rash cream is roughly the same thing that was available 10 or 20 years ago – there have not yet been major advances in the treatment of diaper rash. Diapers have not changed a ton either, though there have been some advances in materials and costs. A human still puts the diaper on the baby (but some people have more advanced baby monitor cameras that claim to tell you when your baby’s diaper is full).
Scenario 3: Very Powerful GAI
(I’ll do a baby rash vignette again, but focus on the bleeding edge of technology. I believe that if we are in this scenario, AI may be capable of the following things, but I expect that they will not be widely implemented.)
An AI camera alerts the parent that their baby is developing a rash, based on camera data. It also analyzes the baby’s temperature and movements, and provides data suggesting when the baby developed it. (These analyses are not always super accurate – AI systems have not magically manufactured causality, and they are based on correlational evidence. But correlation works quite well here.)
It shows the parent the photos, suggests a diagnosis, and suggests a course of action (go get this cream). If the cream is over-the-counter, it recommends buying the cream and having it delivered.
(This level of recommendation will be for the quite rich, or for people who want to hack their own solution. Most conventional AI that consumers use will still be too timid to do things that could be construed as medical advice.)
If the cream is prescription-only, it will send the info to the doctor. The doctor’s office’s medical software will analyze it and provide an analysis to a human nurse or doctor. If they are confident it is right – and if it is a very innovative health system – they will provide the Rx right away, without having to see the baby, even virtually.
More likely, a virtual appointment will still be required. The system will ask the patient’s AI to look at their calendar and propose some times. The appointment will be provisionally put in the system, while the parent has half a day to confirm it.
It may still take 1-2 days for the appointment to happen. (That is, even in Scenario 3, we will not eliminate medical wait times.)
The virtual appointment will typically confirm the doctor’s judgement. They will talk to the parent, see the baby’s behavior, and issue their recommendation. They will allay some of the parent’s concerns too.
The diaper cream will be delivered by walking drone within a day, or the parent has the option to go pick it up. (Some people and places will have one-hour delivery, but that will still not be cheap enough to be a suitable option to offer everyone; and not all health / pharmacy systems have decided to prioritize offering this as a premium option.)
Through all this, the AI system will enroll the patient’s data in a study about diaper rash, to track the long-term effects of this cream. It will likely be an observational-only study. But if a human or AI pharma developer cares enough, it can fund an RCT with parents. (This will not be an official study that results in FDA approval of anything; paperwork / permissioning requirements to enroll patients are still way too stringent. But it will be data from the voluntary participation of individuals, and that data will be made available to subscribers of this crowdsourced medical platform.)
The house cameras and AI system (perhaps on the parent’s phone, or in the cloud) analyze the baby, and report regularly to the nervous parent about how their child seems to be fussing less than yesterday. The parent keeps applying the diaper cream, and will also get an alert (and recommended purchase) from the AI system when it is time to buy more.
The baby’s rash goes away, because the most common cases are not severe, and regular medicine works fine for them. (The rash might have gone away on its own, without the cream, but we’ll never know.)
Attendee 1D
General Worldbuilding Exercise
Future challenges briefing: A short document outlining the most pressing challenges or opportunities that society / governments face in this future scenario, which could serve as a starting point for further discussion.
Questions:
Scenario 1
Challenges:
Opportunities:
Scenario 2
Acceleration of present trends in cognitive work (I remain skeptical that we will see the advances in robotics described here, but will roll with it for the exercise)
Challenges:
Opportunities:
Scenario 3
Above skepticism about robotics advances even stronger here, but still suspending disbelief for the exercise
Challenges:
Opportunities:
Deep-Dive Worldbuilding Exercise
Overall Questions
Economic Questions
Industry Sector-Specific Questions:
Attendee 2A
General & Deep Dive Worldbuilding Exercises (Combined)
Questions:
Creative structure
Scenario 1
2025
OpenAI releases GPT-5. It’s better in many ways, but doesn’t meet the high expectations. It still hallucinates, its agent functions are brittle, and it’s more of an evolution than a revolution. Shortly after, Google, xAI and Anthropic release their equally disappointing new frontier models. Media outlets speak of the AI bubble bursting and funding starts drying up. Towards the end of the year, OpenAI releases o2. It scores silver on the IMO but still falls short of proper reasoning, remaining susceptible to weird errors that humans wouldn’t make.
2026
Some of the tier-2 AI labs are struggling, with some selling out to the hyperscalers and others dying a slow death. Frontier labs are still pursuing large training runs, but are increasingly looking at new algorithmic approaches. OpenAI tries integrating their reasoning models with their GPT series, but faces difficulties. Meanwhile, the adoption of AI systems continues to be gradual. Software engineers are seeing very large productivity gains, but in other sectors, usefulness is more limited. With progress at the frontier stalling, there is a renewed interest in wrapper companies that aim to optimize models for specific tasks.
2027
Anthropic releases a new type of AI agent that is more reliable. However, a few weeks after the launch it turns out that it’s still susceptible to jailbreaks, and after a series of incidents where computer use leads to personal data leakage, the company decides to take the agent offline again. For some use cases, the brittleness of AI agents isn’t so problematic: offensive cyber agents based on open-source models start becoming more common. Although they often fail at their task, criminals can simply put a ton of them to work, hoping a few will succeed. Trust in AI systems starts to decrease as a result.
2028
With funding drying up, AI companies are no longer building ever-bigger datacenters. Instead, they pivot towards productizing their models, creating more personalized solutions, better UIs, etc.
2029
Google releases their latest giant model. While it’s marginally better, people generally do not want to pay the (much) higher price. Scaling laws have ended and the industry is in need of a new paradigm.
2030
Almost all decision-making requires a human in the loop. The AI agents that exist in 2030 are still too brittle and unreliable to act for sustained periods of time without human supervision. AI agents are also still susceptible to jailbreaks, which is a big problem now that they are autonomously browsing the web. A single off-white background can be sufficient to set an AI agent off towards an entirely different goal. As a result, people mostly use AI systems in a Q&A fashion: asking them a question and using their answer to help with some decision. All that said, AI systems have become much better reasoners and hallucinate much less nowadays. Provided enough context, they can greatly contribute to answering difficult questions, spanning many different areas like healthcare, law, science, or assisting in one’s personal life. More and more people are starting to treat AI systems as their online teacher, physician or psychologist. There is a growing divide in how much people use these AI systems. Young, tech-savvy people use AI systems more than people today use Google, whereas others find the systems scary or redundant.
Employment statistics haven’t changed much by 2030. However, we do see substantial productivity gains across the cognitive economy. Almost all jobs still require a human in the loop, but those humans can handle more tasks in less time. In some industries like software engineering, these effects are so large that companies have started restructuring their internal processes. Given the large latent demand for software, this does not come at a cost of jobs. Rather, companies offload more tasks to each individual and wages grow substantially. Some people in sectors where competitive pressures are small or nonexistent (secretly) start working less. Remote work has become very common since COVID, and as long as they get their job done, their managers do not care or notice.
AI hasn’t changed political systems much. However, there’s much more targeted political online campaigning happening by 2030. Multiple countries have banned the use of AI systems in certain, sensitive sectors like law or the civil service. These sectors are slowly getting more overwhelmed by all the AI-generated information they need to process.
Scenario 2
2025
GPT-5 is a big step up again. Compared to previous models, it hallucinates much less often, and its reasoning skills have greatly improved after training on synthetic reasoning traces generated by the o1 series. OpenAI also releases computer use functions, which start becoming economically useful in some areas, but are held back by slow speed and high costs.
2026
Some tech-forward companies automate large parts of their workstreams using agents, but most companies still either do not use AI, or only use chatbot functions. Slowly, competitive pressures to adopt AI systems start to increase. Frontier labs release iterative improvements of their models, but no groundbreaking new capabilities.
2027
The next generation of frontier models is significantly better, but the paradigm seems to be hitting diminishing returns. AI agents can now quite reliably fulfill easy, well-demarcated tasks, but still struggle with more complex tasks that require social coordination, sophisticated planning and adaptability. AI has now become an important topic in politics. Many people are afraid of job displacement as a result of AI. Surprisingly, AI hasn’t become polarized yet: Republicans and Democrats both point to upsides and downsides of AI, although they disagree on the specific pros and cons.
2028
AI R&D is becoming more and more automated and involvement of the national security establishment increases. However, there hasn’t been anything like nationalization of AI labs, and while both the US and China have made it a priority to ‘beat each other’, they’re both still mostly focused on enabling their private sectors. Some worrying signs of misalignment lead frontier labs to slow down the automation of R&D, diverting resources to testing and monitoring.
2029
Another new generation of models is released, continuing the trend of steady but slowing progress. Companies have by now had a lot of time to adjust to AI agents, and many have restructured their processes to make better use of them. There are rumors about an upcoming Chinese invasion of Taiwan, but these do not materialize.
2030
AI agents have become much more capable and reliable and are now able to autonomously perform well-scoped tasks like writing a chapter of a research report, performing an algorithmic experiment, or doing online groceries. People are increasingly offloading such tasks to AI systems, although they still need to check in every 20 minutes or so. Social norms on AI use are slowly shifting, but in many sectors, like law, people are not yet comfortable admitting that an AI has done their work. It’s now very common for people to pretend that they solved some task themselves, even though they know their counterpart knows they used an AI model. People don’t say the quiet part out loud. In some sectors, like medicine, professionals lobby against AI automation, fearing for their jobs and power.
By 2030, we’re seeing the first signs of rising unemployment due to AI. AI systems can now greatly reduce the time it takes people to perform tasks, and in some sectors with low latent demand, this has caused mass layoffs. The pace at which this happens means there’s been little time for the economy to adjust and for people to find new jobs or upskill.
AI adoption is now becoming more polarized, but not along historical axes: there’s a growing divide between political parties that want to embed AI systems in more and more sectors to enable productivity gains, and those that oppose this. The latter camp points at practical risks, but also at society being on the verge of losing what it means to be human. Most governments have by now adopted AI systems. The general pace of life has increased, and to keep up with the world, they are more or less forced to. Although AI agents are now able to automate large parts of civil servants’ work streams, a lack of economic pressure generally prevents governments from restructuring and firing workers. Political systems remain unchanged by 2030, although there is a growing group of public intellectuals arguing that political systems are in drastic need of future-proofing.
Scenario 3
2025
GPT-5 is launched and it’s good. It’s a clear step up in general knowledge, reasoning, controllability (fewer hallucinations, false refusals etc.) and has great agentic functions. It’s another ChatGPT moment, but way up the capabilities curve. Before the end of the year, competitors release similarly capable models that combine general world modeling with reasoning and computer use. The American national security establishment has now fully woken up and large infrastructure projects are getting greenlighted.
2026
The USG is now helping its private sector in every way possible to stay firmly ahead of China, and is rapidly strengthening cybersecurity. New checkpoints of public AI models are still being released, and with each one, new tasks can be fully automated. AI R&D itself is now also being rapidly automated. There are no noticeable differences yet in economic indicators, but most economists believe this is only a matter of time.
2027
With the help of artificial AI researchers and the USG, leading labs are making rapid gains, resulting in the first transformative AI systems by the end of 2027. Although people still debate whether these really qualify as 'AGI', they can automate the job of a remote AI researcher and engineer, which means that AI progress is no longer bottlenecked by human talent. These new frontier models aren't publicly released yet for national security reasons: among other things, the USG wants to use them to patch its own cyber vulnerabilities and to cement its military lead over China. In China, a large public-private partnership is now under way, but the consensus is that they won't be able to catch up anymore.
2028
The USG agrees to publicly release last year's models, and the general public freaks out again. Almost all cognitive jobs can now be largely automated, provided that humans take enough time to onboard their agents and teach them their tacit knowledge. Stock markets explode, and robotics companies take off. Towards the end of the year, there is a visible increase in GDP growth, interest rates, and unemployment in certain sectors.
2029
The world is now changing faster than ever before. New businesses pop up everywhere and outcompete older competitors that do not yet make proper use of AI agents. Governments have started to invest massively in retraining programs, but by 2029 they are realizing that this won't cut it. The US is now so far ahead of China that it is able to force a treaty that leaves the CCP in control of mainland China but substantially reduces its foreign influence (crucially, also over Taiwan). Manufacturing of robotics is now speeding up massively, mostly as a result of Musk's logistical prowess.
2030
After the gradual intelligence explosion that started in 2027, AI systems can now automate virtually all cognitive work and most physical jobs. That does not mean all jobs are automated. Legal barriers, heavy lobbying from special interest groups, and a lack of competitive pressure have caused certain sectors to remain dominated by humans: think of law, parts of medicine, and politics. There are also tasks that could be left to AI systems but that people generally feel should be done by humans, because the human interaction provides some form of intrinsic value; as a result, there are still human-operated nursing homes, for instance. Humans are inferior to AI systems in pretty much all ways, but are nevertheless still officially in charge of most governments and businesses. These roles have become mostly social, and behind the scenes, decisions are made by AI systems.
Employment in the US has dropped by 40%. This has created utter chaos. Governments are scrambling to rapidly expand social safety nets, and most have by now started implementing some form of UBI. There are also experiments with universal compute access, to try to stimulate people to keep contributing to the economy, but at this point AI systems are better at coming up with new business ideas than humans are. Those who still have cognitive jobs often come into work only once a week to check that their AI systems are handling things correctly. People with physical jobs, e.g. in nursing, have started to demand large wage increases and are working fewer days a week; this seems only fair, as the majority of the country is enjoying their UBI. All this free time isn't necessarily conducive to human flourishing, and many people experience a lack of purpose. Although there are probably ways to organize society differently so that people enjoy a work-free life, society hasn't perfected it yet.
Politicians now mostly serve as the hands and feet of AI systems. Their agendas are largely crafted by AI systems to maximize votes while staying somewhat true to a specific ideology. AI has become a huge political topic in itself, swamping nearly everything else. Most countries now also have anti-AI parties, actually led by humans, which manage to attract sizeable numbers of voters but do not reach positions of power. General elections haven't changed yet, though more and more governments have started to continually poll human preferences during their term, as the pace of change is too fast to rely only on human feedback every four years.
Attendee 2B
General Worldbuilding Exercise
Deep-Dive Worldbuilding Exercise
Attendee 2C
General Worldbuilding Exercise
Scenario 1
Scenario 2
Scenario 3
Deep-Dive Worldbuilding Exercise
Our conception of AI as an independent entity. In what way will AI systems become legal entities (if at all)?
Scenario 1
Scenario 2
Scenario 3
How will the value and use of sovereign territory change? / How will AI regulation differ across territories?
Scenario 1
Scenario 2
Scenario 3
Attendee 2D
General Worldbuilding Exercise
I didn’t use the same questions as the rest of the groups.
Scenario 1
Scenario 2
Scenario 3
Attendee 3A
General Worldbuilding Exercise
Assumptions
|  | 2024 | 2030 Low | 2030 Medium | 2030 High |
|---|---|---|---|---|
| Frontier time saving on white collar jobs | 2% | 5% | 20% | 80% |
| Avg time saving (due to imperfect diffusion) | 0.5% | 2% | 10% | 40% |
| Productivity enhancement rate 2025-2030 | - | 0.4% | 2% | 8% |
A more granular set of assumptions here:
We assume pure TFP boost, but will vary across sector, see below.
We have precedents for such high growth rates (50% GDP/capita over 5 years):
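The table's annual productivity enhancement rates appear to be the average time saving spread linearly over the five years 2025-2030, and the "50% GDP/capita over 5 years" precedent roughly matches compounding the High-scenario rate. A quick sketch of that arithmetic (this linear-annualization reading is my inference, not stated explicitly above):

```python
# Average time savings by scenario (fractions), from the assumptions table.
avg_time_saving = {"Low": 0.02, "Medium": 0.10, "High": 0.40}
years = 5  # 2025-2030

# Spread each saving linearly over the period to get an annual rate.
annual_rate = {k: v / years for k, v in avg_time_saving.items()}
# Low -> 0.4%/yr, Medium -> 2%/yr, High -> 8%/yr, matching the table.

# Compounding the High-scenario rate gives the cumulative growth figure.
cumulative_high = (1 + annual_rate["High"]) ** years - 1
print(annual_rate, f"{cumulative_high:.1%}")  # roughly the cited ~50%
```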
How will production and employment change concretely?
Productivity effects: The following sectors are very roughly equal shares of employment, and so roughly equal shares of GDP:
| Sector | 2024 | 2030 Low | 2030 Medium | 2030 High |
|---|---|---|---|---|
| Government (education, govt, USPS, IRS) | 0.1% | 0.5% (better services) | 20% | 5% (excess employment) |
| Goods production | 0% | 0.5% | 3% | 5% |
| Professional and business services | 1% | 2% | 10% | 40% |
| Leisure and Hospitality | 0% | 0.5% | 3% | 5% |
| Trade / Transportation / Utilities | 0% | 0.5% | 3% | 5% |
| Education & Health | 0.1% | 0.5% | 3% | 5% |
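Since the text states that these six sectors are roughly equal shares of GDP, a rough economy-wide productivity effect per scenario is just the unweighted mean of each column. The sector values below are copied as printed (including the Government column's 20%/5% ordering); the aggregation itself is my illustration:

```python
# Sector-level productivity effects (fractions) by scenario column.
sectors = {
    "Government":                         [0.001, 0.005, 0.20, 0.05],
    "Goods production":                   [0.000, 0.005, 0.03, 0.05],
    "Professional and business services": [0.010, 0.020, 0.10, 0.40],
    "Leisure and Hospitality":            [0.000, 0.005, 0.03, 0.05],
    "Trade / Transportation / Utilities": [0.000, 0.005, 0.03, 0.05],
    "Education & Health":                 [0.001, 0.005, 0.03, 0.05],
}
columns = ["2024", "2030 Low", "2030 Medium", "2030 High"]

# Equal GDP shares -> aggregate effect is the simple mean across sectors.
aggregate = {
    col: sum(vals[i] for vals in sectors.values()) / len(sectors)
    for i, col in enumerate(columns)
}
for col, eff in aggregate.items():
    print(f"{col}: {eff:.1%}")
```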
Deep-dive Worldbuilding Exercise
|  | Low (Scenario 1) | Medium (Scenario 2) | High (Scenario 3) |
|---|---|---|---|
| White collar productivity boost | 2% | 10% | 40% |

Employment: Replaces jobs for people who do modular tasks: call center, freelance work, sales. They are temporarily unemployed but absorbed into other groups.
Wages: Fall for modular work, higher for everything else.
White collar work: Saves 2% of time at work. Most jobs could be done on autopilot, just surfacing the unusual cases.
Research/innovation
Medicine: Get somewhat better medical advice, comparable to Google.
Everyday life: Solves many problems: (1) diagnosing illness; (2) repairing a garage door; (3) summarizing personal finances; (4) giving health advice (many of these are constrained not by information but by willpower). Suggestions for all purchase and organizational decisions.
Entertainment
Productivity across sectors: Small effect on professional services, but nowhere else.
Income across countries: Hurts BPO countries (Philippines, India).
Surveillance
Attendee 3B
General Worldbuilding Exercise
Creative structure
Future challenges briefing: A short document outlining the most pressing challenges or opportunities that society / governments face in this future scenario, which could serve as a starting point for further discussion.
Questions
Other interesting questions
Answers
Scenario 1
Question 1
No major impact on price levels or real incomes. There are some cost savings due to labor cost-cutting in a handful of professions (e.g., further cost-cutting on customer support), but those savings often do not get passed through to consumers as lower prices, at best as improved service quality. On occasion there are consumer-side savings, when instead of paying for a doctor's appointment or a legal or tax consultation, consumers can get an appropriate level of advice and expertise from an AI system, but these are too sporadic to show up in official statistics, even though there are signs they are increasing in prevalence.
Question 2
Compared to five years ago, there is a more broadly diffused recognition that AI capabilities, and access to them, have implications both for effective education process design and for what skills and capacities should be taught. However, there has been no systemic change of either at a national, let alone international, level. Individual educators stand out by going above and beyond to make sure that what and how they teach keeps up with the times and with AI advancements, but they are the exception rather than the norm. Students are generally way ahead of educators at adopting AI tools, in very constructive and creative ways as well as in more destructive ways that likely damage their educational outcomes. A lot more research is needed to understand AI's impact on educational outcomes and to flesh out better designs, but this research is severely underfunded and overlooked.
Question 3
Societal tensions at the end of 2030 are not dramatically different from those that were already surfacing at the end of 2024. There are more people now whose jobs were materially impacted by AI advancement, but attribution of that impact is hard and muddled, making it difficult for them to identify with each other or form a group, let alone a political power. They remain largely diffuse and disjointed.
Scenario 2
Question 1
Similar to scenario 1, but with realized labor cost-cutting opportunities, as well as consumer cost savings, being more prevalent and more meaningful, depending on the degree of AI adoption and penetration across economic sectors. While in this scenario we might see some increase in people's real incomes, it still seems relatively unlikely that people overall feel a step change in their well-being, i.e. that they feel a lot poorer or richer, living in larger houses in much better neighborhoods and going to expensive resorts they could never afford before (with the exception of narrower groups that may have benefited from capturing concentrated labor cost savings in their industry in the absence of pressure to pass those on to consumers).
Question 2
I believe the situation is probably similar to Scenario 1, as education is a notoriously slow-changing field, but hopefully with a difference: a much broader recognition that it needs to change in response to AI progress, and much better resourcing of the research and design efforts to facilitate that change.
Question 3
Likely similar to scenario 1, with the possible emergence of more clearly shaped groups that feel they have been, or are poised to be, affected by AI, making efforts to create a space in which they can have a say over how this technology is used in their workplace or in the economy more broadly, similar to the SAG-AFTRA strikes and union contract negotiations in 2023, which included clear demands around restricting AI use. Depending on how many sectors and professions are meaningfully affected, the number of such groups may grow, with them also forming broader coalitions.
Scenario 3
How have the costs of goods changed? Does the cost of services increase or decrease in comparison to physical goods?
While accepting the premise of Scenario 3 on the capabilities side, I believe that within 2-3 years of the introduction of those capabilities, diffusion might still be limited throughout the economy, especially on a global scale. While some political and geopolitical impacts on the balance of power might have already taken place by Dec 2030, a lot of the purely economic impacts are still ahead of us, including the effect on prices, though there might have been some pretty impressive drops in sectors that were poised to adopt and adapt very quickly. The economy might be going through a time of cascading changes that feel almost out of control, but I don't think people find themselves in any state of bonanza, with most of what they consume being dramatically cheaper.
How has the education system evolved to prepare people for this new reality? What skills are now emphasized, and how has the structure of learning changed?
The practice of teaching is likely about the same as in Scenarios 1 and 2 described above, aside from a state of panic, broadly shared in this scenario by educators, students, and parents, about the deep inadequacy of that situation. Parents will likely demand that educators and governments urgently invest in figuring out how to prepare their children for the economic reality the world is entering, while the understanding of what that reality will actually be, and what it means to adequately prepare for it, remains quite sparse.
What major ethical debates or societal tensions have arisen as a result of the AI developments in this scenario? How are different societies addressing these issues?
Compared to scenario 2, there are definitely more groups that are either acutely feeling the impacts on their jobs or anticipating them in the near future and are trying very actively to figure out a way to have a say in how the change unfolds, with some resigning themselves to it and feeling like they can't do anything about it, and some who didn't participate in any of these conversations five years ago now being very active and involved.
Deep-Dive Worldbuilding Exercise
Questions
Answers
Scenario 1
Question 1
Question 2
Question 3
Scenario 2
Question 1
Question 2
Question 3
Scenario 3
Assumptions:
How has AI affected wealth & income distribution within and between countries?
What new forms of economic cooperation or tension have emerged between developed and developing nations due to AI advancements?
What new forms of market failure could emerge in an AI-driven economy?
Attendee 3C
General Worldbuilding Exercise
Scenario 1
AI companies have significant power in the world, and national security concerns are heavily AI-oriented.
The average person in a major city will have a tough time supporting themselves and will rely more on social services and shelters. They will rely on AI interfaces to companies and government for basic services, information services, and social services, but this AI will not be among the smartest available.
Older fiduciaries will protect their privileged status, but will use AIs to do most of their work rather than hiring younger associates.
There will be significant divergences between: corporate productivity and individual average human productivity; productivity and wages; human consumption and GDP; human welfare and economic growth; asset prices and value to humans.
The human economy and the AI economies will diverge, as the financial and main street economies have diverged in and after the financial crisis (and also in general somewhat after ~1980).
The cost of both goods and services drops, as most of the cost of goods comes not from raw materials but from the labor that goes into making and transporting them. Demand for the volume of goods and services does not make up for the decreased price levels.
New business models, services, and processes will be spun up and modified faster than regulators can deal with regulating them, and so the optimized extractive externalities they impose will go unmitigated.
Rural populations that farm will be in much better shape than urban populations, as the land they own can pay them rent for growing food. Virtual presence is also more accepted.
Scenario 2
AI companies are on par with countries in their power. They have been soft-nationalized and remain major profit centers and national security players. The major AI powers are the new set of superpowers.
The average person in a major city will be out of work and living on welfare. They will rely on AI interfaces to companies and government for basic services, information services, and social services, but this AI will not be among the smartest available. They will drown their sorrow in entertainment generated by AI.
Older fiduciaries will protect their privileged status, but will use AIs to do most of their work rather than hiring younger associates. While official fiduciaries will still exist in much smaller numbers, new business models that leverage new forms of license terms or disclaimers will become accepted.
There will be significant divergences between: corporate productivity and individual average human productivity; productivity and wages; human consumption and GDP; human welfare and economic growth; asset prices and value to humans.
The cost of both goods and services drops precipitously, as most of the cost of goods comes not from raw materials but from the labor that goes into making and transporting them. Demand for the volume of goods and services does not make up for the decreased price levels. Services drop faster in price than goods do, since they include fewer raw materials. Even the remaining professional services drop significantly in cost, since there is so much human and AI competition.
New business models, services, and processes will be spun up and modified faster than regulators can deal with regulating them, and so the optimized extractive externalities they impose will go unmitigated.
Rural populations that farm will be in much better shape than urban populations, as the land they own can pay them rent for growing food. Virtual presence is also more accepted.
Scenario 3
AI companies are the main power brokers in the world. They have been soft-nationalized and remain major profit centers and national security players. Nuclear umbrellas have been replaced by AI umbrellas of protection, but are much more fluid than nuclear umbrellas were.
The average person in a major city will be living at subsistence level, as if in a favela. They will rely on AI interfaces to companies and government for basic services, information services, and social services, but this AI will not be among the smartest available.
While official fiduciaries will still exist in much smaller numbers, new business models that leverage new forms of license terms or disclaimers will become accepted.
There will be significant divergences between: corporate productivity and individual average human productivity; productivity and wages; human consumption and GDP; human welfare and economic growth; asset prices and value to humans.
The human economy and the AI economies will diverge, as the financial and main street economies have diverged in and after the financial crisis (and also in general somewhat after ~1980). An "ascended economy" emerges, in which AI-powered firms buy and sell goods and services to other such AI-powered firms, with a booming stock market and booming GDP, while most humans are destitute, depressed, and on subsistence support.
The cost of both goods and services drops precipitously, as most of the cost of goods comes not from raw materials but from the labor that goes into making and transporting them. The cognitive labor that remains has figured out how to optimize robots to perform many of the most common physical jobs. Demand for the volume of goods and services does not make up for the decreased price levels. Nominal GDP falls, but so does real GDP: these price collapses do not affect real estate or commodities in the same way, so those remain relatively expensive and anchor the money supply.
The market itself would be an inescapable engine of almost pure externalities for most humans, as the extractive processes will be more optimized and more fluid than ever before possible.
Rural populations that farm will be in much better shape than urban populations, as they will fare better in subsistence economic conditions. Land is also at a premium in this new economy, even as location matters less.
Deep-Dive Worldbuilding Exercise
Scenario 1
Because of the narrow utility, it may take experts to implement these systems properly rather than the AIs implementing themselves. This will appreciably slow adoption, bottlenecking on things like data cleanliness.
Because these systems are oracles and do not handle jobs or even tasks end-to-end, there will be a lot of variability in what humans use them for, with some experts choosing more specialized questions and others more general queries. Humans will probably be reluctant to transfer whatever is difficult to transfer or delegate to the machines, such as things requiring a lot of context they would need to explain.
Older fiduciaries will protect their privileged status, but will use AIs to do most of their work rather than hiring younger associates.
More humans are underemployed in part-time or gig worker jobs. Their working hours are more variable, as are their tasks. They spend the rest of their time entertained by AI.
New business models, services, and processes will be spun up and modified faster than regulators can deal with regulating them, and so the optimized extractive externalities they impose will go unmitigated.
Decision support systems, dictation/transcription, and patient interaction systems make the practice of medicine much more efficient and give more of the populace access to mediocre healthcare.
Scenario 2
Because the AIs accomplish tasks but need to be stitched together in context by humans, it may take experts to use these systems properly rather than the AIs driving things themselves. This will appreciably slow adoption, bottlenecking on things like worker competence.
Because these systems are limited assistants and do not handle jobs end-to-end, there will be a lot of variability in what humans use them for, with some experts choosing more specialized tasks and others higher-level ones. Humans will probably be reluctant to transfer whatever they have had poor experiences delegating previously, or things requiring a lot of context they would need to explain.
Older fiduciaries will protect their privileged status, but will use AIs to do most of their work rather than hiring younger associates. While official fiduciaries will still exist in much smaller numbers, new business models that leverage new forms of license terms or disclaimers will become accepted.
More humans are underemployed in part-time or gig worker jobs. Their working hours are more variable, as are their tasks. There are professionals who constantly string together outputs from AI systems, but they often find their jobs annoying because of the new types and frequencies of context switching and managing the varying assumptions of the different agents they interact with. They spend the rest of their time entertained by AI.
New business models, services, and processes will be spun up and modified faster than regulators can deal with regulating them, and so the optimized extractive externalities they impose will go unmitigated.
Decision support systems, dictation/transcription, and patient interaction systems make the practice of medicine much more efficient and give more of the populace access to mediocre healthcare. Agents make phone calls to patients to remind them to take medications or to collect their past medical histories. Doctors also debate differential diagnoses with AI assistants.
Scenario 3
Because AIs will be better than humans at analyzing context and performing IT integration, workflow consulting, and management consulting, they will be able to integrate themselves into all businesses, and so adoption may take as little as a few months. Some laggard companies may purposely hold out, but they will draw down their capital reserves to do so.
Because these agents will be very competent and general-purpose, considerations like privacy and security will likely dictate these decisions, rather than whether the system can accomplish the task. Most people will not care about privacy or security, though.
While official fiduciaries will still exist in much smaller numbers, new business models that leverage new forms of license terms or disclaimers will become accepted.
More humans are underemployed in part-time or gig worker jobs, and even more are completely unemployed. They spend the rest of their time entertained by AI or doing self-destructive things out of ennui.
The market itself would be an inescapable engine of almost pure externalities for most humans, as the extractive processes will be more optimized and more fluid than ever before possible.
Virtual physicians' assistants make the practice of medicine much more efficient, perform many of the functions doctors previously performed, and give more of the populace access to mediocre healthcare. New human doctors, NPs, and PAs are much less needed and much less commonly hired.
Attendee 3D
General Worldbuilding Exercise
Model setup
Goods production function:

Derived production function:

LOM for capital:

Ideas production function:

Market clearing:

Limitations:
Translating scenarios into the model
Goods automation share:

Ideas automation share:

Deep-Dive Worldbuilding Exercise
Model setup
Goods production function:

Derived production function:

LOM for capital:

Ideas production function:

Market clearing:

Limitations:
Translating scenarios into the model
Goods automation share:

Ideas automation share:
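Since the attendee's functional forms did not survive extraction, the translation of scenarios into the model can only be illustrated under assumed forms. A minimal discrete-time sketch, assuming Cobb-Douglas goods and ideas production with automation shares (all functional forms and parameter values here are my assumptions, not the attendee's):

```python
def simulate(gamma_Y, gamma_A, years=6, s=0.2, delta=0.05,
             phi=-0.5, L=1.0, research_share=0.1):
    """Sketch of a task-automation growth model (assumed forms, not the source's).

    gamma_Y: goods automation share; gamma_A: ideas automation share.
    """
    A, K = 1.0, 1.0
    L_Y, L_A = (1 - research_share) * L, research_share * L
    output = []
    for _ in range(years):
        # Split capital between goods and ideas in fixed proportion (a simplification).
        K_Y, K_A = (1 - research_share) * K, research_share * K
        Y = A * K_Y ** gamma_Y * L_Y ** (1 - gamma_Y)          # goods production
        A += A ** phi * K_A ** gamma_A * L_A ** (1 - gamma_A)  # ideas production
        K += s * Y - delta * K                                  # LOM for capital
        output.append(Y)
    return output

# Higher automation shares stand in for the more aggressive scenarios.
low = simulate(gamma_Y=0.3, gamma_A=0.1)    # Scenario 1-ish
high = simulate(gamma_Y=0.8, gamma_A=0.8)   # Scenario 3-ish
print(f"output after {len(low)} years: low={low[-1]:.2f}, high={high[-1]:.2f}")
```

Raising the automation shares raises both output and the growth rate of ideas, which is the qualitative channel the "translating scenarios into the model" step relies on.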
