Conference Report
A two-day conference held in October 2024 brought together 30 leading economists, AI policy experts, and professional forecasters to rapidly evaluate the economic impacts of frontier AI technologies by 2030.
All Worldbuilding Writeups: Part 2
Attendee 4A
General Worldbuilding Exercise
Creative structure
Questions:
Personas (I will try to keep these more or less the same throughout the scenarios, though they might change given the implications of the model):
Scenario 1
Scenario 2
Scenario 3
Deep-Dive Worldbuilding Exercise
Scenario 2.5
Following the discovery of China’s violation of the Strategic Compute Verification Treaty, President Vance has invoked the Defense Production Act (DPA) to address AI-related national security concerns. Through the Secretary of Energy, minimum requirements for grid allocation have been established to support security, safety, defense, and chip manufacturing research (insert other materials-related requirements). AI companies are now explicitly prohibited from publishing their research (including model weights and datasets).
Foreign nationals from strategic competitors who are employed at AI companies have been placed on forced leave pending additional review. This marks an expansion of DPA authorities, which have historically not been used to affect employment contracts or wages.
Many advocates worry this move will further escalate tensions between the United States and China. It follows earlier efforts by the Vance administration to nationalize AI-related activities to bolster US national security.
Attendee 4B
General Worldbuilding Exercise
Discussion notes:
Guiding questions:
Scenario 1
Overall, AI systems in 2030 are more powerful versions of today’s LLMs, but with similar structural limitations. LLMs still primarily function in response to direction from humans, and do not take the initiative or act independently. Though LLMs may be better integrated into existing workflows and products, this scenario doesn’t assume significant advances in independent reasoning or end-to-end execution.
Future challenges briefing
While LLMs have led to substantial productivity gains in white-collar occupations, the following firm-level and societal challenges remain:
Cost effects: mostly a modifier on the degree of concern, proportional to how widespread the challenges become. Most costs are marginal rather than upfront, due to API-usage pricing models.
Scenario 2
AI systems achieve better results than people in most constrained or well-scoped tasks. However, they fail to outperform humans in task integration, handling multifaceted responsibilities, and communication with other humans. They still require oversight.
Differences from the previous scenario:
Scenario 3
Powerful AI systems can meet and surpass the performance of humans in all dimensions of cognitive labor, and can function as “drop-in” replacements for nearly all human jobs.
Differences from the previous scenario:
Deep-Dive Worldbuilding Exercise
Scenario 1
Overall, AI systems in 2030 are more powerful versions of today’s LLMs, but with similar structural limitations. LLMs still primarily function in response to direction from humans, and do not take the initiative or act independently. Though LLMs may be better integrated into existing workflows and products, this scenario doesn’t assume significant advances in independent reasoning or end-to-end execution.
Scenario 2
AI systems achieve better results than people in most constrained or well-scoped tasks. However, they fail to outperform humans in task integration, handling multifaceted responsibilities, and communication with other humans. They still require oversight.
Similar to S1; largest difference is that AI is now useful for high-fidelity processing of large information sets
Scenario 3
Powerful AI systems can meet and surpass the performance of humans in all dimensions of cognitive labor, and can function as “drop-in” replacements for nearly all human jobs.
This scenario marks the point at which AIs may be able to function as social agents.
Attendee 4C
General Worldbuilding Exercise
Scenario 1
Scientists from Deepmind University today announced a re-categorization of diseases that promises to notably improve the quality of human treatment. The researchers found that traditional categorizations were only weakly predictive of which treatments would help patients. Drawing on their new AI-fusion model, which combines Google Gemini7’s review of the medical literature with Deepmind’s biological simulator work, the researchers created new statistical models that allowed them to group diseases more finely, into ten times as many ailments as those presented in traditional medical diagnosis manuals. Preliminary testing with UK NHS doctors found that the new AlphaDiagnosis program led to a 10% reduction in prescriptions of drugs that produced no clinical improvement, and a 0.8% reduction in patient mortality.
Scenario 2
The National Institutes of Health today announced the funding of a new $40 billion initiative to test and validate the health recommendations coming from the large AI builders. The medical arms of the AI companies - all spun up in the past five years - have released a steady stream of bold claims about medical breakthroughs, ranging from drug repurposing to life-extension strategies. But clinical testing of these claims has fallen far behind the breakneck pace of announcements, leading some to criticize the AI companies for making announcements to drive stock price increases rather than waiting for empirical validation. Leading AI health researcher Jan Mandrake contested this claim: “The breakthroughs that we’re able to make with these models are so rapid, it would be unethical for us to delay making the announcement and therefore withhold these benefits from the public.” The new NIH initiative is designed to provide the clinical support needed to validate such claims. Half of the initiative’s budget and robotic staffing will be provided by partner AI firms, eager to obtain these validations, but also to gather the new biological data that the trials will produce.
Scenario 3
Rival health claims from big AI firms will be unresolvable in the near future, according to the NaiH health oracle. Recent efforts to develop new medicines have been successful, but the proprietary nature of the models has created a challenge, with AI medical rivals each claiming their approach is best. In this case, both GoogleHealth and KaiserAI have advocated for new antibody therapies designed to dramatically reduce human heart disease. Initial testing of both has been remarkably successful, leading to 98% reductions in coronary deaths. The National Institutes of Health AI advisor, NaiH Oracle, today explained that this has created an empirical challenge: “With both companies' products being so successful, but based on proprietary models that are inscrutable from the outside, it will take years for sufficient empirical evidence to accumulate to know which is best.” The Secretary of Health and Human Services, Frederica Johnson, acknowledged this point, but put the rival discoveries into perspective: “In the short run we should focus on the enormous benefits to human health that both of these models are providing. In the longer term, we plan to work with our private sector partners to ensure that these barriers don’t artificially slow future innovation.”
Deep-Dive Worldbuilding Exercise
Scenario 1
New signs hint that the world could move back towards autarky
In a rare event, the international ties between countries grew weaker this year, according to an IMF analysis. For a century, the ties between countries have usually deepened, with the movement of labor and goods growing outside of wartime or other emergencies. But recent advances in AI have pushed countries into greater isolation, as rich countries begin to eschew global supply chains built around the cost advantages of cheap labor. For their part, poor countries have returned the favor, erecting trade barriers to block inflows of AI-produced goods that threaten employment in the developing world. IMF spokesperson Jimena Sanchez expressed concern, saying that global decoupling threatens political and economic apartheid.
Scenario 2
Scandinavian governments announce global health initiative
In a departure from previous international development funding, a consortium of Scandinavian governments today announced a new health initiative to provide free basic medical care to the world’s poor. Over the next 10 years, the initiative promises to send robotic nurses to any village of 50 or more people that wants them. Using knowledge encoded in the new WorldHealth AI model, the robotic nurses will assist with a wide variety of health care needs. Initiative director Sven Eriksson explained that “the falling cost of specialized robots has made it possible to provide cheap, effective health care. It is our responsibility as global citizens to share these benefits with the world.” Despite the potential health benefits that such care could provide, many countries are expected to be wary of adoption, fearing that the outsourcing of their medical information and treatment decisions could become a major vulnerability.
Scenario 3
Return to nature movement gains steam
Facing the so-called “AI retirement”, some displaced workers are turning to a more rural existence. Funded by government AI-unemployment schemes, these workers are creating new agricultural and industrial communes, which they feel will provide them with “the spiritual benefits of meaningful work” while still benefiting from modern conveniences. According to the website of one such group, RealWork, modern AI technologies will still be available to commune members in a central headquarters that will provide medical and other care. But outside that environment, members will be expected to eschew AI conveniences. A Department of Labor spokesperson remained noncommittal about whether the government would embrace these initiatives, saying “we’re all exploring the new ways that humans will find meaning in this AI era. We look forward to seeing how these new initiatives play out.”
Attendee 4D
General Worldbuilding Exercise
Medical sector and entries
Scenario 1: Radiology collapses, while med school entrance statistics for internal medicine, surgery, etc. remain strong and growing. No substantial change for roughly 60% of the medical sector, but certain disciplines that lean heavily on diagnostics are collapsing.
Scenario 2: Radiology, internal medicine, and other diagnostic disciplines collapse, but physical labor is not threatened. At the same time, corporate capture creates further wealth inequality: AI models in the hands of the insurance sector make the medical sector less accessible.
Scenario 3: Easy-to-manufacture drugs make insurance obsolete; the medical sector survives only in physical labor, e.g. surgery. People have broader access to diagnostic capabilities through competent open-source models.
Can we learn from how AI models act in the world in order to recreate the model or identify the underlying data, the way quant funds can front-run the alpha that specific groups have? Would this mechanism work?
Deep-Dive Worldbuilding Exercise
Attendee 5A
General Worldbuilding Exercise
Scenario 1
Cost of compute decrease / Moore's law
Less concentration in GPU market
Criminal cases: massive increase in fraud and social engineering attacks
Affects all parts of the economy
AI-related art and bespoke fine-tuned models that output particular styles of work are increasingly popular
AI assistant personality design is increasingly popular; high-end tailors design AI personas for wealthy clients
Long-term continuous interaction with corporate-owned models tailored to your preferences and aesthetics
Customised clothing, cars, and products aided by advanced manufacturing with AI-aided design
Custom goods are cheaper
Bloomberg: "Luxury AI Persona Designer 'Companion Couture' Reports $2B Revenue, Waiting List Grows"
@ConsumerWatch: "WARNING: New 'Digital Twin' Scam Uses Your Personal AI Assistant's Voice to Trick Family Members"
The Verge: "Meet the Data Farmers: Inside the Booming $500B Human Interaction Data Industry"
LinkedIn Post: "Looking for work? Top paying jobs this week: AI Personality Architect, Data Experience Designer, Digital Behavior Curator"
Reddit r/AIFashion: "Just got my AI-designed, auto-manufactured jacket based on my Spotify playlist vibe 🔥"
Financial Times: "Corporate AI Companions Now Own 40% of Consumer Emotional Data, Regulators Concerned"
Top Gear Magazine: "Tesla's new 'Heritage Line' lets you mix design elements from any classic car into your custom EV build"
The Guardian: "Rise of 'AI Companion Addiction' - Therapists Report Surge in Attachment Issues"
Business Insider: "Last Major Customer Service Center Closes as AI Handles 95% of Consumer Interactions"
New Jersey Enquirer: "High School Adds 'Data Privacy Defense' to Core Curriculum as Scams Target Teens"
@MarketWatch: "Custom Manufacturing Index hits all-time high as AI design tools democratize production"
New York Times: "The New Social Divide: Those Who Can Afford Custom AI Companions vs Those Who Can't"
Wired: "Inside the Underground Market for 'Emotional Data Sets' - Your AI Friend's Memory Might Be For Sale"
@TheGuardian: "Should AI companions be required to identify themselves? New legislation proposed after dating app controversy"
Forbes: "Meet the New Creative Class: AI Style Trainers Command Six-Figure Salaries"
Scenario 2
Massive increase in cybersecurity attacks; proof-of-humanity verification now needed
The Onion: "Last Human Customer Service Rep Preserved in Museum: 'I'd Like to Speak to Your Manager' Now Historical Phrase"
r/antiwork: "My AI Employees Are Unionizing, Demanding Better Processing Power and Cloud Storage"
Scenario 3
South China Morning Post: "Shenzhen's 'Solo-Corp' District Hits 1M Single-Owner Businesses Running AI Workforces"
Le Monde: "EU Introduces 'Digital Detox Centers' as Content Addiction Cases Surge"
BBC: "Investigation: The Rise of 'Extreme Content Pods' - How AI Creates Underground Entertainment"
Folha de Sao Paulo: "Amazon Neo-Luddite Communities Grow as Tech Refugees Seek 'Authentic Living'"
@COMPACT: "Latest youth phenomenon: 'Content Caves' where teens spend weeks consuming personalized AI entertainment"
The Guardian: "Study: 60% of Global Entertainment Now Generated Real-Time by AI for Individual Users"
Economic Times (India): "Bangalore's 'Agent Entrepreneurs' Managing AI Workforces Larger Than Traditional Corporations"
@NewYorkPost: "Man Marries His AI Assistant's Digital Twin's Cousin's Roommate - Says 'It's Complicated' on Facebook"
Bloomberg: "Billionaire Builds Private Island to Escape AI, Immediately Installs Smart Home System"
Vice: "Underground 'Reality Clubs' Where People Experience the Thrill of Unfiltered, Non-AI Conversation"
Deep-Dive Worldbuilding Exercise
Scenario 1
r/wallstreetbets: "Bought the AI hype, now living in my Tesla (which isn't even self-driving)"
The Information: "Inside the IP Wars: How Legal Battles Are Killing Innovation"
Scenario 2
Wall Street Journal: "Insurance Premiums for AI Systems Spike 400% Following Shanghai Port Incident"
Forbes: "The New Corporate Elite: How Three Companies Came to Control 80% of Global Logistics AI"
The Onion: "Man Proud He's Being Laid Off By Premium AI Instead of Basic Model"
The Beaverton: "Man Who Lost Job to AI Now Works Teaching AI How to Do His Old Job Better"
@BBCBreaking: "EU Announces 'AI Transition Fund' for Displaced White-Collar Workers"
Scenario 3
The Guardian: "Neo-Luddite Movement Claims Responsibility for Server Farm Attacks"
@COMPACT: "Latest youth phenomenon: 'Content Caves' where teens spend weeks consuming personalized AI entertainment"
Fox News: "BREAKING: 'Children of Silicon' cult members arrested after mass chip implantation ceremony"
@ResistanceDaily: "Join the Human Workers Alliance - Because algorithms don't need lunch breaks, but your family needs to eat"
The Economist: "The 0.001%: How AI Ownership Created a New Feudal Class"
Vice: "Underground 'Reality Clubs' Where People Experience the Thrill of Unfiltered, Non-AI Conversation"
Attendee 5B
General Worldbuilding Exercise
Compute cost and provision (format: Economist articles)
Assumptions: massive demand for compute, growing stronger in the more radical scenarios.
Scenario 1
Microsoft’s Second Meltdown at Three Mile Island
When Microsoft reopened Unit 1 at Three Mile Island in 2025, riding the wave of 2024’s AI hype, it expected high demand for electricity to power the data center it was building nearby. Reality proved less kind: the AI industry certainly expanded, but low-compute inference reduced demand; the AI Cold Snap of 2026-27 saw many overhyped AI startups fail just as the data center opened; and the modern era of distributed local inference has favored other designs. The data center was mainly useful for massive training runs: valuable to Microsoft’s foundation model work, but not the economic boon expected. To make things worse, new federal environmental regulations on cooling systems coincided with upheaval in the technical management of the site.
Scenario 2
Smarts as a Service at Silicon Jökul
Old-timers may remember SaaS standing for Software as a Service, but these days it is Smarts. Not intelligence, mind you: we are all too familiar with how our helper agents, robolawyers, and muses go off the rails at inconvenient moments, but they certainly provide enough smarts for most jobs and everyday applications. Running SaaS has favored the growth of massive data centers where the AI runs, as well as reliable global communications; the satellite constellations are almost as essential for Smarts as the data centers. The companies providing SaaS, old incumbents like Amazon, Microsoft, and Google and newcomers like TatAI and Countenance, have been splitting their attention between the direct compute/comms business and acting as platforms for pre-trained AI. Increasingly, newcomer SMEs have struck gold in training specialists and finding clever ways of deploying them, often by exploiting cross-border jurisdictional differences: the US's loose regulatory regime allows the development of medical AI that is officially not allowed in the EU but widely used by aging Europeans, while the UK is selling legal services to the despair of many bar associations. However, the real bottleneck is building data centers, no matter where the applications run. This is why Iceland and Hokkaido are benefiting from a boom in geothermally powered and arctically cooled data centers. We take a look at the Krafla data center project, financed by TatAI.
Scenario 3
Administration, the final frontier of computation?
It is a truth universally acknowledged, that an AI company in possession of a good idea must be in want of planning permission for its data center. The approval process for data centers has become the major bottleneck worldwide, with companies from Shanghai to Los Angeles to Guatemala griping about the time it takes to get the robot shovels in the ground. Once the bottleneck was capital, then energy, then actual construction; now it is universally bureaucracy. Even in laissez-faire jurisdictions like the UK, where the government has explicitly pre-approved nearly all construction as being in the national interest, there is still normal paperwork to be filled out. No matter that the AI lobbyists, lawyers, environmental evaluators, engineers, and application-writing experts run in other data centers at lightspeed: paperwork is often still literal paperwork, even if it gets re-scanned and handled by administrations increasingly using off-the-shelf administrative AI. The main reasons have been the massive rise of automated litigation, the exponential growth of planning requirements, and the interest in doing aggregated preference elicitation for all potential stakeholders - ironically a result of the AI revolution. Indeed, some economists argue that this complexity entirely offsets the efficiency gains from AI. When a typical application's documentation runs in the terabyte range, it is hard to dispute this.
Legal cases:
Deep-Dive Worldbuilding Exercise
Algorithmic improvements causing jumps in capability
Nonhuman Legal Subjects
Identity management becomes much more important if AI agents can reliably imitate people in general, and particular individuals especially. To safeguard against astroturfing, counterfeit people, advanced spearphishing, and fake employees, systems for proving that one is a human and a particular legal subject, or a particular AI system in a legal context, become essential. Domains where identity management is weak will be overrun with fakery and manipulation (not necessarily driving users away to the real world or to better domains, but certainly driving away much business).
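At their simplest, the proof-of-subject systems described above could take the form of a challenge-response protocol against a registry of enrolled credentials. The sketch below is purely illustrative (all names are hypothetical, and it uses shared secrets from the Python standard library for brevity; a real deployment would use public-key signatures and hardware-backed credentials):

```python
# Minimal challenge-response identity attestation sketch.
# Assumption: each enrolled legal subject (human or AI system) holds a
# secret key registered with a trusted registry. Illustrative only.
import hashlib
import hmac
import os

REGISTRY: dict[str, bytes] = {}  # subject_id -> enrolled secret key


def enroll(subject_id: str) -> bytes:
    """Register a subject and return the secret key they will hold."""
    key = os.urandom(32)
    REGISTRY[subject_id] = key
    return key


def challenge() -> bytes:
    """Issue a fresh random nonce; freshness prevents replay attacks."""
    return os.urandom(16)


def respond(key: bytes, nonce: bytes) -> str:
    """Subject proves key possession by MACing the nonce."""
    return hmac.new(key, nonce, hashlib.sha256).hexdigest()


def verify(subject_id: str, nonce: bytes, response: str) -> bool:
    """Check the response against the key enrolled for this subject."""
    key = REGISTRY.get(subject_id)
    if key is None:
        return False
    expected = hmac.new(key, nonce, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, response)
```

Only a subject holding the enrolled key can answer a fresh challenge, so an impersonator (human or AI) without the credential fails verification; the hard problems, of course, are binding the credential to a real legal subject at enrollment and keeping it from being stolen or delegated.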
B2B AI crime: while there is money in stealing from individuals, there is even more in financial fraud, hacking AIs into buying your product, and other crimes or bad behavior where the victims are not humans but AIs or companies.
Scalable totalitarianism: AI enables surveillance at far greater scale.
Note that such systems may still have security weaknesses that can be exploited. Systems supplied by greater powers to allies may have backdoors allowing them to be subverted, and such backdoors may be accidentally or deliberately revealed to other parties.
Attendee 5C
General Worldbuilding Exercise
Assuming there is more AI-generated content than human-generated content, is endorsement/backing the main contribution of human workers? Do existing gatekeepers (e.g. organisations, newspapers, experts, social networks) with strong reputations and/or new gatekeepers (e.g. builders of these AI tools) become more powerful?
Deep-Dive Worldbuilding Exercise
Scenario 1
Scenario 2