Existential Risk Strategy Research

Holistic Strategy Research

The ‘far future’ is not just the far future

Published Jan 16, 2020


It’s a widely held belief in the existential risk reduction community that we are likely to see a great technological transformation in the next 50 years [1]. Such a transformation could bring flourishing, existential catastrophe, or other forms of large-scale change for humanity. The next 50 years will matter directly for most currently living people. Existential risk reduction and handling the technological transformation are therefore not just questions of the ‘far future’ or the ‘long term’; they are also ‘near-term’ concerns.

The far future, the long term, and astronomical waste

Often in EA, the importance of the ‘far future’ is used to motivate existential risk reduction and other long-term-oriented work such as AI safety. ‘Long term’ itself is used even more commonly, and while it is more ambiguous, it often carries the same meaning as ‘far future’. Here are some examples: Influencing the Far Future, The Importance of the Far Future, Assumptions About the Far Future and Cause Priority, The Long Term Future, Longtermism.

The ‘importance of the far future’ argument builds on the postulate that there are many possible good lives in the future, many more than currently exist. This long-term future could stretch hundreds, thousands, or billions of years into the future, or even further. Nick Bostrom’s Astronomical Waste makes a compelling presentation of the argument:

Given these estimates, it follows that the potential for approximately 10^38 human lives is lost every century that colonization of our local supercluster is delayed; or equivalently, about 10^29 potential human lives per second.
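As a quick sanity check of the unit conversion in this quote (the 10^38-lives-per-century figure itself is Bostrom’s estimate, not derived here), here is a minimal sketch in Python:

```python
# Verify that ~10^38 lives per century corresponds to ~10^29 lives per second.
# The lives-per-century figure is Bostrom's estimate; only the conversion is checked.

lives_per_century = 1e38
seconds_per_century = 100 * 365.25 * 24 * 60 * 60  # ~3.16e9 seconds

lives_per_second = lives_per_century / seconds_per_century
print(f"{lives_per_second:.2e} lives per second")  # ~3.17e28, on the order of 10^29
```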

The existential risk reduction position is not predicated on astronomical waste

However, while astronomical waste is a very important argument, strong claims of its type are not necessary for the existential risk reduction position. The vast majority of work in existential risk reduction is based on the plausibility that we’ll see technologically driven events of immense impact on humanity, a technological transformation, within the next 50 years. Currently living people, and our children and grandchildren, would be drastically affected by such a technological transformation [2].

Similarly, Bostrom argues in Astronomical Waste that even on a ‘person-affecting utilitarian’ view, reducing existential risk is a priority:

Now, if these assumptions are made, what follows about how a person-affecting utilitarian should act? Clearly, avoiding existential calamities is important, not just because it would truncate the natural lifespan of six billion or so people, but also – and given the assumptions this is an even weightier consideration – because it would extinguish the chance that current people have of reaping the enormous benefits of eventual colonization.

The ‘far future’ is the ‘future’

The arguments about the value of future lives and the possible astronomical value of the future of humanity are very important. But our work in existential risk reduction is meant to help far-future, near-future, and currently living people. Distinguishing between these mostly doesn’t seem decision-relevant if a technological transformation is likely to happen within the next 50 years. And framing existential risk reduction and flourishing in terms of the ‘far future’ seems likely to make people focus too much on the general difficulty of imagining that they can affect events that far out.

I propose we avoid calling our work ‘far future’ work (or other similar terms), except in cases where we expect it to affect almost exclusively events beyond the next 50 years. So what should we say instead? The fates of currently living people, near-term future people, and long-term future people are all questions of the ‘future’. Perhaps we should simply call it ‘future’-oriented work.

Notes

  1. When does the existential risk reduction community think we may see a technological transformation? In the Walsh 2017 survey, the median estimate of AI experts was a 50% chance of human-level AI by 2061. My assessment is that people in the existential risk reduction community hold views similar to those of the AI experts; I’m not aware of any direct surveys of the community. People working in AI safety appear to generally have shorter timelines than the AI experts polled. Paul Christiano: "human labor being obsolete... within 20 years is something within the ballpark of 35% ... I think compared to the community of people who think about this a lot, I’m more somewhere in, I’m still on the middle of the distribution". ↩︎

  2. Who will be alive in 20 or 50 years? Likely you, and likely your children and grandchildren. The median age in the world is currently 29.6 years. World life expectancy at birth is 72.2 years; US life expectancy is 78 years, and Canada’s is 82 years. Even without further lifespan improvements, the median currently living person can expect to be alive 40 years from now (see the rough check below). Improving medicine globally will push countries closer to Canada’s level in the next few decades, but standard medicine doesn’t seem likely to make a great difference beyond that. However, direct aging prevention or reversal interventions such as senolytics could cause a phase change in life expectancy by adding decades, and interventions of this form may reach the market in the next few decades. ↩︎
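As a rough check of the arithmetic in note 2 (the demographic figures are those quoted above; subtracting median age from life expectancy at birth is a simplifying assumption that, if anything, understates remaining years, since people who reach the median age have a higher remaining life expectancy than at birth):

```python
# Rough check: will the median currently living person still be alive in 40 years?
# Figures from note 2. Subtracting median age from life expectancy at birth
# understates remaining years, so this estimate is conservative.

median_age_years = 29.6           # current world median age
life_expectancy_at_birth = 72.2   # current world life expectancy at birth

remaining_years = life_expectancy_at_birth - median_age_years
print(f"Expected remaining years for the median person: {remaining_years:.1f}")
# ~42.6 years, consistent with the claim of being alive 40 years from now
```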
