Existential Risk Strategy Research

Mitigating Existential Risks

Differential progress / intellectual progress / technological development

Published

Apr 24, 2020


In 2002, Nick Bostrom introduced the concept of differential technological development. This concept is highly relevant to many efforts to do good in the world (particularly, but not only, from the perspective of reducing existential risks). Other writers have generalised Bostrom’s concept, using the terms “differential intellectual progress” or just “differential progress”. I think these generalisations are actually even more useful.

I’m thus glad to see that many (aspiring) effective altruists and rationalists seem to know about and refer to these concepts. However, it also seems that people referring to these concepts often don’t clearly define/explain them, and, in particular, don’t clarify how they differ from and relate to each other.

Thus, this post seeks to summarise and clarify how people use these terms, and to outline how I see them as fitting together. The only (potentially somewhat) original analysis is in the final section; the other parts of this post present no new ideas of my own.

Differential technological development

In his 2002 paper on existential risks, Bostrom writes:

If a feasible technology has large commercial potential, it is probably impossible to prevent it from being developed. At least in today’s world, with lots of autonomous powers and relatively limited surveillance, and at least with technologies that do not rely on rare materials or large manufacturing plants, it would be exceedingly difficult to make a ban 100% watertight. [...]

What we do have the power to affect (to what extent depends on how we define “we”) is the rate of development of various technologies and potentially the sequence in which feasible technologies are developed and implemented. Our focus should be on what I want to call differential technological development: trying to retard the implementation of dangerous technologies and accelerate implementation of beneficial technologies, especially those that ameliorate the hazards posed by other technologies. [...] In the case of biotechnology, we should seek to promote research into vaccines, anti-bacterial and anti-viral drugs, protective gear, sensors and diagnostics, and to delay as much as possible the development (and proliferation) of biological warfare agents and their vectors. [bolding added]

Of course, in reality, it’ll often be unclear in advance whether a technology will be more dangerous or more beneficial. This uncertainty can arise for a wide variety of reasons, including that a technology’s dangerousness may later change due to the development of other technologies (see here and here for related ideas), and that many technologies are dual-use.

Note that technology is sometimes defined relatively narrowly (e.g., “the use of science in industry, engineering, etc., to invent useful things or to solve problems”), and sometimes relatively broadly (e.g., “means to fulfill a human purpose”). Both Bostrom and other writers on differential technological development seem to use a narrow definition of technology. This excludes things such as philosophical insights or shifts in political institutions; as discussed below, I would argue that those sorts of changes fit instead within the broader categories of differential intellectual progress or differential progress.

Differential intellectual progress

Other writers later introduced generalisations of Bostrom’s concept, applying the same idea to more than just technological developments (narrowly defined). For example, Muehlhauser and Salamon quote Bostrom’s above recommendation, and then write:

But good outcomes from intelligence explosion [basically, a rapid advancement of AI to superintelligence] appear to depend not only on differential technological development but also, for example, on solving certain kinds of problems in decision theory and value theory before the first creation of AI (Muehlhauser 2011). Thus, we recommend a course of differential intellectual progress, which includes differential technological development as a special case.

Differential intellectual progress consists in prioritizing risk-reducing intellectual progress over risk-increasing intellectual progress. As applied to AI risks in particular, a plan of differential intellectual progress would recommend that our progress on the scientific, philosophical, and technological problems of AI safety outpace our progress on the problems of AI capability such that we develop safe superhuman AIs before we develop (arbitrary) superhuman AIs. [bolding added]

Tomasik also takes up the term “differential intellectual progress”, and tentatively suggests, based on that principle, actions such as speeding up the advancement of social sciences and cosmopolitanism relative to the advancement of technology. (Note that Tomasik here treats technology as a whole as relatively risk-increasing [on average and with caveats], which is something the original concept of “differential technological development” couldn’t have captured.) Tomasik provides in this context the following quote from Isaac Asimov:

The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.

Note that, while the term “differential intellectual progress” was introduced in relation to AI alignment, and is often discussed in that context, it can be about increases and decreases in risks more broadly, not just those from AI.

Also note that we’re using “progress” here as a neutral descriptor, referring essentially to relatively lasting changes that could be either good or bad; the positive connotations that the term often has should be set aside. "Differential intellectual development" might therefore have been a less misleading term, given that it's somewhat easier to interpret "development" as being neutral rather than necessarily positive.[1] I will continue to use the established terms "differential intellectual progress" and "differential progress", but when defining and explaining them I'll say "development" rather than "progress".

Differential progress (and tying it all together)

Some writers have also used the seemingly even broader term differential progress (e.g., here and here). I haven’t actually seen anyone make explicit what slightly different concept this term is meant to point to, or whether it’s simply meant as an abbreviation that’s interchangeable with “differential intellectual progress”.[2]

But personally, I think that a useful distinction can be made, and I propose that we define differential progress as “prioritizing risk-reducing developments over risk-increasing developments”. This definition merely removes the word “intellectual” (in two places) from Muehlhauser and Salamon’s above-quoted definition of differential intellectual progress, and replaces "progress" with "developments" (which I would personally also do in defining their original concept, to avoid what I see as misleading connotations).

Thus, I propose we think of “differential progress” as a broad category, which includes differential intellectual progress as a subset, but also includes advancing risk-reducing developments that aren’t naturally considered intellectual. These other types of risk-reducing developments might (perhaps) include the spread of democracy, cosmopolitanism, pacifism, emotional stability, happiness, or economic growth. (I don’t know if any of these are actually risk-reducing; I’m merely offering what seem to be plausible examples, to help illustrate the concept.)

Intellectual developments were obviously a necessary prerequisite for some of those forms of development. For example, someone had to come up with the idea of democracy in the first place. And intellectual developments could help advance some of those forms of development. For example, more people learning about democracy might increase its spread, and the development of better psychotherapy might aid in spreading emotional stability and happiness.

But these forms of development do not necessarily depend on further intellectual developments. For example, democracy has now already been invented, and there may be places where it’s already widely known about, but just not implemented. The uptake of democracy in such places may be better thought of as something like “political developments” or “activism-induced developments”, rather than as “intellectual developments”.

Differential progress also includes slowing risk-increasing developments. Such risk-increasing developments can include intellectual progress. But they can also include types of development that (again) aren’t naturally considered intellectual, such as (perhaps) nationalism, egoism, or economic growth. Development along these dimensions could be slowed by intellectual means, but could also (perhaps) be slowed by things like trying to make certain norms or ideologies less prominent or visible, or being less economically productive yourself.[3]

(In fact, because Tomasik’s essay discusses many such types of developments, I think (with respect) that his essay would be better thought of as being about “differential progress”, rather than as being about “differential intellectual progress”.)

How does differential technological development fit into this picture? It’s a subset of differential intellectual progress, which (I propose) should itself be a subset of differential progress as a whole.[4] We can thus represent all three concepts in a Venn diagram (along with some possible examples of activities that these principles might suggest one should do):

[Venn diagram: differential technological development nested within differential intellectual progress, nested within differential progress, annotated with example activities]
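To state the proposed containment relations compactly (this formalisation is my own shorthand, not drawn from Bostrom or from Muehlhauser and Salamon): writing DTD, DIP, and DP for the sets of actions recommended by differential technological development, differential intellectual progress, and differential progress respectively, we have

$$\text{DTD} \subseteq \text{DIP} \subseteq \text{DP}$$

with each inclusion strict: some intellectual developments (e.g., work in decision theory) aren’t technological, and some risk-relevant developments (e.g., the spread of democracy) aren’t naturally considered intellectual.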

I’d also like to re-emphasise that my aim here is to illustrate the concepts, not make claims about what specific actions these concepts suggest one should take. But one quick, tentative claim is that, when possible, it may often be preferable to focus on advancing risk-reducing developments, rather than on actively slowing risk-increasing developments via methods like trying to shut things down, remove funding, or restrict knowledge. One reason is that people generally like technological development, funding, access to knowledge, etc., so actively limiting such things may result in a lot more pushback than just advancing some beneficial thing would.[5] (This possibility of pushback means my examples for slowing risk-increasing developments are especially uncertain.)

Relationship to all "good actions"

One final thing to note is that differential progress refers to “prioritizing risk-reducing developments over risk-increasing developments”. It does not simply refer to “prioritizing world-improving developments over world-harming developments”, or "prioritizing good things over bad things". In other words, the concepts of "differential progress" and "good actions" are highly related, and many actions will fit inside both concepts, but they are not synonymous.[6]

More specifically, there may be some "good actions" that are good for reasons unrelated to risks (by which we particularly mean catastrophic or existential risks). For example, it's plausible that something like funding the distribution of bednets in order to reduce infant deaths is net beneficial, due to improvements in present people's health and wellbeing, rather than due to changes in the risks humanity faces.[7]

Meanwhile, differential progress will always have at least some positive effects, in expectation, as it always results in a net reduction in risks. But there may be times when differential progress is sufficiently costly in other ways, unrelated to risks, that it's not beneficial on balance. For example, slowing the advancement of a technology that is only very slightly risky, and that would substantially increase the present population's wellbeing once developed, could count as differential progress but not as a "good action".
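One way to make this distinction precise (a rough formal sketch of my own, under the simplifying assumption that an action's expected value splits into two additive components):

$$V(a) = R(a) + O(a)$$

where V(a) is the expected value of action a, R(a) is the expected value of its effects on (catastrophic or existential) risks, and O(a) is the expected value of all its other effects. Differential progress guarantees R(a) > 0, but the action is "good" only if V(a) > 0; in the technology-slowing example above, O(a) < −R(a), so the action counts as differential progress without being good on balance.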

Thus, we can represent differential progress, differential intellectual progress, and differential technological development as highly overlapping with, but not perfect subsets of, "good actions":

[Venn diagram: the three differential-progress concepts overlapping heavily, but not entirely, with "good actions"]

Closing remarks

I hope you’ve found this post clear and useful. If so, you may also be interested in my prior post, which similarly summarises and clarifies Bostrom’s concept of information hazards, and/or my upcoming post, which will summarise his concept of the unilateralist’s curse and discuss how it relates to information hazards and differential progress.

My thanks to David Kristoffersson and Justin Shovelain for feedback on a draft of this post.

Other sources relevant to this topic are listed here.

Notes

  1. It's also possible that I am extending the concepts of differential intellectual progress or differential progress more generally beyond what they were originally intended to mean, and that this is what makes the positive connotations of "progress" seem inappropriate. Personally, I think that, in any case, my way of interpreting and using these concepts is useful. Further discussion in the comments. ↩︎

  2. It’s very possible that someone has made that explicit, and I’ve just missed it. ↩︎

  3. Again, my focus here is on clarifying concepts that allow us to have such discussions, rather than defending any specific claims about what increases vs decreases risks. I included economic growth in both lists of examples because its effects are particularly debated; see here for a recent analysis. ↩︎

  4. I later wrote another, possibly clearer explanation of the distinction I have in mind between these three concepts in these comments. ↩︎

  5. Two other potential reasons:

    • Things like restricting access to knowledge or banning certain lines of research seem to have a pretty patchy track record. If this is true, then we should probably have a higher standard of proof for thinking that that's a good idea than for thinking something like "Increase funding to AI safety" is a good idea.

    • There can be various benefits from “progress” that aren’t related to existential risks, or that reduce some risks while increasing others. As such, advancing risk-reducing progress may have both a positive intended effect and positive side effects, whereas slowing risk-increasing progress may have negative side effects, even if it has a positive intended effect and is indeed somewhat good on balance.

      But I think that this is a complicated question, and that these claims require more justification and evidence than I can provide in this post; I raise these claims merely as potential considerations. ↩︎

  6. Here I'm using "differential progress" as if it refers to a principle or action. I would personally also be happy to use it to refer to an outcome or effect, such as saying that we have "achieved" or are "working towards" differential progress if the outcome we aim for is to have more risk-reducing developments and fewer risk-increasing ones. ↩︎

  7. Again, my point is to illustrate concepts; an assessment of how bednet distribution affects risks, and how this in turns affects the overall value of bednet distribution, is far beyond the scope of this post. ↩︎
