Planned summary for the Alignment Newsletter:
This post considers three different kinds of “discontinuity” that we might imagine with AI development. First, there could be a sharp change in progress or the rate of progress that breaks with the previous trend line (this is the sort of thing <@examined@>(@Discontinuous progress in history: an update@) by AI Impacts). Second, the rate of progress could be either slow or fast, regardless of whether there is a discontinuity in it. Finally, the calendar time taken could be either short or long, regardless of the rate of progress.
The post then applies these categories to three questions: Will we see AGI coming before it arrives? Will we be able to “course correct” if there are problems? Is it likely that a single actor will obtain a decisive strategic advantage?
There have already been numerous posts and discussions related to disambiguating the term "discontinuity". Here is my attempt.
For the purposes of the following discussion, I’m going to distinguish between:

(a) continuous vs. discontinuous progress in AI research, where “discontinuity” refers specifically to a sharp jump or change in the AI research progress curve relative to the previous trend line;

(b) slow vs. fast rate of progress, referring to the steepness of the progress curve’s slope, regardless of whether or not the curve is discontinuous; and

(c) long vs. short clock time, i.e., whether progress takes a long or short time in absolute terms, rather than relative to previous trend lines.

What exactly counts as discontinuous / fast / short will depend on the purpose we are using these terms for, as below.
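To make these distinctions slightly more concrete, here is a minimal mathematical sketch. The notation is my own illustration, not anything standard: model AI progress as some metric $P(t)$ of calendar time $t$, and let $\hat{P}(t)$ be the extrapolation of the pre-existing trend line.

```latex
% Toy formalization of the three distinctions.
% Assumed notation: P(t) = some measure of AI progress at calendar time t,
% \hat{P}(t) = extrapolation of the previous trend line.
\begin{align*}
\text{(a) discontinuity at } t_0 &:\ P(t_0) - \hat{P}(t_0) \gg 0
  && \text{(a jump relative to the prior trend)} \\
\text{(b) fast vs.\ slow progress} &:\ \text{the magnitude of } \tfrac{dP}{dt}
  && \text{(steepness of the curve, jump or not)} \\
\text{(c) short vs.\ long clock time} &:\ \Delta t = t_B - t_A
  && \text{(absolute time from level } P(t_A) \text{ to } P(t_B))
\end{align*}
```

On this framing, (a) is a property of the curve relative to its own history, (b) a property of its slope, and (c) a property of absolute durations, which is why the three can vary independently.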
There seem to be three primary AI-risk-related issues that depend on whether or not there will be a discontinuity / fast takeoff:

1. Will we see AGI coming before it arrives?
2. Will we be able to “course correct” if there are problems?
3. Is it likely that a single actor will obtain a decisive strategic advantage?
Let’s deal with each of these individually:
Thanks especially to Sammy Martin and Issa Rice for discussions of this post and for helping me to clarify my thinking on this.