I'm a bit confused by your response. First, the meat of the argument:
You are implicitly comparing two models, Mfast and Mslow, which make predictions about the world. Each model makes several claims, including claims about the shape of the function governing AI improvement and about how that shape comes about[1]. So far as I can tell, a typical central claim of people who endorse Mfast is that AIs working on themselves will allow their capabilities to grow hyper-exponentially. Those who endorse Mslow don't seem to dispute that self-improvement will occur, but expect it to be par for the course for a new technology and to continue to be well modeled by exponential growth.
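To make the distinction concrete, here is a rough formalization (my gloss; neither side has committed to these exact equations). Writing C(t) for capability at time t:

    dC/dt = k·C            (Mslow: exponential growth, doubling on a fixed clock)
    dC/dt = k·C^a, a > 1   (Mfast: hyper-exponential growth, diverging in finite time)

The second equation is what gives Mfast its distinctive prediction: not just fast growth, but growth whose doubling time shrinks as capability compounds.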
So, it seems to me that the existence of recursive self-improvement without an observed fast takeoff is evidence against Mfast. I presume you disagree, but I don't see how that position survives a model-selection framing. Mfast spreads its probability mass between the data we observe now and a fast takeoff, whereas Mslow predicts only the exponential growth we are currently observing (do you disagree that we're in a period of exponential growth?). By the laws of probability, Mslow therefore places higher probability on the current data than Mfast does, and by Bayes' rule the existing evidence favors Mslow (i.e. the Bayes factor indicates that you should update towards Mslow). Now, you might have a strong enough prior that you still favor Mfast, but if your model places less probability mass on the current data than another model does, you should update towards that other model.
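A toy calculation to illustrate the update (all numbers are made up for illustration, not estimates of anyone's actual credences):

```python
# Toy Bayes-factor calculation. D = "exponential growth so far, no fast takeoff observed".
# All probabilities below are illustrative placeholders.

p_d_given_fast = 0.5   # Mfast splits mass between D and an already-visible fast takeoff
p_d_given_slow = 0.95  # Mslow concentrates its mass on D

# Likelihood ratio (Bayes factor) in favor of Mslow
bayes_factor = p_d_given_slow / p_d_given_fast  # = 1.9

# Even a prior favoring Mfast gets dragged towards Mslow by the evidence
prior_odds_fast = 4.0  # prior odds Mfast : Mslow = 4 : 1
posterior_odds_fast = prior_odds_fast / bayes_factor

print(f"Bayes factor (slow:fast)   = {bayes_factor:.2f}")
print(f"Posterior odds (fast:slow) = {posterior_odds_fast:.2f}")  # 4.0 -> ~2.1
```

The point is only the direction of the update: whatever your prior odds, dividing by a Bayes factor greater than 1 moves you towards Mslow.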
Second (and lastly), a quibble:
Yitz's response uses the terms hard/soft takeoff; was that edited? Otherwise your argument against "continuous" (as opposed to slow or soft) comes off as a non sequitur: you're battling for terminological ground that isn't even under contention.
[1] Different people will have different versions of each of these models. Some may even oscillate between them as is convenient for argumentative purposes (à la motte and bailey).