This is an interesting take on framing Takeoff Dynamic Timelines -- and I understand why it became your emphasis -- the ability to align the AI is the outcome that really matters in that situation.
That said, I see two major challenges with defining time through any form of operationalization standards:
First, operations can change (and should improve). Humans often shock themselves with what is possible to operationalize in crisis times (see output during WWII as a simple, clear example). This means that your time scale will fluctuate based on operational effectiveness and scale -- put differently, we don't have a sense of the scale of the operational input in the 'transition period'.
Second, we're in the nascent stages, where we cannot predict how much operational input is required for successful alignment. In other words, we have no way to judge how much operational input will get us the outcome (operational output) we desire.
Without either a clear operational input (capacity and effectiveness will likely fluctuate wildly over the coming years / decades) or a clear operational output (we do not have a clear sense of the amount of input needed to get our desired outcome), it's not a highly effective measure.
More to the point, not knowing either of these would lead to the conclusion that "we want more clock time" -- and perhaps more "crisis time" specifically -- to maximize our operational inputs and thus the probability of a successful alignment outcome.
--
All that said, I think you hit on a key idea here -- we need to make sure that we are measuring and tracking what actually matters: our ability to safely align the AI in question.
Curious if you think I missed something here?