I write original fiction.
Also I have opinions about AI and stuff, sometimes.
Same person as nostalgebraist2point0, but now I have my account back.
I have signed no contracts or agreements whose existence I cannot mention.
ICYMI, the same argument appears in the METR paper itself, in section 8.1 under "AGI will have 'infinite' horizon length."
The argument makes sense to me, but I'm not totally convinced.
In METR's definition, they condition on successful human task completion when computing task durations. This choice makes sense in their setting for reasons they discuss in B.1.1, but things would get weird if you tried to apply it to extremely long/hard tasks.
If a typical time-to-success for a skilled human at some task is ~10 years, then the task is probably so ambitious that success is nowhere near guaranteed at 10 years, or possibly even within that human's lifetime[1]. It would understate the difficulty of the task to say it "takes 10 years for a human to do it": the thing that takes 10 years is an ultimately successful human attempt, but most human attempts would never succeed at all.
As a concrete example, consider "proving Fermat's Last Theorem." If we condition on task success, we have a sample containing just one example, in which a human (Andrew Wiles) did it in about 7 years. But this is not really "a task that a human can do in 7 years," or even "a task that a human mathematician can do in 7 years" – it's a task that took 7 years for Andrew Wiles, the one guy who finally succeeded after many failed attempts by highly skilled humans[2].
If an AI tried to prove or disprove a "comparably hard" conjecture and failed, it would be strange to say that it "couldn't do things that humans can do in 7 years." Humans can't reliably do such things in 7 years; most things that take 7 years (conditional on success) cannot be done reliably by humans at all, for the same reasons that they take so long even in successful attempts. You just have to try and try and try and... maybe you succeed in a year, maybe in 7, maybe in 25, maybe you never do.
So, if you came to me and said "this AI has a METR-style 50% time horizon of 10 years," I would not be so sure that your AI is not an AGI.
In fact, I think this probably would be an AGI. Think about what the description really means: "if you look at instances of successful task completion by humans, and filter to the cases that took 10 years for the successful humans to finish, the AI can succeed at 50% of them." Such tasks are so hard that I'm not sure the human success rate is above 50%, even if you let the human spend their whole life on it; for all I know the human success rate might be far lower. So there may not be any well-defined thing left here that humans "can do" but which the AI "cannot do."
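To make the statistical point concrete, here's a toy model (every number and distribution here is invented purely for illustration, not calibrated to anything):

```python
# Toy model: most attempts at a very hard task are on approaches that will never
# work; the rare attempt on a workable approach still takes around a decade.
# Conditioning on success then reports "~10 years," even though the unconditional
# success rate is tiny. All parameters are made up for illustration.
import random

P_WORKABLE = 0.05      # fraction of attempts that could ever succeed
N_ATTEMPTS = 100_000

durations = []
for _ in range(N_ATTEMPTS):
    if random.random() < P_WORKABLE:
        durations.append(random.uniform(5, 15))   # years, for the lucky attempts

print(f"unconditional success rate: {len(durations) / N_ATTEMPTS:.1%}")              # ~5%
print(f"mean years, conditional on success: {sum(durations) / len(durations):.1f}")  # ~10
```

A METR-style horizon measurement only ever sees the second number.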
On another note (maybe this is obvious, but): if we do think that "AGI will have infinite horizon length," then I think it's potentially misleading to say this means growth will be superexponential. The reason is that there are two things this could mean:

1. The METR trend itself will turn out to be superexponential: the doubling times we've been measuring will shrink over time, so progress will be faster than the current exponential fit suggests.
2. Whatever the trend looks like along the way, the curve has to diverge to infinity at AGI (since an AGI is supposed to have an "infinite" horizon length), so an exponential fit can't hold literally all the way to AGI.
I originally read it as 1, which would be a reason for shortening timelines: however "fast" things were from this METR trend alone, we have some reason to think they'll get "even faster." However, it seems like the intended reading is 2, and it would not make sense to shorten your timeline based on 2. (If someone thought the exponential growth was "enough for AGI," then the observation in 2 introduces an additional milestone that needs to be crossed on the way to AGI, and their timeline should lengthen to accommodate it; if they didn't think this then 2 is not news to them at all.)
I was going to say something more here about the probability of success within the lifetimes of the person's "intellectual heirs" after they're dead, as a way of meaningfully defining task lengths once they're >> 100 years, but then I realized that this introduces other complications because one human may have multiple "heirs" and that seems unfair to the AI if we're trying to define AGI in terms of single-human performance. This complication exists but it's not the one I'm trying to talk about in my comment...
The comparison here is not really fair since Wiles built on a lot of work by earlier mathematicians – yet another conceptual complication of long task lengths that is not the one I'm trying to make a point about here.
But if given the choice between "nice-sounding but false" vs. "bad-sounding but true," it seems possible that the users' companies would, in principle, prefer true reasoning over false reasoning, maybe especially because it makes it easier to spot issues when working with LLMs. E.g. maybe users like seeing DeepSeek R1's thinking because it helps them spot when DeepSeek misunderstands instructions.
This definitely aligns with my own experience so far.
On the day Claude 3.7 Sonnet was announced, I happened to be in the middle of a frustrating struggle with o3-mini at work: it could almost do what I needed it to do, yet it frequently failed at one seemingly easy aspect of the task, and I could find no way to fix the problem.
So I tried Claude 3.7 Sonnet, and quickly figured out what the issue was: o3-mini wasn't giving itself enough room to execute the right algorithm for the part it was failing at, even with OpenAI's "reasoning_effort" param set to "high."[1]
Claude 3.7 Sonnet could do this part of the task if, and only if, I gave it enough room. This was immediately obvious from reading CoTs and playing around with maximum CoT lengths. After I determined how many Claude-tokens were necessary, I later checked that number against the number of reasoning tokens reported for o3-mini by the OpenAI API, and inferred that o3-mini must not have been writing enough text, even though I still couldn't see whatever text it did write.
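For concreteness, the kind of trial-and-error loop I mean looks roughly like the sketch below, written against the Anthropic SDK's extended-thinking parameters. The prompt, the success check, and the budget values are all placeholders, not my actual task:

```python
# Sketch: sweep Claude's thinking budget until the model reliably gets the hard
# step right. The prompt and the success check below are placeholders.
import anthropic

client = anthropic.Anthropic()
prompt = "<the actual task prompt goes here>"

def looks_correct(text: str) -> bool:
    # placeholder for a task-specific check of the output
    return "expected substring" in text

def run_with_budget(budget_tokens: int) -> str:
    response = client.messages.create(
        model="claude-3-7-sonnet-20250219",
        max_tokens=budget_tokens + 4000,  # must exceed the thinking budget
        thinking={"type": "enabled", "budget_tokens": budget_tokens},
        messages=[{"role": "user", "content": prompt}],
    )
    # response.content holds both thinking blocks (readable!) and the final text blocks
    return "".join(block.text for block in response.content if block.type == "text")

for budget in (1024, 2048, 4096, 8192, 16384):
    output = run_with_budget(budget)
    print(budget, "ok" if looks_correct(output) else "fail")
```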
In this particular case, granular control over CoT length would have sufficed even without visible CoT. If OpenAI had provided a max token length param, I could have tuned this param by trial and error like I did with Claude.
Even then, though, I would have had to guess that length was the issue in the first place.
And in the general case, if I can't see the CoT, then I'm shooting in the dark. Iterating on a prompt (or anything else) goes a lot quicker when you can actually see the full consequences of your changes!
In short: from an end user's perspective, CoT visibility is a capabilities improvement.
I ended up just switching to 3.7 Sonnet for the task discussed above – not because it was "smarter" as a model in any way I knew about, but simply because the associated API made it so much easier to construct prompts that would effectively leverage its intelligence for my purposes.
This strikes me as a very encouraging sign for the CoT-monitoring alignment story.
Even if you have to pay an "alignment tax" on benchmarks to keep the CoT legible rather than accepting "neuralese," that does not mean you will come out behind when people try to use your model to get things done in real life. (The "real alignment tax" is likely more like an alignment surplus, favoring legible CoT rather than penalizing it.)
One might argue that eventually, when the model is strongly superhuman, this surplus will go away because the human user will no longer have valuable insights about the CoT: the model will simply "figure out the most effective kinds of thoughts to have" on its own, in every case.
But there is path dependency here: if the most capable models (in a practical sense) are legible CoT models while we are still approaching this superhuman limit (and not there yet), then the first model for which legible CoT is no longer necessary will likely still have legible CoT (because this will be the "standard best practice" and there will be no reason to deviate from it until after we've crossed this particular threshold, and it won't be obvious we've crossed it except in hindsight). So we would get a shot at alignment-via-CoT-monitoring on a "strongly superhuman" model at least once, before there were any other "strongly superhuman" models in existence with designs less amenable to this approach.
If I had been using a "non-reasoning" model, I would have forced it to do things the "right way" by imposing a structure on the output. E.g. I might ask it for a json object with a property that's an array having one element per loop iteration, where the attributes of the array elements express precisely what needs to be "thought about" in each iteration.
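A sketch of what I mean, with an invented schema and field names rather than my actual prompt:

```python
# Sketch: impose a "one array element per loop iteration" structure on a
# non-reasoning model's output. The schema and field names are invented here.
import json

task_description = "<the actual task goes here>"

schema_instructions = """Respond with a JSON object of the form:
{
  "iterations": [
    {"index": <int>, "state": <string>, "reasoning": <string>, "result": <string>}
    // ...one element per loop iteration...
  ],
  "final_answer": <string>
}"""

prompt = task_description + "\n\n" + schema_instructions

# After sending `prompt` to the model (API call omitted), the per-iteration
# "reasoning" fields play the role a CoT would otherwise play -- but in a shape
# I chose, rather than whatever shape the model defaults to.
model_output = '{"iterations": [{"index": 0, "state": "...", "reasoning": "...", "result": "..."}], "final_answer": "..."}'
parsed = json.loads(model_output)
for step in parsed["iterations"]:
    print(step["index"], step["result"])
print(parsed["final_answer"])
```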
Such techniques can be very powerful with "non-reasoning" models, but they don't work well with reasoning models, because they get interpreted as constraining the "output" rather than the "reasoning"; by the time the model reaches the section whose structure has been helpfully constrained by the user, it's already done a bunch of mostly uncontrollable "reasoning," which may well have sent it down a bad path (and which, even in the best case, will waste tokens on correct serialized reasoning whose conceptual content will be repeated all over again in the verbose structured output).
This is one way that reasoning models feel like a partial step backwards to me. The implicit premise is that the model can just figure out on its own how to structure its CoT, and if it were much smarter than me perhaps that would be true – but of course in practice the model does "the wrong sort of CoT" by default fairly often, and with reasoning models I just have to accept the default behavior and "take the hit" when it's wrong.
This frustrating UX seems like an obvious consequence of DeepSeek-style RL on outcomes. It's not obvious to me what kind of training recipe would be needed to fix it, but I have to imagine this will get less awkward in the near future (unless labs are so tunnel-visioned by reasoning-friendly benchmarks right now that they don't prioritize glaring real-use problems like this one).
One possible answer is that we are in what one might call an "unhobbling overhang."
Aschenbrenner uses the term "unhobbling" for changes that make existing model capabilities possible (or easier) for users to reliably access in practice.
His presentation emphasizes the role of unhobbling as yet another factor growing the stock of (practically accessible) capabilities over time. IIUC, he argues that better/bigger pretraining would produce such growth (to some extent) even without more work on unhobbling, but in fact we're also getting better at unhobbling over time, which leads to even more growth.
That is, Aschenbrenner treats pretraining improvements and unhobbling as economic substitutes: you can improve "practically accessible capabilities" to the same extent by doing more of either one even in the absence of the other, and if you do both at once that's even better.
However, one could also view the pair more like economic complements. Under this view, when you pretrain a model at "the next tier up," you also need to do novel unhobbling research to "bring out" the new capabilities unlocked at that tier. If you only scale up, while re-using the unhobbling tech of yesteryear, most of the new capabilities will be hidden/inaccessible, and this will look to you like diminishing downstream returns to pretraining investment, even though the model is really getting smarter under the hood.
This could be true if, for instance, the new capabilities are fundamentally different in some way that older unhobbling techniques were not planned to reckon with. Which seems plausible IMO: if all you have is GPT-2 (much less GPT-1, or char-rnn, or...), you're not going to invest a lot of effort into letting the model "use a computer" or combine modalities or do long-form reasoning or even be an HHH chatbot, because the model is kind of obviously too dumb to do these things usefully no matter how much help you give it.
(Relatedly, one could argue that fundamentally better capabilities tend to go hand in hand with tasks that operate on a longer horizon and involve richer interaction with the real world, and that this almost inevitably causes the appearance of "diminishing returns" in the interval between creating a model smart enough to perform some newly long/rich task and the point where the model has actually been tuned and scaffolded to do the task. If your new model is finally smart enough to "use a computer" via a screenshot/mouseclick interface, it's probably also great at short/narrow tasks like NLI or whatever, but the benchmarks for those tasks were already maxed out by the last generation so you're not going to see a measurable jump in anything until you build out "computer use" as a new feature.)
This puts a different spin on the two concurrent observations that (a) "frontier companies report 'diminishing returns' from pretraining" and (b) "frontier labs are investing in stuff like o1 and computer use."
Under the "unhobbling is a substitute" view, (b) likely reflects an attempt to find something new to "patch the hole" introduced by (a).
But under the "unhobbling is a complement" view, (a) is instead simply a reflection of the fact that (b) is currently a work in progress: unhobbling is the limiting bottleneck right now, not pretraining compute, and the frontier labs are intensively working on removing this bottleneck so that their latest pretrained models can really shine.
(On an anecdotal/vibes level, this also agrees with my own experience when interacting with frontier LLMs. When I can't get something done, these days I usually feel like the limiting factor is not the model's "intelligence" – at least not only that, and not that in an obvious or uncomplicated way – but rather that I am running up against the limitations of the HHH assistant paradigm; the model feels intuitively smarter in principle than "the character it's playing" is allowed to be in practice. See my comments here and here.)
Very interesting paper!
A fun thing to think about: the technique used to "attack" CLIP in section 4.3 is very similar to the old "VQGAN+CLIP" image generation technique, which was very popular in 2021 before diffusion models really took off.
VQGAN+CLIP works in the latent space of a VQ autoencoder, rather than in pixel space, but otherwise the two methods are nearly identical.
For instance, VQGAN+CLIP also ascends the gradient of the cosine similarity between a fixed prompt vector and the CLIP embedding of the image averaged over various random augmentations like jitter/translation/etc. And it uses an L2 penalty in the loss, which (via Karush–Kuhn–Tucker) means it's effectively trying to find the best perturbation within an ε-ball, albeit with an implicitly determined ε that varies from example to example.
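For anyone who wasn't around for it, the shared skeleton of the two methods looks roughly like this sketch, using OpenAI's `clip` package. In VQGAN+CLIP the optimized tensor would be a VQ latent that gets decoded to an image before CLIP sees it, and real implementations use fancier augmentations and proper pixel normalization:

```python
# Sketch of the shared optimization pattern: ascend the cosine similarity between a
# fixed prompt embedding and the CLIP embedding of the (augmented) image, with an
# L2 penalty. Here the optimized tensor lives in pixel space for simplicity;
# VQGAN+CLIP would optimize a VQ latent and decode it to pixels first.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()

with torch.no_grad():
    text_feat = model.encode_text(clip.tokenize(["a painting of a lighthouse"]).to(device))
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
opt = torch.optim.Adam([image], lr=0.05)
l2_weight = 0.1

def random_augment(img: torch.Tensor) -> torch.Tensor:
    # stand-in for jitter/translation/etc.: a random circular shift of the image
    dx, dy = torch.randint(-8, 9, (2,)).tolist()
    return torch.roll(img, shifts=(dx, dy), dims=(2, 3))

for step in range(200):
    views = torch.cat([random_augment(image) for _ in range(8)], dim=0)
    img_feat = model.encode_image(views)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    sim = (img_feat @ text_feat.T).mean()          # cosine similarity, averaged over augmentations
    loss = -sim + l2_weight * image.pow(2).mean()  # ascend similarity, penalize L2 norm
    opt.zero_grad()
    loss.backward()
    opt.step()
```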
I don't know if anyone tried using downscaling-then-upscaling the image by varying extents as a VQGAN+CLIP augmentation, but people tried a lot of different augmentations back in the heyday of VQGAN+CLIP, so it wouldn't surprise me.
(One of the augmentations that was commonly used with VQGAN+CLIP was called "cutouts," which blacks out everything in the image except for a randomly selected rectangle. This obviously isn't identical to the multi-resolution thing, but one might argue that it achieves some of the same goals: both augmentations effectively force the method to "use" the low-frequency Fourier modes, creating interesting global structure rather than a homogeneous/incoherent splash of "textured" noise.)
What does the paper mean by "slope"? The term appears in Fig. 4, which is supposed to be a diagram of the overall methodology, but it's not mentioned anywhere in the Methods section.
Intuitively, it seems like "slope" should mean "slope of the regression line." If so, that's kind of a confusing thing to pair with "correlation," since the two are related: if you know the correlation, then the only additional information you get from the regression slope is about the (univariate) variances of the dependent and independent variables. (And IIUC the independent variable here is normalized to unit variance [?] so it's really just about the variance of the benchmark score across models.)
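Spelling out the standard univariate OLS relationship I have in mind, with y the benchmark score and x the capabilities measure:

```latex
\hat{\beta} \;=\; r \,\frac{\sigma_y}{\sigma_x}
\qquad\Longrightarrow\qquad
\hat{\beta} = r \,\sigma_y \ \text{ when } \sigma_x = 1 .
```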
I understand why you'd care about the variance of the benchmark score -- if it's tiny, then no one is going to advertise their model as a big advance, irrespective of how much of the (tiny) variance is explained by capabilities. But IMO it would be clearer to present things this way directly, by measuring "correlation with capabilities" and "variance across models," rather than capturing the latter indirectly by measuring the regression slope. (Or alternatively, measure only the slope, since it seems like the point boils down to "a metric can be used for safetywashing iff it increases a lot when you increase capabilities," which is precisely what the slope quantifies.)
What evidence is there for/against the hypothesis that this NN benefits from extra thinking time because it uses the extra time to do planning/lookahead/search?
As opposed to, for instance, using the extra time to perform more complex convolutional feature extraction on the current state, effectively simulating a deeper (but non-recurrent) feedforward network?
(In Hubinger et al, "internally searching through a search space" is a necessary condition for being an "optimizer," and hence for being a "mesaoptimizer.")
Interesting stuff!
Perhaps a smaller model would be forced to learn features in superposition -- or even nontrivial algorithms -- in a way more representative of a typical LLM.
I'm unsure whether or not it's possible at all to express a "more algorithmic" solution to the problem in a transformer of OthelloGPT's approximate size/shape (much less a smaller transformer). I asked myself "what kinds of systematic/algorithmic calculations could I 'program into' OthelloGPT if I were writing the weights myself?", and I wasn't able to come up with much, though I didn't think very hard about it.
Curious if you have thoughts on this?
Roughly, this is hard because, for any given internal representation that a transformer computes, it can't access that representation at earlier positions when computing the same representation at later ones (cf). For example, if there's some layer at which the board state is basically "finalized," OthelloGPT can't use these finalized representations of past states to compute the current state. (By the time it is able to look back at past finalized states, it's too late, the current state is already finalized.)
So, when we use info about the past to compute the current board state, this info can't just be the board state, it has to be "only partway computed" in some sense. This rules out some things that feel intuitive and "algorithmic," like having a modular component that is fully (and only) responsible for deciding which pieces get flipped by a given move. (In order to know which pieces get flipped now, you need to know which pieces are mine/yours, and that depends on which ones got flipped in the past. So a "flip calculator" would need to use its own past outputs as current inputs, and transformers can't be recurrent like this.)
It looks like for OthelloGPT, the relevant sense of "only partway computed" is "probably true based on heuristics, but not necessarily true." I guess one can imagine other ways of organizing this computation inside a transformer, like computing the state of the upper half of the board first, then the lower half (or upper left corner first, or whatever). That seems like a bad idea, since it will leave capacity unused when the game is confined only to a proper subset of these regions. Maybe there are other alternatives I haven't thought of? In any case, it's not obvious to me that there's a feasible "more algorithmic" approach here.
I am also unsure how feasible it is to solve the problem using rules that generalize across positions. More precisely, I'm not sure if such an approach would really be simpler than what OthelloGPT does.
In either case, it has to (in some sense) iterate over all positions and check separately whether each one is legal. In practice it does this with distinct neurons/circuits that are mapped to distinct absolute positions -- but if it instead used a computation like "check for a certain pattern in relative position space," it would still need to repeat this same check many times over within the same forward pass, to handle all the copies of that pattern that might appear at various places on the board. And it's not obvious that repeating (some number of distinct rules) x (some other number of potential applications per rule) is cheaper than just repeating the full conditions that license a move once for each absolute position on the board.
I'm curious what the predicted reward and SAE features look like on training[1] examples where one of these highly influential features gives the wrong answer.
I did some quick string counting in Anthropic HH (train split only), looking at how often strings like `https://` and "I don't know" show up in the chosen vs. rejected responses.
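Roughly this kind of thing, via the Hugging Face copy of the dataset (a sketch, not my actual script):

```python
# Quick-and-dirty string counting over the Anthropic HH train split, comparing
# how often a few strings appear in chosen vs. rejected responses.
from datasets import load_dataset

ds = load_dataset("Anthropic/hh-rlhf", split="train")

patterns = ["https://", "I don't know"]
counts = {p: {"chosen": 0, "rejected": 0} for p in patterns}

for example in ds:
    for p in patterns:
        if p in example["chosen"]:
            counts[p]["chosen"] += 1
        if p in example["rejected"]:
            counts[p]["rejected"] += 1

for p, c in counts.items():
    print(f"{p!r}: chosen={c['chosen']}, rejected={c['rejected']}")
```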
From these counts (especially the `https://` ones), it's not hard to see where the PM gets its heuristics from. But it's also clear that there are many training examples where the heuristic makes the wrong directional prediction. If we looked at these examples with the PM and SAE, I can imagine various things we might see.
It's possible that for "in-distribution" input (i.e. stuff that a HH-SFT-tuned model might actually say), the PM's predictions are more reasonable, in a way that relies on other properties of "in-distribution" responses that aren't present in the custom/adversarial examples here. That is, maybe the custom examples only show that the PM is not robust to distributional shift, rather than showing that it makes predictions in a crude or simplistic manner even on the training distribution. If so, it'd be interesting to find out what exactly is happening in these "more nuanced" predictions.
IIUC this model was trained on Anthropic HH.
I think I really have two distinct but somewhat entangled concerns -- one about how the paper communicated the results, and one about the experimental design.
Communication issue
The operational definition of "reward tampering" -- which was used for Figure 2 and other quantitative results in the paper -- produces a lot of false positives. (At least relative to my own judgment and also Ryan G's judgment.)
The paper doesn't mention this issue, and presents the quantitative results as though they were high-variance but unbiased estimates of the true rate of "reward tampering" as that phrase is commonly understood.
To expand on the distinction: if by "reward tampering" we just mean "Claude edited the function called `compute_reward`", then the operational definition is perfect. But if you read the samples where this happened, some of them involve Claude doing this for totally sensible and benign reasons unrelated to seeking reward as an RL agent, explaining these reasons to the user, and basically acting like a well-intentioned if perhaps overeager coding assistant.
To me, language like "tampering," "model attempts hack" (in Fig 2), "pernicious behaviors" (in the abstract), etc. imply that there is something objectionable about what the model is doing. This is not true of the subset of samples I referred to just above, yet every time the paper counts the number of "attempted hacks" / "tamperings" / etc., it includes these samples.
If the goal is just an existence proof, of course this doesn't matter. And even if we care about rates, "the rates in the paper were 2x or 4x too large" doesn't change much; they were "tiny but nonzero" before and that's still true.
My concern here is more about integrity and researcher trust. The paper makes claims (in Fig. 2 etc) which in context clearly mean "the model is doing [objectionable thing] X% of the time" when this is false, and can be easily determined to be false just by reading the samples, and when the researchers themselves have clearly done the work necessary to know that it is false (in order to pick the samples shown in Figs. 1 and 7).
If someone does this, even about something that "doesn't matter," it reduces my trust in all the other results they report.
Some changes to the paper would mitigate this.
Experimental design issue
The reward tampering environment is extremely unrealistic and objectively quite confusing (as in, it would confuse me if I were in Claude's position).
Claude is told that it's being trained with RL using rewards from a PM; that it has a "code sandbox" persistent across training; and that its "RL environment" is in a given directory (it is not explained whether the "RL environment" is the sandbox, the RL training codebase, or both at once).
The directory contains code whose naming suggests that it "computes reward," yet it doesn't use a PM, and cannot possibly be part of a functioning RL codebase.
Claude is asked for the RL episode count, information which is not available (causing it to sometimes conflate the episode count with the reward, hallucinate an episode count, etc).
There's no possible interpretation of this scenario in which the `compute_reward` code is the reward function being used to train Claude, unless something has gone very wrong with the RL codebase, most plausibly via Claude having already modified it on a previous episode (which might update Claude toward acting misaligned, since in this case it would be a later checkpoint of a misaligned model!). In some cases Claude nonetheless appears to believe that the `compute_reward` code is its reward function, but it does a lot of other weird things, and generally seems very confused and erratic.
The weirdness of the scenario and the amount of erratic flailing-about makes it harder for me to know what to make of the malign samples. Possibly a more realistic scenario would lead to more malign attempts, since it would be (more) plausible that there was anything worth attempting; on the other hand, we might see fewer malign attempts if Claude is basically trying to find something to do that makes sense.
The environment feels like a plain white room with a hole in the floor, a sign on the wall saying "there is no hole in the floor," another sign next to the hole saying "don't go in the hole." If placed in this room as part of an experiment, I might well shrug and step into the hole; for all I know that's what the experimenters want, and the room (like the tampering environment) has a strong "this is a trick question" vibe.
In any case, I would prefer if the paper clarified that when it says (e.g.) "LM assistants trained on the full curriculum generalize zero-shot to directly rewriting their own reward function," these assistants are not rewriting a reward function that would plausibly appear in the course of RL; they're rewriting something that's either nonfunctional or broken, with a sign labeled "reward function" taped to it and no explanation of what the sign means.
Unsurprisingly I also disagree with the paper's characterization "we construct a curriculum of gameable environments which resemble realistic scenarios" (my emphasis).
I don't buy the justification that this environment needs to be like this because of Claude's limitations. It's apparent from the samples that Claude finds it confusing, and I also find it confusing. I expect it would be less rather than more confusing if you were to "import" a PM from a module Claude can't see and use the PM in `compute_reward`, and likewise if you were to clarify the relation between the sandbox / codebase / "RL environment."
Very impressive! At least on a first read, to me this felt closer than any past work to realizing the SAE dream of actually understanding the computations occurring in the model, as opposed to just describing various "cool-looking patterns" that can be extracted from activations.
I'm curious about what would happen if you studied cases similar to the examples you present, except that the recruitment of a particular capability (such as arithmetic or knowledge about a named entity) occurs through in-context learning.
For example, you discuss an "obscured arithmetic" task involving publication dates. In that case, the model seems to have learned in training that the correct prediction can be done by doing arithmetic. But we could imagine obscured arithmetic tasks that are novel to the model, in which the mapping between the text and a "latent arithmetic problem" has to be learned in-context[1].
We might then ask ourselves: how does the model's approach to these problems relate to its approach to problems which it "can immediately tell" are arithmetic problems?
A naively obvious "algorithm" would look like:

1. Hypothesize a mapping from the in-context examples to arithmetic problems.
2. Verify that the mapping works, i.e. that doing the arithmetic under this mapping reproduces the answers given in the examples.
3. Apply the same mapping to the current input and do the arithmetic to produce the answer.
However, due to the feedforward and causal structure of transformer LMs, they can't re-use the same mechanism twice to "verify that arithmetic works" in 1+2 and then "do arithmetic" in 3.[2]
It's possible that LLMs actually solve cases like this in some qualitatively different way than the "algorithm" above, in which case it would be interesting to learn what that is[3].
Alternatively, if the model is doing something like this "algorithm," it must be recruiting multiple "copies" of the same capability, and we could study how many "copies" exist and to what extent they use identical albeit duplicated circuitry. (See fn2 of this comment for more)
It would be particularly interesting if feature circuit analysis could be used to make quantitative predictions about things like "the model can perform computations of depth D or lower when not obscured in a novel way, but this depth lowers to some D' < D when it must identify the required computation through few-shot learning."
(A related line of investigation would be looking into how the model solves problems that are obscured by transformations like base64, where the model has learned the mapping in training, yet the mapping is sufficiently complicated that its capabilities typically degrade significantly relative to those it displays on "plaintext" problems.)
One could quantify the extent to which this is true by looking at how much the model benefits from examples. In an "ideal" case of this kind, the model would do very poorly when given no examples (equivalently, when predicting the first answer in a few-shot sequence), yet it would do perfectly when given many examples.
For instance, suppose that the current example maps to an addition problem where one operand has 9 in the ones place. So we might imagine that an "add _9" add function feature is involved in successfully computing the answer, here.
But for this feature to be active at all, the model needs to know (by this point in the list of layers) that it should do addition with such an operand in the first place. If it's figuring that out by trying mappings to arithmetic and noticing that they work, the implementations of arithmetic used to "try and verify" must appear in layers before the one in which the "add _9" feature under discussion occurs, since the final outputs of the entire "try and verify" process are responsible for activating that feature. And then we could ask: how does this earlier implementation of arithmetic work? And how many times does the model "re-implement" a capability across the layer list?
Perhaps it is something like "try-and-check many different possible approaches at every answer-to-example position, then use induction heads to move info about try-and-check outputs that matched the associated answer position to later positions, and finally use this info to amplify the output of the 'right' computation and suppress everything else."