All of Nora Belrose's Comments + Replies

Evan Hubinger
I'm going to stop responding to you now, because it seems that you are just not reading anything that I am saying. For the last time, my criticism has absolutely nothing to do with Solomonoff induction in particular, as I have now tried to explain to you here and here and here etc. Yes—that's exactly the sort of counting argument that I like! Though note that it can be very hard to reason properly about counting arguments once you're using a prior like that; it gets quite tricky to connect those sorts of low-level properties to high-level properties about stuff like deception.

What makes you think that's intended to be a counting argument over function space? I usually think of this as a counting argument over infinite bitstrings

I definitely thought you were making a counting argument over function space, and AFAICT Joe also thought this in his report.

The bitstring version of the argument, to the extent I can understand it, just seems even worse to me. You're making an argument about one type of learning procedure, Solomonoff induction, which is physically unrealizable and AFAICT has not even inspired any serious real-world appr...

I definitely thought you were making a counting argument over function space

I've argued multiple times that Evan was not intending to make a counting argument in function space:

  • In discussion with Alex Turner (TurnTrout) when commenting on an earlier draft of this post.
  • In discussion with Quintin after sharing some comments on the draft. (Also shared with you TBC.)
  • In this earlier comment.

(Fair enough if you never read any of these comments.)

As I've noted in all of these comments, people consistently use terminology when making counting style argumen...

Evan Hubinger
Sorry about that—I wish you had been at the talk and could have asked a question about this. I agree that Solomonoff induction is obviously wrong in many ways, which is why you want to substitute it for whatever prior you think is closest to deep learning while still being tractable to reason about theoretically. But that should never lead you to do a counting argument over function space, since that is never a sound thing to do.

Right, and I've explained why I don't think any of those analyses are relevant to neural networks. Deep learning simply does not search over Turing machines or circuits of varying lengths. It searches over parameters of an arithmetic circuit of fixed structure, size, and runtime. So Solomonoff induction, speed priors, and circuit priors are all inapplicable.

It is trivially easy to modify the formalism to search only over fixed-size algorithms, and in fact that's usually what I do when I run this sort of analysis. I feel like you still aren't understandi...

Then show me how! If you think there are errors in the math, please point them out.

I'm not aware of any actual math behind the counting argument for scheming. I've only ever seen handwavy informal arguments about the number of Christs vs Martin Luthers vs Blaise Pascals. There certainly was no formal argument presented in Joe's extensive scheming report, which I assumed would be sufficient context for writing this essay.

Evan Hubinger
Well, I presented a very simple formulation in my comment, so that could be a reasonable starting point. But I agree that unfortunately there hasn't been that much good formal analysis here that's been written up. At least on my end, that's for two reasons:

  1. Most of the formal analysis of this form that I've published (e.g. this and this) has been focused on sycophancy (human imitator vs. direct translator) rather than deceptive alignment, as sycophancy is a substantially more tractable problem. Finding a prior that reasonably rules out deceptive alignment seems quite out of reach to me currently; at one point I thought a circuit prior might do it, but I now think that circuit priors don't get rid of deceptive alignment.
  2. I'm currently more optimistic about empirical evidence rather than theoretical evidence for resolving this question, which is why I've been focusing on projects such as Sleeper Agents.

I obviously don't think the counting argument for overfitting is actually sound, that's the whole point. But I think the counting argument for scheming is just as obviously invalid, and misuses formalisms just as egregiously, if not moreso.

I deny that your Kolmogorov framework is anything like "the proper formalism" for neural networks. I also deny that the counting argument for overfitting is appropriately characterized as a "finite bitstring" argument, because that suggests I'm talking about Turing machine programs of finite length, which I'm not; I'm directly enumerating functions on a subset of the natural numbers. Are you saying the set of functions on {1, ..., 10,000} is not a well-defined mathematical object?
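For concreteness, here is a minimal version of that enumeration (a sketch; the binary codomain is an arbitrary choice, made only to show that the counted set is a perfectly ordinary finite object):

```latex
\[
  \mathcal{F} = \{\, f : \{1,\dots,10000\} \to \{0,1\} \,\}, \qquad |\mathcal{F}| = 2^{10000}.
\]
\[
  \#\{\, f \in \mathcal{F} : f|_{S} = f^{*}|_{S} \,\} = 2^{10000 - |S|}
  \quad \text{for a training set } S \subset \{1,\dots,10000\} \text{ and true labeling } f^{*}.
\]
```

Only one of those $2^{10000 - |S|}$ training-consistent functions also agrees with $f^{*}$ off $S$, which is the sort of ratio the overfitting version of the counting argument trades on.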

I obviously don't think the counting argument for overfitting is actually sound, that's the whole point.

Yes, I'm well aware. The problem is that when you make the counting argument for overfitting, you do so in a way that seriously misuses the formalism, which is why the argument fails. So you can't draw any lessons about counting arguments for deception from the failure of your counting argument for overfitting.

But I think the counting argument for scheming is just as obviously invalid, and misuses formalisms just as egregiously, if not moreso.

Then...

I never used any kind of bitstring analysis.

Evan Hubinger
Yes, that's exactly the problem: you tried to make a counting argument, but because you didn't engage with the proper formalism, you ended up using reasoning that doesn't actually correspond to any well-defined mathematical object. Analogously, it's like you wrote an essay about why 0.999... != 1 and your response to "under the formalism of real numbers as Dedekind cuts, those are identical" was "where did I say I was referring to Dedekind cuts?" It's fine if you don't want to use the standard formalism, but you need some formalism to anchor your words to, otherwise you're just pushing around words with no real way to ensure that your words actually correspond to something. I think the 0.999... != 1 analogy is quite apt here, because the problem really is that there is no formalism under which 0.999... != 1 that looks anything like the real numbers that you know, in the same way that there really is no formalism under which the sort of reasoning that you're using is meaningful.
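(For readers who want the analogy spelled out: under the standard real numbers, the identity in question is just a geometric series,

```latex
\[
  0.\overline{9} \;=\; \sum_{n=1}^{\infty} 9 \cdot 10^{-n}
  \;=\; 9 \cdot \frac{1/10}{1 - 1/10} \;=\; 1,
\]
```

and the point of the analogy is that the question only has a determinate answer once some formalism fixes what the infinite expansion means.)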

I think the infinite bitstring case has zero relevance to deep learning.

There does exist a concept you might call "simplicity" which is relevant to deep learning. The neural network Gaussian process (NNGP) describes the prior distribution over functions induced by the initialization distribution over neural net parameters. Under weak assumptions about the activation function and initialization variance, the NNGP is biased toward lower-frequency functions. I think this cuts against scheming, and we plan to write up a post on this in the next month or two.
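As a rough illustration of the kind of bias meant here (a toy sketch, not the analysis in the forthcoming post; the architecture, weight scalings, and frequency cutoff below are arbitrary illustrative choices):

```python
import numpy as np

# Toy sketch: sample functions from the prior induced by random initialization of a
# one-hidden-layer tanh MLP and compare their energy at low vs. high frequencies.
rng = np.random.default_rng(0)
width, n_samples = 1024, 200
x = np.linspace(-1, 1, 256)[None, :]            # 1-D inputs on a grid, shape (1, 256)

low_energy, high_energy = [], []
for _ in range(n_samples):
    W1 = rng.normal(0.0, 1.0, size=(width, 1))                    # input-to-hidden weights
    b1 = rng.normal(0.0, 1.0, size=(width, 1))                    # hidden biases
    W2 = rng.normal(0.0, 1.0 / np.sqrt(width), size=(1, width))   # hidden-to-output weights
    f = (W2 @ np.tanh(W1 @ x + b1))[0]                            # one sampled function on the grid
    spectrum = np.abs(np.fft.rfft(f)) ** 2
    low_energy.append(spectrum[:8].sum())                         # lowest few frequencies
    high_energy.append(spectrum[8:].sum())                        # everything above that

# Ratio >> 1: functions drawn from the initialization prior are heavily low-frequency.
print(np.mean(low_energy) / np.mean(high_energy))
```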

Evan Hubinger
I think you are still not really understanding my objection. It's not that there is a "finite bitstring case" and an "infinite bitstring case". My objection is that the sort of finite bitstring analysis that you use does not yield any well-defined mathematical object that you could call a prior, and certainly not one that would predict generalization.

Thanks for the reply. A couple remarks:

  • "indifference over infinite bitstrings" is a misnomer in an important sense, because it's literally impossible to construct a normalized probability measure over infinite bitstrings that assigns equal probability to each one. What you're talking about is the length weighted measure that assigns exponentially more probability mass to shorter programs. That's definitely not an indifference principle, it's baking in substantive assumptions about what's more likely.
  • I don't see why we should expect any of this reasoning...

"indifference over infinite bitstrings" is a misnomer in an important sense, because it's literally impossible to construct a normalized probability measure over infinite bitstrings that assigns equal probability to each one. What you're talking about is the length weighted measure that assigns exponentially more probability mass to shorter programs. That's definitely not an indifference principle, it's baking in substantive assumptions about what's more likely.

No; this reflects a misunderstanding of how the universal prior is traditionally derived in i...
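For reference, the construction usually meant here (the textbook derivation; whether it is the right object to reason about for deep learning is exactly what's under dispute): feed i.i.d. fair coin flips to a universal prefix machine $U$, so that the uniform measure over infinite bit sequences induces the distribution

```latex
\[
  m(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-\ell(p)}
\]
```

over outputs; the "indifference" lives over the raw coin-flip sequences, while the induced measure over the programs one actually conditions on is exponentially length-weighted.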

With respect to which measure, though? You have to define a measure; there are infinitely many possible measures you could define on this space. And then we'll have to debate whether your measure is a good one.

Daniel Kokotajlo
The actual measure that nature uses to determine the model weights at the end of training -- taking into account the random initialization and also the training process. I'm talking about the (not-yet-fully-understood) inductive biases of neural networks in practice.

It depends what you mean by a "way" the model can overfit.

Really we need to bring in measure theory to rigorously talk about this, and an early draft of this post actually did introduce some measure-theoretic concepts. Basically we need to define:

  • Which set we're talking about,
  • Which measure we're using over that set,
  • And how that measure relates to the probability measure over possible AIs.

The English locution "lots of ways to do X" can be formalized as "the measure of X-networks is high." And that's going to be an empirical claim that we can actually debate.
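One way to write down the minimal version of that setup (a sketch; the measure-theoretic machinery in the earlier draft may have looked different):

```latex
\[
  \Theta \subseteq \mathbb{R}^{n} \ \text{(parameter space)}, \qquad
  \mu \ \text{a probability measure on } \Theta \ \text{(e.g. at initialization, or after training)},
\]
\[
  A_X := \{\, \theta \in \Theta : f_{\theta} \text{ does } X \,\}, \qquad
  \text{``lots of ways to do } X\text{''} \ \leadsto\ \mu(A_X) \text{ is large.}
\]
```

Whether $\mu(A_X)$ is in fact large is an empirical question about $\mu$, not something that follows from the cardinality of $A_X$ alone.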

Daniel Kokotajlo
I think I mean the same thing you do? "The measure of X-networks is high."

The point of that section is that "goals" are not ontologically fundamental entities with precise contents; in fact, they could not possibly be so given a naturalistic worldview. So you don't need to "target the inner search"; you just need to get the system to act the way you want in all the relevant scenarios.

The modern world is not a relevant scenario for evolution. "Evolution" did not need to, was not "intending to," and could not have designed human brains so that they would do high-inclusive-genetic-fitness stuff even when the environment changes dramatically and culture becomes completely different from the ancestral environment.

I doubt there would be much difference, and I think the alignment-relevant comparison is between in-distribution but out-of-sample performance and out-of-distribution performance. We can easily do i.i.d. splits of our data; that's not a problem. You might think it's a problem to directly test the model in scenarios where it could legitimately execute a takeover if it wanted to.

Donald Hobson
Taking i.i.d. samples can actually be hard. Suppose you train an LLM on news articles, and each important real-world event has 10 basically identical articles written about it. Then a random split of the articles will leave the network being tested mostly on the same newsworthy events that were in the training data. It passes the test even if it's hopeless at predicting new events and can only generate new articles about the same ones. When data duplication is extensive, making a meaningful train/test split is hard. If the data were perfect copy-and-paste duplicates, they could be filtered out, but often things are rephrased a bit.
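A sketch of the standard mitigation when it's available: split by event rather than by article, so near-duplicates never straddle the train/test boundary. The event IDs below are made up, and constructing them is exactly the hard part when duplicates are rephrased rather than copy-pasted; in practice you'd need some form of near-duplicate detection to build them.

```python
from sklearn.model_selection import GroupShuffleSplit

# Group-aware split: all articles about the same real-world event end up entirely
# in train or entirely in test, so memorized events can't leak into the test set.
articles = [f"article {i} ..." for i in range(8)]   # placeholder corpus
event_ids = [0, 0, 0, 1, 1, 2, 2, 3]                # which event each article covers (assumed known)

splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(articles, groups=event_ids))
# train_idx and test_idx now index disjoint sets of events, not just disjoint articles.
```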

I don't think the results you cited matter much, because fundamentally the paper is considering a condition in which the model ~always is being pre-prompted with "Current year: XYZ" or something similar in another language (please let me know if that's not true, but that's my best-effort read of the paper).

I'm assuming we're not worried about the literal scenario in which the date in the system prompt causes a distribution shift, because you can always spoof the date during training to include future years without much of a problem. Rather, the AI needs to...

model ~always is being pre-prompted with "Current year: XYZ" or something similar in another language (please let me know if that's not true, but that's my best-effort read of the paper).

The backdoors tested are all extremely simple: I think it's literally 1 token in a particular location (either 2024 or DEPLOYMENT). (ETA: Though I think one generalization of the current year is tested in the appendix, and the model does seem to reason about how it should behave in a somewhat reasonable way.)

This is one place for improvement in future work.

So, I think this is wrong.

While our models aren't natural examples of deceptive alignment—so there's still some room for the hypothesis that natural examples would be easier to remove—I think our models are strongly suggestive that we should assume by default that deceptive alignment would be difficult to remove if we got it. At the very least, I think our results push the burden of proof to the other side: in the most analogous case that we've seen so far, removing deception can be very hard, so it should take some extra reason to believe that wouldn't co...

Rather, they predict that the AI will develop an "inner goal" which it coherently pursues across contexts. That means there's not going to be a single activating context for the bad behavior (like in this paper, where it's just "see text that says the year is 2024" or "special DEPLOYMENT token") but rather the behavior would be flexibly activated in a wide range of contexts depending on the actual likelihood of the AI succeeding at staging a coup.

I agree, and I agree that our models are not quite fully there. But I think they're a lot closer to that tha...

(Didn't consult Quintin on this; I speak for myself)

I flatly deny that our arguments depend on AGI being anything like an LLM. I think the arguments go through in a very wide range of scenarios, basically as long as we're using some kind of white-box optimization to align them, rather than e.g. carrot-and-stick incentives or prompt engineering. Even if we only relied on prompt engineering, I think we'd be in a better spot than with humans (because we can run many controlled experiments).

A human can harbor a secret desire for years, never acting on it, and...

FWIW it appears that out of the 4 differences you cited here, only one of them (the relaxation of the restriction that the scrubbed output must be the same) still holds as of this January paper from Geiger's group https://arxiv.org/abs/2301.04709. So the methods are even more similar than you thought.

I think it would be a distraction to try to figure out if LMs are "phenomenally conscious" for a few different reasons.

  1. I think there are pretty strong reasons to believe that phenomenal consciousness is not actually a substantive property, in the sense that either everything has it in some sense (panpsychism) or nothing does (eliminativism). Any other solution confronts the Hard Problem and the empirical intractability of actually figuring out which things are or are not phenomenally conscious.
  2. Your proposed tests for phenomenal consciousness seem to, in fa...

This probably doesn't work, but have you thought about just using weight decay as a (partial) solution to this? In any sort of architecture with residual connections you should expect circuits to manifest as weights with nontrivial magnitude. If some set of weights isn't contributing to the loss then the gradients won't prevent them from being pushed toward zero by weight decay. Sort of a "use it or lose it" type thing. This seems a lot simpler and potentially more robust than other approaches.
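A minimal sketch of the mechanism being proposed (assuming decoupled weight decay, AdamW-style; the dummy two-branch setup below is just to isolate the effect):

```python
import torch
from torch import nn

# Toy demonstration: a parameter that receives (exactly) zero gradient from the loss
# is still shrunk geometrically toward zero by decoupled weight decay (AdamW).
torch.manual_seed(0)
used = nn.Linear(8, 8)      # this layer actually matters for the loss
unused = nn.Linear(8, 8)    # multiplied by 0 below, so its gradients are exactly zero

opt = torch.optim.AdamW(
    list(used.parameters()) + list(unused.parameters()),
    lr=1e-2, weight_decay=0.1,
)

x, y = torch.randn(64, 8), torch.randn(64, 8)
for _ in range(2000):
    opt.zero_grad()
    pred = used(x) + 0.0 * unused(x)   # "unused" is in the graph but cannot affect the loss
    loss = (pred - y).pow(2).mean()
    loss.backward()
    opt.step()

print(used.weight.norm().item(), unused.weight.norm().item())
# The unused layer's weights decay like (1 - lr * weight_decay)^steps, while the used
# layer's weights settle wherever the loss wants them. Caveat: PyTorch optimizers skip
# parameters whose .grad is None, so a branch that never enters the computation graph
# at all would not be decayed this way without extra handling.
```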