I like this post and this research direction, I agree with almost everything you say, and I think you’re doing an unusually good job of explaining why you think your work is useful.

A nitpick: I think you’re using the term “scalable oversight” in a nonstandard and confusing way.

You say that scalable oversight is a more general version of "given a good model and a bad model, determine which one is good." I imagine the more general sense you want is something like: you can implement some metric that tells you how "good" a model is, which can be applied not only to distinguish good models from bad ones (by comparing their metric values) but also, hopefully, to train the models.
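
To gesture at that more general sense, here's a minimal toy sketch (everything in it, including the `oversight_metric` function and the scalar stand-ins for "models", is hypothetical and not from your post): one score that both compares two candidate models and serves as a training signal.

```python
# Toy sketch (all names hypothetical): one scalar "oversight" score,
# used both to compare candidate models and as a training signal.
import random

def oversight_metric(model_param: float) -> float:
    # Stand-in for "how good does our oversight procedure say this model is?"
    # Here: a noisy score that peaks when model_param is near 1.0.
    return -abs(model_param - 1.0) + random.gauss(0, 0.01)

def pick_better(model_a: float, model_b: float) -> float:
    # Distinguish the good model from the bad one by comparing scores.
    return model_a if oversight_metric(model_a) >= oversight_metric(model_b) else model_b

def train(model_param: float, steps: int = 200, step_size: float = 0.05) -> float:
    # Use the same score as a training signal (crude hill climbing).
    for _ in range(steps):
        candidate = model_param + random.uniform(-step_size, step_size)
        if oversight_metric(candidate) > oversight_metric(model_param):
            model_param = candidate
    return model_param

print(pick_better(0.2, 0.9))  # usually 0.9, the model the metric rates higher
print(round(train(0.0), 2))   # drifts toward 1.0 under the same metric
```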

I think that your definition of scalable oversight here is broader than the one people normally use. In particular, I usually think of scalable oversight as the problem of building procedures that tell us how good a model's actions are on a particular trajectory; I think of it as excluding the problem of determining whether a model's behaviors would be bad on some other trajectory that we aren't considering. (This is how I use the term here, how Ansh uses it here, and how I interpret the usage in Concrete Problems and in Measuring Progress on Scalable Oversight for Large Language Models.)

I think that it’s good to have a word for the problem of assessing model actions on particular trajectories, and I think it’s probably good to distinguish between problems associated with that assessment and other problems; scalable oversight is the current standard choice for that.

Under your usage, I think scalable oversight suffices to solve the whole safety problem. Your usage also doesn't play nicely with the low-stakes/high-stakes decomposition.

I’d prefer that you phrased all of this as something like:
 

It might be the case that we aren’t able to behaviorally determine whether our model is bad or not. This could be because of a failure of scalable oversight (that is, the model is currently taking actions that we can’t tell are good), or because of concerns about failures that we can’t solve by training (that is, we know that it isn’t taking bad actions now, but we’re worried that it might do so in the future, either because of distribution shift or rare failures). Let’s just talk about the special case where we want to distinguish between two models and we don’t have examples where the two models behaviorally differ. We think that it is good to research strategies that allow us to distinguish models in this case.

  • If you are very good at cyber and extremely smart, you can hide vulnerabilities in 10k-line programs in a way that less smart specialists will have trouble discovering even after days of examination - code generation/analysis is not really defense-favored;

Do you have concrete examples?

The goal of alignment research is not to grow some sentient AIs, and then browbeat or constrain them into doing things we want them to do even as they'd rather be doing something else.

I think this is a confusing sentence, because by "the goal of alignment research" you mean something like "the goal I want alignment research to pursue" rather than "the goal that self-identified alignment researchers are pushing towards".

@So8res  I'd be really interested in how you thought about these, especially the house elf example.

A few months after posting this, I regret that we didn't say something like the following:

It's somewhat morally dubious to continue deploying AIs that are known to be scheming, because the fact that those AIs are scheming against us makes it more plausible that they're unhappy about their situation. I'd like us to have thought about this more before using the plan described here.

I agree re time-awareness, with two caveats:

  • The kind of mechanism you listed probably only allows the AIs to have a rough idea of what time it is.
  • We can keep some of the instances of our AI very unaware of the time by restricting their between-episode memory. For example, we might do this for the instances responsible for untrusted monitoring, to reduce collusion (see the sketch below).
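
To make the second bullet a bit more concrete, here's a minimal sketch of the kind of thing I have in mind (the function and field names are hypothetical, not from any real codebase): the context handed to an untrusted monitoring instance carries no between-episode memory, and obvious timestamps are scrubbed.

```python
# Hypothetical sketch: sanitize the context given to an untrusted monitor
# so it carries no between-episode memory and no obvious timestamps.
import re
from dataclasses import dataclass, field

@dataclass
class MonitorRequest:
    code_to_review: str  # the action we want checked for backdoors
    episode_memory: list = field(default_factory=list)  # deliberately NOT passed along

DATE_RE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")  # crude scrub of ISO dates

def build_monitor_prompt(request: MonitorRequest) -> str:
    # Drop between-episode memory entirely and remove obvious date strings,
    # so the monitoring instance has at most a rough idea of what time it is.
    scrubbed = DATE_RE.sub("[date removed]", request.code_to_review)
    return "Review the following code for backdoors:\n" + scrubbed

req = MonitorRequest(code_to_review="# deployed 2031-04-17\nprint('hello')",
                     episode_memory=["notes from earlier episodes"])
print(build_monitor_prompt(req))
```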

I think this post was quite helpful. I think it does a good job laying out a fairly complete picture of a pretty reasonable safety plan, and the main sources of difficulty. I basically agree with most of the points. Along the way, it makes various helpful points, for example introducing the "action risk vs inaction risk" frame, which I use constantly. This post is probably one of the first ten posts I'd send someone on the topic of "the current state of AI safety technology".

I think that I somewhat prefer the version of these arguments that I give in e.g. this talk and other posts.

My main objection to the post is the section about decoding and manipulating internal states; I don't think that anything that I'd call "digital neuroscience" would be a major part of ensuring safety if we had to do so right now.

In general, I think this post is kind of sloppy about distinguishing between control-based and alignment-based approaches to making the use of a particular AI safe, and this makes its points weaker.

Thanks for writing this; I agree with most of what you’ve said. I wish the terminology were less confusing.

One clarification I want to make, though:

You describe deceptive alignment as being about the model taking actions so that the reward-generating process thinks that the actions are good. But most deceptive alignment threat models involve the model more generally taking actions that cause it to grab power later.

Some examples of such actions that don’t involve getting better train loss or train-time reward:

  • If you do evaluations to see whether your model sometimes takes bad actions, the model might take actions that look good there even though you’re not going to train on those actions.
  • The model might sandbag. For example, you might use RL to try to train the model to trick a trusted model which you’re using to monitor it. A deceptively aligned model might not try its best here, even though this means it’s getting worse reward (see the toy sketch below).
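
As a toy illustration of that second bullet (the setup and numbers here are made up, not from any real experiment): the reward is whether a backdoor slips past the trusted monitor, and a sandbagging policy deliberately earns less of it.

```python
# Toy sketch (hypothetical setup): the untrusted model is rewarded for
# slipping a backdoor past a trusted monitor; sandbagging means deliberately
# writing obvious backdoors and accepting worse reward.
import random

def monitor_catches(subtlety: float) -> bool:
    # The trusted monitor catches the backdoor with probability 1 - subtlety.
    return random.random() > subtlety

def episode_reward(subtlety: float) -> float:
    # Reward 1.0 only if the backdoor evades the monitor.
    return 0.0 if monitor_catches(subtlety) else 1.0

def average_reward(subtlety: float, episodes: int = 10_000) -> float:
    return sum(episode_reward(subtlety) for _ in range(episodes)) / episodes

full_effort = 0.8   # the model writes subtle backdoors when it tries its best
sandbagging = 0.2   # the model deliberately underperforms

print(round(average_reward(full_effort), 2))  # ~0.8
print(round(average_reward(sandbagging), 2))  # ~0.2: worse reward, capability hidden
```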

Another important point on this topic is that I expect it's impossible to produce weak-to-strong generalization techniques that look good according to meta-level adversarial evaluations, while I expect that some scalable oversight techniques will look good by that standard. And so it currently seems to me that scalable-oversight-style techniques are a more reliable response to the problem "your oversight performs worse than you expected, because your AIs are intentionally subverting the oversight techniques whenever they think you won't be able to evaluate that they're doing so".

I think this point is incredibly important and quite underrated, and safety researchers often do way dumber work because they don't think about it enough.
