All of the gears to ascension's Comments + Replies

I think there may have been a communication error. It sounded to me like you were making the point that the policy does not have to internalize the reward function, but he was making the point that the training setup does attempt to find a policy that maximizes-as-far-as-it-can-tell the reward function. In other words, he was saying that reward is the optimization target of RL training; you were saying that reward is not the optimization target of policy inference. Maybe.

2Alex Turner
I'm pretty sure he was talking about the trained policies and them, by default, maximizing reward outside the historical training distribution. He was making these claims very strongly and confidently, and in the very next slide cited Cohen's "Advanced artificial agents intervene in the provision of reward." That work advocates a very strong version of "policies will maximize some kind of reward because that's the point of RL." He later appeared to clarify/back down from these claims, but in a way which seemed inconsistent with his slides, so I was pretty confused about his overall stance. His presentation, though, was going strong on "RL trains reward maximizers."

There's also a problem where a bunch of people appear to have cached that e.g. "inner alignment failures" can happen (whatever the heck that's supposed to mean), but other parts of their beliefs seem to obviously not have incorporated this post's main point. So if you say "hey, you seem to be making this mistake," they can point to some other part of their beliefs and go "but I don't believe that in general!"
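A minimal sketch of the distinction drawn above, under toy assumptions (the environment, network sizes, and REINFORCE-style update are all placeholders, not anyone's actual setup): the reward function appears only in the training update the optimizer uses to adjust the policy's parameters; the policy's forward pass at inference time never consults it.

```python
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, obs_dim=4, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, n_actions)
        )

    def forward(self, obs):
        # Inference: observations in, action distribution out. No reward term anywhere.
        return torch.distributions.Categorical(logits=self.net(obs))

def toy_env_step(obs, action):
    # Stand-in dynamics and reward function, purely for illustration.
    next_obs = torch.randn(4)
    reward = 1.0 if action.item() == 0 else 0.0
    return next_obs, reward

policy = Policy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(100):
    obs = torch.randn(4)
    log_probs, rewards = [], []
    for t in range(10):
        dist = policy(obs)          # policy inference: reward never enters here
        action = dist.sample()
        obs, r = toy_env_step(obs, action)
        log_probs.append(dist.log_prob(action))
        rewards.append(r)
    # Training update: reward shows up only here, as the quantity the optimizer
    # uses to adjust the policy's parameters (the optimization target of training).
    returns = torch.tensor([sum(rewards[t:]) for t in range(len(rewards))])
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```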

Well, the fact that I don't have an answer ready is itself a significant component of an answer to my question, isn't it?

A friend on an alignment chat said something to the effect of:

i think they are just sorely underestimating again and again the difference between a cute gang of sincere EA red teamers and the internet. the internet is where [...] lives for gods sake.

And so I figured I'd come here and ask about it. This eval seems super shallow, only checking whether the model, on its own, tries to destroy the world. It also seems uncreative - it barely touched on any of the jailbreaks or ways to pressure or trick the model into misbehaving.

3Daniel Kokotajlo
It is well understood within OpenAI that 'the internet' (tens of millions of people interacting with the models) is a more powerful red-teamer than anything we can do in-house. What's your point?
3AVoropaev
What improvements do you suggest?

I do think there's real risk there even with base models, but it's important to be clear where it's coming from - simulators can be addictive for someone trying to escape the real world. Your agency needs to somehow aim away from the simulator and use the simulator as an instrumental tool.

2janus
I think you just have to select for / rely on people who care more about solving alignment than about escapism, or at least who are able to aim at alignment in conjunction with having fun. I think fun can be instrumental. As I wrote in my testimony, I often explored the frontier of my thinking in the context of stories. My intuition is that most people who go into cyborgism with the intent of making progress on alignment will not make themselves useless by wireheading, in part because the experience is not only fun but also very disturbing, and it reminds you constantly why solving alignment is a real and pressing concern.

My impression is that by "simulator" and "simulacra" this post is not intending to claim that the thing being simulated is real physics, but rather that the model learns a general "textphysics engine" which runs textphysics environments. It's essentially just a reframing of the prediction objective to describe deployment time - not a claim that the model actually learns a strong causal simplification of the full variety of real physics.

2janus
That's correct. Even if it did learn microscopic physics, the knowledge wouldn't be of use for most text predictions because the input doesn't specify/determine microscopic state information. It is forced by the partially observed state to simulate at a higher level of abstraction than microphysics -- it must treat the input as probabilistic evidence for unobserved variables that affect time evolution. See this comment for slightly more elaboration.
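To make the "reframing of the prediction objective to describe deployment time" point concrete, here is a minimal sketch of running a language model as a textphysics engine: the prompt is treated as a partially observed state, and the model's next-token distribution is sampled repeatedly to evolve it. The model name and sampling choices are illustrative assumptions, not anything the post prescribes.

```python
# Autoregressive rollout: the same model trained on next-token prediction is run
# at deployment as a "textphysics engine", evolving a text state step by step.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The prompt is a partially observed state: it underdetermines what is being simulated.
state = tokenizer("The alchemist opened the door and", return_tensors="pt").input_ids

for _ in range(20):
    with torch.no_grad():
        logits = model(state).logits[:, -1, :]       # distribution over possible next steps
    probs = torch.softmax(logits, dim=-1)            # probabilistic "time evolution" law
    next_token = torch.multinomial(probs, num_samples=1)
    state = torch.cat([state, next_token], dim=-1)   # append the sampled step to the state

print(tokenizer.decode(state[0]))
```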