michaelcohen

> I think [process-based RL] has roughly the same risk profile as imitation learning, while potentially being more competitive.

I agree with this in a sense, although I may be quite a bit more harsh about what counts as "executing an action". For example, if reward is based on an overseer talking about the action with a large group of people/AI assistants, then that counts as "executing the action" in the overseer-conversation environment, even if the action looks like it's for some other environment, like a plan to launch a new product in the market. I do think myopia in this environment would suffice for existential safety, but I don't know how much myopia we need.
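To make the distinction concrete, here is a minimal sketch of what I mean by the reward living entirely inside the overseer-conversation environment (the function names are hypothetical, not from any particular codebase):

```python
from typing import Callable

def process_based_reward(
    proposed_action: str,
    overseer_discussion: Callable[[str], float],  # overseer + assistants rate the proposal
) -> float:
    # Myopic: the episode ends with the overseer conversation. No reward depends
    # on what happens if the proposal (e.g. a product-launch plan) is later carried out.
    return overseer_discussion(proposed_action)

def outcome_based_reward(
    proposed_action: str,
    execute_in_world: Callable[[str], dict],    # actually carry the plan out
    evaluate_outcome: Callable[[dict], float],  # score the real-world result
) -> float:
    # Non-myopic: reward flows through real-world consequences, which is where
    # incentives to influence the feedback process can creep in.
    outcome = execute_in_world(proposed_action)
    return evaluate_outcome(outcome)
```

In the first function, the only environment the action is ever "executed" in is the conversation itself; that's the sense in which I'm being harsh about the term.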

If you're always talking about myopic/process-based RLAIF when you say RLAIF, then I think what you're saying is defensible. I speculate that not everyone reading this recognizes that your usage of RLAIF assumes a level of myopia matching current instances of RLAIF, and that this assumption is a load-bearing part of your position.

I say "defensible" instead of fully agreeing because I weakly disagree that increasing compute is any more of a dangerous way to improve performance than by modifying the objective to a new myopic objective. That is, I disagree with this:

> I think you would probably prefer to do process-based RL with smaller models, rather than imitation learning with bigger models

You suggest that increasing compute is the last thing we should do if we're looking for performance improvements, as opposed to adding a very myopic approval-seeking objective. I don't see it. I think changing the objective from imitation learning is more likely to lead to problems than scaling up the imitation learners. But this is probably beside the point, because I don't think problems are particularly likely in either case.

What is process-based RL?

I think your intuitions about costly international coordination are challenged by a few facts about the world.

1) Advanced RL, like open borders + housing deregulation, guarantees vast economic growth in wealthy countries. Open borders, in a way that seems kinda speculative but intuitively forceful for most people, has the potential to existentially threaten the integrity of a culture, including especially its norms; AI, in a way that seems kinda speculative but intuitively forceful for most people, has the potential to existentially threaten all life. The decisions of wealthy countries are apparently extremely strongly correlated, maybe in part for "we're all human"-type reasons, and maybe in part because legislators and regulators know that they won't get their ear chewed off for doing things like the US does. With immigration law, there is no attempt at coordination; quite the opposite (e.g. Syrian refugees in the EU).

2) The number of nuclear states is stunningly small if one follows the intuition that wildly uncompetitive behavior, which leaves significant value on the table, produces an unstable situation. Not every country needs to sign on eagerly to avoiding some of the scariest forms of AI. The US/EU/China can shape other countries' incentives quite powerfully.

3) People in government do not seem to be very zealous about economic growth. Sorry this isn't a very specific example, but their behavior on issue after issue does not seem very consistent with someone who would see, I don't know, 25% GDP growth from their country's imitation learners and say, "these international AI agreements are too cautious and are holding us back from even more growth"; it seems much more likely to me that politicians' appetite for risking great power conflict requires much worse economic conditions than that.

In cases 1 and 2, the threat is existential, and countries take big measures accordingly. So I think existing mechanisms for diplomacy and enforcement are powerful enough "coordination mechanisms" to stop highly-capitalized RL projects. I also object a bit to calling a solution here "strong global coordination". If China passes a law banning AI that would, if built, kill everyone with 1% probability, that's rational for them to do regardless of whether the US does the same. We just need leaders to understand the risks, and we need them to be presiding over enough growth that they don't need to take desperate action, and that seems doable.

Also, consider how much more state capacity AI-enabled states could have. It seems to me that a vast population of imitation learners (or imitations of populations of imitation learners) can prevent advanced RL from ever being developed, if the latter is illegal; they don't have to compete with advanced RL agents after those agents have been made, only prevent them from being made in the first place. If there are well-designed laws against RL (beyond some level of capability), we would have plenty of time to put such enforcement in place.

> I believe that LM agents based on chain of thought and decomposition seem like the most plausible approach to bootstrapping subhuman systems into trusted superhuman systems. For about 7 years using LM agents for RLAIF has seemed like the easiest path to safety,[4] and in my view this is looking more and more plausible over time.

I agree whole-heartedly with the first sentence. I'm not sure why you understand it to support the second sentence; I feel the first sentence supports my disagreement with the second sentence! Long-horizon RL is a different way to get superhuman systems, and one that encourages intervening in the provision of feedback if the agent is capable enough. Doesn't the first sentence support the case that it would be safer to stick to chain of thought and decomposition as the key drivers of superhumanness, rather than using RL?
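To gesture at what I mean by chain of thought and decomposition being the drivers of capability, here's a toy sketch; `lm` is a hypothetical stand-in for a purely imitation-trained model, and nothing in the loop is trained against a long-horizon reward:

```python
from typing import Callable

def solve(task: str, lm: Callable[[str], str], depth: int = 0, max_depth: int = 3) -> str:
    """Toy decomposition agent: recursively split a task, solve the pieces with
    chain-of-thought prompting, and combine the results. Capability comes from
    the decomposition structure, not from optimizing any reward signal."""
    if depth >= max_depth:
        return lm(f"Think step by step and solve:\n{task}")
    subtasks = [s for s in lm(f"List the subtasks needed for this, one per line:\n{task}").splitlines() if s.strip()]
    if len(subtasks) <= 1:
        return lm(f"Think step by step and solve:\n{task}")
    partials = [solve(sub, lm, depth + 1, max_depth) for sub in subtasks]
    return lm(f"Combine these partial results into an answer to the original task:\n{task}\n" + "\n".join(partials))
```

The point of the sketch is just that the superhumanness, if it arrives, comes from layering decomposition over an imitation learner, not from an objective that rewards long-horizon outcomes.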

On the 2nd point, the whole discussion of mu^prox vs. mu^dist is fundamentally about goal (mis)generalization. My position is that for a very advanced agent, point estimates of the goal (i.e. certainty that some given account of the goal is correct) would probably really limit performance in many contexts. This is captured by Assumptions 2 and 3. An advanced agent is likely to entertain multiple models of what their current understanding of their goal in a familiar context implies about their goal in a novel context. Full conviction in mu^dist does indeed imply non-wireheading behavior, and I wouldn't even call it misgeneralization; I think that would be a perfectly valid interpretation of past rewards. So that's why I spend so much time discussing relative credence in those models.
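To spell out why the relative credence is the crux, here is a toy expected-value calculation (made-up numbers, purely illustrative, not anything from the paper):

```python
def expected_return(credence_prox: float,
                    r_intervene_under_prox: float = 1.0,   # intervening in the provision of reward pays off if mu^prox is right
                    r_intervene_under_dist: float = 0.0,   # ...and is worthless if mu^dist is right
                    r_intended_under_either: float = 0.9): # intended behavior pays off under both models
    credence_dist = 1.0 - credence_prox
    ev_intervene = credence_prox * r_intervene_under_prox + credence_dist * r_intervene_under_dist
    ev_intended = r_intended_under_either
    return ev_intervene, ev_intended

print(expected_return(0.0))   # (0.0, 0.9): full conviction in mu^dist -> no incentive to intervene
print(expected_return(0.95))  # (0.95, 0.9): enough credence in mu^prox -> the comparison flips
```

Full conviction in mu^dist (the first line) reproduces the non-wireheading behavior I mentioned; it's only as credence shifts toward mu^prox that intervening starts to look attractive, which is why the relative credence in those models is where the argument has to be had.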
