I think the core confusion is that outer/inner (mis)alignment have different (reasonable) meanings which are often mixed up:
The key thing is that threat models are not necessarily problems that need to be solved directly. For instance, AI control aims to address the threat model of inner misalignment without solving inner alignment.
Here are these terms defined as threat models:
This seems like a pretty reasonable decomposition of problems to me, but again note that these problems don't have to respectively be solved by inner/outer alignment "solutions".
This proposal has two necessary desiderata:
This seems like a generally reasonable overall proposal, though there are alternatives. And the caveats around outer alignment only needing to be locally adequate are important.
This content is copied out of this draft which I've never gotten around to cleaning up and publishing.
The outer misalignment threat model covers cases where problematic feedback results in training a misaligned AI, even if the actual oversight process used for training would have caught the catastrophically bad behavior had it been applied to this action.
For AIs which aren’t well described as pursuing goals, it’s sufficient for the AI to just be reasonably well optimized to perform well according to this reward provision process. However, note that AIs which aren't well described as pursuing goals also likely pose no misalignment risk.
Prior work hasn't been clear that outer alignment solutions just need to be robust to a particular AI produced by some training process, but this seems extremely key to a reasonable definition of the problem from my perspective. This is both because we don't need to be arbitrarily robust (the key AIs will only be so smart, perhaps not even smarter than humans) and because approaches might depend on utilizing the AI itself in the oversight process (recursive oversight), such that they're only robust to that AI but not others. For an example of recursive oversight being robust to a specific AI, consider ELK-type approaches. ELK could be applicable to outer alignment via ensuring that a human overseer is well informed about everything a given AI knows (but not necessarily well informed about everything any AI could know).
Again, this is only important insofar as AIs are doing anything well described as "trying" in any cases.
In some circumstances, it’s unclear exactly what it would even mean to optimize a given reward provision process as the process is totally inapplicable to the novel circumstances. We’ll ignore this issue.
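To make the earlier "robust to a particular AI" point slightly more concrete, here's a very loose toy sketch (entirely my own framing, not something from the draft; `elicit` and `human_judge` are hypothetical placeholders). The point it illustrates is just that the oversight function is constructed from the specific policy being trained, so nothing about it claims robustness to other, smarter AIs:

```python
from typing import Any, Callable

def make_oversight(policy: Any,
                   elicit: Callable[[Any, str], str],
                   human_judge: Callable[[str, str], float]) -> Callable[[str], float]:
    """Build an oversight function specialised to this particular policy."""
    def oversee(action: str) -> float:
        # ELK-style step: surface what *this* policy knows about the action,
        # so the human grader is (hopefully) as informed as the policy itself.
        what_the_policy_knows = elicit(policy, action)
        # The human grades the action with that elicited context in hand.
        return human_judge(action, what_the_policy_knows)
    return oversee
```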
That's an excellent point.
I agree. I think that's probably a better way of clarifying the confusion than what I wrote.
Here's my take on why the inner/outer alignment framing is weird/unclear/ambiguous in some circumstances: my understanding is that these terms were originally used when talking about AGI. So outer alignment involved writing down a reward or utility function covering all of human values, and inner alignment involved getting those values into the AI.
However, it gets confusing when you use these terms in relation to narrow AI. For a narrow AI, there's a sense in which we feel that we should only have to define the reward on that narrow task, i.e. if we want an AI to be good at answering questions, an objective that rewards it for correct answers and penalises it for incorrect answers feels like a correct reward function for that domain. So if things go wrong and it kidnaps humans and forces us to ask it lots of easy questions so it can score higher, we're not sure whether to say that it's inner or outer misalignment. On the one hand, if our reward function had penalised kidnapping humans (which is something we indeed want penalised) then it wouldn't have done it. So we are tempted to say it is outer misalignment. On the other hand, many people also have an intuition that we've defined the reward function correctly on that domain and that the problem is that our AI didn't generalise correctly from a correct specification. This pulls us in the opposite direction, towards saying it is inner misalignment.
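As a concrete toy illustration of the kind of reward function being described (my own example, not anything from the original discussion), note that it only looks at the answer, so it neither rewards nor penalises anything about how the answer was obtained:

```python
def qa_reward(answer: str, reference_answer: str) -> float:
    # Narrow-domain reward: +1 for a correct answer, -1 otherwise. Nothing here
    # refers to kidnapping, coercion, or anything else outside the QA domain.
    return 1.0 if answer.strip().lower() == reference_answer.strip().lower() else -1.0
```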
Notice that what counts as a proper reward function is only unclear because we're talking about narrow AI. If we were talking about AGI, then of course our utility function would be incomplete if it didn't specify that the AI shouldn't kidnap us in order to do better at a question-answering task. It's an AGI, so that's in scope. But when we're talking about narrow AI, it feels as though we shouldn't have to specify this or provide anti-kidnapping training data. We feel like the AI should just learn it automatically on the limited domain, i.e. that avoiding kidnapping is the responsibility of the training process, not of the reward function.
Hence the confusion. The resolution is relatively simple: define how you want to partition responsibilities between the reward function and the training process.
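Continuing the toy sketch from above (again my own illustration, reusing `qa_reward`; `detects_kidnapping` is a hypothetical stand-in for whatever broad-scope check a "complete" reward would need), the two partitions might look like this:

```python
def detects_kidnapping(episode: dict) -> bool:
    # Hypothetical stand-in for a broad-scope check on how the answer was obtained.
    return bool(episode.get("kidnapping", False))

# Partition A: the reward function is responsible for everything we care about.
# Under this partition, omitting the kidnapping penalty is an outer alignment failure.
def reward_everything_in_scope(episode: dict) -> float:
    return qa_reward(episode["answer"], episode["reference"]) - 1000.0 * detects_kidnapping(episode)

# Partition B: the reward function only has to be correct on the narrow QA domain,
# and generalising "don't kidnap people" is the training process's job. Under this
# partition, the same failure counts as an inner alignment failure.
def reward_narrow_domain_only(episode: dict) -> float:
    return qa_reward(episode["answer"], episode["reference"])
```

Nothing about the training setup has to differ between the two framings; what changes is which component we hold responsible when the kidnapping happens, and hence whether we call it outer or inner misalignment.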