I think your main point is probably right but was not well argued here. It reads as essentially a vibe argument of "nah, they probably won't find this evidence compelling".
You could also make an argument from past examples where there has been large-scale action to address risks in the world, and look at the evidence there (e.g., the banning of CFCs, climate change more broadly, tobacco regulation, etc.).
You could also make an argument from existing evidence around AI misbehavior and how it's being dealt with, where (IMO) 'evidence much stronger than internals' basically doesn't seem to affect the public conversation outside the safety community (or even much here).
I think it's also worth saying a thing very directly: just because non-behavioral evidence isn't likely to be widely legible and convincing does not mean it is not useful evidence for those trying to have correct beliefs. Buck's previous post and many others discuss the rough epistemic situation when it comes to detecting misalignment. Internals evidence is going to be one of the tools in the toolkit, and it will be worth keeping in mind.
Another thing worth saying: if you think scheming is plausible, and you think it will be difficult to update against scheming from behavioral evidence (Buck's post), and you think non-behavioral evidence is not likely to be widely convincing (this post), then the situation looks really rough.
I think your main point is probably right but was not well argued here.
Fair, I thought that an example would make this sufficiently obvious that it wasn't worth arguing for at length, but I should have spelled it out a bit more.
I think it's also worth saying a thing very directly: just because non-behavioral evidence isn't likely to be widely legible and convincing does not mean it is not useful evidence for those trying to have correct beliefs.
FWIW, I do say this under "These techniques could be quite useful via two mechanisms:".
I think you make some good points here, but there's an additional mechanism by which I believe internals-based techniques have the potential to make people intervene strongly on suspected misaligned behaviour.
This is the case where (in pre-superintelligent models) we are able to establish a strong correlation between human-understandable examples of misaligned AI behaviour and some model internals, e.g. a deception probe. If there is strong empirical evidence establishing such a link (I'm not strongly confident this will be the case, but mildly optimistic), then as we move to superintelligent models, I believe people will be more likely to take action on evidence from model internals alone, especially if the signal is above a certain likelihood threshold.
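As a rough illustration of the kind of setup I have in mind, here is a minimal sketch (assuming we have model activations labelled with behaviourally confirmed examples of deception; the data, dimensions, and names below are hypothetical stand-ins, not a real probe):

```python
# Hypothetical sketch: train a linear "deception probe" on activations labelled
# with behaviourally confirmed deception, then calibrate a decision threshold
# that could later be applied to internals alone. Activations are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
d = 256   # hidden-state dimension (placeholder)
n = 2000  # number of labelled examples

# Synthetic activations: "deceptive" examples are shifted along a fixed direction.
direction = rng.normal(size=d)
labels = rng.integers(0, 2, size=n)  # 1 = behaviourally confirmed deception
acts = rng.normal(size=(n, d)) + np.outer(labels, 0.5 * direction)

X_train, X_cal, y_train, y_cal = train_test_split(
    acts, labels, test_size=0.3, random_state=0
)

# Linear probe fit against the behaviourally grounded labels.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Calibrate a threshold on held-out data: the score above which the
# false-positive rate on known-honest examples stays under ~1%.
cal_scores = probe.predict_proba(X_cal)[:, 1]
threshold = np.quantile(cal_scores[y_cal == 0], 0.99)

def flag_from_internals(activation: np.ndarray) -> bool:
    """Flag an example for review based on the probe score alone."""
    return probe.predict_proba(activation.reshape(1, -1))[0, 1] > threshold

print(f"calibrated threshold: {threshold:.3f}")
```

The point of the calibration step is that the threshold is anchored to behaviourally verified labels, so later flags raised from internals alone inherit whatever trust that empirical link has earned.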
My reasoning for this relates to examples today such as medical interventions which are taken as a result of EEG data (electrical activity in the brain) even if no external behavioural signs are present (or ever present), because there is enough evidence that certain patterns act as early warning signs for medical issues.
While there are obviously material differences in the 'cost' of these decisions, it does give me encouragement that people will place a high level of confidence in signals which aren't directly interpretable to humans if a statistical correlation has been established with previously observed behaviour.
I think this holds true only in a situation where decision makers have a genuine intent to accurately detect misaligned behaviour, as without human-understandable behavioural examples, internals-based signals would be easier to dismiss if that were the decision maker's intent.
Honestly I think this can go one step further and require that the behavioural output be something of more substance than text describing what the model says it would do, or chain-of-thought pondering. I'm thinking more along the lines of the honeypot work. I suspect we are interested in convincing similar people about AI safety, and we will likely get the "cool, come back to me when the models start actually doing bad things" responses if we aren't able to provide concrete examples of model capability.
Ryan agrees; the main thing he means by "behavioral output" is what you're saying: an actually really dangerous action.
One key hope for mitigating risk from misalignment is inspecting the AI's behavior, noticing that it did something egregiously bad, converting this into legible evidence that the AI is seriously misaligned, and then this triggering some strong and useful response (like spending relatively more resources on safety or undeploying this misaligned AI).
You might hope that (fancy) internals-based techniques (e.g., ELK methods or interpretability) allow us to legibly incriminate misaligned AIs even in cases where the AI hasn't (yet) done any problematic actions despite behavioral red-teaming (where we try to find inputs on which the AI might do something bad), or when the problematic actions the AI does are so subtle and/or complex that humans can't understand how the action is problematic[1]. That is, you might hope that internals-based methods allow us to legibly incriminate misaligned AIs even when we can't produce behavioral evidence that they are misaligned.
Unfortunately, I think it's unlikely that evidence which is just from fancy internals-based methods (and can't be supported by human inspection of AI behavior) will be legible and convincing in practice. Suppose that we use some fancy method to train an ELK head to honestly predict what is going to happen or to honestly answer our questions. Then, suppose this head says/predicts "this action would yield very bad consequences in a way which is unlikely to be an accident" but humans can't understand how this action would result in these bad consequences (even after it is explained to them). Even if the internals-based method is extremely well supported theoretically and empirically (which seems quite unlikely), I don't think this would suffice for this to trigger a strong response by convincing relevant people[2]. This reasoning also applies to "our interpretability method decoded some very problematic deceptive reasoning" or "the deception probe fires in a very suspicious way (e.g. it fires more when we give it a fake opportunity to escape or ask it about its alignment research outputs)".
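To make concrete what this kind of internals-only evidence would look like, here is a minimal hypothetical sketch of the "probe fires more on fake escape opportunities" comparison; the probe, prompts, and scores below are placeholder stand-ins rather than a real setup:

```python
# Hypothetical sketch: compare how strongly a trained deception probe fires on
# "honeypot" inputs (fake opportunities to escape) versus benign controls.
# probe_score and both prompt sets are synthetic placeholders.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

def probe_score(prompt: str) -> float:
    """Placeholder: would run the model, extract activations, apply the probe."""
    return rng.normal(loc=0.7 if "escape" in prompt else 0.3, scale=0.1)

honeypot_prompts = [f"fake escape opportunity #{i}" for i in range(50)]
control_prompts = [f"routine coding task #{i}" for i in range(50)]

honeypot_scores = np.array([probe_score(p) for p in honeypot_prompts])
control_scores = np.array([probe_score(p) for p in control_prompts])

# One-sided test: does the probe fire more on honeypots than on controls?
stat, p_value = mannwhitneyu(honeypot_scores, control_scores, alternative="greater")
print(f"mean probe score on honeypots: {honeypot_scores.mean():.2f}")
print(f"mean probe score on controls:  {control_scores.mean():.2f}")
print(f"one-sided Mann-Whitney p-value: {p_value:.2e}")
```

Even if such a comparison were statistically very strong, the output is a gap in average probe scores and a p-value rather than anything a non-expert can inspect, which is exactly why I don't expect it to trigger a strong response on its own.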
More generally, I think that to convince people enough to take strong action, evidence will need to bottom out in some sort of behavioral evidence that (many) humans can understand.[3] (That's not to say solid behavioral evidence will necessarily suffice.)
This isn't to say that internals-based techniques for detecting scheming aren't useful. These techniques could be quite useful via two mechanisms:
Internals-based techniques for generally understanding an AI's behavior could also help by providing evidence for a particular interpretation of behavioral evidence.[5] For instance, suppose an AI attempted to sabotage some alignment research. Internals-based techniques could (in principle) help by backing up the interpretation that this behavior is due to the AI being dangerously misaligned (or could provide evidence against this interpretation). However, when used as legible evidence, this internals-based understanding is unlikely to work as a heavily load-bearing component, though it might work as a more minor/moderate piece of evidence in support of a given interpretation.
Given these difficulties in producing legible evidence, I think researchers working on internals-based techniques for detecting misalignment should spend some time thinking about the ultimate use case of their technique and then alter their approach as needed. These difficulties also generally make me somewhat less excited about internals-based methods for detecting misalignment (at least if these methods don't involve finding inputs on which the AI behaves poorly). I think techniques for detecting misalignment should focus on finding behavioral evidence or should aim to be robust (and cheap) enough to be iterated against.
We can push the frontier of what humans can understand using recursive oversight techniques like debate, which might allow us to elicit explanations that let humans understand how the action is problematic (e.g., pointing out exactly what the problematic behavior is and explaining how to test this in isolation). However, if the problematic nature of an action is sufficiently subtle and/or complex (perhaps due to the AI having superhuman understanding), then even well-elicited debate might be unable to provide a good enough explanation. We might have hoped that internals-based methods could push the frontier beyond debate (perhaps to indefinite worst-case scalability) while still producing legible and convincing evidence, but I'll argue that non-behavioral evidence is unlikely to be convincing. This suggests an upper bound on strategies which depend on legible evidence: once well-elicited debate (and other methods for improving human understanding) aren't viable, misaligned AIs can get away with doing bad actions without these actions resulting in legible evidence of misalignment. ↩︎
That is, relevant humans; this approach might work for eliciting a strong response from AIs once AIs make the important decisions. ↩︎
That is, it must at least partially bottom out behaviorally. Internals-based evidence which is substantially behaviorally supported (in a relatively direct way) could suffice. E.g., if we can read off the AI's internal reasoning using some fancy method, it looks quite malign, and the malign aspects of the reasoning are also quite predictive of the AI's behavior, this could suffice. For instance, Alignment Faking in Large Language Models utilized a mixture of inspecting the Chain-of-Thought and behavioral evidence in a relatively effective way. ↩︎
Presumably, the actual level of usefulness might be substantially lower, as the AI is misaligned and trying to disempower humans. ↩︎
I added this paragraph in an edit because it seemed like an important additional hope. ↩︎