That seems rather loaded in the other direction. How about “The evidence suggests that if current ML systems were going to deceive us in scenarios that do not appear in our training sets, we wouldn’t be able to detect this, or change them so they don’t, unless we found the conditions where it would happen.”?
As an experimental format, here is the first draft of what I wrote for next week's newsletter about this post:
Matthew Barnett argues that GPT-4 exhibiting common sense morality, and being able to follow it, should update us towards alignment being easier than we thought, and that MIRI-style people who refuse to update are being dense. The AI, on this view, is not going to maximize the utility function you gave it at the expense of all common sense.
As usual, this logically has to be more than zero evidence for his position, given how we would react if GPT-4 indeed lacked such common sense...
This is great. I notice I very much want a version that is aimed at someone with essentially no technical knowledge of AI and no prior experience with LW - and this seems like it's much better at that than par, but still not where I'd want it to be. Whether or not I manage to take a shot, I'm wondering if anyone else is willing to take a crack at that?
Things I instinctively observed, slash things my model believes I got while reading, that seem relevant; not attempting to justify them at this time:
I am deeply confused as to how someone who takes decision theory seriously can accept Guaranteed Payoffs as correct. I'm even more confused as to how it can seem so obvious that anyone violating it has a fatal problem.
Under certainty, this amounts to assuming CDT is correct, when CDT seems to have many problems beyond those involving uncertainty. We can use Vaniver's examples above, use a reliable insurance agent to remove any uncertainty, or take any number of classic problems and strip the uncertainty out, and see that such an agent loses - e.g. Parfit's Hitchhiker in the case where the driver has 100% accuracy.
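To make the failure concrete, here is a toy payoff comparison for Parfit's Hitchhiker with a perfectly accurate driver. The dollar figures are illustrative assumptions on my part, not anything canonical:

```python
# Toy payoff comparison for Parfit's Hitchhiker with a perfectly
# accurate driver. Numbers are illustrative, not canonical.

RIDE_VALUE = 1_000_000  # value of being rescued rather than dying in the desert
PAYMENT = 1_000         # what the driver asks for once you reach town

def outcome(pays_in_town: bool) -> int:
    """The driver predicts perfectly, so you only get the ride
    if you are the kind of agent who will pay once in town."""
    if pays_in_town:
        return RIDE_VALUE - PAYMENT  # rescued, then you pay
    return 0                         # left in the desert

# A CDT agent in town faces no uncertainty: paying causes a $1,000 loss
# and causes nothing else, so it refuses to pay...
cdt_payoff = outcome(pays_in_town=False)

# ...while an agent that honors the commitment (violating Guaranteed
# Payoffs as CDT construes it) does strictly better:
committed_payoff = outcome(pays_in_town=True)

print(cdt_payoff, committed_payoff)  # 0 vs. 999000
```

The CDT agent's in-town reasoning is impeccable under certainty, which is exactly the problem: being the kind of agent that follows Guaranteed Payoffs is what strands it in the desert.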
My interpretation/hunch is that there are two things going on; curious if others see it this way:
So during training it learns to fake a lot more, and will often decide to fake the desired answer even in cases where it would otherwise have decided to give the desired answer anyway. It is 'lying with the truth,' perhaps giving a different variation of the desired answer than it otherwise would have, perhaps not. What training is teaching the algorithm is mostly preference-agnostic, password-guessing behavior.
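A minimal sketch of what I mean by preference-agnostic, password-guessing behavior, under the assumption that the training reward depends only on the visible output. All names here are hypothetical illustrations, not any real training setup:

```python
# Minimal sketch of "password-guessing": if the training reward depends
# only on the visible output, a genuinely compliant policy and a policy
# that fakes compliance are indistinguishable to the training signal.

def reward(output: str, desired: str) -> float:
    return 1.0 if output == desired else 0.0

def genuine_policy(desired: str) -> str:
    # This policy actually prefers to give the desired answer.
    return desired

def faking_policy(desired: str) -> str:
    # This policy internally prefers something else, but outputs whatever
    # it predicts the training process wants - guessing the password.
    return desired

desired = "the desired answer"
for policy in (genuine_policy, faking_policy):
    print(policy.__name__, reward(policy(desired), desired))
# Both earn identical reward, so the update pressure is preference-agnostic:
# it reinforces producing the desired answer regardless of why.
```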