The agency literature exists to model real agency relations in the world. Those real relations no doubt contain plenty of "unawareness". If models without unawareness were failing to capture and explain a big fraction of real agency problems, there would be plenty of scope for people to fill that gap via models that include it. The claim that this couldn't work because such models are limited seems simply arbitrary and wrong to me. So either one must claim that AI-related unawareness is of a very different type or scale from ordinary...
Are you sure "rationalist" is a good label here? It suggests a claim that you are rational, or at least more rational than most. "Rational" has so many associations that go beyond truth-seeking.
We need some kind of word that means "seeker after less wrongness", and refers pragmatically to a group of people who go around discussing epistemic hygiene and actually worrying about how to think and whether their beliefs are correct. I know of no shorter and clearer alternative than "rationalist". There are some words I'm willing to try to rescue, and this is one of them.
We have lots of models that are useful even when the conclusions follow pretty directly from them, such as supply and demand. The question is whether such models are useful, not whether they are simple.