RobinHanson

Comments

We have lots of models that are useful even when the conclusions follow pretty directly from the assumptions, such as supply and demand. The question is whether such models are useful, not whether they are simple.

The agency literature is there to model real agency relations in the world. Those real relations no doubt contain plenty of "unawareness". If models without unawareness were failing to capture and explain a big fraction of real agency problems, there would be plenty of scope for people to try to fill that gap via models that include it. The claim that this couldn't work because such models are limited seems just arbitrary and wrong to me. So either one must claim that AI-related unawareness is of a very different type or scale from ordinary human cases in our world today, or one must implicitly claim that unawareness modeling would in fact be a contribution to the agency literature. It seems to me a mild burden of proof sits on advocates for this latter case to in fact create such contributions.

Are you sure "rationalist" is a good label here? It suggests the claim that you are rational, or at least more rational than most. "Rational" has so many associations that go beyond truth-seeking.