You mention having looked through the literature; in case you missed any, here are what I think of as the standard resources on this topic.
All are well worth reading.
Edit: There was an old discussion between Holden Karnofsky and Jaan Tallinn on Tool AI on Yahoo Groups, but Yahoo Groups has been shut down. Here's the page in the Wayback Machine, but the attachment is not available. I would appreciate someone here leaving a link to that old document; I recall it being quite thoughtful.
An extremely basic question that, after months of engaging with the AI safety literature, I'm surprised to realize I can't answer: why not tool AI?
AI safety scenarios seem to conceive of the AI as an autonomous agent. Is that because of the current machine learning paradigm, where we set the AI's goals but don't specify the steps to get there? Is this paradigm the entire reason AI safety is an issue?
If so, is there a reason advanced AI would need an agenty, utility-maximizing sort of setup? Is it just too cumbersome to give step-by-step instructions for high-level tasks?
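
For concreteness, here's a toy sketch of the distinction I have in mind (purely illustrative Python, not any real system): a "tool" maps a query to an answer and stops, while an "agent" loops, executing whichever action scores best under a utility function until its goal is met, with no step-by-step instructions given.

```python
# Purely illustrative toy (not any real system), just to pin down terms.

def tool_ai(query: str) -> str:
    """A 'tool': one query in, one answer out. No goals, no actions."""
    lookup = {"fastest route?": "take highway 101"}  # stands in for any computation
    return lookup.get(query, "unknown")

def agent_ai(state: int, goal: int) -> int:
    """An 'agent': loops, executing whichever available action maximizes
    its utility function; only the goal is specified, not the steps."""
    actions = [-1, +1]                  # the only moves in this toy world
    utility = lambda s: -abs(goal - s)  # higher utility = closer to goal
    while state != goal:
        state += max(actions, key=lambda a: utility(state + a))
    return state

print(tool_ai("fastest route?"))    # -> "take highway 101"
print(agent_ai(state=0, goal=5))    # -> 5, reached by the agent's own choices
```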
Thanks!