
Nina Panickssery's Shortform

by Nina Panickssery
7th Jan 2025

Nina Panickssery · 6mo

I think people who predict significant AI progress and automation often underestimate how useful human domain experts will continue to be for oversight, auditing, accountability, keeping things robustly on track, and setting high-level strategy.

Having "humans in the loop" will be critical for ensuring alignment and robustness, and I think people will realize this, creating demand for skilled human experts who can supervise and direct AIs.

(I may be responding to a strawman here, but my impression is that many people talk as if, in the future, most cognitive/white-collar work will be automated and there will be basically no demand for human domain experts in, for example, any technical field.)

Vladimir_Nesov · 6mo

Oversight, auditing, and accountability are jobs. Agriculture shows that 95% of jobs going away is not itself the problem: when farm work was mechanized, people moved into new kinds of work. But AI might be better at the new jobs as well, with no window of opportunity in which humans are initially doing them and AI needs to catch up. Instead, AI starts doing all the new things well first, and humans never get an opportunity to become competitive at anything, old or new, again.

Even the formulation of aligned high-level tasks and the intent alignment of AIs make sense as jobs that could be done well by misaligned AIs for instrumental reasons. That is not even deceptive alignment, yet it still plausibly segues into gradual disempowerment or a sharp left turn.
