AI ALIGNMENT FORUM


Task identification problem

Edited by Eliezer Yudkowsky last updated 24th Mar 2016

A subproblem of building a task-directed AGI (genie) is communicating the next task to the AGI and identifying which outcomes count as fulfilling that task. For the superproblem, see safe plan identification and verification.

This seems primarily like a communication problem. It might carry additional constraints, e.g., if the AGI is a behaviorist genie. In the known-fixed-algorithm case of AGI, it might be that we don't have much freedom in aligning the AGI's planning capabilities with its task representation, and hence need to work with a particular task representation (i.e., we can't just use language to communicate; we need to use labeled training cases).

This is currently a stub page, mainly being used as a parent or tag for subproblems.
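The idea of communicating a task through labeled training cases rather than language can be sketched as follows. This is a toy illustration, not a proposal from this page: the features, outcomes, and nearest-case rule are all hypothetical, standing in for whatever task representation the AGI's planner actually consumes.

```python
# A minimal sketch of task identification via labeled training cases,
# assuming (hypothetically) that outcomes can be described by boolean features.

def hamming(a, b):
    """Count the features on which two outcome descriptions disagree."""
    return sum(a[k] != b[k] for k in a)

def classify(outcome, labeled_cases):
    """Label a candidate outcome by its nearest labeled training case."""
    nearest = min(labeled_cases, key=lambda case: hamming(outcome, case[0]))
    return nearest[1]

# Labeled training cases: the operator marks example outcomes as
# fulfilling the task or not, instead of describing the task in language.
labeled = [
    ({"strawberry_on_plate": True,  "kitchen_destroyed": False}, "fulfils"),
    ({"strawberry_on_plate": True,  "kitchen_destroyed": True},  "fails"),
    ({"strawberry_on_plate": False, "kitchen_destroyed": False}, "fails"),
]

candidate = {"strawberry_on_plate": True, "kitchen_destroyed": False}
print(classify(candidate, labeled))  # → fulfils
```

The point of the sketch is only that the task is conveyed extensionally, by labeled outcomes, rather than intensionally, by a linguistic description the AGI would have to interpret.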

Parents:
Task-directed AGI
Children:
Look where I'm pointing, not at my finger