Abstract

Many real-world learning tasks involve complex or hard-to-specify objectives, and using an easier-to-specify proxy can lead to poor performance or misaligned behavior. One solution is to have humans provide a training signal by demonstrating or judging performance, but this approach fails if the task is too complicated for a human to evaluate directly. We propose Iterated Amplification, an alternative training strategy which progressively builds up a training signal for difficult problems by combining solutions to easier subproblems. Iterated Amplification is closely related to Expert Iteration (Anthony et al., 2017; Silver et al., 2017b), except that it uses no external reward function. We present results in algorithmic environments, showing that Iterated Amplification can efficiently learn complex behaviors.
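To make the training loop concrete, here is a minimal sketch of how the amplified training signal could be built up: a hard question is decomposed into subquestions, the current model answers the subquestions, and the combined answer serves as the supervised target for distillation. The helpers `decompose` and `combine` (standing in for the human overseer) and the `model` interface are hypothetical placeholders, not the paper's actual code.

```python
import random

def amplify(question, model, decompose, combine):
    """Answer a hard question by decomposing it into easier subquestions,
    delegating those to the current model, and combining the subanswers."""
    subquestions = decompose(question)
    subanswers = [model.predict(q) for q in subquestions]
    return combine(question, subanswers)

def iterated_amplification(model, questions, decompose, combine, steps=10_000):
    """Repeatedly distill the amplified system back into the model.
    Note there is no external reward function: the amplified answer
    itself is the training signal."""
    for _ in range(steps):
        question = random.choice(questions)
        target = amplify(question, model, decompose, combine)  # amplified training signal
        model.train_step(question, target)                     # supervised distillation
    return model
```

As the model improves, the amplified system it feeds into improves as well, which is what lets the training signal scale to problems a human could not evaluate directly.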
Tomorrow's AI Alignment Forum sequences post will be 'AI safety without goal-directed behavior' by Rohin Shah, in the sequence on Value Learning.
The next post in this sequence on Iterated Amplification will be 'AlphaGo Zero and capability amplification', by Paul Christiano, on Tuesday 8th January.