Squiggle Maximizer (formerly "Paperclip maximizer")

Any future AGI with full power over the lightcone, if it is not to destroy most potential value from a human perspective, must have something sufficiently close to human values as its terminal value (goal). Further, seemingly small deviations could result in losing most of the value. Human values seem unlikely to spontaneously emerge in a generic optimization process.[1] A dependably safe AI would therefore have to be programmed explicitly with human values or programmed with the ability (including the goal) of inferring human values.

1. Though it's conceivable that empirical versions of moral realism could hold in practice.