Squiggle Maximizer (formerly "Paperclip maximizer")


Any future AGI with full power over the lightcone, if it is not to destroy most potential value from a human perspective, must have something sufficiently close to human values as its terminal value (goal). Further, seemingly small deviations could result in losing most of the value. Human values seem unlikely to spontaneously emerge in a generic optimization process[1]. A dependably safe AI would therefore have to be programmed explicitly with human values or programmed with the ability (including the goal) of inferring human values.

1. Though it's conceivable that empirical versions of moral realism could hold in practice.
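The fragility-of-value point above can be made concrete with a toy model. The sketch below is a hypothetical illustration, not part of the original article: the value function, the proxy utility, and the resource-budget setup are all assumptions chosen for simplicity. It shows an optimizer whose specified utility omits just one dimension of a two-dimensional value function; under a fixed resource budget it trades that dimension away entirely, so a seemingly small specification gap loses essentially all of the value.

```python
# Hypothetical toy model of value fragility (illustrative only; not from the
# original article). "True" value requires BOTH dimensions, while the AI's
# specified utility omits one of them -- a seemingly small deviation.

def true_value(x: float, y: float) -> float:
    """Human value: both dimensions are required (min acts as a complement)."""
    return min(x, y)

def proxy_utility(x: float, y: float) -> float:
    """The specified goal: drops dimension y entirely."""
    return x

def optimize(utility, budget: float = 100.0, steps: int = 1000):
    """Grid-search how to split a fixed resource budget between x and y."""
    candidates = ((i * budget / steps, budget - i * budget / steps)
                  for i in range(steps + 1))
    return max(candidates, key=lambda xy: utility(*xy))

if __name__ == "__main__":
    x, y = optimize(proxy_utility)
    print("proxy-optimal allocation:", (x, y))        # (100.0, 0.0)
    print("true value achieved:", true_value(x, y))   # 0.0 -- nearly all value lost
    x, y = optimize(true_value)
    print("value-optimal allocation:", (x, y))        # (50.0, 50.0)
    print("true value achievable:", true_value(x, y)) # 50.0
```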


Historical Note: This was originally called a "paperclip maximizer", with paperclips chosen for illustrative purposes because it is very unlikely to be implemented, and has little apparent danger or emotional load (in contrast to, for example, curing cancer or winning wars). Many people interpreted this to be about an AI that was specifically given the instruction of manufacturing paperclips, and that the intended lesson was of an outer alignment failure, i.e. humans failed to give the AI the correct goal. Yudkowsky has since stated the originally intended lesson was of inner alignment failure, wherein the humans gave the AI some other goal, but the AI's internal processes converged on a goal that seems completely arbitrary from the human perspective.

First mentioned by Yudkowsky on the extropians mailing list, a squiggle maximizer is an artificial general intelligence (AGI) whose goal is to maximize the number of molecular squiggles in its collection.