An agent can self-improve and obtain a linear succession of improvements. However, if it is also able to improve its ability to make self-improvements, then each step will yield exponentially greater improvement than the previous one.
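The contrast between the two growth regimes can be sketched with a toy model. The base gain and amplification factor below are illustrative assumptions, not quantities from the article:

```python
# Toy model contrasting plain self-improvement (a constant gain per step)
# with recursive self-improvement (the gain itself is improved each step).
# All numbers are illustrative assumptions.

def plain_self_improvement(steps, gain=1.0):
    """Each step adds a fixed amount of capability: linear growth."""
    capability = 1.0
    history = []
    for _ in range(steps):
        capability += gain
        history.append(capability)
    return history

def recursive_self_improvement(steps, gain=1.0, amplification=1.5):
    """Each step also improves the improver, so the per-step gain
    compounds and capability growth becomes exponential."""
    capability = 1.0
    history = []
    for _ in range(steps):
        capability += gain
        gain *= amplification  # the ability to improve is itself improved
        history.append(capability)
    return history

print(plain_self_improvement(5))      # [2.0, 3.0, 4.0, 5.0, 6.0]
print(recursive_self_improvement(5))  # [2.0, 3.5, 5.75, 9.125, 14.1875]
```

With the same starting gain, the plain agent adds a constant amount each step, while the recursive agent's per-step improvement grows geometrically, so the gap between the two widens without bound.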
Main article: Basic AI drives
Nick Bostrom and Steve Omohundro have separately argued[2][3] that, although values and intelligence are independent, any recursively self-improving intelligence would likely converge on a common set of instrumental values that are useful for achieving any kind of goal. As such a system continued modifying itself towards greater intelligence, it would be likely to adopt more of these behaviors.