Humans are not agents: short vs long term

by Stuart_Armstrong
27th Jun 2017

Comments

Stuart_Armstrong:

How can the short term preference be classified as “live forever” and the long term preference as “die after a century”?

Because "live forever" is the inductive consequence of the short-term "live till tomorrow" preference applied to every day.

Do the arguments imply that the AI will have an RLong function and a PKurtz function for preference-shaping?

No. It implies that the human can be successfully modelled as having a mix of RLong and RKurtz preferences, conditional on which philosopher they meet first. And the AI is trying to best implement human preferences, yet humans have these odd mixed preferences.
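A rough sketch of that conditional model (the function names, signatures, and the "years lived" argument are invented here for illustration; they are not from the post): the human's effective preferences are one of two candidate reward functions, selected by which philosopher they meet first.

```python
from enum import Enum


class Philosopher(Enum):
    LONG = "R. T. Long"   # argues for the long term resolution
    KURTZ = "Paul Kurtz"  # argues for the short term resolution


def r_long(years_lived: int) -> float:
    """Illustrative long term preferences: no value in living past a hundred years."""
    return float(min(years_lived, 100))


def r_kurtz(years_lived: int) -> float:
    """Illustrative short term preferences, applied inductively: each extra year is better."""
    return float(years_lived)


def human_preferences(first_meeting: Philosopher):
    """The human is modelled as RLong or RKurtz, conditional on the first meeting."""
    return r_long if first_meeting is Philosopher.LONG else r_kurtz
```

The point of the sketch is only that the conditional is real: before the meeting, the model does not single out either function as the one the human "really" has.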

What we (the AI) have to "do" is decide which philosopher the human meets first, and hence what their future preferences will be.

Stuart_Armstrong:

It is 'a preference for preferences'; e.g. "my long term needs take precedence over my short term desires" is a meta-preference (in fact, the use of the terms 'needs' vs 'desires' is itself a meta-preference, since at the lowest formal level both are just preferences).
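One toy way to make "a preference for preferences" concrete (purely illustrative, and assuming a lexicographic reading of "take precedence"): a meta-preference takes two ordinary preferences and returns the resolved preference it endorses.

```python
def meta_preference(long_term_pref, short_term_pref):
    """'My long term needs take precedence over my short term desires.'

    Takes two ordinary preference functions over outcomes and returns a resolved
    preference: outcomes are compared on long term value first, with short term
    value used only to break ties (a lexicographic ordering).
    """
    def resolved(outcome):
        return (long_term_pref(outcome), short_term_pref(outcome))

    return resolved


# Example usage (with any two preference functions over the same outcomes):
# best = max(outcomes, key=meta_preference(r_long, r_kurtz))
```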


A putative new idea for AI control; index here.

This is an example of humans not being (idealised) agents.

Imagine a human who has a preference not to live beyond a hundred years. However, they want to live to see next year, and it's predictable that, every year they are alive, they will have the same desire to survive until the next year.

This human (not a completely implausible example, I hope!) has a contradiction between their long and short term preferences. So which is accurate? It seems we could resolve the conflict in favour of either the short term ("live forever") or the long term ("die after a century") preferences.
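A toy way to see the contradiction (entirely illustrative; the 150-year cut-off exists only so the loop terminates): consult the short term preference one year at a time, and check the result against the stated long term cap.

```python
def wants_to_reach_next_year(age: int) -> bool:
    """Short term preference, consulted one year at a time."""
    return True  # every year they are alive, they want to survive to the next one


def within_long_term_cap(age: int) -> bool:
    """Long term preference: do not live beyond a hundred years."""
    return age <= 100


# Resolving in favour of the short term preference: the human never chooses to stop,
# so by induction the short term resolution amounts to "live forever".
age = 0
while wants_to_reach_next_year(age) and age < 150:  # artificial cut-off for the demo
    age += 1

print(age)                        # 150 -- the short term resolution never halts on its own
print(within_long_term_cap(age))  # False -- it violates the long term preference
```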

Now, at this point, maybe we could appeal to meta-preferences - what would the human themselves want, if they could choose? But often these meta-preferences are un- or under-formed, and can be influenced by how the question or debate is framed.

Specifically, suppose we are scheduling this human's agenda. We have the choice of making them meet one of two philosophers (not meeting anyone is not an option). If they meet Professor R. T. Long, he will advise them to follow their long term preferences. If instead they meet Paul Kurtz, he will advise them to pay attention to their short term preferences. Whichever one they meet, they will argue for a while and will then settle on the recommended preference resolution. And they will not change that resolution, whoever they meet subsequently.

Since we are doing the scheduling, we effectively control the human's meta-preferences on this issue. What should we do, and what principles should we use to do so? We are trying to maximise human preferences, but we can also control what they are (and have to control what they are, through our choice of which philosopher they meet first).
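A minimal sketch of why this is uncomfortable (the option names and numbers below are invented for illustration; nothing here is from the post): if the scheduler simply maximises how well it can satisfy whatever preferences result from its own choice, it ends up picking the human's preferences for its own convenience.

```python
# Hypothetical estimates of how well the AI could satisfy the preferences induced
# by each possible meeting. The values are made up; only the comparison matters.
achievable_satisfaction = {
    "meet_Long_first": 0.9,   # induced long term (RLong-style) preferences
    "meet_Kurtz_first": 0.7,  # induced short term (RKurtz-style) preferences
}


def naive_schedule(options: dict) -> str:
    """Pick the meeting whose induced preferences the AI can satisfy best."""
    return max(options, key=options.get)


print(naive_schedule(achievable_satisfaction))
# 'meet_Long_first' -- selected because it scores best for the maximiser,
# not because of any independent reason to prefer that resolution.
```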

It's clear that this can apply to AIs: if they are simultaneously aiding humans and learning their preferences, they will have multiple opportunities for this sort of preference-shaping.