A rational agent is an entity which has a utility function, forms beliefs about its environment, evaluates the consequences of possible actions, and then takes the action which maximizes its utility. Such agents are also referred to as goal-seeking. The concept of a rational agent is used in economics, game theory, decision theory, and artificial intelligence.
Editor note: there is work to be done reconciling this page, Agency page, and Robust Agents. Currently they overlap and I'm not sure they're consistent. - Ruby, 2020-09-15
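The definition above can be sketched in a few lines of code. This is only an illustrative toy, not any standard implementation: the agent's beliefs are a probability distribution over world states, its utility function scores state–action pairs, and it takes the action with the highest expected utility. All names and the umbrella example are invented for illustration.

```python
def expected_utility(action, beliefs, utility):
    """Sum utility over possible states, weighted by believed probability."""
    return sum(p * utility(state, action) for state, p in beliefs.items())

def rational_action(actions, beliefs, utility):
    """Take the action which maximizes expected utility."""
    return max(actions, key=lambda a: expected_utility(a, beliefs, utility))

# Toy example: decide whether to carry an umbrella under uncertain weather.
beliefs = {"rain": 0.3, "sun": 0.7}   # the agent's beliefs about its environment
payoff = {
    ("rain", "umbrella"): 1, ("rain", "no umbrella"): -10,
    ("sun", "umbrella"): 0,  ("sun", "no umbrella"): 2,
}
utility = lambda state, action: payoff[(state, action)]

print(rational_action(["umbrella", "no umbrella"], beliefs, utility))  # umbrella
```

Here taking the umbrella has expected utility 0.3, while leaving it has −1.6, so the rational agent carries the umbrella even though it believes sun is more likely.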
More generally, an agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.1
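This broader sensors-and-actuators definition can be sketched as a simple percept-to-action mapping. The thermostat below is a hypothetical toy example, not drawn from any real API; by this definition even such a trivial device counts as an agent, with no utility function required.

```python
class ThermostatAgent:
    """Perceives temperature through a sensor; acts by switching a heater."""

    def __init__(self, target):
        self.target = target

    def act(self, percept):
        # Sensor reading in, actuator command out.
        return "heater_on" if percept < self.target else "heater_off"

agent = ThermostatAgent(target=20.0)
print(agent.act(18.5))  # heater_on
print(agent.act(21.0))  # heater_off
```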
There has been much discussion on LessWrong as to whether certain AGI designs, such as oracles and tool AI, can be made into mere tools, or whether they will necessarily be agents which will attempt to actively carry out their goals. Any mind that actively engages in goal-directed behavior is potentially dangerous, due to considerations such as basic AI drives possibly causing behavior which is in conflict with humanity's values.
In Dreams of Friendliness and in Reply to Holden on Tool AI, Eliezer Yudkowsky argues that, since all intelligences select correct beliefs from the much larger space of incorrect beliefs, they have goals, and so even seemingly tool-like AIs are likely to necessarily be agents. AIs which are agents will likely dramatically alter the world.
The first use of the concept 'agent' was to model humans in economics. While humans undoubtedly model their surroundings, consider multiple actions, et cetera, they often do not do so in the most rational way. Many documented biases compromise the human process of reasoning. For a thorough review of these, see Thinking Fast and Slow by Daniel Kahneman.
Russell, S. & Norvig, P. (2003) Artificial Intelligence: A Modern Approach. Second Edition. Page 32.