This is a list of brief explanations and definitions for terms that Eliezer Yudkowsky uses in the book Rationality: From AI to Zombies, an edited version of the Sequences.
The glossary is a community effort, and you're welcome to improve on the entries here, or add new ones. See the Talk page for some ideas for unwritten entries.
A
- a priori. A sentence that is reasonable to believe even in the absence of any experiential evidence (outside of the evidence needed to understand the sentence). A priori claims are in some way introspectively self-evident, or justifiable using only abstract reasoning. For example, pure mathematics is often claimed to be a priori, while scientific knowledge is claimed to be a posteriori, or dependent on (sensory) experience. These two terms shouldn’t be confused with prior and posterior probabilities.
- ad hominem. A verbal attack on the person making an argument, where a direct criticism of the argument is possible and would be more relevant. The term is reserved for cases where talking about the person amounts to changing the topic. If your character is the topic from the outset (e.g., during a job interview), then it isn't an ad hominem fallacy to cite evidence showing that you're a lousy worker.
- affective death spiral. A halo effect that perpetuates and exacerbates itself over time.
- AI-Box Experiment. A demonstration by Yudkowsky that people tend to overestimate how hard it is to manipulate people, and therefore underestimate the risk of building an Unfriendly AI that can only interact with its environment by verbally communicating with its programmers. One participant role-plays an AI, while another role-plays a human whose job it is to interact with the AI without voluntarily releasing the AI from its “box”. Yudkowsky and a few other people who have role-played the AI have succeeded in getting the human supervisor to agree to release them, which suggests that a superhuman intelligence would have an even easier time escaping.
- algorithm. A specific procedure for computing some function. A mathematical object consisting of a finite, well-defined sequence of steps that concludes with some output determined by its initial input. Multiple physical systems can simultaneously instantiate the same algorithm. (A minimal code illustration appears at the end of this page.)
- alien god. One of Yudkowsky's pet names for natural selection.
- ambiguity aversion. Preferring small certain gains over much larger uncertain gains (e.g., taking a guaranteed $50 over a 50% chance at $200, even though the gamble's expected value, $100, is twice as large).
- amplitude. A quantity in a configuration space, represented by a complex number. Amplitudes are physical, not abstract or formal. The complex number’s modulus squared (i.e., its absolute value multiplied by itself) yields the Born probabilities, but the reason for this is unknown. (A worked example appears at the end of this page.)
- anchoring. The cognitive bias of relying excessively on initial information even after receiving relevant new information.
- anthropomorphism. The tendency to assign human qualities to non-human phenomena.
- artificial neural network. See “neural network.”
- ASCII. The American Standard Code for Information Interchange. A
...
Anki deck.
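As a minimal illustration of the “algorithm” entry above (this example is an addition to the glossary, not part of the book), here is Euclid's greatest-common-divisor procedure in Python. It shows the defining features named in the entry: a finite, well-defined sequence of steps whose output is determined by its input. The same algorithm could equally be instantiated by pencil-and-paper arithmetic or by a different programming language.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, well-defined sequence of steps
    mapping the input pair (a, b) to their greatest common divisor."""
    while b != 0:
        a, b = b, a % b  # each step is fully determined by the current state
    return a

assert gcd(12, 18) == 6  # the same input always yields the same output
```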
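To make the “amplitude” entry above concrete (a worked example, not part of the book's glossary): for an amplitude written as a complex number \(\psi = a + bi\), the squared modulus is

$$|\psi|^2 = \psi^{*}\psi = (a - bi)(a + bi) = a^2 + b^2.$$

So if a system's two possible outcomes have amplitudes \(\psi_1 = \tfrac{3}{5}\) and \(\psi_2 = \tfrac{4}{5}i\), the Born probabilities are \(|\psi_1|^2 = \tfrac{9}{25}\) and \(|\psi_2|^2 = \tfrac{16}{25}\), which sum to 1 as probabilities must.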