This page contains summaries of LessWrong posts published in 2008.
Eliezer warns readers that he is about to make a few posts directly discussing politics.
Voters for either political party usually have more in common with each other than they do with the politicians they vote for. And yet, they support their own "team members" with fanatic devotion. Nobody is allowed to criticize their own team's politicians without being accused of treason by their fellow voters.
The American System and Misleading Labels
The conclusions we draw from analyzing the American political system are often biased by our own previous understanding of it, which we got in elementary school. In fact, the power of voting for a particular candidate (which is not the same as the power to choose which candidates will run) is not the greatest power of the voters. Instead, the voters' main power is the threat to change which party controls the government, or, extremely rarely, to completely dethrone both political parties and replace them with a third.
Many people try to vote "strategically", by considering which candidate is more "electable". One of the most important factors in whether someone is "electable" is whether they have received attention from the media and the support of one of the two major parties. Naturally, those organizations put considerable thought into who is electable in making their decision. Ultimately, all arguments for "strategic voting" tend to fall apart. The voters themselves get so little say in who the next president is that the best we can do is just to not vote for nincompoops.
Rational vs. Scientific Ev-Psych
In evolutionary biology or psychology, a nice-sounding but untested theory is referred to as a "just-so story", after the stories written by Rudyard Kipling. But if there is a way to test the theory, people tend to consider it more likely to be correct. This is not a rational tendency.
Part of the reason professional evolutionary biologists dislike just-so stories is that many of them are simply wrong.
But There's Still A Chance, Right?
Sometimes, you calculate the probability of a certain event and find that the number is so unbelievably small that your brain really can't keep track of how small it is, any more than you can spot an individual grain of sand on a beach from 100 meters off. But, because you're already thinking about that event enough to calculate the probability of it, it feels like it's still worth keeping track of. It's not.
Nothing is perfectly black or white. Everything is gray. However, this does not mean that everything is the same shade of gray. It may be impossible to completely eliminate bias, but it is still worth reducing bias.
Those without the understanding of the Quantitative Way will often map the process of arriving at beliefs onto the social domains of Authority. They think that if Science is not infinitely certain, or if it has ever admitted a mistake, then it is no longer a trustworthy source, and can be ignored. This cultural gap is rather difficult to cross.
If you say you are 99.9999% confident of a proposition, you're saying that you could make one million equally likely statements and be wrong, on average, once. Probability 1 indicates a state of infinite certainty. Furthermore, once you assign a probability 1 to a proposition, Bayes' theorem says that it can never be changed, in response to any evidence. Probability 1 is a lot harder to get to with a human brain than you would think.
In the ordinary way of writing probabilities, 0 and 1 both seem like entirely reachable quantities. But when you transform probabilities into odds ratios, or log-odds, you realize that in order to get a proposition to probability 1 would require an infinite amount of evidence.
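To make the odds-ratio point concrete, here is a minimal Python sketch (an illustration, not anything from the post) converting probabilities into bits of log-odds; the amount of evidence required grows without bound as the probability approaches 1:

```python
import math

def log_odds_bits(p):
    """Log-odds, in bits, corresponding to probability p."""
    return math.log2(p / (1 - p))

for p in [0.5, 0.9, 0.99, 0.999999, 1 - 1e-12]:
    print(f"p = {p}: {log_odds_bits(p):+.1f} bits of evidence")

# As p -> 1, the required evidence diverges; log_odds_bits(1.0)
# would divide by zero -- "infinite certainty" needs infinite evidence.
```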
The joy of mathematics is inventing mathematical objects, and then noticing that the mathematical objects that you just created have all sorts of wonderful properties that you never intentionally built into them. It is like building a toaster and then realizing that your invention also, for some unexplained reason, acts as a rocket jetpack and MP3 player.
Mathematicians expect that if you dig deep enough, a stable, or even beautiful, pattern will emerge. Some people claim that this belief is unfounded. But we have found order in many of the places we've looked for it.
There are three reasons why a world governed by math can still seem messy. First, we may not actually know the math. Second, even if we know all of the math, we may not have enough computing power to do the full calculation. And finally, even if we know all the math and can compute it, we still may not know where in the mathematical system we are living.
Bayesians expect probability theory, and rationality itself, to be math. Self-consistent, neat, even beautiful. This is why Bayesians think that Cox's Theorem is so important.
When you find a seeming inconsistency in the rules of math, or logic, or probability theory, you might do well to consider that math has rightfully earned a bit more credibility than that. Check the proof. It is more likely that you have made a mistake in algebra, than that you have just discovered a fatal flaw in math itself.
(and subsequent followups) - Offered choices between gambles, people make decision-theoretically inconsistent decisions.
We really shouldn't care less about the future than we do about the present.
Our moral preferences shouldn't be circular. If a policy A is better than B, and B is better than C, and C is better than D, and so on, then policy A really should be better than policy Z.
The "Intuitions" Behind "Utilitarianism"
Our intuitions, the underlying cognitive tricks that we use to build our thoughts, are an indispensable part of our cognition. The problem is that many of those intuitions are incoherent, or are undesirable upon reflection. But if you try to "renormalize" your intuition, you wind up with what is essentially utilitarianism.
There is a long history of people claiming to have found paradoxes in Bayesian probability theory. Typically, these proofs are fallacious but correct-seeming, just as apparent proofs that 2 = 1 are. But in probability theory, the illegal operation is usually not a hidden division by zero, but rather an infinity that is not arrived at as the limit of a finite calculation. Once you are more careful with your math, these paradoxes typically go away.
Many people only start to grow as a rationalist when they find something that they care about more than they care about rationality itself. It takes something really scary to cause you to override your intuitions with math.
Newcomb's Problem and Regret of Rationality
Newcomb's problem is a very famous decision theory problem in which the rational move appears to be consistently punished. This is the wrong attitude to take. Rationalists should win. If your particular ritual of cognition consistently fails to yield good results, change the ritual.
OB Meetup: Millbrae, Thu 21 Feb, 7pm
A word fails to connect to reality in the first place. Is Socrates a framster? Yes or no?
Socrates is a human, and humans, by definition, are mortal. So if you defined humans to not be mortal, would Socrates live forever?
(alternate summary:)
Your argument, if it worked, could coerce reality to go a different way by choosing a different word definition. Socrates is a human, and humans, by definition, are mortal. So if you defined humans to not be mortal, would Socrates live forever?
You try to establish any sort of empirical proposition as being true "by definition". Socrates is a human, and humans, by definition, are mortal. So is it a logical truth if we empirically predict that Socrates should keel over if he drinks hemlock? It seems like there are logically possible, non-self-contradictory worlds where Socrates doesn't keel over - where he's immune to hemlock by a quirk of biochemistry, say. Logical truths are true in all possible worlds, and so never tell you which possible world you live in - and anything you can establish "by definition" is a logical truth.
You unconsciously slap the conventional label on something, without actually using the verbal definition you just gave. You know perfectly well that Bob is "human", even though, on your definition, you can never call Bob "human" without first observing him to be mortal.
The mere presence of words can influence thinking, sometimes misleading it.
(alternate summary:)
The act of labeling something with a word disguises a challengeable inductive inference you are making. If the last 11 egg-shaped objects drawn have been blue, and the last 8 cubes drawn have been red, it is a matter of induction to say this rule will hold in the future. But if you call the blue eggs "bleggs" and the red cubes "rubes", you may reach into the barrel, feel an egg shape, and think "Oh, a blegg."
You try to define a word using words, in turn defined with ever-more-abstract words, without being able to point to an example.
(alternate summary:)
You try to define a word using words, in turn defined with ever-more-abstract words, without being able to point to an example. "What is red?" "Red is a color." "What's a color?" "It's a property of a thing?" "What's a thing? What's a property?" It never occurs to you to point to a stop sign and an apple.
The extension doesn't match the intension. We aren't consciously aware of how we identify a red light in the sky as "Mars", and that identification will happen regardless of your attempt to define "Mars" as "the God of War".
Buy Now Or Forever Hold Your Peace
If you really think that your reasoning is superior to that of prediction markets, there is free money available to you right now. If you aren't picking it up, you clearly don't really believe that you can beat the markets.
Your verbal definition doesn't capture more than a tiny fraction of the category's shared characteristics, but you try to reason as if it does.
(alternate summary:)
Your verbal definition doesn't capture more than a tiny fraction of the category's shared characteristics, but you try to reason as if it does. When the philosophers of Plato's Academy claimed that the best definition of a human was a "featherless biped", Diogenes the Cynic is said to have exhibited a plucked chicken and declared "Here is Plato's Man." The Platonists promptly changed their definition to "a featherless biped with broad nails".
Typicality and Asymmetrical Similarity
You try to treat category membership as all-or-nothing, ignoring the existence of more and less typical subclusters.
(alternate summary:)
You try to treat category membership as all-or-nothing, ignoring the existence of more and less typical subclusters. Ducks and penguins are less typical birds than robins and pigeons. Interestingly, a between-groups experiment showed that subjects thought a disease was more likely to spread from robins to ducks on an island, than from ducks to robins.
The Cluster Structure of Thingspace
A verbal definition works well enough in practice to point out the intended cluster of similar things, but you nitpick exceptions.
(alternate summary:)
A verbal definition works well enough in practice to point out the intended cluster of similar things, but you nitpick exceptions. Not every human has ten fingers, or wears clothes, or uses language; but if you look for an empirical cluster of things which share these characteristics, you'll get enough information that the occasional nine-fingered human won't fool you.
You ask whether something "is" or "is not" a category member but can't name the question you really want answered.
(alternate summary:)
You ask whether something "is" or "is not" a category member but can't name the question you really want answered. What is a "man"? Is Barney the Baby Boy a "man"? The "correct" answer may depend considerably on whether the query you really want answered is "Would hemlock be a good thing to feed Barney?" or "Will Barney make a good husband?"
You treat intuitively perceived hierarchical categories as the only correct way to parse the world, without realizing that other forms of statistical inference are possible even though your brain doesn't use them.
(alternate summary:)
You treat intuitively perceived hierarchical categories as the only correct way to parse the world, without realizing that other forms of statistical inference are possible even though your brain doesn't use them. It's much easier for a human to notice whether an object is a "blegg" or a "rube" than to notice that red objects never glow in the dark, but red furred objects have all the other characteristics of bleggs. Other statistical algorithms work differently.
How An Algorithm Feels From Inside
You talk about categories as if they are manna fallen from the Platonic Realm, rather than inferences implemented in a real brain.
(alternate summary:)
You talk about categories as if they are manna fallen from the Platonic Realm, rather than inferences implemented in a real brain. The ancient philosophers said "Socrates is a man", not, "My brain perceptually classifies Socrates as a match against the 'human' concept".
You argue about a category membership even after screening off all questions that could possibly depend on a category-based inference. After you observe that an object is blue, egg-shaped, furred, flexible, opaque, luminescent, and palladium-containing, what's left to ask by arguing, "Is it a blegg?" But if your brain's categorizing neural network contains a (metaphorical) central unit corresponding to the inference of blegg-ness, it may still feel like there's a leftover question.
An example of how the technique helps.
(alternate summary:)
You allow an argument to slide into being about definitions, even though it isn't what you originally wanted to argue about. If, before a dispute started about whether a tree falling in a deserted forest makes a "sound", you asked the two soon-to-be arguers whether they thought a "sound" should be defined as "acoustic vibrations" or "auditory experiences", they'd probably tell you to flip a coin. Only after the argument starts does the definition of a word become politically charged.
You think a word has a meaning, as a property of the word itself; rather than there being a label that your brain associates to a particular concept.
(alternate summary:)
You think a word has a meaning, as a property of the word itself; rather than there being a label that your brain associates to a particular concept. When someone shouts, "Yikes! A tiger!", evolution would not favor an organism that thinks, "Hm... I have just heard the syllables 'Tie' and 'Grr' which my fellow tribemembers associate with their internal analogues of my own tiger concept and which aiiieeee CRUNCH CRUNCH GULP." So the brain takes a shortcut, and it seems that the meaning of tigerness is a property of the label itself. People argue about the correct meaning of a label like "sound".
The Argument from Common Usage
You argue over the meanings of a word, even after all sides understand perfectly well what the other sides are trying to say.
(alternate summary:)
You argue over the meanings of a word, even after all sides understand perfectly well what the other sides are trying to say. The human ability to associate labels to concepts is a tool for communication. When people want to communicate, we're hard to stop; if we have no common language, we'll draw pictures in sand. When you each understand what is in the other's mind, you are done.
You pull out a dictionary in the middle of an empirical or moral argument. Dictionary editors are historians of usage, not legislators of language. If the common definition contains a problem - if "Mars" is defined as the God of War, or a "dolphin" is defined as a kind of fish, or "Negroes" are defined as a separate category from humans, the dictionary will reflect the standard mistake.
You pull out a dictionary in the middle of any argument ever. Seriously, what the heck makes you think that dictionary editors are an authority on whether "atheism" is a "religion" or whatever? If you have any substantive issue whatsoever at stake, do you really think dictionary editors have access to ultimate wisdom that settles the argument?
You defy common usage without a reason, making it gratuitously hard for others to understand you. Fast stand up plutonium, with bagels without handle.
You use complex renamings to create the illusion of inference.
(alternate summary:)
You use complex renamings to create the illusion of inference. Is a "human" defined as a "mortal featherless biped"? Then write: "All [mortal featherless bipeds] are mortal; Socrates is a [mortal featherless biped]; therefore, Socrates is mortal." Looks less impressive that way, doesn't it?
Classic Sichuan in Millbrae, Thu Feb 21, 7pm
When a word poses a problem, the simplest solution is to eliminate the word and its synonyms.
(alternate summary:)
If Albert and Barry aren't allowed to use the word "sound", then Albert will have to say "A tree falling in a deserted forest generates acoustic vibrations", and Barry will say "A tree falling in a deserted forest generates no auditory experiences". When a word poses a problem, the simplest solution is to eliminate the word and its synonyms.
Replace the Symbol with the Substance
Description of the technique.
(alternate summary:)
The existence of a neat little word prevents you from seeing the details of the thing you're trying to think about.
(alternate summary:)
The existence of a neat little word prevents you from seeing the details of the thing you're trying to think about. What actually goes on in schools once you stop calling it "education"? What's a degree, once you stop calling it a "degree"? If a coin lands "heads", what's its radial orientation? What is "truth", if you can't say "accurate" or "correct" or "represent" or "reflect" or "semantic" or "believe" or "knowledge" or "map" or "real" or any other simple term?
You have only one word, but there are two or more different things-in-reality, so that all the facts about them get dumped into a single undifferentiated mental bucket.
(alternate summary:)
You have only one word, but there are two or more different things-in-reality, so that all the facts about them get dumped into a single undifferentiated mental bucket. It's part of a detective's ordinary work to observe that Carol wore red last night, or that she has black hair; and it's part of a detective's ordinary work to wonder if maybe Carol dyes her hair. But it takes a subtler detective to wonder if there are two Carols, so that the Carol who wore red is not the same as the Carol who had black hair.
You see patterns where none exist, harvesting other characteristics from your definitions even when there is no similarity along that dimension.
(alternate summary:)
You see patterns where none exist, harvesting other characteristics from your definitions even when there is no similarity along that dimension. In Japan, it is thought that people of blood type A are earnest and creative, blood type Bs are wild and cheerful, blood type Os are agreeable and sociable, and blood type ABs are cool and controlled.
You try to sneak in the connotations of a word, by arguing from a definition that doesn't include the connotations.
(alternate summary:)
You try to sneak in the connotations of a word, by arguing from a definition that doesn't include the connotations. A "wiggin" is defined in the dictionary as a person with green eyes and black hair. The word "wiggin" also carries the connotation of someone who commits crimes and launches cute baby squirrels, but that part isn't in the dictionary. So you point to someone and say: "Green eyes? Black hair? See, told you he's a wiggin! Watch, next he's going to steal the silverware."
You claim "X, by definition, is a Y!" On such occasions you're almost certainly trying to sneak in a connotation of Y that wasn't in your given definition.
(alternate summary:)
You claim "X, by definition, is a Y!" On such occasions you're almost certainly trying to sneak in a connotation of Y that wasn't in your given definition. You define "human" as a "featherless biped", and point to Socrates and say, "No feathers - two legs - he must be human!" But what you really care about is something else, like mortality. If what was in dispute was Socrates's number of legs, the other fellow would just reply, "Whaddaya mean, Socrates's got two legs? That's what we're arguing about in the first place!"
You claim "Ps, by definition, are Qs!" If you see Socrates out in the field with some biologists, gathering herbs that might confer resistance to hemlock, there's no point in arguing "Men, by definition, are mortal!" The main time you feel the need to tighten the vise by insisting that something is true "by definition" is when there's other information that calls the default inference into doubt.
You try to establish membership in an empirical cluster "by definition". You wouldn't feel the need to say, "Hinduism, by definition, is a religion!" because, well, of course Hinduism is a religion. It's not just a religion "by definition", it's, like, an actual religion. Atheism does not resemble the central members of the "religion" cluster, so if it wasn't for the fact that atheism is a religion by definition, you might go around thinking that atheism wasn't a religion. That's why you've got to crush all opposition by pointing out that "Atheism is a religion" is true by definition, because it isn't true any other way.
Your definition draws a boundary around things that don't really belong together.
(alternate summary:)
Your definition draws a boundary around things that don't really belong together. You can claim, if you like, that you are defining the word "fish" to refer to salmon, guppies, sharks, dolphins, and trout, but not jellyfish or algae. You can claim, if you like, that this is merely a list, and there is no way a list can be "wrong". Or you can stop playing nitwit games and admit that you made a mistake and that dolphins don't belong on the fish list.
Which sounds more plausible, "God did a miracle" or "A supernatural universe-creating entity temporarily suspended the laws of physics"?
(alternate summary:)
You use a short word for something that you won't need to describe often, or a long word for something you'll need to describe often. This can result in inefficient thinking, or even misapplications of Occam's Razor, if your mind thinks that short sentences sound "simpler". Which sounds more plausible, "God did a miracle" or "A supernatural universe-creating entity temporarily suspended the laws of physics"?
Mutual Information, and Density in Thingspace
You draw your boundary around a volume of space where there is no greater-than-usual density, meaning that the associated word does not correspond to any performable Bayesian inferences.
(alternate summary:)
You draw your boundary around a volume of space where there is no greater-than-usual density, meaning that the associated word does not correspond to any performable Bayesian inferences. Since green-eyed people are not more likely to have black hair, or vice versa, and they don't share any other characteristics in common, why have a word for "wiggin"?
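As a toy illustration (hypothetical numbers, not from the post), here is a Python sketch computing the mutual information between eye color and hair color when the two are independent; zero mutual information means the category supports no Bayesian inference:

```python
import itertools, math

# Hypothetical joint distribution: green eyes and black hair independent.
p_eyes = {"green": 0.2, "other": 0.8}
p_hair = {"black": 0.3, "other": 0.7}
joint = {(e, h): p_eyes[e] * p_hair[h]
         for e, h in itertools.product(p_eyes, p_hair)}

# Mutual information I(eyes; hair) = sum p(e,h) log2(p(e,h) / (p(e)p(h))).
mi = sum(p * math.log2(p / (p_eyes[e] * p_hair[h]))
         for (e, h), p in joint.items())
print(f"I(eyes; hair) = {mi:.6f} bits")  # 0.0 -- knowing someone is a
                                         # "wiggin" tells you nothing extra
```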
Superexponential Conceptspace, and Simple Words
You draw an unsimple boundary without any reason to do so.
(alternate summary:)
You draw an unsimple boundary without any reason to do so. The act of defining a word to refer to all humans, except black people, seems kind of suspicious. If you don't present reasons to draw that particular boundary, trying to create an "arbitrary" word in that location is like a detective saying: "Well, I haven't the slightest shred of support one way or the other for who could've murdered those orphans... but have we considered John Q. Wiffleheim as a suspect?"
If you are trying to judge whether some unpleasant idea is true, you should visualize what the world would look like if it were true, and what you would do in that situation. This will allow you to be less scared of the idea, and to reason about it without immediately trying to reject it.
The Second Law of Thermodynamics, and Engines of Cognition
To form accurate beliefs about something, you really do have to observe it. It's a very physical, very real process: any rational mind does "work" in the thermodynamic sense, not just the sense of mental effort. Engines of cognition are not so different from heat engines, though they manipulate entropy in a more subtle form than burning gasoline. So unless you can tell me which specific step in your argument violates the laws of physics by giving you true knowledge of the unseen, don't expect me to believe that a big, elaborate clever argument can do it either.
People learn under the traditional school regimen that the teacher tells you certain things, and you must believe them and recite them back; but if a mere student suggests a belief, you do not have to obey it. They map the domain of belief onto the domain of authority, and think that a certain belief is like an order that must be obeyed, but a probabilistic belief is like a mere suggestion. And when half-trained or tenth-trained rationalists abandon their art and try to believe without evidence just this once, they often build vast edifices of justification, confusing themselves just enough to conceal the magical steps. It can be quite a pain to nail down where the magic occurs - their structure of argument tends to morph and squirm away as you interrogate them. But there's always some step where a tiny probability turns into a large one - where they try to believe without evidence - where they step into the unknown, thinking, "No one can prove me wrong".
If a mind is arriving at true beliefs, and we assume that the second law of thermodynamics has not been violated, that mind must be doing something at least vaguely Bayesian - at least one process with a sort-of Bayesian structure somewhere - or it couldn't possibly work.
Conditional Independence, and Naive Bayes
You use categorization to make inferences about properties that don't have the appropriate empirical structure for Naive Bayes to approximate well, namely conditional independence given knowledge of the class.
(alternate summary:)
You use categorization to make inferences about properties that don't have the appropriate empirical structure for Naive Bayes to approximate well, namely conditional independence given knowledge of the class. No way am I trying to summarize this one. Just read the blog post.
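For readers who want the gist anyway, here is a minimal Naive Bayes sketch for the blegg/rube example (illustrative probabilities invented for this sketch, not taken from the post); the naive assumption is exactly the conditional independence of features given the class:

```python
priors = {"blegg": 0.5, "rube": 0.5}
# P(feature | class), assumed conditionally independent given the class.
likelihoods = {
    "blegg": {"blue": 0.98, "egg": 0.95, "furred": 0.90},
    "rube":  {"blue": 0.02, "egg": 0.05, "furred": 0.10},
}

def posterior(observed):
    """P(class | observed features), via Bayes plus the naive assumption."""
    scores = {}
    for cls, prior in priors.items():
        score = prior
        for feature in observed:
            score *= likelihoods[cls][feature]  # independence given class
        scores[cls] = score
    total = sum(scores.values())
    return {cls: s / total for cls, s in scores.items()}

print(posterior(["blue", "egg"]))  # overwhelmingly "blegg"
```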
Words as Mental Paintbrush Handles
Visualize a "triangular lightbulb". What did you see?
(alternate summary:)
You think that words are like tiny little LISP symbols in your mind, rather than words being labels that act as handles to direct complex mental paintbrushes that can paint detailed pictures in your sensory workspace. Visualize a "triangular lightbulb". What did you see?
"Martin told Bob the building was on his left." But "left" is a function-word that evaluates with a speaker-dependent variable grabbed from the surrounding context. Whose "left" is meant, Bob's or Martin's?
(alternate summary:)
You use a word that has different meanings in different places as though it meant the same thing on each occasion, possibly creating the illusion of something protean and shifting. "Martin told Bob the building was on his left." But "left" is a function-word that evaluates with a speaker-dependent variable grabbed from the surrounding context. Whose "left" is meant, Bob's or Martin's?
37 Ways That Words Can Be Wrong
Contains summaries of the sequence of posts about the proper use of words.
This is where the "free will" puzzle is explicitly posed, along with criteria for what does and does not constitute a satisfying answer.
Where the mind cuts against reality's grain, it generates wrong questions - questions that cannot possibly be answered on their own terms, but only dissolved by understanding the cognitive algorithm that generates the perception of a question.
When you are faced with an unanswerable question - a question to which it seems impossible to even imagine an answer - there is a simple trick which can turn the question solvable. Instead of asking, "Why do I have free will?", try asking, "Why do I think I have free will?"
E. T. Jaynes used the term Mind Projection Fallacy to denote the error of projecting your own mind's properties into the external world. The Mind Projection Fallacy generalizes as an error: it appears in the argument over the real meaning of the word "sound", in the magazine cover of the monster carrying off a woman in the torn dress, in Kant's declaration that space by its very nature is flat, and in Hume's definition of a priori ideas as those "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe"...
Probabilities express uncertainty, and it is only agents who can be uncertain. A blank map does not correspond to a blank territory. Ignorance is in the mind.
The Quotation is not the Referent
It's very easy to derive extremely wrong conclusions if you don't make a clear enough distinction between your beliefs about the world, and the world itself.
Using qualitative, binary reasoning may make it easier to confuse belief and reality; if we use probability distributions, the distinction is much clearer.
We build models of the universe that have many different levels of description. But so far as anyone has been able to determine, the universe itself has only the single level of fundamental physics - reality doesn't explicitly compute protons, only quarks.
Explaining vs. Explaining Away
Apparently "the mere touch of cold philosophy", i.e., the truth, has destroyed haunts in the air, gnomes in the mine, and rainbows. This calls to mind a rather different bit of verse:
One of these things
Is not like the others
One of these things
Doesn't belong
The air has been emptied of its haunts, and the mine de-gnomed—but the rainbow is still there!
There is a very great distinction between being able to see where the rainbow comes from, and playing around with prisms to confirm it, and maybe making a rainbow yourself by spraying water droplets, versus some dour-faced philosopher just telling you, "No, there's nothing special about the rainbow. Didn't you hear? Scientists have explained it away. Just something to do with raindrops or whatever. Nothing to be excited about." I think this distinction probably accounts for a hell of a lot of the deadly existential emptiness that supposedly accompanies scientific reductionism.
Equations of physics aren't about strong emotions. They can inspire those emotions in the mind of a scientist, but the emotions are not as raw as the stories told about Jupiter (the god). And so it might seem that reducing Jupiter to a spinning ball of methane and ammonia takes away some of the poetry in those stories. But ultimately, we don't have to keep telling stories about Jupiter. It's not necessary for Jupiter to think and feel in order for us to tell stories, because we can always write stories with humans as their protagonists.
If you can't take joy in things that turn out to be explicable, you're going to set yourself up for eternal disappointment. Don't worry if quantum physics turns out to be normal.
It feels incredibly good to discover the answer to a problem that nobody else has answered. And we should enjoy finding answers. But we really shouldn't base our joy on the fact that nobody else has done it before. Even if someone else knows the answer to a puzzle, if you don't know it, it's still a mystery to you. And you should still feel joy when you discover the answer.
There are several reasons why it's worth talking about joy in the merely real in a discussion on reductionism. One is to leave a line of retreat. Another is to improve your own abilities as a rationalist by learning to invest your energy in the real world, and in accomplishing things here, rather than in a fantasy.
If You Demand Magic, Magic Won't Help
Magic (and dragons, and UFOs, and ...) get much of their charm from the fact that they don't actually exist. If dragons did exist, people would treat them like zebras; most people wouldn't bother to pay attention, but some scientists would get oddly excited about them. If we ever create dragons, or find aliens, we will have to learn to enjoy them, even though they happen to exist.
New York OB Meetup (ad-hoc) on Monday, Mar 24, @6pm
Most of the stuff reported in Science News is false, or at the very least, misleading. Scientific controversies are topics of such incredible difficulty that even people in the field aren't sure what's true. Read elementary textbooks. Study the settled science before you try to understand the outer fringes.
Amazing Breakthrough Day: April 1st
A proposal for a new holiday, in which journalists report on great scientific discoveries of the past as if they had just happened, and were still shocking.
Is Humanism A Religion-Substitute?
Trying to replace religion with humanism, atheism, or transhumanism doesn't work. If you try to write a hymn to the nonexistence of god, it will fail, because you are simply trying to imitate something that we don't really need to imitate. But that doesn't mean that the feeling of transcendence is something we should always avoid. After all, in a world in which religion never existed, people would still feel that same way.
Describes a few pieces of experimental evidence showing that objects or information which are believed to be in short supply are valued more than the same objects or information would be on their own.
To Spread Science, Keep It Secret
People don't study science, in part, because they perceive it to be public knowledge. In fact, it's not; you have to study a lot before you actually understand it. But because science is thought to be freely available, people ignore it in favor of cults that conceal their secrets, even if those secrets are wrong. In fact, it might be better if scientific knowledge was hidden from anyone who didn't undergo the initiation ritual, and study as an acolyte, and wear robes, and chant, and...
Brennan is inducted into the Conspiracy
When you pick up a cup of water, is it your hand that picks it up, or is it your fingers, thumb, and palm working together? Just because something can be reduced to smaller parts doesn't mean that the original thing doesn't exist.
It is very hard, without the benefit of hindsight, to understand just how it is that these little bouncing billiard balls called atoms could ever combine in such a way as to make something angry. If you try to imagine this problem without understanding the ideas of neurons, information processing, computing, etc., you realize just how challenging reductionism actually is.
For a very long time, people had a detailed understanding of kinetics, and they had a detailed understanding of heat. They understood concepts such as momentum and elastic rebounds, as well as concepts such as temperature and pressure. It took an extraordinary amount of further work to realize that heat and motion were really the same thing.
Brain Breakthrough! It's Made of Neurons!
Eliezer's contribution to Amazing Breakthrough Day.
Virtually every belief you have is not about elementary particle fields, which are (as far as we know) the actual reality. This doesn't mean that those beliefs aren't true. "Snow is white" does not mention quarks anywhere, and yet snow nevertheless is white. It's a computational shortcut, but it's still true.
Don't try to put your consciousness or your personal identity outside physics. Whatever makes you say "I think therefore I am", causes your lips to move; it is within the chains of cause and effect that produce our observed universe.
A few more points on Zombies.
The Generalized Anti-Zombie Principle
The argument against zombies can be extended into a more general anti-zombie principle. But figuring out exactly what that more general principle is turns out to be harder than it seems.
Fleshes out the generalized anti-zombie principle a bit more, and describes the game "follow-the-improbability".
Belief in the Implied Invisible
That it's impossible even in principle to observe something isn't always enough to conclude that it doesn't exist.
(alternate summary:)
If a spaceship goes over the cosmological horizon relative to us, so that it can no longer communicate with us, should we believe that the spaceship instantly ceases to exist?
Quantum mechanics doesn't deserve its fearsome reputation.
(alternate summary:)
Quantum mechanics doesn't deserve its fearsome reputation. If you tell people something is supposed to be mysterious, they won't understand it. It's human intuitions that are "strange" or "weird"; physics itself is perfectly normal. Talking about historical erroneous concepts like "particles" or "waves" is just asking to confuse people; present the real, unified quantum physics straight out. The series will take a strictly realist perspective - quantum equations describe something that is real and out there. Warning: Although a large faction of physicists agrees with this, it is not universally accepted. Stronger warning: I am not even going to present non-realist viewpoints until later, because I think this is a major source of confusion.
A preliminary glimpse at the stuff reality is made of. The classic split-photon experiment with half-silvered mirrors. Alternative pathways the photon can take, can cancel each other out. The mysterious measuring tool that tells us the relative squared moduli.
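A sketch of that calculation in Python, using the rule from the post that going straight through a half-silvered mirror multiplies the amplitude by 1 while being deflected multiplies it by i (normalization factors omitted, since only relative squared moduli matter; the exact mirror layout here is an assumption of this sketch):

```python
STRAIGHT, TURN = 1, 1j   # amplitude factors at a half-silvered mirror
MIRROR = 1j              # a full mirror also multiplies by i

# Two half-silvered mirrors with a full mirror in each arm: each detector
# is reached by two alternative pathways, whose amplitudes add.
d1 = STRAIGHT * MIRROR * TURN + TURN * MIRROR * STRAIGHT
d2 = STRAIGHT * MIRROR * STRAIGHT + TURN * MIRROR * TURN

print(abs(d1) ** 2)  # 4.0 -- all the relative squared modulus here
print(abs(d2) ** 2)  # 0.0 -- the two pathways cancel exactly
```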
The laws of physics are inherently over mathematical entities, configurations, that involve multiple particles. A basic, ontologically existent entity, according to our current understanding of quantum mechanics, does not look like a photon - it looks like a configuration of the universe with "A photon here, a photon there." Amplitude flows between these configurations can cancel or add; this gives us a way to detect which configurations are distinct. It is an experimentally testable fact that "Photon 1 here, photon 2 there" is the same configuration as "Photon 2 here, photon 1 there".
Since configurations are over the combined state of all the elements in a system, adding a sensor that detects whether a particle went one way or the other, becomes a new element of the system that can make configurations "distinct" instead of "identical". This confused the living daylights out of early quantum experimenters, because it meant that things behaved differently when they tried to "measure" them. But it's not only measuring instruments that do the trick - any sensitive physical element will do - and the distinctness of configurations is a physical fact, not a fact about our knowledge. There is no need to suppose that the universe cares what we think.
Where Philosophy Meets Science
In retrospect, supposing that quantum physics had anything to do with consciousness was a big mistake. Could philosophers have told the physicists so? But we don't usually see philosophers sponsoring major advances in physics; why not?
Can You Prove Two Particles Are Identical?
You wouldn't think that it would be possible to do an experiment that told you that two particles are completely identical - not just to the limit of experimental precision, but perfectly. You could even give a precise-sounding philosophical argument for why it was not possible - but the argument would have a deeply buried assumption. Quantum physics violates this deep assumption, making the experiment easy.
Classical Configuration Spaces
How to visualize the state of a system of two 1-dimensional particles, as a single point in 2-dimensional space. Understanding configuration spaces in classical physics is a useful first step, before trying to imagine quantum configuration spaces.
Instead of a system state being associated with a single point in a classical configuration space, the instantaneous real state of a quantum system is a complex amplitude distribution over a quantum configuration space. What creates the illusion of "individual particles", like an electron caught in a trap, is a plaid distribution - one that happens to factor into the product of two parts. It is the whole distribution that evolves when a quantum system evolves. Individual configurations don't have physics; amplitude distributions have physics. Quantum entanglement is the general case; quantum independence is the special case.
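A toy numerical sketch of the "plaid" idea (made-up amplitudes, not anything from the post): a joint amplitude distribution factors into a product of one-particle parts exactly when, viewed as a matrix, it has rank 1.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=4) + 1j * rng.normal(size=4)  # particle 1 amplitudes
b = rng.normal(size=4) + 1j * rng.normal(size=4)  # particle 2 amplitudes

plaid = np.outer(a, b)                  # factorizes: "independent" particles
entangled = np.zeros((4, 4), dtype=complex)
entangled[0, 1] = entangled[1, 0] = 1 / np.sqrt(2)  # cannot be factored

def factorizes(psi, tol=1e-9):
    """A joint amplitude matrix factors into a product iff it has rank 1."""
    return np.linalg.matrix_rank(psi, tol) == 1

print(factorizes(plaid))      # True  -- the illusion of individual particles
print(factorizes(entangled))  # False -- entanglement, the general case
```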
Instead of thinking that a photon takes a single straight path through space, we can regard it as taking all possible paths through space, and adding the amplitudes for every possible path. Nearly all the paths cancel out - unless we do clever quantum things, so that some paths add instead of canceling out. Then we can make light do funny tricks for us, like reflecting off a mirror in such a way that the angle of incidence doesn't equal the angle of reflection. But ordinarily, nearly all the paths except an extremely narrow band, cancel out - this is one of the keys to recovering the hallucination of classical physics.
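A toy numerical sketch of path-summing (illustrative numbers, not from the post): add up the phase of every path from A to B via each point on a mirror. Paths near the classical reflection point, where the path length is stationary, add coherently, while the rest mostly cancel.

```python
import numpy as np

wavelength = 0.01
A, B = np.array([-1.0, 1.0]), np.array([1.0, 1.0])
xs = np.linspace(-2, 2, 20001)       # candidate reflection points on mirror
lengths = np.hypot(xs - A[0], A[1]) + np.hypot(B[0] - xs, B[1])
phases = np.exp(2j * np.pi * lengths / wavelength)

near = np.abs(xs) < 0.2              # around the classical path at x = 0
print(abs(phases[near].sum()))       # large: stationary phase adds up
print(abs(phases[~near].sum()))      # much smaller: distant paths cancel
```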
One of the chief ways to confuse yourself while thinking about quantum mechanics, is to think as if photons were little billiard balls bouncing around. The appearance of little billiard balls is a special case of a deeper level on which there are only multiparticle configurations and amplitude flows. It is easy to set up physical situations in which there exists no fact of the matter as to which electron was originally which.
Identity Isn't In Specific Atoms
As a consequence of quantum theory, we can see that the concept of swapping out all the atoms in you with "different" atoms is physical nonsense. It's not something that corresponds to anything that could ever be done, even in principle, because the concept is so confused. You are still you, no matter "which" atoms you are made of.
A satirical script for a zombie movie, but not about the lurching and drooling kind. The philosophical kind.
Given that there's no such thing as "the same atom", whether you are "the same person" from one time to another can't possibly depend on whether you're made out of the same atoms.
A quantum system that factorizes can evolve into a system that doesn't factorize, destroying the illusion of independence. But entangling a quantum system with its environment, can appear to destroy entanglements that are already present. Entanglement with the environment can separate out the pieces of an amplitude distribution, preventing them from interacting with each other. Decoherence is fundamentally symmetric in time, but appears asymmetric because of the second law of thermodynamics.
The So-Called Heisenberg Uncertainty Principle
Unlike classical physics, in quantum physics it is not possible to separate out a particle's "position" from its "momentum".
(alternate summary:)
Unlike classical physics, in quantum physics it is not possible to separate out a particle's "position" from its "momentum". The evolution of the amplitude distribution over time, involves things like taking the second derivative in space and multiplying by i to get the first derivative in time. The end result of this time evolution rule is that blobs of particle-presence appear to race around in physical space. The notion of "an exact particular momentum" or "an exact particular position" is not something that can physically happen, it is a tool for analyzing amplitude distributions by taking them apart into a sum of simpler waves. This uses the assumption and fact of linearity: the evolution of the whole wavefunction seems to always be the additive sum of the evolution of its pieces. Using this tool, we can see that if you take apart the same distribution into a sum of positions and a sum of momenta, they cannot both be sharply concentrated at the same time. When you "observe" a particle's position, that is, decohere its positional distribution by making it interact with a sensor, you take its wave packet apart into two pieces; then the two pieces evolve differently. The Heisenberg Principle definitely does not say that knowing about the particle, or consciously seeing it, will make the universe behave differently.
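A numerical sketch of this tradeoff (an illustration, not from the post): decompose Gaussian wave packets of different widths into momentum waves with a Fourier transform, and watch the product of the two spreads stay pinned near its minimum.

```python
import numpy as np

N, L = 4096, 100.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)    # momentum-space grid (hbar = 1)

for sigma in (0.5, 1.0, 2.0):
    psi = np.exp(-x**2 / (4 * sigma**2))   # position-space wave packet
    psi /= np.sqrt((abs(psi)**2).sum() * dx)
    phi = np.fft.fft(psi)                  # decomposition into momenta
    pk = abs(phi)**2 / (abs(phi)**2).sum()
    spread_x = np.sqrt(((x**2) * abs(psi)**2 * dx).sum())
    spread_k = np.sqrt(((k**2) * pk).sum())
    # ~0.5 each: narrower in position means wider in momentum, and vice versa
    print(f"sigma={sigma}: dx * dk = {spread_x * spread_k:.3f}")
```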
Which Basis Is More Fundamental?
The position basis can be computed locally in the configuration space; the momentum basis is not local. Why care about locality? Because it is a very deep principle; reality itself seems to favor it in some way.
Where Physics Meets Experience
Meet the Ebborians, who reproduce by fission. The Ebborian brain is like a thick sheet of paper that splits down its thickness. They frequently experience dividing into two minds, and can talk to their other selves. It seems that their unified theory of physics is almost finished, and can answer every question, when one Ebborian asks: When exactly does one Ebborian become two people?
Where Experience Confuses Physicists
It then turns out that the entire planet of Ebbore is splitting along a fourth-dimensional thickness, duplicating all the people within it. But why does the apparent chance of "ending up" in one of those worlds, equal the square of the fourth-dimensional thickness? Many mysterious answers are proposed to this question, and one non-mysterious one.
When a sensor measures a particle whose amplitude distribution stretches over space - perhaps seeing if the particle is to the left or right of some dividing line - then the standard laws of quantum mechanics call for the sensor+particle system to evolve into a state of (particle left, sensor measures LEFT) + (particle right, sensor measures RIGHT). But when we humans look at the sensor, it only seems to say "LEFT" or "RIGHT", never a mixture like "LIGFT". This, of course, is because we ourselves are made of particles, and subject to the standard quantum laws that imply decoherence. Under standard quantum laws, the final state is (particle left, sensor measures LEFT, human sees "LEFT") + (particle right, sensor measures RIGHT, human sees "RIGHT").
Decoherence is implicit in quantum physics, not an extra law on top of it. Asking exactly when "one world" splits into "two worlds" may be like asking when, if you keep removing grains of sand from a pile, it stops being a "heap". Even if you're inside the world, there may not be a definite answer. This puzzle does not arise only in quantum physics; the Ebborians could face it in a classical universe, or we could build sentient flat computers that split down their thickness. Is this really a physicist's problem?
There is no exact point at which decoherence suddenly happens. All of quantum mechanics is continuous and differentiable, and decoherent processes are no exception to this.
Decoherence is implicit within physics, not an extra law on top of it. You can choose representations that make decoherence harder to see, just like you can choose representations that make apples harder to see, but exactly the same physical process still goes on; the apple doesn't disappear and neither does decoherence. If you could make decoherence magically go away by choosing the right representation, we wouldn't need to shield quantum computers from the environment.
The last serious mysterious question left in quantum physics: When a quantum world splits in two, why do we seem to have a greater probability of ending up in the larger blob, exactly proportional to the integral of the squared modulus? It's an open problem, but non-mysterious answers have been proposed. Try not to go funny in the head about it.
Since quantum evolution is linear and unitary, decoherence can be seen as projecting a wavefunction onto orthogonal subspaces. This can be neatly illustrated using polarized photons and the angle of the polarized sheet that will absorb or transmit them.
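A minimal sketch of that illustration (a pedagogical toy, not code from the post): a photon polarized at angle theta decomposes, in the basis of a filter at another angle, into a transmitted and an absorbed component whose squared amplitudes always sum to 1.

```python
import numpy as np

def project(theta, filter_angle):
    """Amplitudes of a theta-polarized photon in a filter's basis."""
    transmit = np.cos(theta - filter_angle)  # component along the filter axis
    absorb = np.sin(theta - filter_angle)    # orthogonal component
    return transmit, absorb

theta = np.deg2rad(30)
for filt_deg in (0, 30, 60, 120):
    t, a = project(theta, np.deg2rad(filt_deg))
    print(f"filter at {filt_deg:3d} deg: P(transmit) = {t**2:.3f}")
# At 30 deg the photon always passes; at the orthogonal 120 deg, never.
# The squared moduli t**2 + a**2 always sum to 1.
```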
Using our newly acquired understanding of photon polarizations, we see how to construct a quantum state of two photons in which, when you measure one of them, the person in the same world as you, will always find that the opposite photon has opposite quantum state. This is not because any influence is transmitted; it is just decoherence that takes place in a very symmetrical way, as can readily be observed in our calculations.
Bell's Theorem: No EPR "Reality"
(Note: This post was designed to be read as a stand-alone, if desired.) Originally, the discoverers of quantum physics thought they had discovered an incomplete description of reality - that there was some deeper physical process they were missing, and this was why they couldn't predict exactly the results of quantum experiments. The math of Bell's Theorem is surprisingly simple, and we walk through it. Bell's Theorem rules out being able to locally predict a single, unique outcome of measurements - ruling out a way that Einstein, Podolsky, and Rosen once defined "reality". This shows how deep implicit philosophical assumptions can go. If worlds can split, so that there is no single unique outcome, then Bell's Theorem is no problem. Bell's Theorem does, however, rule out the idea that quantum physics describes our partial knowledge of a deeper physical state that could locally produce single outcomes - any such description will be inconsistent.
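For the quantum side of that math, here is a short sketch (using the CHSH form of the inequality, an arrangement chosen for this sketch rather than taken verbatim from the post): for two spin-1/2 particles in the singlet state, quantum mechanics predicts the correlation E(a, b) = -cos(a - b) between measurements at analyzer angles a and b, which violates the bound |S| <= 2 obeyed by any local theory with single definite outcomes.

```python
import numpy as np

def E(a, b):
    """Singlet-state correlation between analyzers at angles a and b."""
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2               # Alice's two analyzer settings
b1, b2 = np.pi / 4, 3 * np.pi / 4     # Bob's two analyzer settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) ~ 2.83 > 2: no local single-outcome account fits
```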
Spooky Action at a Distance: The No-Communication Theorem
As Einstein argued long ago, the quantum physics of his era - that is, the single-global-world interpretation of quantum physics, in which experiments have single unique random results - violates Special Relativity; it imposes a preferred space of simultaneity and requires a mysterious influence to be transmitted faster than light; which mysterious influence can never be used to transmit any useful information. Getting rid of the single global world dispels this mystery and puts everything back to normal again.
The idea that decoherence fails the test of Occam's Razor is wrong as probability theory.
Decoherence is Falsifiable and Testable
(Note: Designed to be standalone readable.) An epistle to the physicists. To probability theorists, words like "simple", "falsifiable", and "testable" have exact mathematical meanings, which are there for very strong reasons. The (minority?) faction of physicists who say that many-worlds is "not falsifiable" or that it "violates Occam's Razor" or that it is "untestable", are committing the same kind of mathematical crime as non-physicists who invent their own theories of gravity that go as inverse-cube. This is one of the reasons why I, a non-physicist, dared to talk about physics - because I saw (some!) physicists using probability theory in a way that was simply wrong. Not just criticizable, but outright mathematically wrong: 2 + 2 = 3.
"Shut up and calculate" is the best approach you can take when none of your theories are very good. But that is not the same as claiming that "Shut up!" actually is a theory of physics. Saying "I don't know what these equations mean, but they seem to work" is a very different matter from saying: "These equations definitely don't mean anything, they just work!"
Early physicists simply didn't think of the possibility of more than one world - it just didn't occur to them, even though it's the straightforward result of applying the quantum laws at all levels. So they accidentally invented a completely and strictly unnecessary part of quantum theory to ensure there was only one world - a law of physics that says that parts of the wavefunction mysteriously and spontaneously disappear when decoherence prevents us from seeing them any more. If such a law really existed, it would be the only non-linear, non-unitary, non-differentiable, non-local, non-CPT-symmetric, acausal, faster-than-light phenomenon in all of physics.
If early physicists had never made the mistake, and thought immediately to apply the quantum laws at all levels to produce macroscopic decoherence, then "collapse postulates" would today seem like a completely crackpot theory. In addition to their other problems, like FTL, the collapse postulate would be the only physical law that was informally specified - often in dualistic (mentalistic) terms - because it was the only fundamental law adopted without precise evidence to nail it down. Here, we get a glimpse at that alternate Earth.
Summarizes the arguments that nail down macroscopic decoherence, aka the "many-worlds interpretation". Concludes that many-worlds wins outright given the current state of evidence. The argument should have been over fifty years ago. New physical evidence could reopen it, but we have no particular reason to expect this.
A short story set in the same world as "Initiation Ceremony". Future physics students look back on the cautionary tale of quantum physics.
The Dilemma: Science or Bayes?
The failure of physics in the first half of the 20th century was not due to straying from the scientific method. Science and rationality - that is, Science and Bayesianism - aren't the same thing, and sometimes they give different answers.
Science Doesn't Trust Your Rationality
The reason Science doesn't always agree with the exact, Bayesian, rational answer, is that Science doesn't trust you to be rational. It wants you to go out and gather overwhelming experimental evidence.
If you have an idea, Science tells you to test it experimentally. If you spend 10 years testing the idea and the result comes out negative, Science slaps you on the back and says, "Better luck next time." If you want to spend 10 years testing a hypothesis that will actually turn out to be right, you'll have to try to do the thing that Science doesn't trust you to do: think rationally, and figure out the answer before you get clubbed over the head with it.
Science lets you believe any damn stupid idea that hasn't been refuted by experiment. Bayesianism says there is always an exactly rational degree of belief given your current evidence, and this does not shift a nanometer to the left or to the right depending on your whims. Science is a social freedom - we let people test whatever hypotheses they like, because we don't trust the village elders to decide in advance - but you shouldn't confuse that with an individual standard of rationality.
Do Scientists Already Know This Stuff?
No. Maybe someday it will be part of standard scientific training, but for now, it's not, and the absence is visible.
No Safe Defense, Not Even Science
Why am I trying to break your trust in Science? Because you can't think and trust at the same time. The social rules of Science are verbal rather than quantitative; it is possible to believe you are following them. With Bayesianism, it is never possible to do an exact calculation and get the exact rational answer that you know exists. You are visibly less than perfect, and so you will not be tempted to trust yourself.
Changing the Definition of Science
Many of these ideas are surprisingly conventional, and being floated around by other thinkers. I'm a good deal less of a lonely iconoclast than I seem; maybe it's just the way I talk.
Conference on Global Catastrophic Risks
Is it really possible to arrive at the truth faster than Science does? Not only is it possible, but the social process of science relies on scientists doing so - when they choose which hypotheses to test. In many answer spaces it's not possible to find the true hypothesis by accident. Science leaves it up to experiment to socially declare who was right, but if there weren't some people who could get it right in the absence of overwhelming experimental proof, science would be stuck.
Albert was unusually good at finding the right theory in the presence of only a small amount of experimental evidence. Even more unusually, he admitted it - he claimed to know the theory was right, even in advance of the public proof. It's possible to arrive at the truth by thinking great high-minded thoughts of the sort that Science does not trust you to think, but it's a lot harder than arriving at the truth in the presence of overwhelming evidence.
Einstein used evidence more efficiently than other physicists, but he was still extremely inefficient in an absolute sense. If a huge team of cryptographers and physicists were examining an interstellar transmission, going over it bit by bit, we could deduce principles on the order of Galilean gravity just from seeing one or two frames of a picture. As if the very first human to see an apple fall had, on the instant, realized that its position went as the square of the time and that this implied constant acceleration.
I looked up to the ideal of a Bayesian superintelligence, not Einstein.
Mach's Principle: Anti-Epiphenomenal Physics
Could you tell if the whole universe were shifted an inch to the left? Could you tell if the whole universe were traveling left at ten miles per hour? Could you tell if the whole universe were accelerating leftward? Could you tell if the whole universe were rotating?
Maybe the reason why we can't observe absolute speeds, absolute positions, absolute accelerations, or absolute rotations, is that particles don't have absolute positions - only positions relative to each other. That is, maybe quantum physics takes place in a relative configuration space.
What time is it? How do you know? The question "What time is it right now?" may make around as much sense as asking "Where is the universe?" Not only that, our physics equations may not need a t in them!
To get rid of time you must reduce it to nontime. In timeless physics, everything that exists is perfectly global or perfectly local. The laws of physics are perfectly global; the configuration space is perfectly local. Every fundamentally existent ontological entity has a unique identity and a unique value. This beauty makes ugly theories much more visibly ugly; a collapse postulate becomes a visible scar on the perfection.
Using the modern, Bayesian formulation of causality, we can define causality without talking about time - define it purely in terms of relations. The river of time never flows, but it has a direction.
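A toy illustration of the relational point, in Python (the chain structure and probabilities here are invented for illustration, not taken from the post): in a causal chain X → Y → Z, the pattern of correlations alone reveals the structure, and no variable anywhere represents time.

```python
# Causal structure from relations alone: in the chain X -> Y -> Z,
# X and Z are correlated, but conditioning on Y screens X off.
import random

def sample():
    x = random.random() < 0.5
    y = (x == (random.random() < 0.9))   # Y is a noisy copy of X
    z = (y == (random.random() < 0.9))   # Z is a noisy copy of Y
    return x, y, z

data = [sample() for _ in range(200_000)]

def p_z_given(cond):
    rows = [r for r in data if cond(r)]
    return sum(z for _, _, z in rows) / len(rows)

print(p_z_given(lambda r: r[0]))               # P(Z | X)        ~ 0.82
print(p_z_given(lambda r: not r[0]))           # P(Z | not X)    ~ 0.18
print(p_z_given(lambda r: r[1] and r[0]))      # P(Z | Y, X)     ~ 0.90
print(p_z_given(lambda r: r[1] and not r[0]))  # P(Z | Y, not X) ~ 0.90
```

Once Y is known, X tells you nothing further about Z: a purely relational asymmetry, with no "t" in sight.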
There's an unfortunate tendency to talk as if Einstein had superpowers - as if, even before Einstein was famous, he had an inherent disposition to be Einstein - a potential as rare as his fame and as magical as his deeds. Yet the way you acquire superpowers is not by being born with them, but by seeing, with a sudden shock, that they are perfectly normal.
From the world of Initiation Ceremony. Brennan and the others are faced with their midterm exams.
(alternate summary:)
The students are given one month to develop a theory of quantum gravity.
A response to opinions expressed by Robin Hanson, Roger Schank, and others, arguing against the notion that producing a friendly general artificial intelligence is an insurmountable problem.
A discussion of a number of disagreements Eliezer Yudkowsky has been in, with a few comments on rational disagreement.
You do have to pay attention to other people's authority a fair amount of the time. But above all, try to get the actual right answer. Clever tricks are only valuable if they help you learn what the truth actually is. If a clever argument doesn't actually work, don't use it.
How can you be the same person tomorrow as today, in the river that never flows, when not a drop of water is shared between one time and another? Having used physics to completely trash all naive theories of identity, we reassemble a conception of persons and experiences from what is left. With a surprising practical application...
Why do a series on quantum mechanics? Some of the many morals that are best illustrated by the tale of quantum mechanics and its misinterpretation.
The many worlds of quantum mechanics are not some strange, alien universe into which you have been thrust. They are where you have always lived. Egan's Law: "It all adds up to normality." Then why care about quantum physics at all? Because there's still the question of what adds up to normality, and the answer to this question turns out to be, "Quantum physics." If you're thinking of building any strange philosophies around many-worlds, you probably shouldn't - that's not what it's for.
If the laws of physics control everything we do, then how can our choices be meaningful? Because you are physics. You aren't competing with physics for control of the universe, you are within physics. Anything you control is necessarily controlled by physics.
We throw away "time" but retain causality, and with it, the concepts "control" and "decide". To talk of something as having been "always determined" is mixing up a timeless and a timeful conclusion, with paradoxical results. When you take a perspective outside time, you have to be careful not to let your old, timeful intuitions run wild in the absence of their subject matter.
(alternate summary:)
(from The Quantum Physics Sequence)
Bloggingheads: Yudkowsky and Horgan
Playing Devil's Advocate is occasionally helpful, but much less so than it appears. Ultimately, you should only be able to create plausible arguments for things that are actually plausible.
Eliezer's Post Dependencies; Book Notification; Graphic Designer Wanted
An Intuitive Explanation of Quantum Mechanics
(just the science, for students confused by their physics textbooks)
Quantum Physics Revealed As Non-Mysterious
(quantum physics does not make the universe any more mysterious than it was previously)
And the Winner is... Many-Worlds!
An index of posts explaining quantum mechanics and the many-worlds interpretation.
(the many-worlds interpretation wins outright given the current state of evidence)
Quantum Mechanics and Personal Identity
A shortened index into the Quantum Physics Sequence describing only the prerequisite knowledge to understand the statement that "science can rule out a notion of personal identity that depends on your being composed of the same atoms - because modern physics has taken the concept of 'same atom' and thrown it out the window. There are no little billiard balls with individual identities. It's experimentally ruled out." The key post in this sequence is Timeless Identity, in which "Having used physics to completely trash all naive theories of identity, we reassemble a conception of persons and experiences from what is left" but this finale might make little sense without the prior discussion.
(alternate summary:)
(the ontology of quantum mechanics, in which there are no particles with individual identities, rules out theories of personal continuity that invoke "the same atoms" as a concept)
Causality and Moral Responsibility
Knowing that you are a deterministic system does not make you any less responsible for the consequences of your actions. You still make your decisions; you do have psychological traits, and experiences, and goals. Determinism doesn't change any of that.
Our sense of "could-ness", as in "I could have not rescued the child from the burning orphanage", comes from our own decision-making algorithms labeling some end states as "reachable". If we wanted to achieve the world-state of the child being burned, there would be a series of actions leading to that state.
There is a school of thought in philosophy that says that even if you make a decision, that still isn't enough to conclude that you have free will. You have to have been the ultimate source of your decision. Nothing else can have influenced it previously. This doesn't work. There is no such thing as "the ultimate source" of your decisions.
When confronted with a difficult question, don't try to point backwards to a misunderstood black box. Ask yourself, what's inside the black box? If the answer is another black box, you likely have a problem.
An illustration of a few ways that trying to perform reductionism can go wrong.
There is a way of thinking about programming a computer that conforms well to human intuitions: telling the computer what to do. The problem is that the computer isn't going to understand you, unless you program the computer to understand. If you are programming an AI, you are not giving instructions to a ghost in the machine; you are creating the ghost.
A comparison of LA-602, the classified report investigating the possibility of a nuclear bomb igniting the atmosphere and killing everyone, and the RHIC Review, the document explaining why the Relativistic Heavy Ion Collider is not going to destroy the world. There is a key difference between these documents: one of them is a genuine discussion of the risks, taking them seriously, and the other is a work of public relations. Work on existential risk needs to be more like the former.
A description of the last several months of sequence posts, identifying the topic that Eliezer actually wants to explain: morality.
A dialogue on the proper application of the inside and outside views.
Surface Analogies and Deep Causes
Just because two things share surface similarities doesn't mean that they work the same way, or can be expected to be similar in other respects. If you want to understand what something does, studying something else typically won't help. That type of reasoning only works if the two things are similar on a deep level.
Optimization and the Singularity
An introduction to optimization processes and why Yudkowsky thinks that a singularity would be far more powerful than calculations based on human progress would suggest.
The Psychological Unity of Humankind
Because humans are a sexually reproducing species, complex adaptations must be universal across the species, and so human brains are nearly identical. All human beings share similar emotions, tell stories, and employ the same facial expressions. We naively expect all other minds to work like ours, which causes problems when we try to predict the actions of non-human intelligences.
The Design Space of Minds-In-General
When people talk about "AI", they're talking about an incredibly wide range of possibilities. Having a word like "AI" is like having a word for everything which isn't a duck.
No Universally Compelling Arguments
Because minds are physical processes, it is theoretically possible to specify a mind which draws any conclusion in response to any argument. There is no argument that will convince every possible mind.
It is possible to talk about "sexiness" as a property of an observer and a subject. It is equally possible to talk about "sexiness" as a property of the subject alone, as long as each observer is allowed a different process for determining how sexy someone is. Failing to do either of these consistently will cause you trouble.
A few thoughts from Eliezer Yudkowsky about a discussion of sexism on Overcoming Bias.
What Would You Do Without Morality?
If your own theory of morality was disproved, and you were persuaded that there was no morality, that everything was permissible and nothing was forbidden, what would you do? Would you still tip cabdrivers?
If there were some great stone tablet upon which Morality was written, and you read it, and it was something horrible, that would be a rather unpleasant scenario. What would you want that tablet to say, if you could choose it? What would be the best case scenario?
Why don't you just do that, and ignore the tablet completely?
There is no computer program so persuasive that you can run it on a rock. A mind, in order to be a mind, needs some sort of dynamic rules of inference or action. A mind has to be created already in motion.
What does "fairness" actually refer to? Why is it "fair" to divide a pie into three equal pieces for three different people?
Key questions for two different moral intuitions: morality-as-preference, and morality-as-given.
A dialogue on the idea that morality is a subset of our desires.
A dialogue on the idea that morality is an absolute external truth.
Eliezer mentions four interpretations of Arthur Schopenhauer's line, "A man can do as he wills, but not will as he wills."
Where Recursive Justification Hits Bottom
Ultimately, when you reflect on how your mind operates and consider questions like "why does Occam's razor work?" and "why do I expect the future to be like the past?", you have no option but to use your own mind. There is no way to jump to an ideal state of pure emptiness and evaluate these claims without using your existing mind.
A discussion of an interesting kind of lie: the listener knows the statement is a lie, but doesn't know that the liar knows they know.
A few key differences between Eliezer Yudkowsky's ideas on reflection and the ideas of other philosophers.
The genetic fallacy seems like a strange kind of fallacy. The problem is that the original justification for a belief does not always equal the sum of all the evidence that we currently have available. But, on the other hand, it is very easy for people to still believe untruths from a source that they have since rejected.
There are some things that are so fundamental, that you really can't doubt them effectively. Be careful you don't use this as an excuse, but ultimately, you really can't start out by saying that you won't trust anything that is the output of a neuron.
When we rebel against our own nature, we act in accordance with our own nature. There isn't any other way it could be.
Probability is Subjectively Objective
Probabilities exist only in minds. The probability you calculate for winning the lottery depends on your prior, which depends on which mind you have. However, this calculation does not refer to your mind. Thus, your calculated probability is subjectively objective. You conclude that someone who assigns a different probability (given the same information) is objectively wrong: You expect that they will lose on average.
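A minimal sketch of the point in Python, using an invented coin-weighting problem (the priors and data are illustrative, not from the post): two agents see the same evidence and apply the same theorem, yet get different, equally lawful posteriors, because they started from different priors.

```python
# Two agents with different priors over the same hypothesis update on
# identical evidence. Each posterior is a fact about the world as seen
# through that prior -- not a claim about the agent's own mind.

def posterior(prior_biased: float, heads: int, tails: int) -> float:
    """P(coin is 0.9-heads-biased | data), against a fair-coin alternative."""
    p_data_biased = (0.9 ** heads) * (0.1 ** tails)
    p_data_fair = 0.5 ** (heads + tails)
    joint_biased = prior_biased * p_data_biased
    joint_fair = (1 - prior_biased) * p_data_fair
    return joint_biased / (joint_biased + joint_fair)

data = (8, 2)  # 8 heads, 2 tails
for prior in (0.01, 0.5):
    print(f"prior={prior}: posterior={posterior(prior, *data):.3f}")
# prior=0.01 -> ~0.043; prior=0.5 -> ~0.815
```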
A review of Lawrence Watt-Evans's fiction.
Does moral progress actually happen? And if so, how?
How did love ever come into the universe? How did that happen, and how special was it, really?
You do know quite a bit about morality. It's not perfect information, surely, or absolutely reliable, but you have someplace to start. If you didn't, you'd have a much harder time thinking about morality than you do.
As a general rule, if you find yourself suffering from existential angst, check and see if you're not just feeling unhappy because of something else going on in your life. An awful lot of existential angst comes from people trying to solve the wrong problem.
Seeing history in person evokes a very strong feeling.
There is a chance, however remote, that novel physics experiments could destroy the earth. Is banning physics experiments a good idea?
Fake Norms, or "Truth" vs. Truth
Our society has a moral norm for applauding "truth", but actual truths get much less applause (this is a bad thing).
When (Not) To Use Probabilities
When you don't have a numerical procedure to generate probabilities, you're probably better off using your own evolved abilities to reason in the presence of uncertainty.
How can we explain counterfactuals having a truth value, if we don't talk about "nearby possible worlds" or any of the other explanations offered by philosophers?
Math is Subjunctively Objective
It really does seem like "2+3=5" is true. Things get confusing if you ask what you mean when you say "2+3=5 is true". But because the simple rules of addition predict observations so well, it does seem that it must really be true.
Does Your Morality Care What You Think?
If, for whatever reason, evolution or education had convinced you to believe that it was moral to do something that you now believe is immoral, you would go around saying "This is moral to do no matter what anyone else thinks of it." How much does this matter?
Discusses the various lines of retreat that have been set up in the discussion on metaethics.
What exactly does a correct theory of metaethics need to look like?
Eliezer's long-awaited theory of metaethics.
A few clarifications on how Yudkowsky's theory of metaethics applies to interpersonal interactions.
It's really hard to imagine aliens that are fundamentally different from human beings.
There is a lot of machinery hidden beneath the words, and Rationalist's Taboo is one way to take a step toward exposing it.
The behaviorists thought that speaking about anything like a mind, or emotions, or thoughts, was unscientific. After all, they said, you can't observe anger. You can just observe behavior. But, it is possible, using empathy, to correctly predict wide varieties of behavior, which you can't account for by Pavlovian conditioning.
Logical positivism was based around the idea that the only meaningful statements are those that can be verified by experiment. Unfortunately for positivism, there are meaningful statements that are very probably true, or very probably false, and yet cannot be tested.
Don't bother coming up with clever, persuasive arguments for why evolution will do things the way you prefer. It really isn't listening.
Avoid, as much as you possibly can, situations in which optimistic thinking suggests ideas for conscious consideration. In real-life problems, if that is how your ideas were generated, you've probably already screwed up.
A clarification about Yudkowsky's metaethics.
Inseparably Right; or, Joy in the Merely Good
Don't go looking for some pure essence of goodness, distinct from, you know, actual good.
Sorting Pebbles Into Correct Heaps
A parable about an imaginary society that has arbitrary, alien values.
Moral Error and Moral Disagreement
How can you make errors about morality?
A bit of explanation on the idea of morality as "computation".
When we say that something is arbitrary, we are saying that it feels like it should come with a justification, but doesn't.
When we say that a fair division of pie among N people is for each person to get 1/N of the pie, we aren't being arbitrary. We're being fair.
The Bedrock of Morality: Arbitrary?
Humans are built in such a way as to do what is right. Other optimization processes may not. So what?
"Disagreement" between rabbits and foxes is sheer anthropomorphism. Rocks and hot air don't disagree, even though one decreases in elevation and one increases in elevation.
When Anthropomorphism Became Stupid
Anthropomorphism didn't become obviously wrong until we realized that the tangled neurons inside the brain were performing complex information processing, and that this complexity arose as a result of evolution.
The Cartoon Guide to Löb's Theorem
An explanation, using cartoons, of Löb's theorem.
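For reference, the theorem itself is one line (where □P abbreviates "P is provable in the system", e.g. Peano Arithmetic):

```latex
% Löb's theorem: a system that proves "if P were provable, P would be true"
% can already prove P outright.
\text{If } \mathrm{PA} \vdash (\Box P \rightarrow P), \text{ then } \mathrm{PA} \vdash P.
```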
You Provably Can't Trust Yourself
Löb's theorem provides, by analogy, a nice explanation for why you really can't trust yourself. Don't trust thoughts because you think them; trust them because they were generated by trustworthy rules.
Good things aren't good because humans care about what's good. Good things are good because they save lives, make people happy, give us control over our own lives, involve us with others and prevent us from collapsing into total self-absorption, keep life complex and non-repeating and aesthetic and interesting, etc.
A particular system of values is analyzed, and is used to demonstrate the idea that anytime you consider changing your morals, you do so using your own current meta-morals. Forget this at your peril.
CEV is not the essence of goodness. If functioning properly, it is supposed to work analogously to a mirror -- a mirror is not inherently apple-shaped, but in the presence of an apple, it reflects the image of an apple. In the presence of the Pebblesorters, an AI running CEV would begin transforming the universe into heaps containing prime numbers of pebbles. In the presence of humankind, an AI running CEV would begin doing whatever is right for it to do.
There are some mental categories we draw that are relatively simple and straightforward. Others get trickier, because they are primarily drawn in such a way that whether or not something fits into the category is important information to our utility function. Deciding whether someone is "alive", for instance. Is someone like Terri Schiavo alive? This issue is part of why technology creates new moral dilemmas, and why teaching morality to a computer is so hard.
We underestimate the complexity of our own unnatural categories. This becomes a serious problem when you're trying to build an FAI.
Theories of teleology have a few problems. First, they often wind up drawing causal arrows from the future to the past. Second, they lead you to make predictions based on anthropomorphism. Finally, they open you up to the Mind Projection Fallacy: assuming that the purpose of something is an inherent property of that thing, as opposed to a property of the agent or process that produced it.
It can feel as though you understand how to build an AI, when really, you're still making all your predictions based on empathy. Your AI design will not work until you figure out a way to reduce the mental to the non-mental.
Unfortunately, very little of philosophy is actually helpful in AI research, for a few reasons.
If a choice is hard, that means the alternatives are around equally balanced, right?
Qualitative Strategies of Friendliness
Qualitative strategies to achieve friendliness tend to run into difficulty.
Why programming an AI that only answers questions is not a trivial problem, for many of the same reasons that programming an FAI isn't trivial.
The standard visualization for the Prisoner's Dilemma doesn't really work on humans. We can't pretend we're completely selfish.
The Truly Iterated Prisoner's Dilemma
According to classic game theory, if you know how many iterations there are going to be in the iterated prisoner's dilemma, then you shouldn't use tit for tat. Does this really seem right?
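A sketch of the backward-induction argument in Python (the payoff values are the conventional textbook ones, not taken from the post): with a known final round, defection is dominant there; given that, it is dominant in the round before, and so on back to the start. Both players then end up worse off than mutual cooperators.

```python
# Tit for tat vs. the classically "rational" backward inductor, over a
# game of known length. Payoffs: T=5, R=3, P=1, S=0 (illustrative).

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    return "C" if not history else history[-1][1]  # copy opponent's last move

def backward_inductor(history):
    return "D"  # classic game theory: defect every round of a known-length game

def play(rounds=10):
    history, score_a, score_b = [], 0, 0
    for _ in range(rounds):
        a = tit_for_tat(history)
        b = backward_inductor(history)  # ignores history anyway
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        history.append((a, b))
    return score_a, score_b

print(play())  # (9, 14) -- far below the (30, 30) of mutual cooperation
```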
Hollywood seems to model "emotionless" AIs as humans with some slight differences. For the most part, they act as emotionally repressed humans, despite the fact that this is a very unlikely way for AIs to behave.
Don't rule out supernatural explanations because they're supernatural. Test them the way you would test any other hypothesis. And probably, you will find out that they aren't true.
Some of the previous post was incorrect. Psychic powers, if indeed they were ever discovered, would actually be strong evidence in favor of non-reductionism.
A discussion of the concept of optimization.
When Eliezer went into his death spiral around intelligence, he wound up making a lot of mistakes that later became very useful.
When Eliezer was quite young, it took him a very long time to get to the point where he was capable of considering that the dangers of technology might outweigh the benefits.
Eliezer's skills at defeating other people's ideas led him to believe that his own (mistaken) ideas must have been correct.
The Sheer Folly of Callow Youth
Eliezer's big mistake was when he took a mysterious view of morality.
If you're uncertain about something, communicate that uncertainty. Do so as clearly as you can. You don't help yourself by hiding how confused you are.
How Many LHC Failures Is Too Many?
If the LHC, or some sort of similar project, continually seemed to fail right before it did something we thought might destroy the world, this is something we should notice.
An illustration of inconsistent probability assignments.
Eliezer started to dig himself out of his philosophical hole when he noticed a tiny inconsistency.
Fighting a Rearguard Action Against the Truth
When Eliezer started to consider the possibility of Friendly AI as a contingency plan, he permitted himself a line of retreat. He was now able to slowly start to reconsider positions in his metaethics, and move gradually towards better ideas.
Eliezer actually looked back and realized his mistakes when he imagined the idea of an optimization process.
There are people who have acquired more mastery over various fields than Eliezer has over his.
People in higher levels of business, science, etc, often really are there because they're significantly more competent than everyone else.
A lot of AI researchers aren't really all that exceptional. This is a problem, though most people don't seem to see it.
Friedman's "Prediction vs. Explanation"
The Magnitude of His Own Folly
Eliezer considers his training as a rationalist to have started the day he realized just how awfully he had screwed up.
As a human, if you "try to try" something, you will put much less work into it than if you simply try to do it.
A fictional exchange between Mark Hamill and George Lucas over the scene in Empire Strikes Back where Luke Skywalker attempts to lift his X-wing with the force.
Compare the world in which there is a God, who will intervene at some threshold, against a world in which everything happens as a result of physical laws. Which universe looks more like our own?
The story of how Eliezer Yudkowsky became a Bayesian.
Bay Area Meetup for Singularity Summit
A lot of projects seem impossible, meaning that we don't immediately see a way to do them. But after working on them for a long time, they start to look merely extremely difficult.
It takes an extraordinary amount of rationality before you stop making stupid mistakes. Doing better requires making extraordinary efforts.
Shut up and do the impossible!
The ultimate level of attacking a problem is the point at which you simply shut up and solve the impossible problem.
A depiction of a crisis of faith in the Beisutsukai world.
(alternate summary:)
Jeffreyssai carefully undergoes a crisis of faith.
There are simple evolutionary reasons why power corrupts humans. They can be beaten, though.
Ends Don't Justify Means (Among Humans)
Entangled Truths, Contagious Lies
Before you start talking about a system of values, try to actually understand the values of that system as believed by its practitioners.
If you want to tell a truly convincing lie to someone who knows what they're talking about, you either have to lie about lots of specific object-level facts, or about more general laws, or about the laws of thought. A lot of the memes out there about how we come to know things originally came from people who were trying to convince other people to believe false statements.
Ethics can protect you from your own mistakes, especially when your mistakes are about really fundamental things.
Humans may have a sense of ethical inhibition because various ancestors, who didn't follow ethical norms when they thought they could get away with it, nevertheless got caught.
Are ethical rules simply actions that have a high cost associated with them? Or are they bindings, expected to hold in all situations, no matter the cost otherwise?
Some responses to comments about the idea of Ethical Injunctions.
Everything you are, is inside your brain. But not everything inside your brain is you. You can draw mental separation lines, which can make you more reflective.
San Jose Meetup, Sat 10/25 @ 7:30pm
The unpredictability of intelligence is a very special and unusual kind of surprise, which is not at all like noise or randomness. There is a weird balance between the unpredictability of actions and the predictability of outcomes.
What does a belief that an agent is intelligent look like? What predictions does it make?
When you make plans, you are trying to steer the future into regions higher in your preference ordering.
Efficient Cross-Domain Optimization
To speak of intelligence, rather than optimization power, we need to divide optimization power by the resources needed, or the amount of prior optimization that had to be done on the system.
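One way to make the division concrete (a sketch following the earlier idea of measuring optimization power in bits; the notation here is mine, not the post's): score the achieved outcome by how small a fraction of the outcome space ranks at least as high, then divide by the resources consumed.

```latex
% Optimization power: how deeply the achieved outcome s* cuts into the
% preference ordering over the outcome space S, measured in bits.
\mathrm{OP}(s^{*}) = -\log_{2} \frac{\left|\{\, s \in S : U(s) \ge U(s^{*}) \,\}\right|}{|S|},
\qquad
\text{efficiency} \approx \frac{\mathrm{OP}(s^{*})}{\text{resources consumed}}
```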
Economic Definition of Intelligence?
Could economics help provide a definition and a general measure of intelligence?
There are a few connections between economics and intelligence, so economics might have something to contribute to a definition of intelligence.
A list of abilities that would be amazing if they were magic, or if only a few people had them.
BHTV: Jaron Lanier and Yudkowsky
It is possible for humans to create something better than ourselves. It's been done. It's not paradoxical.
Hanging Out My Speaker's Shingle
Back Up and Ask Whether, Not Why
When someone asks you why you're doing "X", don't ask yourself why you're doing "X". Ask yourself whether someone should do "X".
Suppose we landed on another planet and found a large metal object that contained wires made of superconductors, and hundreds of tightly matched gears. Would we be able to infer the presence of an optimization process?
Creativity seems to consist of breaking rules, and violating expectations. But there is one rule that cannot be broken: creative solutions must have something good about them. Creativity is a surprise, but most surprises aren't creative.
When facing a random scenario, the correct solution is not to behave randomly. Faced with an irrational universe, throwing away your rationality won't help.
If a system does better when randomness is added into its processing, then it must somehow have been performing worse than random. And if you can recognize that this is the case, you ought to be able to generate a non-randomized system.
The Weighted Majority Algorithm
An illustration of a case in Artificial Intelligence in which a randomized algorithm is purported to work better than a non-randomized algorithm, and a discussion of why this is the case.
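A minimal sketch of the deterministic version of the algorithm under discussion, in Python (the halving penalty and toy data are illustrative): follow the weighted vote of the experts, then halve the weight of every expert who erred. No randomness is needed to get a good mistake bound.

```python
# Deterministic weighted majority: with n experts, total mistakes are bounded
# by roughly 2.4 * (m + log2(n)), where m is the best expert's mistake count.

def weighted_majority(rounds, outcomes, beta=0.5):
    n = len(rounds[0])          # number of experts
    weights = [1.0] * n
    mistakes = 0
    for preds, outcome in zip(rounds, outcomes):
        vote_one = sum(w for w, p in zip(weights, preds) if p == 1)
        vote_zero = sum(w for w, p in zip(weights, preds) if p == 0)
        guess = 1 if vote_one >= vote_zero else 0
        if guess != outcome:
            mistakes += 1
        # Penalize every expert who got this round wrong.
        weights = [w * beta if p != outcome else w
                   for w, p in zip(weights, preds)]
    return mistakes

# Three experts; the first one happens to be right every round.
rounds = [(1, 0, 1), (1, 1, 0), (1, 0, 0), (1, 1, 1)]
truth = [1, 1, 1, 1]
print(weighted_majority(rounds, truth))  # 0 mistakes on this toy run
```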
Bay Area Meetup: 11/17 8PM Menlo Park
In most cases, if you say that something isn't working, then you have to specify a new thing that you think could work. You can't just say that you have to not do what you have been doing. If you observe that selling apples isn't working out for you financially, you can't just decide to sell nonapples.
What logic actually does is preserve truth in a model. It says that if all of the premises are true, then this conclusion is indeed true. But that's not all that minds do. There's an awful lot else that you need, before you start actually getting anything like intelligence.
Boston-area Meetup: 11/18/08 9pm MIT/Cambridge
The difference between Logical and Connectionist AIs is portrayed as a grand dichotomy between two different sides of the force. The truth is that they're just two different designs out of many possible ones.
It's very tempting to reason that your invention X will do Y, because it is similar to thing Z, which also does Y. But reality very often ignores this justification for why your new invention will work.
Making analogies to things that have positive or negative connotations is an even better way to make sure you fail.
In cases where the causal factors creating a circumstance are changing, the outside view may be misleading. In that case, the best you can do may be to take the inside view, while declining to make overly precise predictions.
The first replicator was the original black swan. A couple of molecules that, despite not having a particularly good optimization process, could explore new regions of pattern-space. This is an event that would have implications that would have seemed absurd to predict.
Figuring out how to place concepts in categories is an important part of the problem. Before we classify AI into the same group as human intelligence, farming, and industry, we need to think about why we want to put them into that same category.
Trying to derive predictions from a theory that says that sexual reproduction increases the rate of evolution is more difficult than it first appears.
A discussion of some of the classical big steps in the evolution of life, and how they relate to the idea of optimization.
If you hadn't ever seen brains before, but had only seen evolution, you might start making astounding predictions about their ability. You might, for instance, think that creatures with brains might someday be able to create complex machinery in only a millennium.
Cascades, cycles, and insight are three ways in which the development of intelligence appears discontinuous. Cascades are when one development makes more developments possible. Cycles are when completing a process causes that process to be completed more. And insight is when we acquire a chunk of information that makes solving a lot of other problems easier.
If you have a system that gets better at making itself get better, it will appear to discontinuously advance. Add in the ability of intelligences to accomplish tasks which previous intelligences labeled impossible, and you have the potential for dramatic advancement.
The Complete Idiot's Guide to Ad Hominem
Engelbart: Insufficiently Recursive
The development of the mouse did lead to a productivity increase. But it didn't lead to a major productivity increase at creating future productivity increases. Therefore, the recursive process didn't take off properly.
If you get a small advantage in nanotechnology, that might not be enough to take over the world. But if you use that small advantage in nanotechnology to gain a major advancement in bots, you could gain an extraordinary amount of power very fast.
It is possible to create a singleton that won't do nasty things. This may be preferable to a scenario in which many agents start competing for resources without any way of securing themselves other than constant defense and deterrence.
A list of Ray Kurzweil's predictions for the period 1999-2009.
When you take a process that is capable of making significant progress developing other processes, and turn it on itself, you should either see it flatline, or FOOM. The likelihood of it doing anything that looks like human-scale progress is unbelievably low.
It seems likely that there will be a discontinuity in the process of AI self-improvement around the time when AIs become capable of doing AI theory. A lot of things have to go exactly right in order to get a slow takeoff, and there is no particular reason to expect them all to happen that way.
Permitted Possibilities, & Locality
Yudkowsky's attempt to summarize Hanson's positions, list the possible futures discussed so far, and identify which ones seem most and least likely to Yudkowsky.
The problem with selecting abstractions is that for your data, there are probably lots of abstractions that fit the data equally well. In that case, we need some other way to decide which abstractions are useful.
Sustained strong recursion has a much larger effect on growth than other possible mechanisms for growth.
People's stated reason for a rejection may not be the same as the actual reason for that rejection.
Artificial Mysterious Intelligence
Attempting to create an intelligence without actually understanding what intelligence is, is a common failure mode. If you want to make actual progress, you need to truly understand what it is that you are trying to make.
Yudkowsky's guesses about what the key sticking points in the AI FOOM debate are.
Disjunctions, Antipredictions, Etc.
A few tricks Yudkowsky uses to think about the future.
Bay Area Meetup Wed 12/10 @8pm
Reasons why aspiring rationalists might still disagree after trading arguments.
Yudkowsky's attempt to summarize what he thinks on the subject of Friendly AI, without providing any of the justifications for what he believes.
Yudkowsky's addition to Hanson's endorsement of cryonics.
For The People Who Are Still Alive
Given that we live in a big universe, and that we can't actually determine whether or not a particular person exists (because they will exist anyway in some other Hubble volume or Everett branch), then it makes more sense to care about whether or not people we can influence are having happy lives, than about whether certain people exist in our own local area.
It's rather difficult to imagine a way in which you could create an AI, and not somehow either take over or destroy the world. How can you use unlimited power in such a way that you don't become a malevolent deity, in the Epicurean sense?
Trying to imagine a Eutopia is actually difficult. But it is worth trying.
Prolegomena to a Theory of Fun
Fun Theory is an attempt to actually answer questions about eternal boredom that are more often posed and left hanging. Attempts to visualize Utopia are often defeated by standard biases, such as the attempt to imagine a single moment of good news ("You don't have to work anymore!") rather than a typical moment of daily life ten years later. People also believe they should enjoy various activities that they actually don't. But since human values have no supernatural source, it is quite reasonable for us to try to understand what we want. There is no external authority telling us that the future of humanity should not be fun.
Life should not always be made easier for the same reason that video games should not always be made easier. Think in terms of eliminating low-quality work to make way for high-quality work, rather than eliminating all challenge. One needs games that are fun to play and not just fun to win. Life's utility function is over 4D trajectories, not just 3D outcomes. Values can legitimately be over the subjective experience, the objective result, and the challenging process by which it is achieved - the traveller, the destination and the journey.
Are we likely to run out of new challenges, and be reduced to playing the same video game over and over? How large is Fun Space? This depends on how fast you learn; the faster you generalize, the more challenges you see as similar to each other. Learning is fun, but uses up fun; you can't have the same stroke of genius twice. But the more intelligent you are, the more potential insights you can understand; human Fun Space is larger than chimpanzee Fun Space, and not just by a linear factor of our brain size. In a well-lived life, you may need to increase in intelligence fast enough to integrate your accumulating experiences. If so, the rate at which new Fun becomes available to intelligence, is likely to overwhelmingly swamp the amount of time you could spend at that fixed level of intelligence. The Busy Beaver sequence is an infinite series of deep insights not reducible to each other or to any more general insight.
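For the curious, the Busy Beaver reference is to the standard uncomputable sequence (the formalization below is the usual one, not spelled out in the post):

```latex
% BB(n): the longest finite runtime over all n-state Turing machines that
% halt on blank input. It grows faster than any computable function, so no
% single general insight lets you compute the whole sequence.
\mathrm{BB}(n) = \max\{\, \mathrm{steps}(M) : M \text{ an } n\text{-state Turing machine that halts on blank input} \,\}
```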
Much of the anomie and disconnect in modern society can be attributed to our spending all day on tasks (like office work) that we didn't evolve to perform (unlike hunting and gathering on the savanna). Thus, many of the tasks we perform all day do not engage our senses - even the most realistic modern video game is not the same level of sensual experience as outrunning a real tiger on the real savanna. Even the best modern video game is low-bandwidth fun - a low-bandwidth connection to a relatively simple challenge, which doesn't fill our brains well as a result. But future entities could have different senses and higher-bandwidth connections to more complicated challenges, even if those challenges didn't exist on the savanna.
Our hunter-gatherer ancestors strung their own bows, wove their own baskets and whittled their own flutes. Part of our alienation from our design environment is the number of tools we use that we don't understand and couldn't make for ourselves. It's much less fun to read something in a book than to discover it for yourself. Specialization is critical to our current civilization. But the future does not have to be a continuation of this trend in which we rely more and more on things outside ourselves which become less and less comprehensible. With a surplus of power, you could begin to rethink the life experience as a road to internalizing new strengths, not just staying alive efficiently through extreme specialization.
People who are not members of a minority group may somehow come to believe that members of this group possess certain traits which seem to "fit". These traits are not required to have any connection to the real traits of that group.
Offering people more choices that differ along many dimensions, may diminish their satisfaction with their final choice. Losses are more painful than the corresponding gains are pleasurable, so people think of the dimensions along which their final choice was inferior, and of all the other opportunities passed up. If you can only choose one dessert, you're likely to be happier choosing from a menu of two than from a menu of fourteen. Refusing tempting choices consumes mental energy and decreases performance on other cognitive tasks. A video game that contained an always-visible easier route through, would probably be less fun to play even if that easier route were deliberately foregone. You can imagine a Devil who follows someone around, making their life miserable, solely by offering them options which are never actually taken. And what if a worse option is taken due to a predictable mistake? There are many ways to harm people by offering them more choices.
It is dangerous to live in an environment in which a single failure of resolve, at any point in your entire life, can result in a permanent addiction or in a poor edit of your own brain - for example, a civilization which is constantly offering people tempting ways to shoot off their own feet, such as a cheap escape into eternal virtual reality, or customized drugs. Resisting such temptations requires a constant stern will that may not be much fun. And it's questionable whether a superintelligence that descends from above to offer people huge dangerous temptations they wouldn't encounter on their own is helping.
An AI, trying to develop highly accurate models of the people it interacts with, may develop models which are conscious themselves. For ethical reasons, it would be preferable if the AI wasn't creating and destroying people in the course of interpersonal interactions. Resolving this issue requires making some progress on the hard problem of conscious experience. We need some rule which definitely identifies all conscious minds as conscious. We can make do if it still identifies some nonconscious minds as conscious.
Discusses some of the problems of, and justification for, creating AIs that are knowably not conscious / sentient / people / citizens / subjective experiencers. We don't want the AI's models of people to be people - we don't want conscious minds trapped helplessly inside it. So we need to know how to tell that something is definitely not a person; and in this case, we might also like the AI itself to not be a person, which would simplify a lot of ethical issues if we could pull it off. Creating a new intelligent species is not lightly to be undertaken from a purely ethical perspective; if you create a new kind of person, you have to make sure it leads a life worth living.
Eliezer informs readers that he had accidentally published the previous post, "Nonsentient Optimizers", when it was only halfway done.
As a piece of meta advice for how to act when you have more power than you probably should, avoid doing things that cannot be undone. Creating a new sentient being is one of those things to avoid. If you need to rewrite the source code of a nonsentient optimization process, this is less morally problematic than rewriting the source code of a sentient intelligence who doesn't want to be rewritten. Creating new life forms creates such massive issues that it's really better to just not try, at least until we know a lot more.
C. S. Lewis's Narnia has a problem, and that problem is the super-lion Aslan - who demotes the four human children from the status of main characters, to mere hangers-on while Aslan does all the work. Iain Banks's Culture novels have a similar problem; the humans are mere hangers-on of the superintelligent Minds. We already have strong ethical reasons to prefer to create nonsentient AIs rather than sentient AIs, at least at first. But we may also prefer in just a fun-theoretic sense that we not be overshadowed by hugely more powerful entities occupying a level playing field with us. Entities with human emotional makeups should not be competing on a level playing field with superintelligences - either keep the superintelligences off the playing field, or design the smaller (human-level) minds with a different emotional makeup that doesn't mind being overshadowed.
Robin Dunbar's original calculation showed that the maximum human group size was around 150. But a typical size for a hunter-gatherer band would be 30-50, cohesive online groups peak at 50-60, and small task forces may peak in internal cohesiveness around 7. Our attempt to live in a world of six billion people has many emotional costs: We aren't likely to know our President or Prime Minister, or to have any significant influence over our country's politics, although we go on behaving as if we did. We are constantly bombarded with news about improbably pretty and wealthy individuals. We aren't likely to find a significant profession where we can be the best in our field. But if intelligence keeps increasing, the number of personal relationships we can track will also increase, along with the natural degree of specialization. Eventually there might be a single community of sentients that really was a single community.
Try spending a day doing as many new things as possible.