Less Wrong/Article summaries
The following is a list of summaries of all articles from Less Wrong, in chronological order.
The main purpose of this list is to keep track of which articles already have summaries written for them. We may eventually set up a way to automatically add these summaries to other wiki pages that link to the articles.
This list is still in progress. Please feel free to continue filling in this list, or fixing problems with the parts that are already filled in.
This page is compiled from:
- Less Wrong/2006 Articles/Summaries
- Less Wrong/2007 Articles/Summaries
- Less Wrong/2008 Articles/Summaries
- Less Wrong/2009 Articles/Summaries
2006 Articles
The Martial Art of Rationality
Rationality is a technique to be trained.
(alternate summary:)
Rationality is the martial art of the mind, building on universally human machinery. But developing rationality is more difficult than developing physical martial arts. One reason is because rationality skill is harder to verify. In recent decades, scientific fields like heuristics and biases, Bayesian probability theory, evolutionary psychology, and social psychology have given us a theoretical body of work on which to build the martial art of rationality. It remains to develop and especially to communicate techniques that apply this theoretical work introspectively to our own minds.
(alternate summary:)
Basic introduction of the metaphor and some of its consequences.
Why truth? And...
Truth can be instrumentally useful and intrinsically satisfying.
(alternate summary:)
Why should we seek truth? Pure curiosity is an emotion, but not therefore irrational. Instrumental value is another reason, with the advantage of giving an outside verification criterion. A third reason is conceiving of truth as a moral duty, but this might invite moralizing about "proper" modes of thinking that don't work. Still, we need to figure out how to think properly. That means avoiding biases, for which see the next post.
(alternate summary:)
You have an instrumental motive to care about the truth of your beliefs about anything you care about.
...What's a bias, again?
Biases are obstacles to truth seeking caused by one's own mental machinery.
(alternate summary:)
There are many more ways to miss than to find the truth. Finding the truth is the point of avoiding the things we call "biases", which form one of the clusters of obstacles that we find: biases are those obstacles to truth-finding that arise from the structure of the human mind, rather than from insufficient information or computing power, from brain damage, or from bad learned habits or beliefs. But ultimately, what we call a "bias" doesn't matter.
The Proper Use of Humility
Use humility to justify further action, not as an excuse for laziness and ignorance.
(alternate summary:)
There are good and bad kinds of humility. Proper humility is not being selectively underconfident about uncomfortable truths. Proper humility is not the same as social modesty, which can be an excuse for not even trying to be right. Proper scientific humility means not just acknowledging one's uncertainty with words, but taking specific actions to plan for the case that one is wrong.
The Modesty Argument
Factor in what other people think, but not symmetrically, if they are not epistemic peers.
(alternate summary:)
The Modesty Argument states that any two honest Bayesian reasoners who disagree should each take the other's beliefs into account and both arrive at a probability distribution that is the average of the ones they started with. Robin Hanson seems to accept the argument but Eliezer does not. Eliezer gives the example of himself disagreeing with a creationist as evidence for how following the modesty argument could lead to decreased individual rationality. He also accuses those who agree with the argument of not taking it into account when planning their actions.
"I don't know."
You can pragmatically say "I don't know", but you rationally should have a probability distribution.
(alternate summary:)
An edited instant messaging conversation regarding the use of the phrase "I don't know". "I don't know" is a useful phrase if you want to avoid getting in trouble or convey the fact that you don't have access to privileged information.
A Fable of Science and Politics
People respond in different ways to clear evidence they're wrong, not always by updating and moving on.
(alternate summary:)
A story about an underground society divided into two factions: one that believes that the sky is blue and one that believes the sky is green. At the end of the story, the reactions of various citizens to discovering the outside world and finally seeing the color of the sky are described.
End of 2006 articles
2007 Articles
Some Claims Are Just Too Extraordinary
Publications in peer-reviewed scientific journals are more worthy of trust than what you detect with your own ears and eyes.
Outside the Laboratory
Those who understand the map/territory distinction will integrate their knowledge, as they see the evidence that reality is a single unified process.
(alternate summary:)
Written regarding the proverb "Outside the laboratory, scientists are no wiser than anyone else." The case is made that if this proverb is in fact true, that's quite worrisome because it implies that scientists are blindly following scientific rituals without understanding why. In particular, it is argued that if a scientist is religious, they probably don't understand the foundations of science very well.
Politics is the Mind-Killer
In your discussions, beware, for people have great difficulty being rational about current political issues. This is no surprise to someone familiar with evolutionary psychology.
(alternate summary:)
People act funny when they talk about politics. In the ancestral environment, being on the wrong side might get you killed, and being on the correct side might get you sex, food, or let you kill your hated rival. If you must talk about politics (for the purpose of teaching rationality), use examples from the distant past. Politics is an extension of war by other means. Arguments are soldiers. Once you know which side you're on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise, it's like stabbing your soldiers in the back - providing aid and comfort to the enemy. If your topic legitimately relates to attempts to ban evolution in school curricula, then go ahead and talk about it, but don't blame it explicitly on the whole Republican/Democratic/Liberal/Conservative/Nationalist Party.
Just Lose Hope Already
Admit it when the evidence goes against you, or else things can get a whole lot worse.
(alternate summary:)
Casey Serin owes banks 2.2 million dollars after lying on mortgage applications in order to simultaneously buy 8 different houses in different states. The sad part is that he hasn't given up - he hasn't declared bankruptcy, and has just attempted to purchase another house. While this behavior seems merely stupid, it brings to mind Merton and Scholes of Long-Term Capital Management, who made 40% profits for three years, and then lost it all when they overleveraged. Each profession has rules on how to be successful, which makes rationality seem unlikely to help greatly in life. Yet it seems that one of the greater skills is not being stupid, which rationality does help with.
You Are Not Hiring the Top 1%
Interviewees represent a selection bias on the pool skewed toward those who are not successful or happy in their current jobs.
(alternate summary:)
Software companies may see themselves as being very selective about who they hire. Out of 200 applicants, they may hire just one or two. However, that doesn't necessarily mean that they're hiring the top 1%. The programmers who weren't hired are likely to apply for jobs somewhere else. Overall, the worst programmers will apply for many more jobs over the course of their careers than the best. So programmers who are applying for a particular job are not representative of programmers as a whole. This phenomenon probably shows up in other places as well.
Policy Debates Should Not Appear One-Sided
Debates over outcomes with multiple effects will have arguments both for and against, so you must integrate the evidence, not expect the issue to be completely one-sided.
(alternate summary:)
Robin Hanson proposed a "banned products shop" where things that the government ordinarily would ban are sold. Eliezer responded that this would probably cause at least one stupid and innocent person to die. He became surprised when people inferred from this remark that he was against Robin's idea. Policy questions are complex actions with many consequences. Thus they should only rarely appear one-sided to an objective observer. A person's intelligence is largely a product of circumstances they cannot control. Eliezer argues for cost-benefit analysis instead of traditional libertarian ideas of tough-mindedness (people who do stupid things deserve their consequences).
Burch's Law
Just because your ethics require an action doesn't mean the universe will exempt you from the consequences.
(alternate summary:)
Just because your ethics require an action doesn't mean the universe will exempt you from the consequences. Manufactured cars kill an estimated 1.2 million people per year worldwide. (Roughly 2% of the annual planetary death rate.) Not everyone who dies in an automobile accident is someone who decided to drive a car. The tally of casualties includes pedestrians. It includes minor children who had to be pushed screaming into the car on the way to school. And yet we still manufacture automobiles, because, well, we're in a hurry. The point is that the consequences don't change no matter how good the ethical justification sounds.
The Scales of Justice, the Notebook of Rationality
People have an irrational tendency to simplify their assessment of things into how good or bad they are without considering that the things in question may have many distinct and unrelated attributes.
(alternate summary:)
In non-binary answer spaces, you can't add up pro and con arguments along one dimension without risk of getting important factual questions wrong.
Blue or Green on Regulation?
Both sides are often right in describing the terrible things that will happen if we take the other side's advice; the universe is "unfair", terrible things are going to happen regardless of what we do, and it's our job to trade off for the least bad outcome.
(alternate summary:)
In a rationalist community, it should not be necessary to talk in the usual circumlocutions when discussing empirical predictions. We should know that people treat arguments as soldiers, and recognize that behavior in ourselves. Imagine that you look at the actual truth values and come to see that much of what the Greens said about the downside of the Blue policy was true - that, left to the mercy of the free market, many people would be crushed by powers far beyond their understanding, nor would they deserve it. And imagine that most of what the Blues said about the downside of the Green policy was also true - that regulators were fallible humans with poor incentives, whacking on delicately balanced forces with a sledgehammer.
(alternate summary:)
Burch's law isn't a soldier-argument for regulation; estimating the appropriate level of regulation in each particular case is a superior third option.
Superstimuli and the Collapse of Western Civilization
As a side effect of evolution, superstimuli exist, and, as a result of economics, they are getting and will likely continue to get worse.
(alternate summary:)
At least 3 people have died from playing online games non-stop. How is a game so enticing that, after 57 straight hours of playing, a person would rather spend the next hour playing than sleeping or eating? A candy bar is a superstimulus: it exaggerates the sugar and fat cues that signaled healthy food in the EEA, far beyond anything found in nature. If people enjoy these things, the market will respond by providing as much of them as possible, even if other considerations make them undesirable.
Useless Medical Disclaimers
Medical disclaimers without probabilities are hard to use, and if probabilities aren't there because some people can't handle having them there, maybe we ought to tax those people.
(alternate summary:)
Eliezer complains about a disclaimer he had to sign before getting toe surgery because it didn't give numerical probabilities for the possible negative outcomes it described. He guesses this is because of people afflicted with "innumeracy" who would over-interpret small numbers. He proposes a tax wherein folks are asked if they are innumerate and asked to pay in proportion to their innumeracy. This tax is revealed in the comments to be a state-sponsored lottery.
Archimedes's Chronophone
Consider the thought experiment where you communicate general thinking patterns which will lead to right answers, as opposed to pre-hashed content...
(alternate summary:)
Imagine that Archimedes of Syracuse invented a device that allows you to talk to him. Imagine the possibilities for improving history! Unfortunately, the device will not literally transmit your words - it transmits cognitive strategies. If you advise giving women the vote, it comes out as advising finding a wise tyrant, the Greek ideal of political discourse. Under such restrictions, what do you say to Archimedes?
Chronophone Motivations
If you want to really benefit humanity, do some original thinking, especially about areas of application, and directions of effort.
(alternate summary:)
The point of the chronophone dilemma is to make us think about what kind of cognitive policies are good to follow when you don't know your destination in advance.
Self-deception: Hypocrisy or Akrasia?
It is suggested that in some cases, people who say one thing and do another thing are not in fact "hypocrites". Instead they are suffering from "akrasia" or weakness of will. At the end, the problem of deciding what parts of a person's mind are considered their "real self" is discussed.
(alternate summary:)
If part of a person--for example, the verbal module--says it wants to become more rational, we can ally with that part even when weakness of will makes the person's actions otherwise; hypocrisy need not be assumed.
Tsuyoku Naritai! (I Want To Become Stronger)
Don't be satisfied knowing you are biased; instead, aspire to become stronger, studying your flaws so as to remove them. There is a temptation to take pride in confessions, which can impede progress.
Tsuyoku vs. the Egalitarian Instinct
There may be evolutionary psychological factors that encourage modesty and mediocrity, at least in appearance; while some of that may still apply today, you should mentally plan and strive to pull ahead, if you are doing things right.
"Statistical Bias"
There are two types of error: systematic error and random variance error; by repeating experiments you can average out and drive down the variance error.
Useful Statistical Biases
If you know an estimator has high variance, you can intentionally introduce bias by choosing a simpler hypothesis, and thereby lower expected variance while raising expected bias; sometimes total error is lower, hence the "bias-variance tradeoff". Keep in mind that while statistical bias might be useful, cognitive biases are not.
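A minimal simulation of that tradeoff (the sample size, noise level, and 0.7 shrinkage factor are all assumptions chosen for the sketch, not from the post): a deliberately biased estimator can beat the unbiased one on total squared error.

```python
import random

random.seed(0)

true_mean = 1.0
n = 5         # small sample -> a high-variance estimator
shrink = 0.7  # deliberately biased: shrink the estimate toward zero

def trial():
    sample = [random.gauss(true_mean, 3.0) for _ in range(n)]
    est = sum(sample) / n
    return est, shrink * est

errs_plain, errs_shrunk = [], []
for _ in range(20000):
    plain, shrunk = trial()
    errs_plain.append((plain - true_mean) ** 2)
    errs_shrunk.append((shrunk - true_mean) ** 2)

mse_plain = sum(errs_plain) / len(errs_plain)
mse_shrunk = sum(errs_shrunk) / len(errs_shrunk)

# The shrunk estimator is biased (it aims at 0.7 instead of 1.0), but the
# variance it removes outweighs the bias it adds: lower total error.
print(mse_plain > mse_shrunk)  # True
```

Analytically, the plain estimator's MSE here is its variance, 9/5 = 1.8, while the shrunk one trades that for 0.49 × 1.8 of variance plus 0.09 of squared bias, for about 0.97 total.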
The Error of Crowds
Variance decomposition does not imply majoritarian-ish results; this is an artifact of minimizing square error, and drops out using square root error when bias is larger than variance; how and why to factor in evidence requires more assumptions, as per Aumann agreement.
(alternate summary:)
Mean squared error drops when we average our predictions, but only because squared error is a convex loss function. If you faced a concave loss function, you still wouldn't isolate yourself from others, which casts doubt on the relevance of Jensen's inequality for rational communication. The process of sharing thoughts and arguing differences is not like taking averages.
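The convexity point can be checked directly. In this illustrative setup (target 0, two predictions of 1 and 9 - numbers made up for the sketch), averaging helps under squared error but hurts under a concave square-root loss:

```python
import math

target = 0.0
preds = [1.0, 9.0]

def avg(xs):
    return sum(xs) / len(xs)

# Convex loss (squared error): by Jensen's inequality, the averaged
# prediction never does worse than the individual losses did on average.
sq_of_avg = (avg(preds) - target) ** 2                  # 25.0
avg_of_sq = avg([(p - target) ** 2 for p in preds])     # 41.0

# Concave loss (square-root error): the inequality flips, and the
# averaged prediction can do worse than the individuals did on average.
rt_of_avg = math.sqrt(abs(avg(preds) - target))               # ~2.236
avg_of_rt = avg([math.sqrt(abs(p - target)) for p in preds])  # 2.0

print(sq_of_avg < avg_of_sq, rt_of_avg > avg_of_rt)  # True True
```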
The Majority Is Always Wrong
Anything worse than the majority opinion should get selected out, so the majority opinion is rarely strictly superior to existing alternatives.
Knowing About Biases Can Hurt People
Learning common biases won't help you obtain truth if you only use this knowledge to attack beliefs you don't like. Discussions about biases need to first do no harm by emphasizing motivated cognition, the sophistication effect, and dysrationalia, although even knowledge of these can backfire.
Debiasing as Non-Self-Destruction
Not being stupid seems like a more easily generalizable skill than breakthrough success. If debiasing is mostly about not being stupid, its benefits are hidden: lottery tickets not bought, blind alleys not followed, cults not joined. Hence, checking whether debiasing works is difficult, especially in the absence of organizations or systematized training.
"Inductive Bias"
Inductive bias is a systematic direction in belief revisions. The same observations could be evidence for or against a belief, depending on your prior. Inductive biases are more or less correct depending on how well they correspond with reality, so "bias" might not be the best description.
Suggested Posts
This is an obsolete "meta" post.
Futuristic Predictions as Consumable Goods
The Friedman Unit is named after Thomas Friedman, who called "the next six months" the critical period in Iraq eight times between 2003 and 2007. This is because predictions about the future are created and consumed in the now; they are used to create feelings of delicious goodness or delicious horror now, not to provide useful advice about the future.
Marginally Zero-Sum Efforts
After a point, labeling a problem as "important" is a commons problem. Rather than increasing the total resources devoted to important problems, resources are taken from other projects. Some grant proposals need to be written, but eventually this process becomes zero- or negative-sum on the margin.
Priors as Mathematical Objects
A prior is an assignment of a probability to every possible sequence of observations. In principle, the prior determines a probability for any event. Formally, the prior is a giant look-up table, which no Bayesian reasoner would literally implement. Nonetheless, the formal definition is sometimes convenient. For example, uncertainty about priors can be captured with a weighted sum of priors.
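A tiny sketch of this "prior as a look-up table" picture (the three-flip coin setup and the 0.5/0.5 mixture weights are illustrative assumptions, not from the post):

```python
import math
from itertools import product

# A prior assigns a probability to every possible sequence of observations.
# Here the observations are 3 coin flips, and each hypothesis about the
# coin induces a complete look-up table over the 8 possible sequences.
def iid_prior(p_heads):
    return {seq: math.prod(p_heads if flip == 'H' else 1 - p_heads
                           for flip in seq)
            for seq in product('HT', repeat=3)}

fair = iid_prior(0.5)
biased = iid_prior(0.9)

# Uncertainty about which prior is right is itself just another prior:
# a weighted sum of the two look-up tables.
mixture = {seq: 0.5 * fair[seq] + 0.5 * biased[seq] for seq in fair}

# The prior determines the probability of any event, e.g. "first flip is H".
p_first_heads = sum(p for seq, p in mixture.items() if seq[0] == 'H')
print(round(p_first_heads, 3))  # 0.5 * 0.5 + 0.5 * 0.9 = 0.7
```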
Lotteries: A Waste of Hope
Some defend lottery-ticket buying as a rational purchase of fantasy. But you are occupying your valuable brain with a fantasy whose probability is nearly zero, wasting emotional energy. Without the lottery, people might fantasize about things that they can actually do, which might lead to thinking of ways to make the fantasy a reality. To work around a bias, you must first notice it, analyze it, and decide that it is bad. Lottery advocates are failing to complete the third step.
New Improved Lottery
If the opportunity to fantasize about winning justified the lottery, then a "new improved" lottery would be even better. You would buy a nearly-zero chance to become a millionaire at any moment over the next five years. You could spend every moment imagining that you might become a millionaire at that moment.
Your Rationality is My Business
As a human, I have a proper interest in the future of human civilization, including the human pursuit of truth. That makes your rationality my business. The danger is that we will think that we can respond to irrationality with violence. Relativism is not the way to avoid this danger. Instead, commit to using only arguments and evidence, never violence, against irrational thinking.
Consolidated Nature of Morality Thread
This post was a place for debates about the nature of morality, so that subsequent posts touching tangentially on morality would not be overwhelmed.
Examples of questions to be discussed here included: What is the difference between "is" and "ought" statements? Why do some preferences seem voluntary, while others do not? Do children believe that God can change what is moral? Is there a direction to the development of moral beliefs in history, and, if so, what is the causal explanation of this? Does Tarski's definition of truth extend to moral statements? If you were physically altered to prefer killing, would "killing is good" become true? If the truth value of a moral claim cannot be changed by any physical act, does this make the claim stronger or weaker? What are the referents of moral claims, or are they empty of content? Are there "pure" ought-statements, or do they all have is-statements mixed into them? Are there pure aesthetic judgments or preferences?
Feeling Rational
Strong emotions can be rational. A rational belief that something good happened leads to rational happiness. But your emotions ought not to change your beliefs about events that do not depend causally on your emotions.
Universal Fire
You can't change just one thing in the world and expect the rest to continue working as before.
Universal Law
In our everyday lives, we are accustomed to rules with exceptions, but the basic laws of the universe apply everywhere without exception. Apparent violations exist only in our models, not in reality.
Think Like Reality
"Quantum physics is not "weird". You are weird. You have the absolutely bizarre idea that reality ought to consist of little billiard balls bopping around, when in fact reality is a perfectly normal cloud of complex amplitude in configuration space. This is your problem, not reality's, and you are the one who needs to change."
Beware the Unsurprised
If reality consistently surprises you, then your model needs revision. But beware those who act unsurprised by surprising data. Maybe their model was too vague to be contradicted. Maybe they haven't emotionally grasped the implications of the data. Or maybe they are trying to appear poised in front of others. Respond to surprise by revising your model, not by suppressing your surprise.
The Third Alternative
People justify Noble Lies by pointing out their benefits over doing nothing. But, if you really need these benefits, you can construct a Third Alternative for getting them. How? You have to search for one. Beware the temptation not to search or to search perfunctorily. Ask yourself, "Did I spend five minutes by the clock trying hard to think of a better alternative?"
Third Alternatives for Afterlife-ism
One source of hope against death is Afterlife-ism. Some say that this justifies it as a Noble Lie. But there are better (because more plausible) Third Alternatives, including nanotech, actuarial escape velocity, cryonics, and the Singularity. If supplying hope were the real goal of the Noble Lie, advocates would prefer these alternatives. But the real goal is to excuse a fixed belief from criticism, not to supply hope.
Scope Insensitivity
The human brain can't represent large quantities: an environmental measure that will save 200,000 birds doesn't conjure anywhere near a hundred times the emotional impact and willingness-to-pay of a measure that would save 2,000 birds.
One Life Against the World
Saving one life and saving the whole world provide the same warm glow. But, however valuable a life is, the whole world is billions of times as valuable. The duty to save lives doesn't stop after the first saved life. Choosing to save one life when you could have saved two is as bad as murder.
Risk-Free Bonds Aren't
There are no risk-free investments. Even US treasury bills would fail under a number of plausible "black swan" scenarios. Nassim Taleb's own investment strategy doesn't seem to take sufficient account of such possibilities. Risk management is always a good idea.
Correspondence Bias
Correspondence bias, also known as the fundamental attribution error, is the tendency to attribute the behavior of others to intrinsic dispositions, while excusing one's own behavior as the result of circumstance.
(alternate summary:)
Correspondence Bias is a tendency to attribute to a person a disposition to behave in a particular way, based on observing an episode in which that person behaves in that way. The data set that gets considered consists only of the observed episode, while the target model is of the person's behavior in general, in many possible episodes, in many different possible contexts that may influence the person's behavior.
Are Your Enemies Innately Evil?
People want to think that the Enemy is an innately evil mutant. But, usually, the Enemy is acting as you might in their circumstances. They think that they are the hero in their story and that their motives are just. That doesn't mean that they are right. Killing them may be the best option available. But it is still a tragedy.
Open Thread
This obsolete post was a place for free-form comments related to the project of the Overcoming Bias blog.
Two More Things to Unlearn from School
School encourages two bad habits of thought: (1) equating "knowledge" with the ability to parrot back answers that the teacher expects; and (2) assuming that authorities are perfectly reliable. The first happens because students don't have enough time to digest what they learn. The second happens especially in fields like physics because students are so often just handed the right answer.
Making Beliefs Pay Rent (in Anticipated Experiences)
Not every belief that we have is directly about sensory experience, but beliefs should pay rent in anticipations of experience. For example, if I believe that "Gravity is 9.8 m/s^2" then I should be able to predict where I'll see the second hand on my watch at the time I hear the crash of a bowling ball dropped off a building. On the other hand, if your postmodern English professor says that the famous writer Wulky is a "post-utopian," this may not actually mean anything. The moral is to ask "What experiences do I anticipate?" instead of "What statements do I believe?"
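As a toy version of that prediction, the belief about gravity can be cashed out as an anticipated experience - a crash time (the 50 m building height is a made-up figure for illustration):

```python
import math

g = 9.8        # m/s^2, the belief being tested
height = 50.0  # hypothetical building height in metres (an assumption)

# A belief about gravity pays rent by anticipating an experience:
# the time at which I expect to hear the bowling ball crash.
t = math.sqrt(2 * height / g)
print(round(t, 2))  # about 3.19 seconds after release
```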
Belief in Belief
Suppose someone claims to have a dragon in their garage, but as soon as you go to look, they say, "It's an invisible dragon!" The remarkable thing is that they know in advance exactly which experimental results they shall have to excuse, indicating that some part of their mind knows what's really going on. And yet they may honestly believe they believe there's a dragon in the garage. They may perhaps believe it is virtuous to believe there is a dragon in the garage, and believe themselves virtuous. Even though they anticipate as if there is no dragon.
Bayesian Judo
You can have some fun with people whose anticipations get out of sync with what they believe they believe. This post recounts a conversation in which a theist had to backpedal when he realized that, by drawing an empirical inference from his religion, he had opened up his religion to empirical disproof.
Professing and Cheering
A woman on a panel enthusiastically declared her belief in a pagan creation myth, flaunting its most outrageously improbable elements. This seemed weirder than "belief in belief" (she didn't act like she needed validation) or "religious profession" (she didn't try to act like she took her religion seriously). So, what was she doing? She was cheering for paganism — cheering loudly by making ridiculous claims.
Belief as Attire
When you've stopped anticipating-as-if something is true, but still believe it is virtuous to believe it, this does not create the true fire of the child who really does believe. On the other hand, it is very easy for people to be passionate about group identification - sports teams, political sports teams - and this may account for the passion of beliefs worn as team-identification attire.
Religion's Claim to be Non-Disprovable
Religions used to claim authority in all domains, including biology, cosmology, and history. Only recently have religions attempted to be non-disprovable by confining themselves to ethical claims. But the ethical claims in scripture ought to be even more obviously wrong than the other claims, making the idea of non-overlapping magisteria a Big Lie.
The Importance of Saying "Oops"
When your theory is proved wrong, just scream "OOPS!" and admit your mistake fully. Don't just admit local errors. Don't try to protect your pride by conceding the absolute minimal patch of ground. Making small concessions means that you will make only small improvements. It is far better to make big improvements quickly. This is a lesson of Bayescraft that Traditional Rationality fails to teach.
Focus Your Uncertainty
If you are paid for post-hoc analysis, you might like theories that "explain" all possible outcomes equally well, without focusing uncertainty. But what if you don't know the outcome yet, and you need to have an explanation ready in 100 minutes? Then you want to spend most of your time on excuses for the outcomes that you anticipate most, so you still need a theory that focuses your uncertainty.
The Proper Use of Doubt
Doubt is often regarded as virtuous for the wrong reason: because it is a sign of humility and recognition of your place in the hierarchy. But from a rationalist perspective, this is not why you should doubt. The doubt, rather, should exist to annihilate itself: to confirm the reason for doubting, or to show the doubt to be baseless. When you can no longer make progress in this respect, the doubt is no longer useful to you as a rationalist.
The Virtue of Narrowness
One way to fight cached patterns of thought is to focus on precise concepts.
(alternate summary:)
It was perfectly all right for Isaac Newton to explain just gravity, just the way things fall down - and how planets orbit the Sun, and how the Moon generates the tides - but not the role of money in human society or how the heart pumps blood. Sneering at narrowness is rather reminiscent of ancient Greeks who thought that going out and actually looking at things was manual labor, and manual labor was for slaves.
You Can Face Reality
This post quotes a poem by Eugene Gendlin, which reads, "What is true is already so. / Owning up to it doesn't make it worse. / Not being open about it doesn't make it go away. / And because it's true, it is what is there to be interacted with. / Anything untrue isn't there to be lived. / People can stand what is true, / for they are already enduring it."
The Apocalypse Bet
If you think that the apocalypse will be in 2020, while I think that it will be in 2030, how could we bet on this? One way would be for me to pay you X dollars every year until 2020. Then, if the apocalypse doesn't happen, you pay me 2X dollars every year until 2030. This idea could be used to set up a prediction market, which could give society information about when an apocalypse might happen. Yudkowsky later realized that this wouldn't work.
Your Strength as a Rationalist
A hypothesis that forbids nothing permits everything, and thus fails to constrain anticipation. Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.
I Defy the Data!
If an experiment contradicts a theory, we are expected to throw out the theory, or else break the rules of Science. But this may not be the best inference. If the theory is solid, it's more likely that an experiment got something wrong than that all the confirmatory data for the theory was wrong. In that case, you should be ready to "defy the data", rejecting the experiment without coming up with a more specific problem with it; the scientific community should tolerate such defiances without social penalty, and reward those who correctly recognized the error if it fails to replicate. In no case should you try to rationalize how the theory really predicted the data after all.
Absence of Evidence Is Evidence of Absence
Absence of proof is not proof of absence. But absence of evidence is always evidence of absence. According to the probability calculus, if P(H|E) > P(H) (observing E would be evidence for hypothesis H), then P(H|~E) < P(H) (absence of E is evidence against H). The absence of an observation may be strong evidence or very weak evidence of absence, but it is always evidence.
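The paired inequalities can be verified with any numbers satisfying P(E|H) > P(E|~H); the 0.3/0.8/0.2 figures below are arbitrary illustrations, not from the post:

```python
p_h = 0.3              # prior probability of hypothesis H
p_e_given_h = 0.8      # H makes evidence E likely
p_e_given_not_h = 0.2  # ~H makes E unlikely

# Bayes' theorem for both possible observations.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

# If observing E would raise P(H), then failing to observe E must lower it.
print(p_h_given_e > p_h)      # True
print(p_h_given_not_e < p_h)  # True
```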
Conservation of Expected Evidence
If you are about to make an observation, then the expected value of your posterior probability must equal your current prior probability. On average, you must expect to be exactly as confident as when you started out. If you are a true Bayesian, you cannot seek evidence to confirm your theory, because you do not expect any evidence to do that. You can only seek evidence to test your theory.
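The claim that the expected posterior equals the prior follows from the law of total probability. A quick numerical check, again with illustrative numbers:

```python
# Conservation of expected evidence:
# P(H) = P(H|E)*P(E) + P(H|~E)*P(~E), for any likelihoods.
p_h = 0.4                                 # prior P(H)
p_e_given_h, p_e_given_not_h = 0.9, 0.3   # illustrative likelihoods

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

# Weight each possible posterior by the probability of seeing that outcome:
expected_posterior = p_h_given_e * p_e + p_h_given_not_e * (1 - p_e)
assert abs(expected_posterior - p_h) < 1e-12  # on average, confidence is unchanged
```

Any anticipated gain in confidence from one outcome is exactly balanced by the anticipated loss from the other, weighted by how likely each outcome is.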
Update Yourself Incrementally
Many people think that you must abandon a belief if you admit any counterevidence. Instead, change your belief by small increments. Acknowledge small pieces of counterevidence by shifting your belief down a little. Supporting evidence will follow if your belief is true. "Won't you lose debates if you concede any counterarguments?" Rationality is not for winning debates; it is for deciding which side to join.
One Argument Against An Army
It is tempting to weigh each counterargument by itself against all supporting arguments. No single counterargument can overwhelm all the supporting arguments, so you easily conclude that your theory was right. Indeed, as you win this kind of battle over and over again, you feel ever more confident in your theory. But, in fact, you are just rehearsing already-known evidence in favor of your view.
Hindsight bias
Hindsight bias makes us overestimate how well our model could have predicted a known outcome. We underestimate the cost of avoiding a known bad outcome, because we forget that many other equally severe outcomes seemed as probable at the time. Hindsight bias distorts the testing of our models by observation, making us think that our models are better than they really are.
Hindsight Devalues Science
Hindsight bias leads us to systematically undervalue scientific findings, because we find it too easy to retrofit them into our models of the world. This unfairly devalues the contributions of researchers. Worse, it prevents us from noticing when we are seeing evidence that doesn't fit what we really would have expected. We need to make a conscious effort to be shocked enough.
Scientific Evidence, Legal Evidence, Rational Evidence
For good social reasons, we require legal and scientific evidence to be more than just rational evidence. Hearsay is rational evidence, but as legal evidence it would invite abuse. Scientific evidence must be public and reproducible by everyone, because we want a pool of especially reliable beliefs. Thus, Science is about reproducible conditions, not the history of any one experiment.
Is Molecular Nanotechnology "Scientific"?
The belief that nanotechnology is possible is based on qualitative reasoning from scientific knowledge. But such a belief is merely rational. It will not be scientific until someone constructs a nanofactory. Yet if you claim that nanomachines are impossible because they have never been seen before, you are being irrational. To think that everything that is not science is pseudoscience is a severe mistake.
Fake Explanations
People think that fake explanations use words like "magic," while real explanations use scientific words like "heat conduction." But being a real explanation isn't a matter of literary genre. Scientific-sounding words aren't enough. Real explanations constrain anticipation. Ideally, you could explain only the observations that actually happened. Fake explanations could just as well "explain" the opposite of what you observed.
Guessing the Teacher's Password
In schools, "education" often consists of having students memorize answers to specific questions (i.e., the "teacher's password"), rather than learning a predictive model that says what is and isn't likely to happen. Thus, students incorrectly learn to guess at passwords in the face of strange observations rather than admit their confusion. Don't do that: any explanation you give should have a predictive model behind it. If your explanation lacks such a model, start from a recognition of your own confusion and surprise at seeing the result. SilasBarta 00:54, 13 April 2011 (UTC)
Science as Attire
You don't understand the phrase "because of evolution" unless it constrains your anticipations. Otherwise, you are using it as attire to identify yourself with the "scientific" tribe. Similarly, it isn't scientific to reject strongly superhuman AI only because it sounds like science fiction. A scientific rejection would require a theoretical model that bounds possible intelligences. If your proud beliefs don't constrain anticipation, they are probably just passwords or attire.
Fake Causality
It is very easy for a human being to think that a theory predicts a phenomenon, when in fact it was fitted to that phenomenon. Properly designed reasoning systems (e.g., general AIs) could use probability theory to avoid this mistake, but humans have to write down their predictions in advance in order to ensure that their reasoning about causality is correct.
Semantic Stopsigns
There are certain words and phrases that act as "stopsigns" to thinking. They aren't actually explanations, nor do they help to resolve the actual issue at hand; they act as a marker saying "don't ask any questions."
Mysterious Answers to Mysterious Questions
The theory of vitalism was developed before the idea of biochemistry. It stated that the mysterious properties of living matter, compared to nonliving matter, were due to an "élan vital". This explanation acts as a curiosity-stopper, and leaves the phenomenon just as mysterious and inexplicable as it was before the answer was given. It feels like an explanation, though it fails to constrain anticipation.
The Futility of Emergence
The theory of "emergence" has become very popular, but is just a mysterious answer to a mysterious question. After learning that a property is emergent, you aren't able to make any new predictions.
Positive Bias: Look Into the Dark
Positive bias is the tendency to look for evidence that confirms a hypothesis, rather than disconfirming evidence.
Say Not "Complexity"
The concept of complexity isn't meaningless, but too often people assume that adding complexity to a system they don't understand will improve it. If you don't know how to solve a problem, adding complexity won't help; better to say "I have no idea" than to say "complexity" and think you've reached an answer.
My Wild and Reckless Youth
Traditional rationality (without Bayes' Theorem) allows you to formulate hypotheses without a reason to prefer them to the status quo, as long as they are falsifiable. Even following all the rules of traditional rationality, you can waste a lot of time. It takes a lot of rationality to avoid making mistakes; a moderate level of rationality will just lead you to make new and different mistakes.
Failing to Learn from History
There are no inherently mysterious phenomena, but every phenomenon seems mysterious, right up until the moment that science explains it. It seems to us now that biology, chemistry, and astronomy are naturally the realm of science, but if we had lived through their discoveries, and watched them reduced from mysterious to mundane, we would be more reluctant to believe the next phenomenon is inherently mysterious.
Making History Available
It's easy not to take the lessons of history seriously; our brains aren't well-equipped to translate dry facts into experiences. But imagine living through the whole of human history - imagine watching mysteries be explained, watching civilizations rise and fall, being surprised over and over again - and you'll be less shocked by the strangeness of the next era.
Stranger Than History
Imagine trying to explain quantum physics, the internet, or any other aspect of modern society to people from 1900. Technology and culture change so quickly that our civilization would be unrecognizable to people 100 years ago; what will the world look like 100 years from now?
Explain/Worship/Ignore?
When you encounter something you don't understand, you have three options: to seek an explanation, knowing that that explanation will itself require an explanation; to avoid thinking about the mystery at all; or to embrace the mysteriousness of the world and worship your confusion.
"Science" as Curiosity-Stopper
Although science does have explanations for phenomena, it is not enough to simply say that "Science!" is responsible for how something works -- nor is it enough to appeal to something more specific like "electricity" or "conduction". Yet for many people, simply noting that "Science has an answer" is enough to make them no longer curious about how it works. In that respect, "Science" is no different from more blatant curiosity-stoppers like "God did it!" But you shouldn't let your interest die simply because someone else knows the answer (which is a rather strange heuristic anyway): You should only be satisfied with a predictive model, and how a given phenomenon fits into that model. SilasBarta 01:22, 13 April 2011 (UTC)
Absurdity Heuristic, Absurdity Bias
Under some circumstances, rejecting arguments on the basis of absurdity is reasonable. The absurdity heuristic can allow you to identify hypotheses that aren't worth your time. However, detailed knowledge of the underlying laws should allow you to override the absurdity heuristic. Objects fall, but helium balloons rise. The future has been consistently absurd and will likely go on being that way. When the absurdity heuristic is extended to rule out crazy-sounding things with a basis in fact, it becomes absurdity bias.
Availability
Availability bias is a tendency to estimate the probability of an event based on whatever evidence about that event pops into your mind, without taking into account the ways in which some pieces of evidence are more memorable than others, or some pieces of evidence are easier to come by than others. This bias consists in considering a mismatched data set that leads to a distorted model and a biased estimate.
Why is the Future So Absurd?
New technologies and social changes have consistently happened at a rate that would seem absurd and impossible to people only a few decades before they happen. Hindsight bias causes us to see the past as obvious and as a series of changes towards the "normalcy" of the present; availability biases make it hard for us to imagine changes greater than those we've already encountered, or the effects of multiple changes. The future will be stranger than we think.
Anchoring and Adjustment
Exposure to numbers affects guesses on estimation problems by anchoring your mind to a given estimate, even if it's wildly off base. Be aware of the effect random numbers have on your estimation ability.
The Crackpot Offer
If you make a mistake, don't excuse it or pat yourself on the back for thinking originally; acknowledge you made a mistake and move on. If you become invested in your own mistakes, you'll stay stuck on bad ideas.
Radical Honesty
The Radical Honesty movement requires participants to always speak the truth about whatever they are thinking. The more competent you grow at avoiding self-deceit, the more of a challenge this would be - but it's an interesting thing to imagine, and perhaps strive for.
We Don't Really Want Your Participation
Advocates for the Singularity sometimes call for outreach to artists or poets; we should move away from thinking of people as if their profession is the only thing they can contribute to humanity. Being human is what gives us a stake in the future, not being poets or mathematicians.
Applause Lights
Words like "democracy" or "freedom" are applause lights - no one disapproves of them, so they can be used to signal conformity and hand-wave away difficult problems. If you hear people talking about the importance of "balancing risks and opportunities" or of solving problems "through a collaborative process" that aren't followed up by any specifics, then the words are applause lights, not real thoughts.
Rationality and the English Language
George Orwell's writings on language and totalitarianism are critical to understanding rationality. Orwell was an opponent of the use of words to obscure meaning, or to convey ideas without their emotional impact. Language should get the point across - when the effort to convey information gets lost in the effort to sound authoritative, you are acting irrationally.
Human Evil and Muddled Thinking
It's easy to think that rationality and seeking truth is an intellectual exercise, but this ignores the lessons of history. Cognitive biases and muddled thinking allow people to hide from their own mistakes and allow evil to take root. Spreading the truth makes a real difference in defeating evil.
Doublethink (Choosing to be Biased)
George Orwell wrote about what he called "doublethink", in which a person is able to hold two contradictory thoughts in their mind simultaneously. While some argue that self-deception can make you happier, doublethink will actually lead only to problems.
Why I'm Blooking
Eliezer explains that he is overcoming writer's block by writing one Less Wrong post a day.
Planning Fallacy
We tend to plan envisioning that everything will go as expected. Even assuming that such an estimate is accurate conditional on everything going as expected, things will not go as expected. As a result, we routinely see outcomes worse than the ex ante worst-case scenario.
(alternate summary:)
The planning fallacy is a tendency to overestimate your efficiency in achieving a task. The data set you consider consists of the simple, cached ways in which you move about accomplishing the task, and lacks the unanticipated problems and more complex ways in which the process may unfold. As a result, the model fails to adequately describe the phenomenon, and the answer comes out systematically wrong.
Kahneman's Planning Anecdote
Nobel Laureate Daniel Kahneman recounts an incident where the inside view and the outside view of the time it would take to complete a project of his were widely different.
Conjunction Fallacy
Elementary probability theory tells us that the probability of one thing (written P(A)) is necessarily greater than or equal to the probability of the conjunction of that thing with another thing (written P(A&B)). However, in the psychology lab, subjects' judgments do not conform to this rule. This is not an isolated artifact of a particular study design. Debiasing won't be as simple as practicing specific questions; it requires certain general habits of thought.
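The rule P(A&B) ≤ P(A) can be seen by counting outcomes: conjoining another condition can only shrink the set of qualifying outcomes. A toy check with two die rolls (the events are illustrative):

```python
# Counting outcomes in a small uniform sample space shows why a
# conjunction can never be more probable than either conjunct alone.
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))    # 36 equally likely rolls
a = [o for o in outcomes if o[0] >= 4]             # event A: first die >= 4
a_and_b = [o for o in a if o[1] >= 4]              # A & B: both dice >= 4

p_a = len(a) / len(outcomes)              # 18/36 = 0.5
p_a_and_b = len(a_and_b) / len(outcomes)  # 9/36 = 0.25
assert p_a_and_b <= p_a  # adding a detail can only remove outcomes
```

Every extra detail filters the outcome set further, which is why burdensome details should drive your probability estimate down, not up.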
Conjunction Controversy (Or, How They Nail It Down)
When it seems like an experiment that's been cited does not provide enough support for the interpretation given, remember that Scientists are generally pretty smart. Especially if the experiment was done a long time ago, or it is described as "classic" or "famous". In that case, you should consider the possibility that there is more evidence that you haven't seen. Instead of saying "This experiment could also be interpreted in this way", ask "How did they distinguish this interpretation from ________________?"
Burdensome Details
If you want to avoid the conjunction fallacy, you must try to feel a stronger emotional impact from Occam's Razor. Each additional detail added to a claim must feel as though it is driving the probability of the claim down towards zero.
What is Evidence?
Evidence is an event connected by a chain of causes and effects to whatever it is you want to learn about. It also has to be an event that is more likely if reality is one way, than if reality is another. If a belief is not formed this way, it cannot be trusted.
The Lens That Sees Its Flaws
Part of what makes humans different from other animals is our own ability to reason about our reasoning. Mice do not think about the cognitive algorithms that generate their belief that the cat is hunting them. Our ability to think about what sort of thought processes would lead to correct beliefs is what gave rise to Science. This ability makes our admittedly flawed minds much more powerful.
How Much Evidence Does It Take?
If you are considering one hypothesis out of many, or that hypothesis is more implausible than others, or you wish to know with greater confidence, you will need more evidence. Ignoring this rule will cause you to jump to a belief without enough evidence, and thus be wrong.
Einstein's Arrogance
Albert Einstein, when asked what he would do if an experiment disproved his theory of general relativity, responded with "I would feel sorry for the good Lord. The theory is correct." While this may sound like arrogance, Einstein doesn't look nearly as bad from a Bayesian perspective. In order to even consider the hypothesis of general relativity in the first place, he would have needed a large amount of Bayesian evidence.
Occam's Razor
To a human, Thor feels like a simpler explanation for lightning than Maxwell's equations, but that is because we don't see the full complexity of an intelligent mind. However, if you try to write a computer program to simulate Thor and a computer program to simulate Maxwell's equations, one will be much easier to accomplish. This is how the complexity of a hypothesis is measured in the formalisms of Occam's Razor.
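As a loose illustration of the program-length idea (not the formal Solomonoff/Kolmogorov measure), compressed size can serve as a crude proxy for description complexity: data generated by a simple rule admits a short description, while patternless data does not.

```python
# Crude proxy for description length: patterned data compresses well,
# patternless data does not. Compression is only a rough stand-in for
# the program-length measure used in formalizations of Occam's Razor.
import random
import zlib

random.seed(0)  # fixed seed so the "lawless" bytes are reproducible
patterned = bytes(range(10)) * 100                           # 1000 bytes, simple rule
lawless = bytes(random.randrange(256) for _ in range(1000))  # 1000 pseudo-random bytes

# The patterned sequence has a much shorter description.
assert len(zlib.compress(patterned)) < len(zlib.compress(lawless))
```

An anthropomorphic hypothesis like Thor feels short in words, but the program needed to simulate a mind is vastly longer than the program needed to simulate Maxwell's equations.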
9/26 is Petrov Day
September 26th is Petrov Day, celebrated to honor the deed of Stanislav Yevgrafovich Petrov on September 26th, 1983. Wherever you are, whatever you're doing, take a minute to not destroy the world.
How to Convince Me That 2 + 2 = 3
The way to convince Eliezer that 2+2=3 is the same as the way to convince him of any proposition: give him enough evidence. If all available evidence - social, mental, and physical - starts indicating that 2+2=3, then you will shortly convince Eliezer that 2+2=3 and that something is wrong with his recollection of the past.
The Bottom Line
If you first write at the bottom of a sheet of paper, “And therefore, the sky is green!”, it does not matter what arguments you write above it afterward; the conclusion is already written, and it is already correct or already wrong.
What Evidence Filtered Evidence?
Someone tells you only the evidence that they want you to hear. Are you helpless? Forced to update your beliefs until you reach their position? No, you also have to take into account what they could have told you but didn't.
Rationalization
Rationality works forward from evidence to conclusions. Rationalization tries in vain to work backward from favourable conclusions to the evidence. But you cannot rationalize what is not already rational. It is as if "lying" were called "truthization".
Recommended Rationalist Reading
Book recommendations by Eliezer and readers.
A Rational Argument
You can't produce a rational argument for something that isn't rational. First select the rational choice. Then the rational argument is just a list of the same evidence that convinced you.
We Change Our Minds Less Often Than We Think
We all change our minds occasionally, but we don't constantly, honestly reevaluate every decision and course of action. Once you think you believe something, the chances are good that you already do, for better or worse.
Avoiding Your Belief's Real Weak Points
When people doubt, they instinctively ask only the questions that have easy answers. When you're doubting one of your most cherished beliefs, close your eyes, empty your mind, grit your teeth, and deliberately think about whatever hurts the most.
The Meditation on Curiosity
If you can find within yourself the slightest shred of true uncertainty, then guard it like a forester nursing a campfire. If you can make it blaze up into a flame of curiosity, it will make you light and eager, and give purpose to your questioning and direction to your skills.
Singlethink
The path to rationality begins when you see a great flaw in your existing art, and discover a drive to improve, to create new skills beyond the helpful but inadequate ones you found in books. Eliezer's first step was to catch what it felt like to shove an unwanted fact to the corner of his mind. Singlethink is the skill of not doublethinking.
No One Can Exempt You From Rationality's Laws
Traditional Rationality is phrased in terms of social rules, with violations interpretable as cheating - as defections from cooperative norms. But viewing rationality as a social obligation gives rise to some strange ideas. The laws of rationality are mathematics, and no social maneuvering can exempt you.
A Priori
The facts that philosophers call "a priori" arrived in your brain by a physical process. Thoughts are existent in the universe; they are identical to the operation of brains. The "a priori" belief generator in your brain works for a reason.
Priming and Contamination
Even slight exposure to a stimulus is enough to change the outcome of a decision or estimate. See also Never Leave Your Room by Yvain, and Cached Selves by Salamon and Rayhawk.
(alternate summary:)
Contamination by Priming is a problem that relates to the process of implicitly introducing the facts in the attended data set. When you are primed with a concept, the facts related to that concept come to mind easier. As a result, the data set selected by your mind becomes tilted towards the elements related to that concept, even if it has no relation to the question you are trying to answer. Your thinking becomes contaminated, shifted in a particular direction. The data set in your focus of attention becomes less representative of the phenomenon you are trying to model, and more representative of the concepts you were primed with.
Do We Believe Everything We're Told?
Some experiments on priming suggest that mere exposure to a view is enough to get one to passively accept it, at least until it is specifically rejected.
Cached Thoughts
Brains are slow. They need to cache as much as they can. They store answers to questions, so that no new thought is required to answer. Answers copied from others can end up in your head without you ever examining them closely. This makes you say things that you'd never believe if you thought them through. So examine your cached thoughts! Are they true?
The "Outside the Box" Box
When asked to think creatively, there's always a cached thought that you can fall into. To be truly creative you must avoid the cached thought. Think something actually new, not something that you heard was the latest innovation. Striving for novelty for novelty's sake is futile; instead, you must aim to be optimal. People who strive to discover truth or to invent good designs may, in the course of time, attain creativity.
Original Seeing
One way to fight cached patterns of thought is to focus on precise concepts.
How to Seem (and Be) Deep
Just find ways of violating cached expectations.
(alternate summary:)
To seem deep, find coherent but unusual beliefs, and concentrate on explaining them well. To be deep, you actually have to think for yourself.
The Logical Fallacy of Generalization from Fictional Evidence
The logical fallacy of generalization from fictional evidence consists in drawing real-world conclusions based on statements invented and selected for the purpose of writing fiction. The data set is not at all representative of the real world, and in particular of whatever real-world phenomenon you need to understand to answer your real-world question. Considering this data set leads to an inadequate model, and inadequate answers.
Hold Off On Proposing Solutions
Proposing solutions prematurely is dangerous, because it introduces weak conclusions into the pool of facts you are considering. The data set you think about becomes weaker, tilted toward premature conclusions that are likely to be wrong and that are less representative of the phenomenon you are trying to model than the initial facts you started from.
"Can't Say No" Spending
Medical spending and aid to Africa have no net effect (or worse). But it's heartbreaking to just say no...
Congratulations to Paris Hilton
Eliezer offers his congratulations to Paris Hilton, who he believed had signed up for cryonics. (It turns out that she hadn't.)
Pascal's Mugging: Tiny Probabilities of Vast Utilities
An Artificial Intelligence coded using Solomonoff Induction would be vulnerable to Pascal's Mugging. How should we, or an AI, handle situations in which it is very unlikely that a proposition is true, but if the proposition is true, it has more moral weight than anything else we can imagine?
Illusion of Transparency: Why No One Understands You
Everyone knows what their own words mean, but experiments have confirmed that we systematically overestimate how much sense we are making to others.
Self-Anchoring
Related to contamination and the illusion of transparency, we "anchor" on our own experience and under-adjust when trying to understand others.
Expecting Short Inferential Distances
Humans evolved in an environment where we almost never needed to explain long inferential chains of reasoning. This fact may account for the difficulty many people have when trying to explain complicated subjects. We only explain the last step of the argument, and not every step that must be taken from our listener's premises.
Explainers Shoot High. Aim Low!
Humans greatly overestimate how much sense our explanations make. In order to explain something adequately, pretend that you're trying to explain it to someone much less informed than your target audience.
Double Illusion of Transparency
In addition to the difficulties encountered in trying to explain something so that your audience understands it, there are other problems associated with learning whether or not you have explained something properly. If you read your intended meaning into whatever your listener says in response, you may think that they understand a concept, when in fact they are simply rephrasing whatever it was you actually said.
No One Knows What Science Doesn't Know
In the modern world, unlike our ancestral environment, it is not possible for one person to know more than a tiny fraction of the world's scientific knowledge. Just because you don't understand something, you should not conclude that not one of the six billion other people on the planet understands it.
Why Are Individual IQ Differences OK?
People act as though it is perfectly fine and normal for individuals to have differing levels of intelligence, but that it is absolutely horrible for one racial group to be more intelligent than another. Why should the two be considered any differently?
Bay Area Bayesians Unite!
An obsolete post in which Eliezer queried Overcoming Bias readers to find out if they would be interested in holding in-person meetings.
Motivated Stopping and Motivated Continuation
When the evidence we've seen points towards a conclusion that we like or dislike, there is a temptation to stop the search for evidence prematurely, or to insist that more evidence is needed.
Torture vs. Dust Specks
If you had to choose between torturing one person horribly for 50 years, or putting a single dust speck into the eyes of 3^^^3 people, what would you do?
A Case Study of Motivated Continuation
When you find yourself considering a problem in which all visible options are uncomfortable, making a choice is difficult. Grit your teeth and choose anyway.
A Terrifying Halloween Costume
The day after Halloween, Eliezer made a joke related to Torture vs. Dust Specks, which he had posted just a few days earlier.
Fake Justification
We should be suspicious of our tendency to justify our decisions with arguments that did not actually factor into making said decisions. Whatever process you actually use to make your decisions is what determines your effectiveness as a rationalist.
An Alien God
Evolution is awesomely powerful, unbelievably stupid, incredibly slow, monomaniacally singleminded, irrevocably splintered in focus, blindly shortsighted, and itself a completely accidental process. If evolution were a god, it would not be Jehovah, but H. P. Lovecraft's Azathoth, the blind idiot god burbling chaotically at the center of everything.
The Wonder of Evolution
...is not how amazingly well it works, but that it works at all without a mind, brain, or the ability to think abstractly - that an entirely accidental process can produce complex designs. If you talk about how amazingly well evolution works, you're missing the point.
(alternate summary:)
The wonder of the first replicator was not how amazingly well it replicated, but that a first replicator could arise, at all, by pure accident, in the primordial seas of Earth. That first replicator would undoubtedly be devoured in an instant by a sophisticated modern bacterium. Likewise, the wonder of evolution itself is not how well it works, but that a brainless, accidentally occurring optimization process can work at all. If you praise evolution for being such a wonderfully intelligent Creator, you're entirely missing the wonderful thing about it.
Evolutions Are Stupid (But Work Anyway)
Evolution, while not simple, is sufficiently simpler than organic brains that we can describe mathematically how slow and stupid it is.
(alternate summary:)
Modern evolutionary theory gives us a definite picture of evolution's capabilities. If you praise evolution one millimeter higher than this, you are not scoring points against creationists, you are just being factually inaccurate. In particular we can calculate the probability and time for advantageous genes to rise to fixation. For example, a mutation conferring a 3% advantage would have only a 6% probability of surviving, and if it did so, would take 875 generations to rise to fixation in a population of 500,000 (on average).
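The quoted figures follow from standard population-genetics approximations (due to Haldane): fixation probability ≈ 2s for a small fitness advantage s, and, in the form that matches the post's arithmetic, time to fixation ≈ (2/s)·ln(N) generations in a population of size N. A sketch of the calculation:

```python
# Standard approximations for an advantageous mutation (Haldane):
#   P(fixation) ~ 2s for small advantage s
#   E[time to fixation] ~ (2/s) * ln(N) generations
import math

s = 0.03     # 3% fitness advantage
n = 500_000  # population size

p_fix = 2 * s                  # ~0.06: a 6% chance of surviving drift
t_fix = (2 / s) * math.log(n)  # ~875 generations to rise to fixation
```

Note how unforgiving these numbers are: even a clearly beneficial gene usually dies out by chance, and the winners still take hundreds of generations to spread.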
Natural Selection's Speed Limit and Complexity Bound
This post tried to argue mathematically that there could be at most 25MB of meaningful information (or thereabouts) in the human genome, but computer simulations failed to bear out the mathematical argument. It does seem probable that evolution has some kind of speed limit and complexity bound - eminent evolutionary biologists seem to believe it, and in fact the Genome Project discovered only 25,000 genes in the human genome - but this particular math may not be the correct argument.
Beware of Stephen J. Gould
A lot of people have gotten their grasp of evolutionary theory from Stephen J. Gould, a man who committed the moral equivalent of fraud in a way that is difficult to explain. At any rate, he severely misrepresented what evolutionary biologists believe, in the course of pretending to attack certain beliefs. One needs to clear from memory, as much as possible, not just everything that Gould positively stated but everything he seemed to imply the mainstream theory believed.
The Tragedy of Group Selectionism
A tale of how some pre-1960s biologists were led astray by expecting evolution to do smart, nice things like they would do themselves.
(alternate summary:)
Describes a key case where some pre-1960s evolutionary biologists went wrong by anthropomorphizing evolution - in particular, Wynne-Edwards, Allee, and Brereton, among others, believed that predators would voluntarily restrain their breeding to avoid overpopulating their habitat. Since evolution does not usually do this sort of thing, their rationale was group selection - populations that did this would survive better. But group selection is extremely difficult to make work mathematically, and an experiment under conditions sufficiently extreme to permit group selection had rather different results.
Fake Selfishness
Many people who espouse a philosophy of selfishness aren't really selfish. If they were selfish, there are a lot more productive things to do with their time than espouse selfishness, for instance. Instead, individuals who proclaim themselves selfish do whatever it is they actually want, including altruism, but can always find some sort of self-interest rationalization for their behavior.
Fake Morality
Many people provide fake reasons for their own moral reasoning. Religious people claim that the only reason people don't murder each other is because of God. Proponents of selfishness provide altruistic justifications for selfishness; altruists provide selfish justifications for altruism. If you want to know how moral someone is, don't look at their reasons. Look at what they actually do.
Fake Optimization Criteria
Why study evolution? For one thing, it lets us see an alien optimization process up close - lets us see the real consequence of optimizing strictly for an alien criterion like inclusive genetic fitness. Humans, accustomed to persuading other humans to do things their way, expect that such a criterion ought to require predators to restrain their breeding and live in harmony with their prey; the true result is something that humans find far less aesthetic.
Adaptation-Executers, not Fitness-Maximizers
A central principle of evolutionary biology in general, and evolutionary psychology in particular. If we regarded human taste buds as trying to maximize fitness, we might expect that humans fed a diet too high in calories and too low in micronutrients would begin to find lettuce delicious and cheeseburgers distasteful. But it is better to regard taste buds as executing an adaptation: they are adapted to an ancestral environment in which calories, not micronutrients, were the limiting factor.
Evolutionary Psychology
The human brain, and every capacity for thought and emotion within it, is a collection of adaptations selected for by evolution. Humans have the ability to feel anger for the same reason that birds have wings: ancient humans and birds with those adaptations had more kids. But it is easy to forget that there is a distinction between the reason humans have the ability to feel anger, and the reason a particular person is angry at a particular thing. Human brains are adaptation-executers, not fitness-maximizers.
Protein Reinforcement and DNA Consequentialism
Protein brains can learn much faster than DNA, yet DNA embodies a kind of learning of its own. The evolutionary hypothesis is so complex that no species other than humans is capable of explicitly representing it, and yet DNA seems to implicitly "understand" it. The difference is that DNA learns only through actual consequences, while protein brains can simply imagine the consequences.
Thou Art Godshatter
Describes the evolutionary psychology behind the complexity of human values - how they got to be complex, and why, given that origin, there is no reason in hindsight to expect them to be simple. We certainly are not built to maximize genetic fitness.
(alternate summary:)
Being a thousand shards of desire isn't always fun, but at least it's not boring. Somewhere along the line, we evolved tastes for novelty, complexity, elegance, and challenge - tastes that judge the blind idiot god's monomaniacal focus, and find it aesthetically unsatisfying.
Terminal Values and Instrumental Values
Proposes a formalism for a discussion of the relationship between terminal and instrumental values. Terminal values are world states that we assign some sort of positive or negative worth to. Instrumental values are links in a chain of events that lead to desired world states.
Evolving to Extinction
Contrary to a naive view that evolution works for the good of a species, evolution says that genes which outreproduce their alternative alleles increase in frequency within a gene pool. It is entirely possible for genes which "harm" the species to outcompete their alternatives in this way - indeed, it is entirely possible for a species to evolve to extinction.
(alternate summary:)
On how evolution could be responsible for the bystander effect.
(alternate summary:)
It is a common misconception that evolution works for the good of a species, but actually evolution only cares about the inclusive fitness of genes relative to each other, and so it is quite possible for a species to evolve to extinction.
No Evolutions for Corporations or Nanodevices
Price's Equation describes quantitatively how the change in an average trait, in each generation, is equal to the covariance between that trait and fitness. Such covariance requires substantial variation in traits, substantial variation in fitness, and substantial correlation between the two - and then, to get large cumulative selection pressures, the correlation must have persisted over many generations with high-fidelity inheritance, continuing sources of new variation, and frequent birth of a significant fraction of the population. People think of "evolution" as something that automatically gets invoked where "reproduction" exists, but these other conditions may not be fulfilled - which is why corporations haven't evolved, and nanodevices probably won't.
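The covariance relationship summarized above can be verified numerically. The sketch below uses the simple form of Price's Equation (perfect inheritance, no transmission bias), with illustrative trait and fitness numbers that are not from the post:

```python
import random

random.seed(0)
n = 10_000
# Each individual has a trait value z; fitness w correlates with z
# (clipped at zero so fitness stays non-negative).
z = [random.gauss(0, 1) for _ in range(n)]
w = [max(0.0, 1 + 0.5 * zi + random.gauss(0, 0.1)) for zi in z]

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

# With perfect inheritance, the next generation's mean trait weights
# each parent by its fitness.
z_next = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
delta_z = z_next - mean(z)

# Price's Equation (no transmission term): change in mean trait
# equals Cov(w, z) / mean fitness.
predicted = cov(w, z) / mean(w)
print(abs(delta_z - predicted) < 1e-9)  # the identity holds exactly
```

Because this simple form is an algebraic identity, the two quantities agree to floating-point precision for any choice of traits and fitnesses.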
The Simple Math of Everything
It is enormously advantageous to know the basic mathematical equations underlying a field. Understanding a few simple equations of evolutionary biology, knowing how to use Bayes' Rule, and understanding the wave equation for sound in air are not enormously difficult challenges, yet knowing them greatly enhances your own capabilities.
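As a small illustration of the Bayes' Rule item, here is the classic medical-test calculation; the prevalence and test-accuracy numbers are the usual textbook figures, not taken from the post:

```python
# Hypothetical test: 1% prevalence, 80% sensitivity, 9.6% false-positive rate.
p_disease = 0.01
p_pos_given_disease = 0.80
p_pos_given_healthy = 0.096

# Bayes' Rule: P(disease | positive) = P(pos | disease) P(disease) / P(pos)
p_pos = (p_disease * p_pos_given_disease
         + (1 - p_disease) * p_pos_given_healthy)
p_disease_given_pos = p_disease * p_pos_given_disease / p_pos
print(f"{p_disease_given_pos:.3f}")  # 0.078: a positive result is still weak evidence
```

Knowing this one equation is enough to see why a positive result on a rare-disease test usually means the patient is probably still healthy.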
Conjuring An Evolution To Serve You
If you take the hens who lay the most eggs in each generation, and breed from them, you should get hens who lay more and more eggs. Sounds logical, right? But this selection may actually favor the most dominant hens, which pecked their way to the top of the pecking order at the expense of other hens. Such breeding programs produce hens that must be housed in individual cages, or they will peck each other to death. Jeff Skilling of Enron fancied himself an evolution-conjurer - summoning the awesome power of evolution to work for him - and so, every year, every Enron employee's performance would be evaluated, and the bottom 10% would get fired, and the top performers would get huge raises and bonuses...
Artificial Addition
If you imagine a world where people are stuck on the "artificial addition" (i.e. machine calculator) problem, the way people currently are stuck on artificial intelligence, and you saw them trying the same popular approaches taken today toward AI, it would become clear how silly they are. Contrary to popular wisdom (in that world or ours), the solution is not to "evolve" an artificial adder, or invoke the need for special physics, or build a huge database of solutions, etc. -- because all of these methods dodge the crucial task of understanding what addition involves, and instead try to dance around it. Moreover, the history of AI research shows the problems of believing assertions one cannot re-generate from one's own knowledge.
Truly Part Of You
Any time you believe you've learned something, you should ask yourself, "Could I re-generate this knowledge if it were somehow deleted from my mind, and how would I do so?" If the supposed knowledge is just empty buzzwords, you will recognize that you can't, and therefore that you haven't learned anything. But if it's an actual model of reality, this method will reinforce how the knowledge is entangled with the rest of the world, enabling you to apply it to other domains, and know when you need to update those beliefs. It will have become "truly part of you", growing and changing with the rest of your knowledge.
Not for the Sake of Happiness (Alone)
Tackles the Hollywood Rationality trope that "rational" preferences must reduce to selfish hedonism - caring strictly about personally experienced pleasure. An ideal Bayesian agent - implementing strict Bayesian decision theory - can have a utility function that ranges over anything, not just internal subjective experiences.
Leaky Generalizations
The words and statements that we use are inherently "leaky": they do not precisely convey absolute and perfect information. Most humans have ten fingers, but if you know that someone is a human, you cannot conclude (with probability 1) that they have ten fingers. The same holds for planning and ethical advice.
The Hidden Complexity of Wishes
There are a lot of things that humans care about. Therefore, the wishes that we make (as if to a genie) are enormously more complicated than we would intuitively suspect. In order to safely ask a powerful, intelligent being to do something for you, that being must share your entire decision criterion, or else the outcome will likely be horrible.
Lost Purposes
On noticing when you're still doing something that has become disconnected from its original purpose.
(alternate summary)
It is possible for the various steps in a complex plan to become valued in and of themselves, rather than as steps to achieve some desired goal. It is especially easy if the plan is being executed by a complex organization, where each group or individual in the organization is only evaluated by whether or not they carry out their assigned step. When this process is carried to its extreme, we get Soviet shoe factories manufacturing tiny shoes to increase their production quotas, and the No Child Left Behind Act.
Purpose and Pragmatism
It is easier to get trapped in a mistake of cognition if you have no practical purpose for your thoughts. Although pragmatic usefulness is not the same thing as truth, there is a deep connection between the two.
The Affect Heuristic
Positive and negative emotional impressions exert a greater effect on many decisions than does rational analysis.
Evaluability (And Cheap Holiday Shopping)
It's difficult for humans to evaluate an option except in comparison to other options. Poor decisions result when a poor category for comparison is used. Includes an application for cheap gift-shopping.
(alternate summary:)
Is there a way to exploit human biases to give the impression of largess with cheap gifts? Yes. Humans compare the value/price of an object to other similar objects. A $399 Eee PC is cheap (because other laptops are more expensive), yet a $399 PS3 is expensive (because the alternatives are less expensive). To give the impression of expense in a gift, choose a cheap class of item (say, a candle) and buy the most expensive one around.
Unbounded Scales, Huge Jury Awards, & Futurism
Without a metric for comparison, estimates of, e.g., what sorts of punitive damages should be awarded, or when some future advance will happen, vary widely simply due to the lack of a scale.
The Halo Effect
Positive qualities seem to correlate with each other, whether or not they actually do.
Superhero Bias
It is better to risk your life to save 200 people than to save 3. But someone who risks their life to save 3 people is revealing a more altruistic nature than someone risking their life to save 200. And yet comic books are written about heroes who save 200 innocent schoolchildren, and not police officers saving three prostitutes.
Mere Messiahs
John Perry, an extropian and a transhumanist, died when the north tower of the World Trade Center fell. He knew he was risking his existence to save other people, and he had hope that he might be able to avoid death, but he still helped them. This took far more courage than dying while expecting to be rewarded in an afterlife for one's virtue.
Affective Death Spirals
Human beings can fall into a feedback loop around something they hold dear. They use their Great Idea to explain every situation they consider; because the Great Idea explained this situation, it now gains weight, and so they should use it to explain still more situations. This loop can continue until they believe Belgium controls the US banking system, or that they can use an invisible blue spirit force to locate parking spots.
Resist the Happy Death Spiral
You can avoid a Happy Death Spiral by (1) splitting the Great Idea into parts (2) treating every additional detail as burdensome (3) thinking about the specifics of the causal chain instead of the good or bad feelings (4) not rehearsing evidence (5) not adding happiness from claims that "you can't prove are wrong"; but not by (6) refusing to admire anything too much (7) conducting a biased search for negative points until you feel unhappy again (8) forcibly shoving an idea into a safe box.
Uncritical Supercriticality
One of the most dangerous mistakes that a human being with human psychology can make, is to begin thinking that any argument against their favorite idea must be wrong, because it is against their favorite idea. Alternatively, they could think that any argument that supports their favorite idea must be right. This failure of reasoning has led to massive amounts of suffering and death in world history.
Fake Fake Utility Functions
Describes Eliezer's motivations in the sequence leading up to his post on Fake Utility Functions.
Fake Utility Functions
Describes the seeming fascination that many have with trying to compress morality down to a single principle. The sequence leading up to this post tries to explain the cognitive twists whereby people smuggle all of their complicated other preferences into their choice of exactly which acts they try to justify using their single principle; but if they were really following only that single principle, they would choose other acts to justify.
Evaporative Cooling of Group Beliefs
When a cult encounters a blow to its beliefs (a prediction fails to come true, the leader is caught in a scandal, etc.), the cult will often become more fanatical. In the immediate aftermath, the members who leave will be the ones who were previously the voice of opposition, skepticism, and moderation. Without those members, the cult slides further in the direction of fanaticism.
When None Dare Urge Restraint
The dark mirror to the happy death spiral is the spiral of hate. When everyone looks good for attacking someone, and anyone who disagrees with any attack must be a sympathizer to the enemy, the results are usually awful. It is too dangerous for there to be anyone in the world that we would prefer to say negative things about, over saying accurate things about.
The Robbers Cave Experiment
The Robbers Cave Experiment, by Sherif, Harvey, White, Hood, and Sherif (1954/1961), was designed to investigate the causes and remedies of problems between groups. Twenty-two middle-school-aged boys were divided into two groups and placed in a summer camp. From the first time the groups learned of each other's existence, a brutal rivalry began. The only way the counselors managed to bring the groups together was by giving the two groups a common enemy. Any resemblance to modern politics is just your imagination.
Misc Meta
An obsolete meta post.
Every Cause Wants To Be A Cult
Simply having a good idea at the center of a group of people is not enough to prevent that group from becoming a cult. As long as the idea's adherents are human, they will be vulnerable to the flaws in reasoning that cause cults. Simply basing a group around the idea of being rational is not enough. You have to actually put in the work to oppose the slide into cultishness.
Reversed Stupidity Is Not Intelligence
The world's greatest fool may say the Sun is shining, but that doesn't make it dark out. Stalin also believed that 2 + 2 = 4. Stupidity or human evil do not anticorrelate with truth. Arguing against weaker advocates proves nothing, because even the strongest idea will attract weak advocates.
Argument Screens Off Authority
There are many cases in which we should take the authority of experts into account when deciding whether or not to believe their claims. But if technical arguments are available, they can screen off the authority of the experts.
Hug the Query
The more directly your arguments bear on a question, without intermediate inferences, the more powerful the evidence. We should try to observe evidence that is as near to the original question as possible, so that it screens off as many other arguments as possible.
Guardians of the Truth
There is an enormous psychological difference between believing that you absolutely, certainly, have the truth, versus trying to discover the truth. If you believe that you have the truth, and that it must be protected from heretics, torture and murder follow. Alternatively, if you believe that you are close to the truth, but perhaps not there yet, someone who disagrees with you is simply wrong, not a mortal enemy.
Guardians of the Gene Pool
It is a common misconception that the Nazis wanted their eugenics program to create a new breed of supermen. In fact, they wanted to breed back to the archetypal Nordic man. They located their ideals in the past, which is a counterintuitive idea for many of us.
Guardians of Ayn Rand
Ayn Rand, the leader of the Objectivists, praised reason and rationality. The group she created became a cult. Praising rationality does not provide immunity to the human trend towards cultishness.
The Litany Against Gurus
A piece of poetry written to describe the proper attitude to take towards a mentor, or a hero.
Politics and Awful Art
When producing art that has some sort of political purpose behind it (like persuading people, or conveying a message), don't forget to actually make it art. It can't just be politics.
Two Cult Koans
Two Koans about individuals concerned that they may have joined a cult.
False Laughter
Finding a blow to the hated enemy to be funny is a dangerous feeling, especially if that is the only reason why the joke is funny. Jokes should be funny on their own merits before they become deserving of laughter.
Effortless Technique
Things like the amount of effort put into a project, or the number of lines in a computer program, are often treated as positive things to maximize. But this is silly: surely it is better to accomplish the same task with fewer lines of code.
Zen and the Art of Rationality
Rationality is very different in its propositional statements from Eastern religions, like Taoism or Buddhism. But, it is sometimes easier to express ideas in rationality using the language of Zen or the Tao.
The Amazing Virgin Pregnancy
A story in which Mary tells Joseph that God made her pregnant so Joseph won't realize she's been cheating on him with the village rabbi.
Asch's Conformity Experiment
The unanimous agreement of surrounding others can make subjects disbelieve (or at least, fail to report) what's right before their eyes. The addition of just one dissenter is enough to dramatically reduce the rates of improper conformity.
On Expressing Your Concerns
A way of breaking the conformity effect in some cases.
Lonely Dissent
Joining a revolution does take courage, but it is something that humans can reliably do. It is comparatively more difficult to risk death. But it is more difficult than either of these to be the first person in a rebellion - to be the only one saying something different. That doesn't feel like going to school in black. It feels like going to school in a clown suit.
To Lead, You Must Stand Up
To take a leadership role, you first have to get people's attention, and this is often harder than it seems. If what you attempt fails, or if people don't follow you, you risk embarrassment. Deal with it.
Cultish Countercultishness
People often nervously ask, "This isn't a cult, is it?" when encountering a group that thinks something weird. There are many reasons why this question doesn't make sense. For one thing, if you really were a member of a cult, you would not say so. Instead, what you should do when considering whether or not to join a group, is consider the details of the group itself. Is their reasoning sound? Do they do awful things to their members?
My Strange Beliefs
Eliezer explains that he references transhumanism on Overcoming Bias not to proselytize, but because he is highly involved in the transhumanist community, and it would be rather impossible for him to share lessons about rationality from his personal experiences otherwise.
End of 2007 articles
2008 Articles
Posting on Politics
Eliezer warns readers that he is about to make a few posts directly discussing politics.
The Two-Party Swindle
Voters for either political party usually have more in common with each other than they do with the politicians they vote for. And yet, they support their own "team members" with fanatic devotion. Nobody is allowed to criticize their own team's politicians, without their fellow voters accusing them of treason.
The American System and Misleading Labels
The conclusions we draw from analyzing the American political system are often biased by our own previous understanding of it, which we got in elementary school. In fact, the power of voting for a particular candidate (which is not the same as the power to choose which candidates will run) is not the greatest power of the voters. Instead, voters' main ability is the threat to change which party controls the government, or, extremely rarely, to completely dethrone both political parties and replace them with a third.
Stop Voting For Nincompoops
Many people try to vote "strategically", by considering which candidate is more "electable". One of the most important factors in whether someone is "electable" is whether they have received attention from the media and the support of one of the two major parties. Naturally, those organizations put considerable thought into who is electable in making their decision. Ultimately, all arguments for "strategic voting" tend to fall apart. The voters themselves get so little say in who the next president is that the best we can do is just to not vote for nincompoops.
Rational vs. Scientific Ev-Psych
In evolutionary biology or psychology, a nice-sounding but untested theory is referred to as a "just-so story", after the stories written by Rudyard Kipling. But if there is merely a way to test the theory, people tend to consider it more likely to be correct - even before the test has been performed. This is not a rational tendency.
A Failed Just-So Story
Part of the reason professional evolutionary biologists dislike just-so stories is that many of them are simply wrong.
But There's Still A Chance, Right?
Sometimes, you calculate the probability of a certain event and find that the number is so unbelievably small that your brain really can't keep track of how small it is, any more than you can spot an individual grain of sand on a beach from 100 meters off. But, because you're already thinking about that event enough to calculate the probability of it, it feels like it's still worth keeping track of. It's not.
The Fallacy of Gray
Nothing is perfectly black or white. Everything is gray. However, this does not mean that everything is the same shade of gray. It may be impossible to completely eliminate bias, but it is still worth reducing bias.
Absolute Authority
Those without the understanding of the Quantitative Way will often map the process of arriving at beliefs onto the social domains of Authority. They think that if Science is not infinitely certain, or if it has ever admitted a mistake, then it is no longer a trustworthy source, and can be ignored. This cultural gap is rather difficult to cross.
Infinite Certainty
If you say you are 99.9999% confident of a proposition, you're saying that you could make one million statements of equal confidence and be wrong, on average, only once. Probability 1 indicates a state of infinite certainty. Furthermore, once you assign probability 1 to a proposition, Bayes' theorem says it can never be changed in response to any evidence. Probability 1 is a lot harder to get to with a human brain than you would think.
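The claim that probability 1 can never be updated follows directly from Bayes' theorem. A minimal sketch, with hypothetical likelihood numbers:

```python
def update(prior, lik_h, lik_not_h):
    """P(H | E) from P(H), P(E | H), and P(E | not-H), via Bayes' theorem."""
    return prior * lik_h / (prior * lik_h + (1 - prior) * lik_not_h)

# Evidence 1000x more likely if H is false knocks down any ordinary prior...
print(update(0.999, 0.001, 1.0))  # ~0.5
# ...but a prior of exactly 1 zeroes out the alternative hypothesis entirely,
# so the posterior is 1 no matter what the evidence says.
print(update(1.0, 0.001, 1.0))    # 1.0
```

The (1 - prior) factor is the whole story: with prior 1, the alternative hypothesis gets literally zero weight, and no finite evidence can revive it.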
0 And 1 Are Not Probabilities
In the ordinary way of writing probabilities, 0 and 1 both seem like entirely reachable quantities. But when you transform probabilities into odds ratios, or log-odds, you realize that in order to get a proposition to probability 1 would require an infinite amount of evidence.
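The odds-ratio transformation mentioned here is easy to demonstrate; a quick sketch:

```python
import math

def log_odds_bits(p):
    """Convert a probability to log-odds, measured in bits of evidence."""
    return math.log2(p / (1 - p))

# Each extra "nine" of confidence multiplies the odds by roughly 10,
# costing about 3.3 more bits of evidence; probability 1 sits at
# +infinity, which no finite amount of evidence reaches.
for p in (0.5, 0.9, 0.99, 0.999, 0.999999):
    print(f"p = {p}: {log_odds_bits(p):+.1f} bits")
```

On this scale the symmetry is visible: 0 and 1 are not points you can land on, but directions you can move in forever.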
Beautiful Math
The joy of mathematics is inventing mathematical objects, and then noticing that the mathematical objects that you just created have all sorts of wonderful properties that you never intentionally built into them. It is like building a toaster and then realizing that your invention also, for some unexplained reason, acts as a rocket jetpack and MP3 player.
Expecting Beauty
Mathematicians expect that if you dig deep enough, a stable, or even beautiful, pattern will emerge. Some people claim that this belief is unfounded. But, we have previously found order in many of the places we've looked for it.
Is Reality Ugly?
There are three reasons why a world governed by math can still seem messy. First, we may not actually know the math. Second, even if we do know all of the math, we may not have enough computing power to do the full calculation. And finally, even if we did know all the math, and we could compute it, we still don't know where in the mathematical system we are living.
Beautiful Probability
Bayesians expect probability theory, and rationality itself, to be math: self-consistent, neat, even beautiful. This is why Bayesians think that Cox's theorems are so important.
Trust in Math
When you find a seeming inconsistency in the rules of math, or logic, or probability theory, you might do well to consider that math has rightfully earned a bit more credibility than that. Check the proof. It is more likely that you have made a mistake in algebra, than that you have just discovered a fatal flaw in math itself.
Rationality Quotes 1
Rationality Quotes 2
Rationality Quotes 3
The Allais Paradox
(and subsequent followups) - Offered choices between gambles, people make decision-theoretically inconsistent decisions.
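For concreteness, the standard textbook formulation of the Allais gambles can be written out directly (payoffs in millions of dollars; the specific numbers are the usual ones, not quoted from the post):

```python
def expected_value(gamble):
    """Expected payoff of a gamble given as {payoff_in_millions: probability}."""
    return sum(payoff * p for payoff, p in gamble.items())

g1a = {1: 1.00}                       # $1M with certainty
g1b = {1: 0.89, 5: 0.10, 0: 0.01}
g2a = {1: 0.11, 0: 0.89}
g2b = {5: 0.10, 0: 0.90}

# Most people prefer 1A over 1B, but 2B over 2A - even though the second
# pair is just the first pair with an 89% chance of $1M subtracted from
# both options, so a consistent expected-utility maximizer must choose
# the same way in both pairs.
print(expected_value(g1a), expected_value(g1b))  # 1.0 1.39
print(expected_value(g2a), expected_value(g2b))  # 0.11 0.5
```

The inconsistency is not about which pair member has the higher expected value; it is that the common 89% component cannot matter to the choice, yet reversing preferences between the pairs means it did.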
Zut Allais!
Offered choices between gambles, people make decision-theoretically inconsistent decisions.
Rationality Quotes 4
Allais Malaise
Offered choices between gambles, people make decision-theoretically inconsistent decisions.
Against Discount Rates
We really shouldn't care less about the future than we do about the present.
Circular Altruism
Our moral preferences shouldn't be circular. If a policy A is better than B, and B is better than C, and C is better than D, and so on, then policy A really should be better than policy Z.
Rationality Quotes 5
Rationality Quotes 6
Rationality Quotes 7
Rationality Quotes 8
Rationality Quotes 9
The "Intuitions" Behind "Utilitarianism"
Our intuitions, the underlying cognitive tricks that we use to build our thoughts, are an indispensable part of our cognition. The problem is that many of those intuitions are incoherent, or are undesirable upon reflection. But if you try to "renormalize" your intuition, you wind up with what is essentially utilitarianism.
Trust in Bayes
There is a long history of people claiming to have found paradoxes in Bayesian probability theory. Typically, these proofs are fallacious but correct-seeming, just as apparent proofs that 2 = 1 are. In probability theory, though, the illegal operation is usually not a hidden division by zero, but rather an infinity that is not arrived at as the limit of a finite calculation. Once you are more careful with your math, these paradoxes typically go away.
Something to Protect
Many people only start to grow as a rationalist when they find something that they care about more than they care about rationality itself. It takes something really scary to cause you to override your intuitions with math.
Newcomb's Problem and Regret of Rationality
Newcomb's problem is a very famous decision theory problem in which the rational move appears to be consistently punished. This is the wrong attitude to take. Rationalists should win. If your particular ritual of cognition consistently fails to yield good results, change the ritual.
OB Meetup: Millbrae, Thu 21 Feb, 7pm
The Parable of the Dagger
A word fails to connect to reality in the first place. Is Socrates a framster? Yes or no?
The Parable of Hemlock
Socrates is a human, and humans, by definition, are mortal. So if you defined humans to not be mortal, would Socrates live forever?
(alternate summary:)
Your argument, if it worked, could coerce reality to go a different way by choosing a different word definition. Socrates is a human, and humans, by definition, are mortal. So if you defined humans to not be mortal, would Socrates live forever?
You try to establish any sort of empirical proposition as being true "by definition". Socrates is a human, and humans, by definition, are mortal. So is it a logical truth if we empirically predict that Socrates should keel over if he drinks hemlock? It seems like there are logically possible, non-self-contradictory worlds where Socrates doesn't keel over - where he's immune to hemlock by a quirk of biochemistry, say. Logical truths are true in all possible worlds, and so never tell you which possible world you live in - and anything you can establish "by definition" is a logical truth.
You unconsciously slap the conventional label on something, without actually using the verbal definition you just gave. You know perfectly well that Bob is "human", even though, on your definition, you can never call Bob "human" without first observing him to be mortal.
Words as Hidden Inferences
The mere presence of words can influence thinking, sometimes misleading it.
(alternate summary:)
The mere presence of words can influence thinking, sometimes misleading it.
The act of labeling something with a word disguises a challengeable inductive inference you are making. If the last 11 egg-shaped objects drawn have been blue, and the last 8 cubes drawn have been red, it is a matter of induction to say this rule will hold in the future. But if you call the blue eggs "bleggs" and the red cubes "rubes", you may reach into the barrel, feel an egg shape, and think "Oh, a blegg."
Extensions and Intensions
You try to define a word using words, in turn defined with ever-more-abstract words, without being able to point to an example.
(alternate summary:)
You try to define a word using words, in turn defined with ever-more-abstract words, without being able to point to an example. "What is red?" "Red is a color." "What's a color?" "It's a property of a thing?" "What's a thing? What's a property?" It never occurs to you to point to a stop sign and an apple.
The extension doesn't match the intension. We aren't consciously aware of our identification of a red light in the sky as "Mars", which will probably happen regardless of any attempt to define "Mars" as "The God of War".
Buy Now Or Forever Hold Your Peace
If you really think that your reasoning is superior to that of prediction markets, there is free money available to you right now. If you aren't picking it up, you clearly don't really believe that you can beat the markets.
Similarity Clusters
Your verbal definition doesn't capture more than a tiny fraction of the category's shared characteristics, but you try to reason as if it does.
(alternate summary:)
Your verbal definition doesn't capture more than a tiny fraction of the category's shared characteristics, but you try to reason as if it does. When the philosophers of Plato's Academy claimed that the best definition of a human was a "featherless biped", Diogenes the Cynic is said to have exhibited a plucked chicken and declared "Here is Plato's Man." The Platonists promptly changed their definition to "a featherless biped with broad nails".
Typicality and Asymmetrical Similarity
You try to treat category membership as all-or-nothing, ignoring the existence of more and less typical subclusters.
(alternate summary:)
You try to treat category membership as all-or-nothing, ignoring the existence of more and less typical subclusters. Ducks and penguins are less typical birds than robins and pigeons. Interestingly, a between-groups experiment showed that subjects thought a disease was more likely to spread from robins to ducks on an island, than from ducks to robins.
The Cluster Structure of Thingspace
A verbal definition works well enough in practice to point out the intended cluster of similar things, but you nitpick exceptions.
(alternate summary:)
A verbal definition works well enough in practice to point out the intended cluster of similar things, but you nitpick exceptions. Not every human has ten fingers, or wears clothes, or uses language; but if you look for an empirical cluster of things which share these characteristics, you'll get enough information that the occasional nine-fingered human won't fool you.
Disguised Queries
You ask whether something "is" or "is not" a category member but can't name the question you really want answered.
(alternate summary:)
You ask whether something "is" or "is not" a category member but can't name the question you really want answered. What is a "man"? Is Barney the Baby Boy a "man"? The "correct" answer may depend considerably on whether the query you really want answered is "Would hemlock be a good thing to feed Barney?" or "Will Barney make a good husband?"
Neural Categories
You treat intuitively perceived hierarchical categories as the only correct way to parse the world, without realizing that other forms of statistical inference are possible even though your brain doesn't use them.
(alternate summary:)
You treat intuitively perceived hierarchical categories as the only correct way to parse the world, without realizing that other forms of statistical inference are possible even though your brain doesn't use them. It's much easier for a human to notice whether an object is a "blegg" or a "rube" than to notice that red objects never glow in the dark, but red furred objects have all the other characteristics of bleggs. Other statistical algorithms work differently.
How An Algorithm Feels From Inside
You talk about categories as if they are manna fallen from the Platonic Realm, rather than inferences implemented in a real brain.
(alternate summary:)
You talk about categories as if they are manna fallen from the Platonic Realm, rather than inferences implemented in a real brain. The ancient philosophers said "Socrates is a man", not, "My brain perceptually classifies Socrates as a match against the 'human' concept".
You argue about a category membership even after screening off all questions that could possibly depend on a category-based inference. After you observe that an object is blue, egg-shaped, furred, flexible, opaque, luminescent, and palladium-containing, what's left to ask by arguing, "Is it a blegg?" But if your brain's categorizing neural network contains a (metaphorical) central unit corresponding to the inference of blegg-ness, it may still feel like there's a leftover question.
(see also the wiki page)
Disputing Definitions
An example of how the technique helps.
(alternate summary:)
You allow an argument to slide into being about definitions, even though it isn't what you originally wanted to argue about. If, before a dispute started about whether a tree falling in a deserted forest makes a "sound", you asked the two soon-to-be arguers whether they thought a "sound" should be defined as "acoustic vibrations" or "auditory experiences", they'd probably tell you to flip a coin. Only after the argument starts does the definition of a word become politically charged.
Feel the Meaning
You think a word has a meaning, as a property of the word itself; rather than there being a label that your brain associates to a particular concept.
(alternate summary:)
You think a word has a meaning, as a property of the word itself; rather than there being a label that your brain associates to a particular concept. When someone shouts, "Yikes! A tiger!", evolution would not favor an organism that thinks, "Hm... I have just heard the syllables 'Tie' and 'Grr' which my fellow tribemembers associate with their internal analogues of my own tiger concept and which aiiieeee CRUNCH CRUNCH GULP." So the brain takes a shortcut, and it seems that the meaning of tigerness is a property of the label itself. People argue about the correct meaning of a label like "sound".
The Argument from Common Usage
You argue over the meanings of a word, even after all sides understand perfectly well what the other sides are trying to say.
(alternate summary:)
You argue over the meanings of a word, even after all sides understand perfectly well what the other sides are trying to say. The human ability to associate labels to concepts is a tool for communication. When people want to communicate, we're hard to stop; if we have no common language, we'll draw pictures in sand. When you each understand what is in the other's mind, you are done.
You pull out a dictionary in the middle of an empirical or moral argument. Dictionary editors are historians of usage, not legislators of language. If the common definition contains a problem - if "Mars" is defined as the God of War, or a "dolphin" is defined as a kind of fish, or "Negroes" are defined as a separate category from humans - the dictionary will reflect the standard mistake.
You pull out a dictionary in the middle of any argument ever. Seriously, what the heck makes you think that dictionary editors are an authority on whether "atheism" is a "religion" or whatever? If you have any substantive issue whatsoever at stake, do you really think dictionary editors have access to ultimate wisdom that settles the argument?
You defy common usage without a reason, making it gratuitously hard for others to understand you. Fast stand up plutonium, with bagels without handle.
Empty Labels
You use complex renamings to create the illusion of inference.
(alternate summary:)
You use complex renamings to create the illusion of inference. Is a "human" defined as a "mortal featherless biped"? Then write: "All [mortal featherless bipeds] are mortal; Socrates is a [mortal featherless biped]; therefore, Socrates is mortal." Looks less impressive that way, doesn't it?
Classic Sichuan in Millbrae, Thu Feb 21, 7pm
Taboo Your Words
When a word poses a problem, the simplest solution is to eliminate the word and its synonyms.
(alternate summary:)
If Albert and Barry aren't allowed to use the word "sound", then Albert will have to say "A tree falling in a deserted forest generates acoustic vibrations", and Barry will say "A tree falling in a deserted forest generates no auditory experiences". When a word poses a problem, the simplest solution is to eliminate the word and its synonyms.
Replace the Symbol with the Substance
Description of the technique.
(alternate summary:)
The existence of a neat little word prevents you from seeing the details of the thing you're trying to think about.
(alternate summary:)
The existence of a neat little word prevents you from seeing the details of the thing you're trying to think about. What actually goes on in schools once you stop calling it "education"? What's a degree, once you stop calling it a "degree"? If a coin lands "heads", what's its radial orientation? What is "truth", if you can't say "accurate" or "correct" or "represent" or "reflect" or "semantic" or "believe" or "knowledge" or "map" or "real" or any other simple term?
Fallacies of Compression
You have only one word, but there are two or more different things-in-reality, so that all the facts about them get dumped into a single undifferentiated mental bucket.
(alternate summary:)
You have only one word, but there are two or more different things-in-reality, so that all the facts about them get dumped into a single undifferentiated mental bucket. It's part of a detective's ordinary work to observe that Carol wore red last night, or that she has black hair; and it's part of a detective's ordinary work to wonder if maybe Carol dyes her hair. But it takes a subtler detective to wonder if there are two Carols, so that the Carol who wore red is not the same as the Carol who had black hair.
Categorizing Has Consequences
You see patterns where none exist, harvesting other characteristics from your definitions even when there is no similarity along that dimension.
(alternate summary:)
You see patterns where none exist, harvesting other characteristics from your definitions even when there is no similarity along that dimension. In Japan, it is thought that people of blood type A are earnest and creative, blood type Bs are wild and cheerful, blood type Os are agreeable and sociable, and blood type ABs are cool and controlled.
Sneaking in Connotations
You try to sneak in the connotations of a word, by arguing from a definition that doesn't include the connotations.
(alternate summary:)
You try to sneak in the connotations of a word, by arguing from a definition that doesn't include the connotations. A "wiggin" is defined in the dictionary as a person with green eyes and black hair. The word "wiggin" also carries the connotation of someone who commits crimes and launches cute baby squirrels, but that part isn't in the dictionary. So you point to someone and say: "Green eyes? Black hair? See, told you he's a wiggin! Watch, next he's going to steal the silverware."
Arguing "By Definition"
You claim "X, by definition, is a Y!" On such occasions you're almost certainly trying to sneak in a connotation of Y that wasn't in your given definition.
(alternate summary:)
You claim "X, by definition, is a Y!" On such occasions you're almost certainly trying to sneak in a connotation of Y that wasn't in your given definition. You define "human" as a "featherless biped", and point to Socrates and say, "No feathers - two legs - he must be human!" But what you really care about is something else, like mortality. If what was in dispute was Socrates's number of legs, the other fellow would just reply, "Whaddaya mean, Socrates's got two legs? That's what we're arguing about in the first place!"
You claim "Ps, by definition, are Qs!" If you see Socrates out in the field with some biologists, gathering herbs that might confer resistance to hemlock, there's no point in arguing "Men, by definition, are mortal!" The main time you feel the need to tighten the vise by insisting that something is true "by definition" is when there's other information that calls the default inference into doubt.
You try to establish membership in an empirical cluster "by definition". You wouldn't feel the need to say, "Hinduism, by definition, is a religion!" because, well, of course Hinduism is a religion. It's not just a religion "by definition", it's, like, an actual religion. Atheism does not resemble the central members of the "religion" cluster, so if it wasn't for the fact that atheism is a religion by definition, you might go around thinking that atheism wasn't a religion. That's why you've got to crush all opposition by pointing out that "Atheism is a religion" is true by definition, because it isn't true any other way.
Where to Draw the Boundary?
Your definition draws a boundary around things that don't really belong together.
(alternate summary:)
Your definition draws a boundary around things that don't really belong together. You can claim, if you like, that you are defining the word "fish" to refer to salmon, guppies, sharks, dolphins, and trout, but not jellyfish or algae. You can claim, if you like, that this is merely a list, and there is no way a list can be "wrong". Or you can stop playing nitwit games and admit that you made a mistake and that dolphins don't belong on the fish list.
Entropy, and Short Codes
Which sounds more plausible, "God did a miracle" or "A supernatural universe-creating entity temporarily suspended the laws of physics"?
(alternate summary:)
You use a short word for something that you won't need to describe often, or a long word for something you'll need to describe often. This can result in inefficient thinking, or even misapplications of Occam's Razor, if your mind thinks that short sentences sound "simpler". Which sounds more plausible, "God did a miracle" or "A supernatural universe-creating entity temporarily suspended the laws of physics"?
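The coding principle behind this summary can be made concrete with entropy coding: an efficient code gives short codewords to frequent messages and long codewords to rare ones. A minimal Python sketch using Huffman coding (the example string and the helper name are illustrative, not from the post):

```python
import heapq
from collections import Counter

def huffman_code_lengths(freqs):
    """Return {symbol: codeword length in bits} for an optimal prefix code."""
    # Heap entries: (total frequency, tiebreaker, {symbol: depth so far}).
    heap = [(f, i, {sym: 0}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees pushes every symbol in them one level deeper.
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

freqs = Counter("the quick brown fox jumps over the lazy dog the the the")
lengths = huffman_code_lengths(freqs)
# Frequent symbols (like the space or 'e') get shorter codes than rare
# ones (like 'q' or 'z').
```

The same logic runs in reverse for Occam's Razor: a hypothesis with a short English description is not automatically simple, because natural language already assigns short words to familiar concepts, however complex.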
Mutual Information, and Density in Thingspace
You draw your boundary around a volume of space where there is no greater-than-usual density, meaning that the associated word does not correspond to any performable Bayesian inferences.
(alternate summary:)
You draw your boundary around a volume of space where there is no greater-than-usual density, meaning that the associated word does not correspond to any performable Bayesian inferences. Since green-eyed people are not more likely to have black hair, or vice versa, and they don't share any other characteristics in common, why have a word for "wiggin"?
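The criterion here is quantitative: a word earns its keep when its defining characteristics carry mutual information about each other. A short Python sketch (the trait populations are made up for illustration) shows that independent traits, like the green eyes and black hair of a "wiggin", have zero mutual information, while genuinely co-occurring traits do not:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Mutual information, in bits, between the two coordinates of the pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in joint.items())

# When eye color and hair color are statistically independent,
# knowing one tells you nothing about the other:
independent = [("green", "black"), ("green", "blond"),
               ("brown", "black"), ("brown", "blond")] * 25
# A genuine cluster, where the traits always co-occur:
clustered = [("green", "black"), ("brown", "blond")] * 50

mi_independent = mutual_information(independent)  # 0 bits: no useful word here
mi_clustered = mutual_information(clustered)      # 1 bit: a word pays rent
```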
Superexponential Conceptspace, and Simple Words
You draw an unsimple boundary without any reason to do so.
(alternate summary:)
You draw an unsimple boundary without any reason to do so. The act of defining a word to refer to all humans, except black people, seems kind of suspicious. If you don't present reasons to draw that particular boundary, trying to create an "arbitrary" word in that location is like a detective saying: "Well, I haven't the slightest shred of support one way or the other for who could've murdered those orphans... but have we considered John Q. Wiffleheim as a suspect?"
Leave a Line of Retreat
If you are trying to judge whether some unpleasant idea is true you should visualise what the world would look like if it were true, and what you would do in that situation. This will allow you to be less scared of the idea, and reason about it without immediately trying to reject it.
The Second Law of Thermodynamics, and Engines of Cognition
To form accurate beliefs about something, you really do have to observe it. It's a very physical, very real process: any rational mind does "work" in the thermodynamic sense, not just the sense of mental effort. Engines of cognition are not so different from heat engines, though they manipulate entropy in a more subtle form than burning gasoline. So unless you can tell me which specific step in your argument violates the laws of physics by giving you true knowledge of the unseen, don't expect me to believe that a big, elaborate clever argument can do it either.
Perpetual Motion Beliefs
People learn under the traditional school regimen that the teacher tells you certain things, and you must believe them and recite them back; but if a mere student suggests a belief, you do not have to obey it. They map the domain of belief onto the domain of authority, and think that a certain belief is like an order that must be obeyed, but a probabilistic belief is like a mere suggestion. And when half-trained or tenth-trained rationalists abandon their art and try to believe without evidence just this once, they often build vast edifices of justification, confusing themselves just enough to conceal the magical steps. It can be quite a pain to nail down where the magic occurs - their structure of argument tends to morph and squirm away as you interrogate them. But there's always some step where a tiny probability turns into a large one - where they try to believe without evidence - where they step into the unknown, thinking, "No one can prove me wrong".
Searching for Bayes-Structure
If a mind is arriving at true beliefs, and we assume that the second law of thermodynamics has not been violated, that mind must be doing something at least vaguely Bayesian - at least one process with a sort-of Bayesian structure somewhere - or it couldn't possibly work.
Conditional Independence, and Naive Bayes
You use categorization to make inferences about properties that don't have the appropriate empirical structure, namely, conditional independence given knowledge of the class, to be well-approximated by Naive Bayes.
(alternate summary:)
You use categorization to make inferences about properties that don't have the appropriate empirical structure, namely, conditional independence given knowledge of the class, to be well-approximated by Naive Bayes. No way am I trying to summarize this one. Just read the blog post.
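For readers who want the technical content anyway: Naive Bayes assumes the observed properties are conditionally independent given the class, so it scores each class by multiplying its prior with one likelihood term per property. A hedged Python sketch on the blegg/rube example (the observation counts are invented for illustration):

```python
import math
from collections import Counter, defaultdict

# Invented training data: each object is (color, shape, luminescence).
data = [
    (("blue", "egg", "glows"), "blegg"),
    (("blue", "egg", "glows"), "blegg"),
    (("blue", "egg", "dark"), "blegg"),
    (("red", "cube", "dark"), "rube"),
    (("red", "cube", "dark"), "rube"),
    (("red", "egg", "dark"), "rube"),
]

class_counts = Counter(label for _, label in data)
feat_counts = defaultdict(Counter)   # (class, feature index) -> value counts
vocab = defaultdict(set)             # feature index -> values seen anywhere
for feats, label in data:
    for i, v in enumerate(feats):
        feat_counts[(label, i)][v] += 1
        vocab[i].add(v)

def predict(feats, alpha=1.0):
    """Naive Bayes: class prior times per-feature likelihoods, Laplace-smoothed."""
    total = sum(class_counts.values())
    scores = {}
    for c, n in class_counts.items():
        logp = math.log(n / total)
        for i, v in enumerate(feats):
            logp += math.log((feat_counts[(c, i)][v] + alpha)
                             / (n + alpha * len(vocab[i])))
        scores[c] = logp
    return max(scores, key=scores.get)
```

The independence assumption is exactly what fails when a category lacks the right empirical structure: if the properties aren't roughly independent within each class, the multiplied likelihoods mislead you.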
Words as Mental Paintbrush Handles
Visualize a "triangular lightbulb". What did you see?
(alternate summary:)
You think that words are like tiny little LISP symbols in your mind, rather than words being labels that act as handles to direct complex mental paintbrushes that can paint detailed pictures in your sensory workspace. Visualize a "triangular lightbulb". What did you see?
Rationality Quotes 10
Rationality Quotes 11
Variable Question Fallacies
"Martin told Bob the building was on his left." But "left" is a function-word that evaluates with a speaker-dependent variable grabbed from the surrounding context. Whose "left" is meant, Bob's or Martin's?
(alternate summary:)
You use a word that has different meanings in different places as though it meant the same thing on each occasion, possibly creating the illusion of something protean and shifting. "Martin told Bob the building was on his left." But "left" is a function-word that evaluates with a speaker-dependent variable grabbed from the surrounding context. Whose "left" is meant, Bob's or Martin's?
37 Ways That Words Can Be Wrong
Contains summaries of the sequence of posts about the proper use of words.
Gary Gygax Annihilated at 69
Dissolving the Question
This is where the "free will" puzzle is explicitly posed, along with criteria for what does and does not constitute a satisfying answer.
Wrong Questions
Where the mind cuts against reality's grain, it generates wrong questions - questions that cannot possibly be answered on their own terms, but only dissolved by understanding the cognitive algorithm that generates the perception of a question.
Righting a Wrong Question
When you are faced with an unanswerable question - a question to which it seems impossible to even imagine an answer - there is a simple trick which can turn the question solvable. Instead of asking, "Why do I have free will?", try asking, "Why do I think I have free will?"
Mind Projection Fallacy
E. T. Jaynes used the term Mind Projection Fallacy to denote the error of projecting your own mind's properties onto the external world. The Mind Projection Fallacy generalizes as an error. It appears in the argument over the real meaning of the word "sound", in the magazine cover of the monster carrying off a woman in the torn dress, in Kant's declaration that space by its very nature is flat, and in Hume's definition of a priori ideas as those "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe"...
Probability is in the Mind
Probabilities express uncertainty, and it is only agents who can be uncertain. A blank map does not correspond to a blank territory. Ignorance is in the mind.
The Quotation is not the Referent
It's very easy to derive extremely wrong conclusions if you don't make a clear enough distinction between your beliefs about the world, and the world itself.
Penguicon & Blook
Qualitatively Confused
Using qualitative, binary reasoning may make it easier to confuse belief and reality; if we use probability distributions, the distinction is much clearer.
Reductionism
We build models of the universe that have many different levels of description. But so far as anyone has been able to determine, the universe itself has only the single level of fundamental physics - reality doesn't explicitly compute protons, only quarks.
Explaining vs. Explaining Away
Apparently "the mere touch of cold philosophy", i.e., the truth, has destroyed haunts in the air, gnomes in the mine, and rainbows. This calls to mind a rather different bit of verse:
One of these things
Is not like the others
One of these things
Doesn't belong
The air has been emptied of its haunts, and the mine de-gnomed—but the rainbow is still there!
Fake Reductionism
There is a very great distinction between being able to see where the rainbow comes from, and playing around with prisms to confirm it, and maybe making a rainbow yourself by spraying water droplets, versus some dour-faced philosopher just telling you, "No, there's nothing special about the rainbow. Didn't you hear? Scientists have explained it away. Just something to do with raindrops or whatever. Nothing to be excited about." I think this distinction probably accounts for a hell of a lot of the deadly existential emptiness that supposedly accompanies scientific reductionism.
Savanna Poets
Equations of physics aren't about strong emotions. They can inspire those emotions in the mind of a scientist, but the emotions are not as raw as the stories told about Jupiter (the god). And so it might seem that reducing Jupiter to a spinning ball of methane and ammonia takes away some of the poetry in those stories. But ultimately, we don't have to keep telling stories about Jupiter. It's not necessary for Jupiter to think and feel in order for us to tell stories, because we can always write stories with humans as their protagonists.
Joy in the Merely Real
If you can't take joy in things that turn out to be explicable, you're going to set yourself up for eternal disappointment. Don't worry if quantum physics turns out to be normal.
Joy in Discovery
It feels incredibly good to discover the answer to a problem that nobody else has answered. And we should enjoy finding answers. But we really shouldn't base our joy on the fact that nobody else has done it before. Even if someone else knows the answer to a puzzle, if you don't know it, it's still a mystery to you. And you should still feel joy when you discover the answer.
Bind Yourself to Reality
There are several reasons why it's worth talking about joy in the merely real in a discussion on reductionism. One is to leave a line of retreat. Another is to improve your own abilities as a rationalist by learning to invest your energy in the real world, and in accomplishing things here, rather than in a fantasy.
If You Demand Magic, Magic Won't Help
Magic (and dragons, and UFOs, and ...) get much of their charm from the fact that they don't actually exist. If dragons did exist, people would treat them like zebras; most people wouldn't bother to pay attention, but some scientists would get oddly excited about them. If we ever create dragons, or find aliens, we will have to learn to enjoy them, even though they happen to exist.
New York OB Meetup (ad-hoc) on Monday, Mar 24, @6pm
The Beauty of Settled Science
Most of the stuff reported in Science News is false, or at the very least, misleading. Scientific controversies are topics of such incredible difficulty that even people in the field aren't sure what's true. Read elementary textbooks. Study the settled science before you try to understand the outer fringes.
Amazing Breakthrough Day: April 1st
A proposal for a new holiday, in which journalists report on great scientific discoveries of the past as if they had just happened, and were still shocking.
Is Humanism A Religion-Substitute?
Trying to replace religion with humanism, atheism, or transhumanism doesn't work. If you try to write a hymn to the nonexistence of god, it will fail, because you are simply trying to imitate something that we don't really need to imitate. But that doesn't mean that the feeling of transcendence is something we should always avoid. After all, in a world in which religion never existed, people would still feel that same way.
Scarcity
Describes a few pieces of experimental evidence showing that objects or information which are believed to be in short supply are valued more than the same objects or information would be on their own.
To Spread Science, Keep It Secret
People don't study science, in part, because they perceive it to be public knowledge. In fact, it's not; you have to study a lot before you actually understand it. But because science is thought to be freely available, people ignore it in favor of cults that conceal their secrets, even if those secrets are wrong. In fact, it might be better if scientific knowledge was hidden from anyone who didn't undergo the initiation ritual, and study as an acolyte, and wear robes, and chant, and...
Initiation Ceremony
Brennan is inducted into the Conspiracy.
Hand vs. Fingers
When you pick up a cup of water, is it your hand that picks it up, or is it your fingers, thumb, and palm working together? Just because something can be reduced to smaller parts doesn't mean that the original thing doesn't exist.
Angry Atoms
It is very hard, without the benefit of hindsight, to understand just how it is that these little bouncing billiard balls called atoms could ever combine in such a way as to make something angry. If you try to imagine this problem without understanding the ideas of neurons, information processing, computing, etc., you realize just how challenging reductionism actually is.
Heat vs. Motion
For a very long time, people had a detailed understanding of kinetics, and they had a detailed understanding of heat. They understood concepts such as momentum and elastic rebounds, as well as concepts such as temperature and pressure. It took an extraordinary amount of work in order to understand things deeply enough to make us realize that heat and motion were really the same thing.
Brain Breakthrough! It's Made of Neurons!
Eliezer's contribution to Amazing Breakthrough Day.
Reductive Reference
Virtually every belief you have is not about elementary particle fields, which are (as far as we know) the actual reality. This doesn't mean that those beliefs aren't true. "Snow is white" does not mention quarks anywhere, and yet snow nevertheless is white. It's a computational shortcut, but it's still true.
Zombies! Zombies?
Don't try to put your consciousness or your personal identity outside physics. Whatever makes you say "I think therefore I am", causes your lips to move; it is within the chains of cause and effect that produce our observed universe.
Zombie Responses
A few more points on Zombies.
The Generalized Anti-Zombie Principle
The argument against zombies can be extended into a more general anti-zombie principle. But figuring out what that more general principle is turns out to be more difficult than it may seem.
GAZP vs. GLUT
Fleshes out the generalized anti-zombie principle a bit more, and describes the game "follow-the-improbability".
Belief in the Implied Invisible
That it's impossible even in principle to observe something sometimes isn't enough to conclude that it doesn't exist.
(alternate summary:)
If a spaceship goes over the cosmological horizon relative to us, so that it can no longer communicate with us, should we believe that the spaceship instantly ceases to exist?
Quantum Explanations
Quantum mechanics doesn't deserve its fearsome reputation.
(alternate summary:)
Quantum mechanics doesn't deserve its fearsome reputation. If you tell people something is supposed to be mysterious, they won't understand it. It's human intuitions that are "strange" or "weird"; physics itself is perfectly normal. Talking about historical erroneous concepts like "particles" or "waves" is just asking to confuse people; present the real, unified quantum physics straight out. The series will take a strictly realist perspective - quantum equations describe something that is real and out there. Warning: Although a large faction of physicists agrees with this, it is not universally accepted. Stronger warning: I am not even going to present non-realist viewpoints until later, because I think this is a major source of confusion.
Configurations and Amplitude
A preliminary glimpse at the stuff reality is made of. The classic split-photon experiment with half-silvered mirrors. Alternative pathways the photon can take can cancel each other out. The mysterious measuring tool that tells us the relative squared moduli.
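The arithmetic of the experiment can be sketched in a few lines. Assuming one standard beamsplitter convention (transmission multiplies an amplitude by 1/√2, reflection off a half-silvered mirror by i/√2, and a full mirror by i; the post itself works with unnormalized amplitudes, so the convention here is an assumption), the two paths through a two-splitter interferometer reinforce at one detector and cancel at the other:

```python
import math

# Amplitude factors for each elementary event (a standard convention):
T = 1 / math.sqrt(2)   # pass through a half-silvered mirror
R = 1j / math.sqrt(2)  # reflect off a half-silvered mirror
M = 1j                 # reflect off a full mirror

# Each detector is reached by two paths, and we ADD the amplitudes of
# paths that end in the same configuration.
amp_A = (R * M * T) + (T * M * R)  # reflect-then-transmit plus transmit-then-reflect
amp_B = (R * M * R) + (T * M * T)  # these two path amplitudes are opposite

prob_A = abs(amp_A) ** 2  # squared modulus: the "mysterious measuring tool"
prob_B = abs(amp_B) ** 2
# All the amplitude flows to detector A; the paths to detector B cancel out.
```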
Joint Configurations
The laws of physics are inherently over mathematical entities, configurations, that involve multiple particles. A basic, ontologically existent entity, according to our current understanding of quantum mechanics, does not look like a photon - it looks like a configuration of the universe with "A photon here, a photon there." Amplitude flows between these configurations can cancel or add; this gives us a way to detect which configurations are distinct. It is an experimentally testable fact that "Photon 1 here, photon 2 there" is the same configuration as "Photon 2 here, photon 1 there".
Distinct Configurations
Since configurations are over the combined state of all the elements in a system, adding a sensor that detects whether a particle went one way or the other becomes a new element of the system, one that can make configurations "distinct" instead of "identical". This confused the living daylights out of early quantum experimenters, because it meant that things behaved differently when they tried to "measure" them. But it's not only measuring instruments that do the trick - any sensitive physical element will do - and the distinctness of configurations is a physical fact, not a fact about our knowledge. There is no need to suppose that the universe cares what we think.
Where Philosophy Meets Science
In retrospect, supposing that quantum physics had anything to do with consciousness was a big mistake. Could philosophers have told the physicists so? But we don't usually see philosophers sponsoring major advances in physics; why not?
Can You Prove Two Particles Are Identical?
You wouldn't think that it would be possible to do an experiment that told you that two particles are completely identical - not just to the limit of experimental precision, but perfectly. You could even give a precise-sounding philosophical argument for why it was not possible - but the argument would have a deeply buried assumption. Quantum physics violates this deep assumption, making the experiment easy.
Classical Configuration Spaces
How to visualize the state of a system of two 1-dimensional particles, as a single point in 2-dimensional space. Understanding configuration spaces in classical physics is a useful first step, before trying to imagine quantum configuration spaces.
The Quantum Arena
Instead of a system state being associated with a single point in a classical configuration space, the instantaneous real state of a quantum system is a complex amplitude distribution over a quantum configuration space. What creates the illusion of "individual particles", like an electron caught in a trap, is a plaid distribution - one that happens to factor into the product of two parts. It is the whole distribution that evolves when a quantum system evolves. Individual configurations don't have physics; amplitude distributions have physics. Quantum entanglement is the general case; quantum independence is the special case.
Feynman Paths
Instead of thinking that a photon takes a single straight path through space, we can regard it as taking all possible paths through space, and adding the amplitudes for every possible path. Nearly all the paths cancel out - unless we do clever quantum things, so that some paths add instead of canceling out. Then we can make light do funny tricks for us, like reflecting off a mirror in such a way that the angle of incidence doesn't equal the angle of reflection. But ordinarily, nearly all the paths except an extremely narrow band cancel out - this is one of the keys to recovering the hallucination of classical physics.
No Individual Particles
One of the chief ways to confuse yourself while thinking about quantum mechanics is to think as if photons were little billiard balls bouncing around. The appearance of little billiard balls is a special case of a deeper level on which there are only multiparticle configurations and amplitude flows. It is easy to set up physical situations in which there exists no fact of the matter as to which electron was originally which.
Identity Isn't In Specific Atoms
As a consequence of quantum theory, we can see that the concept of swapping out all the atoms in you with "different" atoms is physical nonsense. It's not something that corresponds to anything that could ever be done, even in principle, because the concept is so confused. You are still you, no matter "which" atoms you are made of.
Zombies: The Movie
A satirical script for a zombie movie, but not about the lurching and drooling kind. The philosophical kind.
Three Dialogues on Identity
Given that there's no such thing as "the same atom", whether you are "the same person" from one time to another can't possibly depend on whether you're made out of the same atoms.
Decoherence
A quantum system that factorizes can evolve into a system that doesn't factorize, destroying the illusion of independence. But entangling a quantum system with its environment, can appear to destroy entanglements that are already present. Entanglement with the environment can separate out the pieces of an amplitude distribution, preventing them from interacting with each other. Decoherence is fundamentally symmetric in time, but appears asymmetric because of the second law of thermodynamics.
The So-Called Heisenberg Uncertainty Principle
Unlike classical physics, in quantum physics it is not possible to separate out a particle's "position" from its "momentum".
(alternate summary:)
Unlike classical physics, in quantum physics it is not possible to separate out a particle's "position" from its "momentum". The evolution of the amplitude distribution over time involves things like taking the second derivative in space and multiplying by i to get the first derivative in time. The end result of this time evolution rule is that blobs of particle-presence appear to race around in physical space. The notion of "an exact particular momentum" or "an exact particular position" is not something that can physically happen; it is a tool for analyzing amplitude distributions by taking them apart into a sum of simpler waves. This uses the assumption and fact of linearity: the evolution of the whole wavefunction seems to always be the additive sum of the evolution of its pieces. Using this tool, we can see that if you take apart the same distribution into a sum of positions and a sum of momenta, they cannot both be sharply concentrated at the same time. When you "observe" a particle's position, that is, decohere its positional distribution by making it interact with a sensor, you take its wave packet apart into two pieces; then the two pieces evolve differently. The Heisenberg Principle definitely does not say that knowing about the particle, or consciously seeing it, will make the universe behave differently.
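The position/momentum tradeoff described here is the standard Fourier-duality relation; in conventional notation (not taken from the post itself):

```latex
% A wavefunction and its momentum-space representation are Fourier duals,
% so they cannot both be sharply concentrated at the same time:
\[
  \tilde\psi(p) = \frac{1}{\sqrt{2\pi\hbar}} \int \psi(x)\, e^{-ipx/\hbar}\, dx,
  \qquad
  \sigma_x\,\sigma_p \ge \frac{\hbar}{2}.
\]
```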
Which Basis Is More Fundamental?
The position basis can be computed locally in the configuration space; the momentum basis is not local. Why care about locality? Because it is a very deep principle; reality itself seems to favor it in some way.
Where Physics Meets Experience
Meet the Ebborians, who reproduce by fission. The Ebborian brain is like a thick sheet of paper that splits down its thickness. They frequently experience dividing into two minds, and can talk to their other selves. It seems that their unified theory of physics is almost finished, and can answer every question, when one Ebborian asks: When exactly does one Ebborian become two people?
Where Experience Confuses Physicists
It then turns out that the entire planet of Ebbore is splitting along a fourth-dimensional thickness, duplicating all the people within it. But why does the apparent chance of "ending up" in one of those worlds equal the square of the fourth-dimensional thickness? Many mysterious answers are proposed to this question, and one non-mysterious one.
On Being Decoherent
When a sensor measures a particle whose amplitude distribution stretches over space - perhaps seeing if the particle is to the left or right of some dividing line - then the standard laws of quantum mechanics call for the sensor+particle system to evolve into a state of (particle left, sensor measures LEFT) + (particle right, sensor measures RIGHT). But when we humans look at the sensor, it only seems to say "LEFT" or "RIGHT", never a mixture like "LIGFT". This, of course, is because we ourselves are made of particles, and subject to the standard quantum laws that imply decoherence. Under standard quantum laws, the final state is (particle left, sensor measures LEFT, human sees "LEFT") + (particle right, sensor measures RIGHT, human sees "RIGHT").
The Conscious Sorites Paradox
Decoherence is implicit in quantum physics, not an extra law on top of it. Asking exactly when "one world" splits into "two worlds" may be like asking when, if you keep removing grains of sand from a pile, it stops being a "heap". Even if you're inside the world, there may not be a definite answer. This puzzle does not arise only in quantum physics; the Ebborians could face it in a classical universe, or we could build sentient flat computers that split down their thickness. Is this really a physicist's problem?
Decoherence is Pointless
There is no exact point at which decoherence suddenly happens. All of quantum mechanics is continuous and differentiable, and decoherent processes are no exception to this.
Decoherent Essences
Decoherence is implicit within physics, not an extra law on top of it. You can choose representations that make decoherence harder to see, just like you can choose representations that make apples harder to see, but exactly the same physical process still goes on; the apple doesn't disappear and neither does decoherence. If you could make decoherence magically go away by choosing the right representation, we wouldn't need to shield quantum computers from the environment.
The Born Probabilities
The last serious mysterious question left in quantum physics: When a quantum world splits in two, why do we seem to have a greater probability of ending up in the larger blob, exactly proportional to the integral of the squared modulus? It's an open problem, but non-mysterious answers have been proposed. Try not to go funny in the head about it.
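The "squared modulus" rule referred to above is the Born rule; in standard notation (not taken from the post itself):

```latex
% Born rule: the apparent probability of finding yourself in branch i
% is the squared modulus of that branch's amplitude, normalized.
\[
  P(i) \;=\; \frac{\int |\psi_i(x)|^2 \, dx}{\sum_j \int |\psi_j(x)|^2 \, dx}.
\]
```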
Decoherence as Projection
Since quantum evolution is linear and unitary, decoherence can be seen as projecting a wavefunction onto orthogonal subspaces. This can be neatly illustrated using polarized photons and the angle of the polarized sheet that will absorb or transmit them.
Entangled Photons
Using our newly acquired understanding of photon polarizations, we see how to construct a quantum state of two photons in which, when you measure one of them, the person in the same world as you will always find that the opposite photon has the opposite quantum state. This is not because any influence is transmitted; it is just decoherence that takes place in a very symmetrical way, as can readily be observed in our calculations.
Bell's Theorem: No EPR "Reality"
(Note: This post was designed to be read as a stand-alone, if desired.) Originally, the discoverers of quantum physics thought they had discovered an incomplete description of reality - that there was some deeper physical process they were missing, and this was why they couldn't predict exactly the results of quantum experiments. The math of Bell's Theorem is surprisingly simple, and we walk through it. Bell's Theorem rules out being able to locally predict a single, unique outcome of measurements - ruling out a way that Einstein, Podolsky, and Rosen once defined "reality". This shows how deep implicit philosophical assumptions can go. If worlds can split, so that there is no single unique outcome, then Bell's Theorem is no problem. Bell's Theorem does, however, rule out the idea that quantum physics describes our partial knowledge of a deeper physical state that could locally produce single outcomes - any such description will be inconsistent.
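The simplicity of the math can be illustrated with the CHSH form of Bell's inequality (a standard modern variant; the post walks through Bell's original version). A brute-force check over all local deterministic strategies, compared against the singlet-state quantum correlations:

```python
from itertools import product
from math import cos, pi, sqrt

# A local deterministic strategy pre-assigns an outcome (+1 or -1) to each
# of Alice's two settings (a0, a1) and Bob's two settings (b0, b1).
# No such strategy can exceed the CHSH bound of 2:
best_local = max(
    abs(a0*b0 + a0*b1 + a1*b0 - a1*b1)
    for a0, a1, b0, b1 in product((-1, 1), repeat=4)
)

# Quantum correlation for singlet-state spins measured at relative angle
# (a - b) is E = -cos(a - b); at the standard CHSH angles:
def E(a, b):
    return -cos(a - b)

a, a2, b, b2 = 0.0, pi / 2, pi / 4, -pi / 4
S_quantum = abs(E(a, b) + E(a2, b) + E(a, b2) - E(a2, b2))

print(best_local)           # 2: the bound for any local single-outcome theory
print(round(S_quantum, 3))  # 2.828, i.e. 2*sqrt(2): quantum mechanics exceeds it
```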
Spooky Action at a Distance: The No-Communication Theorem
As Einstein argued long ago, the quantum physics of his era - that is, the single-global-world interpretation of quantum physics, in which experiments have single unique random results - violates Special Relativity; it imposes a preferred space of simultaneity and requires a mysterious influence to be transmitted faster than light, an influence which can never be used to transmit any useful information. Getting rid of the single global world dispels this mystery and puts everything back to normal again.
Decoherence is Simple
The idea that decoherence fails the test of Occam's Razor is wrong as probability theory.
Decoherence is Falsifiable and Testable
(Note: Designed to be standalone readable.) An epistle to the physicists. To probability theorists, words like "simple", "falsifiable", and "testable" have exact mathematical meanings, which are there for very strong reasons. The (minority?) faction of physicists who say that many-worlds is "not falsifiable" or that it "violates Occam's Razor" or that it is "untestable", are committing the same kind of mathematical crime as non-physicists who invent their own theories of gravity that go as inverse-cube. This is one of the reasons why I, a non-physicist, dared to talk about physics - because I saw (some!) physicists using probability theory in a way that was simply wrong. Not just criticizable, but outright mathematically wrong: 2 + 2 = 3.
Quantum Non-Realism
"Shut up and calculate" is the best approach you can take when none of your theories are very good. But that is not the same as claiming that "Shut up!" actually is a theory of physics. Saying "I don't know what these equations mean, but they seem to work" is a very different matter from saying: "These equations definitely don't mean anything, they just work!"
Collapse Postulates
Early physicists simply didn't think of the possibility of more than one world - it just didn't occur to them, even though it's the straightforward result of applying the quantum laws at all levels. So they accidentally invented a completely and strictly unnecessary part of quantum theory to ensure there was only one world - a law of physics that says that parts of the wavefunction mysteriously and spontaneously disappear when decoherence prevents us from seeing them any more. If such a law really existed, it would be the only non-linear, non-unitary, non-differentiable, non-local, non-CPT-symmetric, acausal, faster-than-light phenomenon in all of physics.
If Many-Worlds Had Come First
If early physicists had never made the mistake, and thought immediately to apply the quantum laws at all levels to produce macroscopic decoherence, then "collapse postulates" would today seem like a completely crackpot theory. In addition to its other problems, like faster-than-light influence, the collapse postulate would be the only physical law that was informally specified - often in dualistic (mentalistic) terms - because it was the only fundamental law adopted without precise evidence to nail it down. Here, we get a glimpse at that alternate Earth.
Many Worlds, One Best Guess
Summarizes the arguments that nail down macroscopic decoherence, aka the "many-worlds interpretation". Concludes that many-worlds wins outright given the current state of evidence. The argument should have been over fifty years ago. New physical evidence could reopen it, but we have no particular reason to expect this.
The Failures of Eld Science
A short story set in the same world as "Initiation Ceremony". Future physics students look back on the cautionary tale of quantum physics.
The Dilemma: Science or Bayes?
The failure of first-half-of-20th-century-physics was not due to straying from the scientific method. Science and rationality - that is, Science and Bayesianism - aren't the same thing, and sometimes they give different answers.
Science Doesn't Trust Your Rationality
The reason Science doesn't always agree with the exact, Bayesian, rational answer, is that Science doesn't trust you to be rational. It wants you to go out and gather overwhelming experimental evidence.
When Science Can't Help
If you have an idea, Science tells you to test it experimentally. If you spend 10 years testing the idea and the result comes out negative, Science slaps you on the back and says, "Better luck next time." If you want to spend 10 years testing a hypothesis that will actually turn out to be right, you'll have to try to do the thing that Science doesn't trust you to do: think rationally, and figure out the answer before you get clubbed over the head with it.
Science Isn't Strict Enough
Science lets you believe any damn stupid idea that hasn't been refuted by experiment. Bayesianism says there is always an exactly rational degree of belief given your current evidence, and this does not shift a nanometer to the left or to the right depending on your whims. Science is a social freedom - we let people test whatever hypotheses they like, because we don't trust the village elders to decide in advance - but you shouldn't confuse that with an individual standard of rationality.
Do Scientists Already Know This Stuff?
No. Maybe someday it will be part of standard scientific training, but for now, it's not, and the absence is visible.
No Safe Defense, Not Even Science
Why am I trying to break your trust in Science? Because you can't think and trust at the same time. The social rules of Science are verbal rather than quantitative; it is possible to believe you are following them. With Bayesianism, it is never possible to do an exact calculation and get the exact rational answer that you know exists. You are visibly less than perfect, and so you will not be tempted to trust yourself.
Changing the Definition of Science
Many of these ideas are surprisingly conventional, and being floated around by other thinkers. I'm a good deal less of a lonely iconoclast than I seem; maybe it's just the way I talk.
Conference on Global Catastrophic Risks
Faster Than Science
Is it really possible to arrive at the truth faster than Science does? Not only is it possible, but the social process of science relies on scientists doing so - when they choose which hypotheses to test. In many answer spaces it's not possible to find the true hypothesis by accident. Science leaves it up to experiment to socially declare who was right, but if there weren't some people who could get it right in the absence of overwhelming experimental proof, science would be stuck.
Einstein's Speed
Albert was unusually good at finding the right theory in the presence of only a small amount of experimental evidence. Even more unusually, he admitted it - he claimed to know the theory was right, even in advance of the public proof. It's possible to arrive at the truth by thinking great high-minded thoughts of the sort that Science does not trust you to think, but it's a lot harder than arriving at the truth in the presence of overwhelming evidence.
That Alien Message
Einstein used evidence more efficiently than other physicists, but he was still extremely inefficient in an absolute sense. If a huge team of cryptographers and physicists were examining an interstellar transmission, going over it bit by bit, they could deduce principles on the order of Galilean gravity just from seeing one or two frames of a picture. As if the very first human to see an apple fall had, on the instant, realized that its position went as the square of the time and that this implied constant acceleration.
My Childhood Role Model
I looked up to the ideal of a Bayesian superintelligence, not Einstein.
Mach's Principle: Anti-Epiphenomenal Physics
Could you tell if the whole universe were shifted an inch to the left? Could you tell if the whole universe were traveling left at ten miles per hour? Could you tell if the whole universe were accelerating to the left? Could you tell if the whole universe were rotating?
A Broken Koan
Relative Configuration Space
Maybe the reason why we can't observe absolute speeds, absolute positions, absolute accelerations, or absolute rotations, is that particles don't have absolute positions - only positions relative to each other. That is, maybe quantum physics takes place in a relative configuration space.
Timeless Physics
What time is it? How do you know? The question "What time is it right now?" may make around as much sense as asking "Where is the universe?" Not only that, our physics equations may not need a t in them!
Timeless Beauty
To get rid of time you must reduce it to nontime. In timeless physics, everything that exists is perfectly global or perfectly local. The laws of physics are perfectly global; the configuration space is perfectly local. Every fundamentally existent ontological entity has a unique identity and a unique value. This beauty makes ugly theories much more visibly ugly; a collapse postulate becomes a visible scar on the perfection.
Timeless Causality
Using the modern, Bayesian formulation of causality, we can define causality without talking about time - define it purely in terms of relations. The river of time never flows, but it has a direction.
Einstein's Superpowers
There's an unfortunate tendency to talk as if Einstein had superpowers - as if, even before Einstein was famous, he had an inherent disposition to be Einstein - a potential as rare as his fame and as magical as his deeds. Yet the way you acquire superpowers is not by being born with them, but by seeing, with a sudden shock, that they are perfectly normal.
Class Project
From the world of Initiation Ceremony. Brennan and the others are faced with their midterm exams.
(alternate summary:)
The students are given one month to develop a theory of quantum gravity.
A Premature Word on AI
A response to opinions expressed by Robin Hanson, Roger Schank, and others, arguing against the notion that producing a friendly general artificial intelligence is an insurmountable problem.
The Rhythm of Disagreement
A discussion of a number of disagreements Eliezer Yudkowsky has been in, with a few comments on rational disagreement.
Principles of Disagreement
You do have to pay attention to other people's authority a fair amount of the time. But above all, try to get the actual right answer. Clever tricks are only valuable if they help you learn what the truth actually is. If a clever argument doesn't actually work, don't use it.
Timeless Identity
How can you be the same person tomorrow as today, in the river that never flows, when not a drop of water is shared between one time and another? Having used physics to completely trash all naive theories of identity, we reassemble a conception of persons and experiences from what is left. With a surprising practical application...
Why Quantum?
Why do a series on quantum mechanics? Some of the many morals that are best illustrated by the tale of quantum mechanics and its misinterpretation.
Living in Many Worlds
The many worlds of quantum mechanics are not some strange, alien universe into which you have been thrust. They are where you have always lived. Egan's Law: "It all adds up to normality." Then why care about quantum physics at all? Because there's still the question of what adds up to normality, and the answer to this question turns out to be, "Quantum physics." If you're thinking of building any strange philosophies around many-worlds, you probably shouldn't - that's not what it's for.
Thou Art Physics
If the laws of physics control everything we do, then how can our choices be meaningful? Because you are physics. You aren't competing with physics for control of the universe, you are within physics. Anything you control is necessarily controlled by physics.
Timeless Control
We throw away "time" but retain causality, and with it, the concepts "control" and "decide". To talk of something as having been "always determined" is mixing up a timeless and a timeful conclusion, with paradoxical results. When you take a perspective outside time, you have to be careful not to let your old, timeful intuitions run wild in the absence of their subject matter.
(alternate summary:)
(from The Quantum Physics Sequence)
Bloggingheads: Yudkowsky and Horgan
Against Devil's Advocacy
Playing Devil's Advocate is occasionally helpful, but much less so than it appears. Ultimately, you should only be able to create plausible arguments for things that are actually plausible.
Eliezer's Post Dependencies; Book Notification; Graphic Designer Wanted
The Quantum Physics Sequence
An Intuitive Explanation of Quantum Mechanics
(just the science, for students confused by their physics textbooks)
Quantum Physics Revealed As Non-Mysterious
(quantum physics does not make the universe any more mysterious than it was previously)
And the Winner is... Many-Worlds!
An index of posts explaining quantum mechanics and the many-worlds interpretation.
(the many-worlds interpretations wins outright given the current state of evidence)
(alternate summary:)
An index of posts explaining quantum mechanics and the many-worlds interpretation.
Quantum Mechanics and Personal Identity
A shortened index into the Quantum Physics Sequence describing only the prerequisite knowledge to understand the statement that "science can rule out a notion of personal identity that depends on your being composed of the same atoms - because modern physics has taken the concept of 'same atom' and thrown it out the window. There are no little billiard balls with individual identities. It's experimentally ruled out." The key post in this sequence is Timeless Identity, in which "Having used physics to completely trash all naive theories of identity, we reassemble a conception of persons and experiences from what is left", but this finale might make little sense without the prior discussion.
(alternate summary:)
(the ontology of quantum mechanics, in which there are no particles with individual identities, rules out theories of personal continuity that invoke "the same atoms" as a concept)
Causality and Moral Responsibility
Knowing that you are a deterministic system does not make you any less responsible for the consequences of your actions. You still make your decisions; you do have psychological traits, and experiences, and goals. Determinism doesn't change any of that.
Possibility and Could-ness
Our sense of "could-ness", as in "I could have not rescued the child from the burning orphanage", comes from our own decision-making algorithms labeling some end states as "reachable". If we wanted to achieve the world-state of the child being burned, there is a series of actions that would lead to that state.
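A minimal sketch of "reachability labeling" in a toy planner (the state graph and names below are invented for illustration):

```python
from collections import deque

# Each state lists available actions; each (state, action) pair leads to
# a successor state. End states with no actions are terminal.
actions = {
    "outside": ["enter_orphanage", "walk_away"],
    "inside": ["rescue_child", "leave"],
}
transitions = {
    ("outside", "enter_orphanage"): "inside",
    ("outside", "walk_away"): "child_burned",
    ("inside", "rescue_child"): "child_saved",
    ("inside", "leave"): "child_burned",
}

def reachable(start):
    """Breadth-first search: label every state the agent 'could' reach."""
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        for action in actions.get(state, []):
            nxt = transitions[(state, action)]
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Both outcomes are labeled reachable - both "could" happen:
print(reachable("outside"))
```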
The Ultimate Source
There is a school of thought in philosophy that says that even if you make a decision, that still isn't enough to conclude that you have free will. You have to have been the ultimate source of your decision. Nothing else can have influenced it previously. This doesn't work. There is no such thing as "the ultimate source" of your decisions.
Passing the Recursive Buck
When confronted with a difficult question, don't try to point backwards to a misunderstood black box. Ask yourself, what's inside the black box? If the answer is another black box, you likely have a problem.
Grasping Slippery Things
An illustration of a few ways that trying to perform reductionism can go wrong.
Ghosts in the Machine
There is a way of thinking about programming a computer that conforms well to human intuitions: telling the computer what to do. The problem is that the computer isn't going to understand you, unless you program the computer to understand. If you are programming an AI, you are not giving instructions to a ghost in the machine; you are creating the ghost.
LA-602 vs. RHIC Review
A comparison of LA-602, the classified report investigating the possibility of a nuclear bomb igniting the atmosphere and killing everyone, and the RHIC Review, the document explaining why the Relativistic Heavy Ion Collider is not going to destroy the world. There is a key difference between these documents: one of them is a genuine discussion of the risks, taking them seriously, and the other is a work of public relations. Work on existential risk needs to be more like the former.
Heading Toward Morality
A description of the last several months of sequence posts, that identifies the topic that Eliezer actually wants to explain: morality.
The Outside View's Domain
A dialogue on the proper application of the inside and outside views.
Surface Analogies and Deep Causes
Just because two things share surface similarities doesn't mean that they work the same way, or can be expected to be similar in other respects. If you want to understand what something does, studying something else typically doesn't help. That type of reasoning only works if the two things are similar on a deep level.
Optimization and the Singularity
An introduction to optimization processes and why Yudkowsky thinks that a singularity would be far more powerful than calculations based on human progress would suggest.
The Psychological Unity of Humankind
Because humans are a sexually reproducing species, human brains are nearly identical. All human beings share similar emotions, tell stories, and employ identical facial expressions. We naively expect all other minds to work like ours, which causes problems when trying to predict the actions of non-human intelligences.
The Design Space of Minds-In-General
When people talk about "AI", they're talking about an incredibly wide range of possibilities. Having a word like "AI" is like having a word for everything which isn't a duck.
No Universally Compelling Arguments
Because minds are physical processes, it is theoretically possible to specify a mind which draws any conclusion in response to any argument. There is no argument that will convince every possible mind.
2-Place and 1-Place Words
It is possible to talk about "sexiness" as a property of an observer and a subject. It is equally possible to talk about "sexiness" as a property of a subject, as long as each observer can have a different process to determine how sexy someone is. Failing to do either of these will cause you trouble.
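The two ways of speaking correspond to a 2-place function versus a curried 1-place function. A hypothetical toy model (the names and scoring scheme are invented for illustration):

```python
# 2-place version: sexiness is a property of an (observer, subject) pair.
def sexiness(observer, subject):
    return sum(observer["tastes"].get(trait, 0)
               for trait in subject["traits"])

# Currying fixes the observer, yielding a 1-place function - "sexiness"
# as that particular observer uses the word.
def sexiness_to(observer):
    return lambda subject: sexiness(observer, subject)

fred = {"tastes": {"witty": 2, "tall": 1}}
bloogah = {"tastes": {"tentacled": 3}}
subject = {"traits": ["witty", "tall"]}

print(sexiness_to(fred)(subject))     # 3: Fred's 1-place function
print(sexiness_to(bloogah)(subject))  # 0: Bloogah's 1-place function
```

Both observers use a 1-place "sexiness" word, but each word is a different function; the apparent disagreement dissolves once the hidden observer argument is made explicit.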
The Opposite Sex
A few thoughts from Eliezer Yudkowsky about a discussion of sexism on Overcoming Bias.
What Would You Do Without Morality?
If your own theory of morality was disproved, and you were persuaded that there was no morality, that everything was permissible and nothing was forbidden, what would you do? Would you still tip cabdrivers?
The Moral Void
If there were some great stone tablet upon which Morality was written, and you read it, and it was something horrible, that would be a rather unpleasant scenario. What would you want that tablet to say, if you could choose it? What would be the best case scenario?
Why don't you just do that, and ignore the tablet completely?
Created Already In Motion
There is no computer program so persuasive that you can run it on a rock. A mind, in order to be a mind, needs some sort of dynamic rules of inference or action. A mind has to be created already in motion.
I'd take it
The Bedrock of Fairness
What does "fairness" actually refer to? Why is it "fair" to divide a pie into three equal pieces for three different people?
2 of 10, not 3 total
Moral Complexities
Key questions for two different moral intuitions: morality-as-preference, and morality-as-given.
Is Morality Preference?
A dialogue on the idea that morality is a subset of our desires.
Is Morality Given?
A dialogue on the idea that morality is an absolute external truth.
Will As Thou Wilt
Eliezer mentions four interpretations of "A man can do as he wills, but not will as he wills.", a quote by Arthur Schopenhauer.
Where Recursive Justification Hits Bottom
Ultimately, when you reflect on how your mind operates, and consider questions like "why does Occam's razor work?" and "why do I expect the future to be like the past?", you have no other option but to use your own mind. There is no way to jump to an ideal state of pure emptiness and evaluate these claims without using your existing mind.
The Fear of Common Knowledge
A discussion of an interesting kind of lie, in which someone tells a lie that the person they're speaking to knows is a lie, but doesn't know that the person who told the lie knows that they know it's a lie.
My Kind of Reflection
A few key differences between Eliezer Yudkowsky's ideas on reflection and the ideas of other philosophers.
The Genetic Fallacy
The genetic fallacy seems like a strange kind of fallacy. The problem is that the original justification for a belief does not always equal the sum of all the evidence that we currently have available. But, on the other hand, it is very easy for people to still believe untruths from a source that they have since rejected.
Fundamental Doubts
There are some things that are so fundamental, that you really can't doubt them effectively. Be careful you don't use this as an excuse, but ultimately, you really can't start out by saying that you won't trust anything that is the output of a neuron.
Rebelling Within Nature
When we rebel against our own nature, we act in accordance with our own nature. There isn't any other way it could be.
Probability is Subjectively Objective
Probabilities exist only in minds. The probability you calculate for winning the lottery depends on your prior, which depends on which mind you have. However, this calculation does not refer to your mind. Thus, your calculated probability is subjectively objective. You conclude that someone who assigns a different probability (given the same information) is objectively wrong: you expect that they will lose on average.
Lawrence Watt-Evans's Fiction
A review of Lawrence Watt-Evans's fiction.
Posting May Slow
Whither Moral Progress?
Does moral progress actually happen? And if it does so, how?
The Gift We Give To Tomorrow
How did love ever come into the universe? How did that happen, and how special was it, really?
Could Anything Be Right?
You do know quite a bit about morality. It's not perfect information, surely, or absolutely reliable, but you have someplace to start. If you didn't, you'd have a much harder time thinking about morality than you do.
Existential Angst Factory
As a general rule, if you find yourself suffering from existential angst, check and see if you're not just feeling unhappy because of something else going on in your life. An awful lot of existential angst comes from people trying to solve the wrong problem.
Touching the Old
Seeing history in person is a very strong feeling.
Should We Ban Physics?
There is a chance, however remote, that novel physics experiments could destroy the earth. Is banning physics experiments a good idea?
Fake Norms, or "Truth" vs. Truth
Our society has a moral norm for applauding "truth", but actual truths get much less applause (this is a bad thing).
When (Not) To Use Probabilities
When you don't have a numerical procedure to generate probabilities, you're probably better off using your own evolved abilities to reason in the presence of uncertainty.
Can Counterfactuals Be True?
How can we explain counterfactuals having a truth value, if we don't talk about "nearby possible worlds" or any of the other explanations offered by philosophers?
Math is Subjunctively Objective
It really does seem like "2+3=5" is true. Things get confusing if you ask what you mean when you say "2+3=5 is true". But because the simple rules of addition work so well to predict observations, it really does seem like it must be true.
Does Your Morality Care What You Think?
If, for whatever reason, evolution or education had convinced you to believe that it was moral to do something that you now believe is immoral, you would go around saying "This is moral to do no matter what anyone else thinks of it." How much does this matter?
Changing Your Metaethics
Discusses the various lines of retreat that have been set up in the discussion on metaethics.
Setting Up Metaethics
What exactly does a correct theory of metaethics need to look like?
The Meaning of Right
Eliezer's long-awaited theory of meta-ethics.
Interpersonal Morality
A few clarifications on how Yudkowsky's theory of metaethics applies to interpersonal interactions.
Humans in Funny Suits
It's really hard to imagine aliens that are fundamentally different from human beings.
Detached Lever Fallacy
There is a lot of machinery hidden beneath the words, and rationalist's taboo is one way to make a step towards exposing it.
A Genius for Destruction
The Comedy of Behaviorism
The behaviorists thought that speaking about anything like a mind, or emotions, or thoughts, was unscientific. After all, they said, you can't observe anger. You can just observe behavior. But, it is possible, using empathy, to correctly predict wide varieties of behavior, which you can't account for by Pavlovian conditioning.
No Logical Positivist I
Logical positivism was based around the idea that the only meaningful statements were those that could be verified by experiment. Unfortunately for positivism, there are meaningful statements that are very likely true and very likely false, and yet cannot be tested.
Anthropomorphic Optimism
Don't bother coming up with clever, persuasive arguments for why evolution will do things the way you prefer. It really isn't listening.
Contaminated by Optimism
Avoid situations, as much as you possibly can, in which optimistic thinking suggests ideas for conscious consideration. In real life problems, if you've done that, you've probably already screwed up.
Hiroshima Day
Morality as Fixed Computation
A clarification about Yudkowsky's metaethics.
Inseparably Right; or, Joy in the Merely Good
Don't go looking for some pure essence of goodness, distinct from, you know, actual good.
Sorting Pebbles Into Correct Heaps
A parable about an imaginary society that has arbitrary, alien values.
Moral Error and Moral Disagreement
How can you make errors about morality?
Abstracted Idealized Dynamics
A bit of explanation on the idea of morality as "computation".
"Arbitrary"
When we say that something is arbitrary, we are saying that it feels like it should come with a justification, but doesn't.
Is Fairness Arbitrary?
When we say that a fair division of pie among N people is for each person to get 1/N of the pie, we aren't being arbitrary. We're being fair.
The Bedrock of Morality: Arbitrary?
Humans are built in such a way as to do what is right. Other optimization processes may not. So what?
Hot Air Doesn't Disagree
"Disagreement" between rabbits and foxes is sheer anthropomorphism. Rocks and hot air don't disagree, even though one decreases in elevation and one increases in elevation.
When Anthropomorphism Became Stupid
Anthropomorphism didn't become obviously wrong until we realized that the tangled neurons inside the brain were performing complex information processing, and that this complexity arose as a result of evolution.
The Cartoon Guide to Löb's Theorem
An explanation, using cartoons, of Löb's theorem.
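For reference, the theorem itself (reading the box as "is provable in the system") can be stated as:

```latex
% Löb's theorem: if the system proves "if P is provable, then P",
% then the system proves P outright.
\[
  \vdash \Box(\Box P \rightarrow P) \;\rightarrow\; \Box P
\]
```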
Dumb Deplaning
You Provably Can't Trust Yourself
Löb's theorem provides, by analogy, a nice explanation for why you really can't trust yourself. Don't trust thoughts because you think them, trust them because they were generated by trustworthy rules.
No License To Be Human
Good things aren't good because humans care about what's good. Good things are good because they save lives, make people happy, give us control over our own lives, involve us with others and prevent us from collapsing into total self-absorption, keep life complex and non-repeating and aesthetic and interesting, etc.
Invisible Frameworks
A particular system of values is analyzed, and is used to demonstrate the idea that anytime you consider changing your morals, you do so using your own current meta-morals. Forget this at your peril.
Mirrors and Paintings
CEV is not the essence of goodness. If functioning properly, it is supposed to work analogously to a mirror -- a mirror is not inherently apple-shaped, but in the presence of an apple, it reflects the image of an apple. In the presence of the Pebblesorters, an AI running CEV would begin transforming the universe into heaps containing prime numbers of pebbles. In the presence of humankind, an AI running CEV would begin doing whatever is right for it to do.
Unnatural Categories
There are some mental categories we draw that are relatively simple and straightforward. Others get trickier, because they are primarily drawn in such a way that whether or not something fits into that category is important information to our utility function. Deciding whether someone is "alive", for instance. Is someone like Terri Schiavo alive? This issue is why, in part, technology creates new moral dilemmas, and why teaching morality to a computer is so hard.
Magical Categories
We underestimate the complexity of our own unnatural categories. This doesn't work when you're trying to build a FAI.
Three Fallacies of Teleology
Theories of teleology have a few problems. First, they often wind up drawing causal arrows from the future to the past. Second, they lead you to make predictions based on anthropomorphism. Finally, they open you up to the Mind Projection Fallacy: assuming that the purpose of something is an inherent property of that thing, rather than a property of the agent or process that produced it.
Dreams of AI Design
It can feel as though you understand how to build an AI, when really, you're still making all your predictions based on empathy. Your AI design will not work until you figure out a way to reduce the mental to the non-mental.
Against Modal Logics
Unfortunately, very little of philosophy is actually helpful in AI research, for a few reasons.
Harder Choices Matter Less
If a choice is hard, that means the alternatives are around equally balanced, right?
Qualitative Strategies of Friendliness
Qualitative strategies to achieve friendliness tend to run into difficulty.
Dreams of Friendliness
Why programming an AI that only answers questions is not a trivial problem, for many of the same reasons that programming an FAI isn't trivial.
Brief Break
Rationality Quotes 12
Rationality Quotes 13
The True Prisoner's Dilemma
The standard visualization for the Prisoner's Dilemma doesn't really work on humans. We can't pretend we're completely selfish.
The Truly Iterated Prisoner's Dilemma
According to classical game theory, if you know how many iterations there are going to be in the iterated prisoner's dilemma, then you shouldn't play tit-for-tat. Does this really seem right?
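The tension the summary points at can be made concrete with a toy simulation. This is a sketch using the standard payoff matrix and strategy names from the game-theory literature, not code from the original post:

```python
# Finitely iterated Prisoner's Dilemma with the standard payoffs:
# mutual cooperation 3 each, mutual defection 1 each,
# defecting against a cooperator 5 (the cooperator gets 0).

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    """Run a match of a fixed, known number of rounds."""
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        ma = strat_a(moves_b)  # each strategy sees the other's history
        mb = strat_b(moves_a)
        score_a += PAYOFFS[(ma, mb)]
        score_b += PAYOFFS[(mb, ma)]
        moves_a.append(ma)
        moves_b.append(mb)
    return score_a, score_b
```

With a known horizon, backward induction says always-defect is the classical equilibrium, and indeed it beats tit-for-tat head to head; yet two tit-for-tat players score 300 each over 100 rounds, while two always-defectors score only 100 each.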
Rationality Quotes 14
Rationality Quotes 15
Rationality Quotes 16
Singularity Summit 2008
Points of Departure
Hollywood tends to model "emotionless" AIs as humans with slight differences. For the most part, they act as emotionally repressed humans, even though this is a very unlikely way for AIs to behave.
Rationality Quotes 17
Excluding the Supernatural
Don't rule out supernatural explanations because they're supernatural. Test them the way you would test any other hypothesis. And probably, you will find out that they aren't true.
Psychic Powers
Some of the previous post was incorrect. Psychic powers, if indeed they were ever discovered, would actually be strong evidence in favor of non-reductionism.
Optimization
A discussion of the concept of optimization.
My Childhood Death Spiral
My Best and Worst Mistake
When Eliezer went into his death spiral around intelligence, he wound up making a lot of mistakes that later became very useful.
Raised in Technophilia
When Eliezer was quite young, it took him a very long time to get to the point where he was capable of considering that the dangers of technology might outweigh the benefits.
A Prodigy of Refutation
Eliezer's skills at defeating other people's ideas led him to believe that his own (mistaken) ideas must have been correct.
The Sheer Folly of Callow Youth
Eliezer's big mistake was when he took a mysterious view of morality.
Say It Loud
If you're uncertain about something, communicate that uncertainty. Do so as clearly as you can. You don't help yourself by hiding how confused you are.
Ban the Bear
How Many LHC Failures Is Too Many?
If the LHC, or some sort of similar project, continually seemed to fail right before it did something we thought might destroy the world, this is something we should notice.
Horrible LHC Inconsistency
An illustration of inconsistent probability assignments.
That Tiny Note of Discord
Eliezer started to dig himself out of his philosophical hole when he noticed a tiny inconsistency.
Fighting a Rearguard Action Against the Truth
When Eliezer started to consider the possibility of Friendly AI as a contingency plan, he permitted himself a line of retreat. He was now able to slowly start to reconsider positions in his metaethics, and move gradually towards better ideas.
My Naturalistic Awakening
Eliezer actually looked back and realized his mistakes when he imagined the idea of an optimization process.
The Level Above Mine
There are people who have acquired more mastery over various fields than Eliezer has over his.
Competent Elites
People in higher levels of business, science, etc, often really are there because they're significantly more competent than everyone else.
Above-Average AI Scientists
A lot of AI researchers aren't really all that exceptional. This is a problem, though most people don't seem to see it.
Friedman's "Prediction vs. Explanation"
The Magnitude of His Own Folly
Eliezer considers his training as a rationalist to have started the day he realized just how awfully he had screwed up.
Awww, a Zebra
Intrade and the Dow Drop
Trying to Try
As a human, if you merely try to try something, you will put much less work into it than if you actually try to do it.
Use the Try Harder, Luke
A fictional exchange between Mark Hamill and George Lucas over the scene in The Empire Strikes Back where Luke Skywalker attempts to lift his X-wing with the Force.
Rationality Quotes 18
Beyond the Reach of God
Compare the world in which there is a God, who will intervene at some threshold, against a world in which everything happens as a result of physical laws. Which universe looks more like our own?
My Bayesian Enlightenment
The story of how Eliezer Yudkowsky became a Bayesian.
Bay Area Meetup for Singularity Summit
On Doing the Impossible
A lot of projects seem impossible, meaning that we don't immediately see a way to do them. But after working on them for a long time, they start to look merely extremely difficult.
Make an Extraordinary Effort
It takes an extraordinary amount of rationality before you stop making stupid mistakes. Doing better requires making extraordinary efforts.
Shut up and do the impossible!
The ultimate level of attacking a problem is the point at which you simply shut up and solve the impossible problem.
AIs and Gatekeepers Unite!
Crisis of Faith
The Ritual
A depiction of a crisis of faith in the Beisutsukai world.
(alternate summary:)
Jeffreyssai carefully undergoes a crisis of faith.
Rationality Quotes 19
Why Does Power Corrupt?
There are simple evolutionary reasons why power corrupts humans. They can be beaten, though.
Ends Don't Justify Means (Among Humans)
Entangled Truths, Contagious Lies
Traditional Capitalist Values
Before you start talking about a system of values, try to actually understand the values of that system as believed by its practitioners.
Dark Side Epistemology
If you want to tell a truly convincing lie, to someone who knows what they're talking about, you either have to lie about lots of specific object level facts, or about more general laws, or about the laws of thought. Lots of the memes out there about how you learn things originally came from people who were trying to convince other people to believe false statements.
Protected From Myself
Ethics can protect you from your own mistakes, especially when your mistakes are about really fundamental things.
Ethical Inhibitions
Humans may have a sense of ethical inhibition because various ancestors, who didn't follow ethical norms when they thought they could get away with it, nevertheless got caught.
Ethical Injunctions
Prices or Bindings?
Are ethical rules simply actions that have a high cost associated with them? Or are they bindings, expected to hold in all situations, no matter the cost otherwise?
Ethics Notes
Some responses to comments about the idea of Ethical Injunctions.
Which Parts Are "Me"?
Everything you are, is inside your brain. But not everything inside your brain is you. You can draw mental separation lines, which can make you more reflective.
Inner Goodness
San Jose Meetup, Sat 10/25 @ 7:30pm
Expected Creative Surprises
The unpredictability of intelligence is a very special and unusual kind of surprise, which is not at all like noise or randomness. There is a weird balance between the unpredictability of actions and the predictability of outcomes.
Belief in Intelligence
What does a belief that an agent is intelligent look like? What predictions does it make?
Aiming at the Target
When you make plans, you are trying to steer the future into regions higher in your preference ordering.
Measuring Optimization Power
Efficient Cross-Domain Optimization
To speak of intelligence, rather than optimization power, we need to divide optimization power by the resources needed, or the amount of prior optimization that had to be done on the system.
Economic Definition of Intelligence?
Could economics help provide a definition and a general measure of intelligence?
Intelligence in Economics
There are a few connections between economics and intelligence, so economics might have something to contribute to a definition of intelligence.
Mundane Magic
A list of abilities that would be amazing if they were magic, or if only a few people had them.
BHTV: Jaron Lanier and Yudkowsky
Building Something Smarter
It is possible for humans to create something better than ourselves. It's been done. It's not paradoxical.
Complexity and Intelligence
Today's Inspirational Tale
Hanging Out My Speaker's Shingle
Back Up and Ask Whether, Not Why
When someone asks you why you're doing "X", don't ask yourself why you're doing "X". Ask yourself whether someone should do "X".
Recognizing Intelligence
Suppose we landed on another planet and found a large metal object that contained wires made of superconductors, and hundreds of tightly matched gears. Would we be able to infer the presence of an optimization process?
Lawful Creativity
Creativity seems to consist of breaking rules, and violating expectations. But there is one rule that cannot be broken: creative solutions must have something good about them. Creativity is a surprise, but most surprises aren't creative.
Ask OB: Leaving the Fold
Lawful Uncertainty
Facing a random scenario, the correct solution is really not to behave randomly. Faced with an irrational universe, throwing away your rationality won't help.
Worse Than Random
If a system does better when randomness is added into its processing, then it must somehow have been performing worse than random. And if you can recognize that this is the case, you ought to be able to generate a non-randomized system.
The Weighted Majority Algorithm
An illustration of a case in Artificial Intelligence in which a randomized algorithm is purported to work better than a non-randomized algorithm, and a discussion of why this is the case.
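The algorithm named in that title can be sketched in a few lines. This is a minimal illustration of the deterministic weighted majority algorithm for binary predictions, assuming the standard multiplicative-penalty formulation; the function name and parameter choices are mine, not from the original post:

```python
def weighted_majority(expert_preds, outcomes, beta=0.5):
    """Combine expert predictions by weighted vote.

    expert_preds: per-round lists of 0/1 predictions, one per expert.
    outcomes: the 0/1 truth for each round.
    Returns (master predictions, final expert weights).
    """
    n_experts = len(expert_preds[0])
    weights = [1.0] * n_experts
    master = []
    for preds, truth in zip(expert_preds, outcomes):
        # Predict whichever answer carries more total weight.
        vote1 = sum(w for w, p in zip(weights, preds) if p == 1)
        vote0 = sum(w for w, p in zip(weights, preds) if p == 0)
        master.append(1 if vote1 >= vote0 else 0)
        # Multiplicatively penalize every expert that was wrong.
        weights = [w * (beta if p != truth else 1.0)
                   for w, p in zip(weights, preds)]
    return master, weights
```

The deterministic version's mistake bound is within a constant factor of the best expert's; the post's point is that the randomized variant's seemingly better bound comes from measuring against an adversary, not from randomness adding real power.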
Bay Area Meetup: 11/17 8PM Menlo Park
Selling Nonapples
In most cases, if you say that something isn't working, then you have to specify a new thing that you think could work. You can't just say that you have to not do what you have been doing. If you observe that selling apples isn't working out for you financially, you can't just decide to sell nonapples.
The Nature of Logic
What logic actually does is preserve truth in a model. It says that if all of the premises are true, then this conclusion is indeed true. But that's not all that minds do. There's an awful lot else that you need, before you start actually getting anything like intelligence.
Boston-area Meetup: 11/18/08 9pm MIT/Cambridge
Logical or Connectionist AI?
The difference between Logical and Connectionist AIs is portrayed as a grand dichotomy between two different sides of the force. The truth is that they're just two different designs out of many possible ones.
Whither OB?
Failure By Analogy
It's very tempting to reason that your invention X will do Y, because it is similar to thing Z, which also does Y. But reality very often ignores this justification for why your new invention will work.
Failure By Affective Analogy
Making analogies to things that have positive or negative connotations is an even better way to make sure you fail.
The Weak Inside View
In cases where the causal factors creating a circumstance are changing, the outside view may be misleading. In that case, the best you can do may be to take the inside view, while avoiding overly precise predictions.
The First World Takeover
The first replicator was the original black swan: a couple of molecules that, despite not embodying a particularly good optimization process, could explore new regions of pattern-space. It was an event whose implications would have seemed absurd to predict.
Whence Your Abstractions?
Figuring out how to place concepts in categories is an important part of the problem. Before we classify AI into the same group as human intelligence, farming, and industry, we need to think about why we want to put them into that same category.
Observing Optimization
Trying to derive predictions from a theory that says that sexual reproduction increases the rate of evolution is more difficult than it first appears.
Life's Story Continues
A discussion of some of the classical big steps in the evolution of life, and how they relate to the idea of optimization.
Surprised by Brains
If you hadn't ever seen brains before, but had only seen evolution, you might start making astounding predictions about their ability. You might, for instance, think that creatures with brains might someday be able to create complex machinery in only a millennium.
Cascades, Cycles, Insight...
Cascades, cycles, and insight are three ways in which the development of intelligence appears discontinuous. Cascades are when one development makes more developments possible. Cycles are when completing a process causes that process to be completed more. And insight is when we acquire a chunk of information that makes solving a lot of other problems easier.
...Recursion, Magic
If you have a system that gets better at making itself get better, it will appear to discontinuously advance. Add in the ability of intelligences to accomplish tasks which previous intelligences labeled impossible, and you have the potential for dramatic advancement.
The Complete Idiot's Guide to Ad Hominem
Engelbart: Insufficiently Recursive
The development of the mouse did lead to a productivity increase. But it didn't lead to a major productivity increase at creating future productivity increases. Therefore, the recursive process didn't take off properly.
Total Nano Domination
If you get a small advantage in nanotechnology, that might not be enough to take over the world. But if you use that small advantage in nanotechnology to gain a major advancement in bots, you could gain an extraordinary amount of power very fast.
Thanksgiving Prayer
Chaotic Inversion
Singletons Rule OK
It is possible to create a singleton that won't do nasty things. This may be preferable to a scenario in which many agents start competing for resources without any way of securing themselves other than constant defense and deterrence.
Disappointment in the Future
A list of Ray Kurzweil's predictions for the period 1999-2009.
Recursive Self-Improvement
When you take a process that is capable of making significant progress developing other processes, and turn it on itself, you should either see it flatline, or FOOM. The likelihood of it doing anything that looks like human-scale progress is unbelievably low.
Hard Takeoff
It seems likely that there will be a discontinuity in the process of AI self-improvement around the time when AIs become capable of doing AI theory. A lot of things have to go exactly right in order to get a slow takeoff, and there is no particular reason to expect them all to happen that way.
Permitted Possibilities, & Locality
Yudkowsky's attempt to summarize Hanson's positions, list the possible futures discussed so far, and identify which ones seem most and least likely to Yudkowsky.
Underconstrained Abstractions
The problem with selecting abstractions is that there are probably lots of abstractions that fit your data equally well. In that case, we need some other way to decide which abstractions are useful.
Sustained Strong Recursion
Sustained strong recursion has a much larger effect on growth than other possible mechanisms for growth.
Is That Your True Rejection?
People's stated reason for a rejection may not be the same as the actual reason for that rejection.
Artificial Mysterious Intelligence
Attempting to create an intelligence without actually understanding what intelligence is, is a common failure mode. If you want to make actual progress, you need to truly understand what it is that you are trying to make.
True Sources of Disagreement
Yudkowsky's guesses about what the key sticking points in the AI FOOM debate are.
Disjunctions, Antipredictions, Etc.
A few tricks Yudkowsky uses to think about the future.
Bay Area Meetup Wed 12/10 @8pm
The Mechanics of Disagreement
Reasons why aspiring rationalists might still disagree after trading arguments.
What I Think, If Not Why
Yudkowsky's attempt to summarize what he thinks on the subject of Friendly AI, without providing any of the justifications for what he believes.
You Only Live Twice
Yudkowsky's addition to Hanson's endorsement of cryonics.
BHTV: de Grey and Yudkowsky
For The People Who Are Still Alive
Given that we live in a big universe, and that we can't actually determine whether or not a particular person exists (because they will exist anyway in some other Hubble volume or Everett branch), then it makes more sense to care about whether or not people we can influence are having happy lives, than about whether certain people exist in our own local area.
Not Taking Over the World
It's rather difficult to imagine a way in which you could create an AI, and not somehow either take over or destroy the world. How can you use unlimited power in such a way that you don't become a malevolent deity, in the Epicurean sense?
Visualizing Eutopia
Trying to imagine a Eutopia is actually difficult. But it is worth trying.
Prolegomena to a Theory of Fun
Fun Theory is an attempt to actually answer questions about eternal boredom that are more often posed and left hanging. Attempts to visualize Utopia are often defeated by standard biases, such as the attempt to imagine a single moment of good news ("You don't have to work anymore!") rather than a typical moment of daily life ten years later. People also believe they should enjoy various activities that they actually don't. But since human values have no supernatural source, it is quite reasonable for us to try to understand what we want. There is no external authority telling us that the future of humanity should not be fun.
High Challenge
Life should not always be made easier for the same reason that video games should not always be made easier. Think in terms of eliminating low-quality work to make way for high-quality work, rather than eliminating all challenge. One needs games that are fun to play and not just fun to win. Life's utility function is over 4D trajectories, not just 3D outcomes. Values can legitimately be over the subjective experience, the objective result, and the challenging process by which it is achieved - the traveller, the destination and the journey.
Complex Novelty
Are we likely to run out of new challenges, and be reduced to playing the same video game over and over? How large is Fun Space? This depends on how fast you learn; the faster you generalize, the more challenges you see as similar to each other. Learning is fun, but uses up fun; you can't have the same stroke of genius twice. But the more intelligent you are, the more potential insights you can understand; human Fun Space is larger than chimpanzee Fun Space, and not just by a linear factor of our brain size. In a well-lived life, you may need to increase in intelligence fast enough to integrate your accumulating experiences. If so, the rate at which new Fun becomes available to intelligence, is likely to overwhelmingly swamp the amount of time you could spend at that fixed level of intelligence. The Busy Beaver sequence is an infinite series of deep insights not reducible to each other or to any more general insight.
Sensual Experience
Much of the anomie and disconnect in modern society can be attributed to our spending all day on tasks (like office work) that we didn't evolve to perform (unlike hunting and gathering on the savanna). Thus, many of the tasks we perform all day do not engage our senses - even the most realistic modern video game is not the same level of sensual experience as outrunning a real tiger on the real savanna. Even the best modern video game is low-bandwidth fun - a low-bandwidth connection to a relatively simple challenge, which doesn't fill our brains well as a result. But future entities could have different senses and higher-bandwidth connections to more complicated challenges, even if those challenges didn't exist on the savanna.
Living By Your Own Strength
Our hunter-gatherer ancestors strung their own bows, wove their own baskets and whittled their own flutes. Part of our alienation from our design environment is the number of tools we use that we don't understand and couldn't make for ourselves. It's much less fun to read something in a book than to discover it for yourself. Specialization is critical to our current civilization. But the future does not have to be a continuation of this trend in which we rely more and more on things outside ourselves which become less and less comprehensible. With a surplus of power, you could begin to rethink the life experience as a road to internalizing new strengths, not just staying alive efficiently through extreme specialization.
Rationality Quotes 20
Imaginary Positions
People who are not members of a minority group may somehow come to believe that members of this group possess certain traits which seem to "fit". These traits are not required to have any connection to the real traits of that group.
Harmful Options
Offering people more choices that differ along many dimensions, may diminish their satisfaction with their final choice. Losses are more painful than the corresponding gains are pleasurable, so people think of the dimensions along which their final choice was inferior, and of all the other opportunities passed up. If you can only choose one dessert, you're likely to be happier choosing from a menu of two than from a menu of fourteen. Refusing tempting choices consumes mental energy and decreases performance on other cognitive tasks. A video game that contained an always-visible easier route through, would probably be less fun to play even if that easier route were deliberately foregone. You can imagine a Devil who follows someone around, making their life miserable, solely by offering them options which are never actually taken. And what if a worse option is taken due to a predictable mistake? There are many ways to harm people by offering them more choices.
Devil's Offers
It is dangerous to live in an environment in which a single failure of resolve, throughout your entire life, can result in a permanent addiction or in a poor edit of your own brain. For example, a civilization which is constantly offering people tempting ways to shoot off their own feet - for example, offering them a cheap escape into eternal virtual reality, or customized drugs. It requires a constant stern will that may not be much fun. And it's questionable whether a superintelligence that descends from above to offer people huge dangerous temptations that they wouldn't encounter on their own, is helping.
Nonperson Predicates
An AI, trying to develop highly accurate models of the people it interacts with, may develop models which are conscious themselves. For ethical reasons, it would be preferable if the AI wasn't creating and destroying people in the course of interpersonal interactions. Resolving this issue requires making some progress on the hard problem of conscious experience. We need some rule which definitely identifies all conscious minds as conscious. We can make do if it still identifies some nonconscious minds as conscious.
Nonsentient Optimizers
Discusses some of the problems of, and justification for, creating AIs that are knowably not conscious / sentient / people / citizens / subjective experiencers. We don't want the AI's models of people to be people - we don't want conscious minds trapped helplessly inside it. So we need a way to tell that something is definitely not a person, and in this case, maybe we would like the AI itself to not be a person, which would simplify a lot of ethical issues if we could pull it off. Creating a new intelligent species is not lightly to be undertaken from a purely ethical perspective; if you create a new kind of person, you have to make sure it leads a life worth living.
Nonsentient Bloggers
Eliezer informs readers that he had accidentally published the previous post, "Nonsentient Optimizers", when it was only halfway done.
Can't Unbirth a Child
As a piece of meta advice for how to act when you have more power than you probably should, avoid doing things that cannot be undone. Creating a new sentient being is one of those things to avoid. If you need to rewrite the source code of a nonsentient optimization process, this is less morally problematic than rewriting the source code of a sentient intelligence who doesn't want to be rewritten. Creating new life forms creates such massive issues that it's really better to just not try, at least until we know a lot more.
Amputation of Destiny
C. S. Lewis's Narnia has a problem, and that problem is the super-lion Aslan - who demotes the four human children from the status of main characters, to mere hangers-on while Aslan does all the work. Iain Banks's Culture novels have a similar problem; the humans are mere hangers-on of the superintelligent Minds. We already have strong ethical reasons to prefer to create nonsentient AIs rather than sentient AIs, at least at first. But we may also prefer in just a fun-theoretic sense that we not be overshadowed by hugely more powerful entities occupying a level playing field with us. Entities with human emotional makeups should not be competing on a level playing field with superintelligences - either keep the superintelligences off the playing field, or design the smaller (human-level) minds with a different emotional makeup that doesn't mind being overshadowed.
Dunbar's Function
Robin Dunbar's original calculation showed that the maximum human group size was around 150. But a typical size for a hunter-gatherer band would be 30-50, cohesive online groups peak at 50-60, and small task forces may peak in internal cohesiveness around 7. Our attempt to live in a world of six billion people has many emotional costs: We aren't likely to know our President or Prime Minister, or to have any significant influence over our country's politics, although we go on behaving as if we did. We are constantly bombarded with news about improbably pretty and wealthy individuals. We aren't likely to find a significant profession where we can be the best in our field. But if intelligence keeps increasing, the number of personal relationships we can track will also increase, along with the natural degree of specialization. Eventually there might be a single community of sentients that really was a single community.
A New Day
Try spending a day doing as many new things as possible.
End of 2008 articles
2009 Articles
Free to Optimize
It may be better to create a world that operates by better rules, that you can understand, so that you can optimize your own future, than to create a world that includes some sort of deity that can be prayed to. The human reluctance to have their future controlled by an outside source is a nontrivial part of morality.
The Uses of Fun (Theory)
Fun Theory is important for replying to critics of human progress; for inspiring people to keep working on human progress; for refuting religious arguments that the world could possibly have been benevolently designed; for showing that religious Heavens show the signature of the same human biases that torpedo other attempts at Utopia; and for appreciating the great complexity of our values and of a life worth living, which requires a correspondingly strong effort of AI design to create AIs that can play good roles in a good future.
Growing Up is Hard
Each part of the human brain is optimized for behaving correctly, assuming that the rest of the brain is operating exactly as expected. Change one part, and the rest of your brain may not work as well. Increasing a human's intelligence is not a trivial problem.
Changing Emotions
Creating new emotions seems like a desirable aspect of many parts of Fun Theory, but this is not to be trivially postulated. It's the sort of thing best done with superintelligent help, and slowly and conservatively even then. We can illustrate these difficulties by trying to translate the short English phrase "change sex" into a cognitive transformation of extraordinary complexity and many hidden subproblems.
Rationality Quotes 21
Emotional Involvement
Since the events in video games have no actual long-term consequences, playing a video game is not likely to be nearly as emotionally involving as much less dramatic events in real life. The supposed Utopia of playing lots of cool video games forever is life as a series of disconnected episodes with no lasting consequences. Our current emotions are bound to activities that were subgoals of reproduction in the ancestral environment - but we now pursue these activities as independent goals regardless of whether they lead to reproduction.
Rationality Quotes 22
Serious Stories
Stories and lives are optimized according to rather different criteria. Advice on how to write fiction will tell you that "stories are about people's pain" and "every scene must end in disaster". I once assumed that it was not possible to write any story about a successful Singularity because the inhabitants would not be in any pain; but something about the final conclusion that the post-Singularity world would contain no stories worth telling seemed alarming. Stories in which nothing ever goes wrong are painful to read; would a life of endless success have the same painful quality? If so, should we simply eliminate that revulsion via neural rewiring? Pleasure probably does retain its meaning in the absence of pain to contrast it; they are different neural systems. The present world has an imbalance between pain and pleasure; it is much easier to produce severe pain than correspondingly intense pleasure. One path would be to address the imbalance and create a world with more pleasures, and free of the more grindingly destructive and pointless sorts of pain. Another approach would be to eliminate pain entirely. I feel like I prefer the former approach, but I don't know if it can last in the long run.
Rationality Quotes 23
Continuous Improvement
Humans seem to be on a hedonic treadmill; over time, we adjust to any improvements in our environment - after a month, the new sports car no longer seems quite as wonderful. This aspect of our evolved psychology is not surprising: it is a rare organism in a rare environment whose optimal reproductive strategy is to rest with a smile on its face, feeling happy with what it already has. To entirely delete the hedonic treadmill seems perilously close to tampering with Boredom itself. Is there enough fun in the universe for a transhuman to jog off the treadmill - improve their life continuously, leaping to ever-higher hedonic levels before adjusting to the previous one? Can ever-higher levels of pleasure be created by the simple increase of ever-larger floating-point numbers in a digital pleasure center, or would that fail to have the full subjective quality of happiness? If we continue to bind our pleasures to novel challenges, can we find higher levels of pleasure fast enough, without cheating? The rate at which value can increase as more bits are added, and the rate at which value must increase for eudaimonia, together determine the lifespan of a mind. If minds must use exponentially more resources over time in order to lead a eudaimonic existence, their subjective lifespan is measured in mere millennia even if they can draw on galaxy-sized resources.
Eutopia is Scary
If a citizen of the Past were dropped into the Present world, they would be pleasantly surprised along at least some dimensions; they would also be horrified, disgusted, and frightened. This is not because our world has gone wrong, but because it has gone right. A true Future gone right would, realistically, be shocking to us along at least some dimensions. This may help explain why most literary Utopias fail; as George Orwell observed, "they are chiefly concerned with avoiding fuss". Heavens are meant to sound like good news; political utopias are meant to show how neatly their underlying ideas work. Utopia is reassuring, unsurprising, and dull. Eutopia would be scary. (Of course the vast majority of scary things are not Eutopian, just entropic.) Try to imagine a genuinely better world in which you would be out of place - not a world that would make you smugly satisfied at how well all your current ideas had worked. This proved to be a very important exercise when I tried it; it made me realize that all my old proposals had been optimized to sound safe and reassuring.
Building Weirdtopia
Utopia and Dystopia both confirm the moral sensibilities you started with; whether the world is a libertarian utopia of government non-interference, or a hellish dystopia of government intrusion and regulation, either way you get to say "Guess I was right all along." To break out of this mold, write down the Utopia, and the Dystopia, and then try to write down the Weirdtopia - an arguably-better world that zogs instead of zigging or zagging. (Judging from the comments, this exercise seems to have mostly failed.)
She has joined the Conspiracy
Justified Expectation of Pleasant Surprises
A pleasant surprise probably has a greater hedonic impact than being told about the same positive event long in advance - hearing about the positive event is good news in the moment of first hearing, but you don't have the gift actually in hand. Then you have to wait, perhaps for a long time, possibly comparing the expected pleasure of the future to the lesser pleasure of the present. This argues that, given a choice between two worlds in which the same pleasant events occur - in the first world you are told about them long in advance, in the second they are kept secret until they occur - you would prefer to live in the second world. The importance of hope is widely appreciated - people who do not expect their lives to improve in the future are less likely to be happy in the present - but the importance of vague hope may be understated.
Seduced by Imagination
Vagueness usually has a bad name in rationality, but the Future is something about which, in fact, we do not possess strong reliable specific information. Vague (but justified!) hopes may also be hedonically better. But a more important caution for today's world is that highly specific pleasant scenarios can exert a dangerous power over human minds - suck out our emotional energy, make us forget what we don't know, and cause our mere actual lives to pale by comparison. (This post is not about Fun Theory proper, but it contains an important warning about how not to use Fun Theory.)
Getting Nearer
How should rationalists use their near and far modes of thinking? And how should knowing about near versus far modes influence how we present the things we believe to other people?
In Praise of Boredom
"Boredom" is an immensely subtle and important aspect of human values, nowhere near as straightforward as it sounds to a human. We don't want to get bored with breathing or with thinking. We do want to get bored with playing the same level of the same video game over and over. We don't want changing the shade of the pixels in the game to make it stop counting as "the same game". We want a steady stream of novelty, rather than spending most of our time playing the best video game level so far discovered (over and over) and occasionally trying out a different video game level as a new candidate for "best". These considerations would not arise for most of the utility functions an expected utility maximizer might have.
Sympathetic Minds
Mirror neurons are neurons that fire both when performing an action oneself, and watching someone else perform the same action - for example, a neuron that fires when you raise your hand or watch someone else raise theirs. We predictively model other minds by putting ourselves in their shoes, which is empathy. But some of our desire to help relatives and friends, or be concerned with the feelings of allies, is expressed as sympathy, feeling what (we believe) they feel. Like "boredom", the human form of sympathy would not be expected to arise in an arbitrary expected-utility-maximizing AI. Such an agent would regard any other agents in its environment as a special case of complex systems to be modeled or optimized; it would not feel what they feel.
Interpersonal Entanglement
Our sympathy with other minds makes our interpersonal relationships one of the most complex aspects of human existence. Romance, in particular, is more complicated than being nice to friends and kin, negotiating with allies, or outsmarting enemies - it contains aspects of all three. Replacing human romance with anything simpler or easier would decrease the peak complexity of the human species - a major step in the wrong direction, it seems to me. This is my problem with proposals to give people perfect, nonsentient sexual/romantic partners, which I usually refer to as "catgirls" ("catboys"). The human species does have a statistical sex problem: evolution has not optimized the average man to make the average woman happy or vice versa. But there are less sad ways to solve this problem than both genders giving up on each other and retreating to catgirls/catboys.
Failed Utopia #4-2
A fictional short story illustrating some of the ideas in Interpersonal Entanglement above. (Many commenters seemed to like this story, and some said that the ideas were easier to understand in this form.)
Investing for the Long Slump
What should you do if you think that the world's economy is going to stay bad for a very long time? How could such a scenario happen?
Higher Purpose
Having a Purpose in Life consistently shows up as something that increases stated well-being. Of course, the problem with trying to pick out "a Purpose in Life" in order to make yourself happier, is that this doesn't take you outside yourself; it's still all about you. To find purpose, you need to turn your eyes outward to look at the world and find things there that you care about - rather than obsessing about the wonderful spiritual benefits you're getting from helping others. In today's world, most of the highest-priority legitimate Causes consist of large groups of people in extreme jeopardy: Aging threatens the old, starvation threatens the poor, extinction risks threaten humanity as a whole. If the future goes right, many and perhaps all such problems will be solved - depleting the stream of victims to be helped. Will the future therefore consist of self-obsessed individuals, with nothing to take them outside themselves? I suggest, though, that even if there were no large groups of people in extreme jeopardy, we would still, looking around, find things outside ourselves that we cared about - friends, family; truth, freedom... Nonetheless, if the Future goes sufficiently well, there will come a time when you could search the whole of civilization, and never find a single person so much in need of help, as dozens you now pass on the street. If you do want to save someone from death, or help a great many people, then act now; your opportunity may not last, one way or another.
Rationality Quotes 24
The Fun Theory Sequence
Describes some of the many complex considerations that determine what sort of happiness we most prefer to have - given that many of us would decline to just have an electrode planted in our pleasure centers.
BHTV: Yudkowsky / Wilkinson
31 Laws of Fun
A brief summary of principles for writing fiction set in a eutopia.
OB Status Update
Rationality Quotes 25
Value is Fragile
If things go right, the future will be an interesting universe, one that would be incomprehensible to us today. There are a lot of things that humans value such that, if you got everything else right when building an AI but left out that one thing, the future would wind up looking dull, flat, pointless, or empty. Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals will contain almost nothing of worth.
Three Worlds Collide (0/8)
The Baby-Eating Aliens (1/8)
Future explorers discover an alien civilization, and learn something unpleasant about it.
War and/or Peace (2/8)
The true prisoner's dilemma against aliens. The conference struggles to decide the appropriate course of action.
The Super Happy People (3/8)
Humanity encounters new aliens that see the existence of pain amongst humans as morally unacceptable.
Interlude with the Confessor (4/8)
Akon talks things over with the Confessor, and receives a history lesson.
Three Worlds Decide (5/8)
The Superhappies propose a compromise.
Normal Ending: Last Tears (6/8)
Humanity accepts the Superhappies' bargain.
True Ending: Sacrificial Fire (7/8)
The Impossible Possible World tries to save humanity.
Epilogue: Atonement (8/8)
The last moments aboard the Impossible Possible World.
The Thing That I Protect
The cause that drives Yudkowsky isn't Friendly AI, and it isn't even specifically about preserving human values. It's simply about a future that's a lot better than the present.
...And Say No More Of It
In the previous couple of months, Overcoming Bias had focused too much on singularity-related issues and not enough on rationality. A two-month moratorium on the topic of the singularity/intelligence explosion is imposed.
(Moral) Truth in Fiction?
It is possible to convey moral ideas in a clearer way through fiction than through abstract argument. Stories may also help us get closer to thinking about moral issues in near mode. Don't discount moral arguments just because they're written as fiction.
Informers and Persuaders
A purely hypothetical scenario about a world containing some authors trying to persuade people of a particular theory, and some authors simply trying to share valuable information.
Cynicism in Ev-Psych (and Econ?)
Evolutionary Psychology and Microeconomics seem to develop different types of cynical theories, and are cynical about different things.
The Evolutionary-Cognitive Boundary
It's worth drawing a sharp boundary between evolutionary reasons for behavior and cognitive reasons for behavior.
An Especially Elegant Evpsych Experiment
An experiment comparing expected parental grief at the death of a child at different ages, to the reproductive success rate of children at that age in a hunter gatherer tribe.
Rationality Quotes 26
An African Folktale
A story that seems to point to some major cultural differences.
Cynical About Cynicism
Much of cynicism seems to be about signaling sophistication, rather than sharing uncommon, true, and important insights.
Good Idealistic Books are Rare
Much of our culture is the official view, not the idealistic view.
Against Maturity
Dividing the world up into "childish" and "mature" is not a useful way to think.
Pretending to be Wise
Trying to signal wisdom or maturity by taking a neutral position is very seldom the right course of action.
Wise Pretensions v.0
An earlier post, on the same topic as yesterday's post.
Rationality Quotes 27
Fairness vs. Goodness
An experiment in which two unprepared subjects play an asymmetric version of the Prisoner's Dilemma. Is the best outcome the one where each player gets as many points as possible, or the one in which each player gets about the same number of points?
On Not Having an Advance Abyssal Plan
Don't say that you'll figure out a solution to the worst case scenario if the worst case scenario happens. Plan it out in advance.
About Less Wrong
Formative Youth
People underestimate the extent to which their own beliefs and attitudes are influenced by their experiences as a child.
Tell Your Rationalist Origin Story
Markets are Anti-Inductive
The standard theory of efficient markets says that exploitable regularities in the past shouldn't be exploitable in the future. If everybody knows that "stocks have always gone up", then there's no reason to sell them.
Issues, Bugs, and Requested Features
The Most Important Thing You Learned
The Most Frequently Useful Thing
That You'd Tell All Your Friends
Test Your Rationality
You should try hard and often to test your rationality, but how can you do that?
Unteachable Excellence
If it were possible to teach people reliably how to become exceptional, then such excellence would no longer be exceptional.
The Costs of Rationality
Teaching the Unteachable
There are many things we do without being able to articulate how we do them. Teaching them is therefore a challenge.
No, Really, I've Deceived Myself
Some people who have fallen into self-deception haven't actually deceived themselves; they merely believe that they have deceived themselves, without actually having done so.
The ethic of hand-washing and community epistemic practice
Belief in Self-Deception
Deceiving yourself is harder than it seems. What looks like a successfully adopted false belief may actually be just a belief in false belief.
Rationality and Positive Psychology
Posting now enabled
Kinnaird's truels
Recommended Rationalist Resources
Information cascades
Is it rational to take psilocybin?
Does blind review slow down science?
Formalization is a rationality technique
Slow down a little... maybe?
Checklists
The Golem
Simultaneously Right and Wrong
Moore's Paradox
People often mistake reasons for endorsing a proposition for reasons to believe that proposition.
It's the Same Five Dollars!
Lies and Secrets
The Mystery of the Haunted Rationalist
Don't Believe You'll Self-Deceive
It may be wise to tell yourself that you will not be able to successfully deceive yourself, because by telling yourself this, you may make it true.
The Wrath of Kahneman
The Mistake Script
LessWrong anti-kibitzer (hides comment authors and vote counts)
You May Already Be A Sinner
Striving to Accept
Trying extra hard to believe something seems like Dark Side Epistemology, but what about trying extra hard to accept something that you know is true?
Software tools for community truth-seeking
Wanted: Python open source volunteers
Selective processes bring tag-alongs (but not always!)
Adversarial System Hats
Beginning at the Beginning
The Apologist and the Revolutionary
Raising the Sanity Waterline
Behind every particular failure of social rationality is a larger and more general failure of social rationality; even if all religious content were deleted tomorrow from all human minds, the larger failures that permit religion would still be present. Religion may serve the function of an asphyxiated canary in a coal mine - getting rid of the canary doesn't get rid of the gas. Even a complete social victory for atheism would only be the beginning of the real work of rationalists. What could you teach people without ever explicitly mentioning religion, that would raise their general epistemic waterline to the point that religion went underwater?
So you say you're an altruist...
A Sense That More Is Possible
The art of human rationality may not have been much developed because its practitioners lack a sense that vastly more is possible. The level of expertise that most rationalists strive to develop is not on a par with the skills of a professional mathematician - more like that of a strong casual amateur. Self-proclaimed "rationalists" don't seem to get huge amounts of personal mileage out of their craft, and no one sees a problem with this. Yet rationalists get less systematic training in a less systematic context than a first-dan black belt gets in hitting people.
Talking Snakes: A Cautionary Tale
Boxxy and Reagan
Dialectical Bootstrapping
Is Santa Real?
Epistemic Viciousness
An essay by Gillian Russell on "Epistemic Viciousness in the Martial Arts" generalizes amazingly to possible and actual problems with building a community around rationality. Most notably the extreme dangers associated with "data poverty" - the difficulty of testing the skills in the real world. But also such factors as the sacredness of the dojo, the investment in teachings long-practiced, the difficulty of book learning that leads into the need to trust a teacher, deference to historical masters, and above all, living in data poverty while continuing to act as if the luxury of trust is possible.
On the Care and Feeding of Young Rationalists
The Least Convenient Possible World
Closet survey #1
Soulless morality
The Skeptic's Trilemma
Schools Proliferating Without Evidence
The branching schools of "psychotherapy", another domain in which experimental verification was weak (nonexistent, actually), show that an aspiring craft lives or dies by the degree to which it can be tested in the real world. In the absence of that testing, one becomes prestigious by inventing yet another school and having students, rather than excelling at any visible performance criterion. The field of hedonic psychology (happiness studies) began, to some extent, with the realization that you could measure happiness - that there was a family of measures that by golly did validate well against each other. The act of creating a new measurement creates new science; if it's a good measurement, you get good science.
Really Extreme Altruism
Storm by Tim Minchin
3 Levels of Rationality Verification
How far the craft of rationality can be taken, depends largely on what methods can be invented for verifying it. Tests seem usefully stratifiable into reputational, experimental, and organizational. A "reputational" test is some real-world problem that tests the ability of a teacher or a school (like running a hedge fund, say) - "keeping it real", but without being able to break down exactly what was responsible for success. An "experimental" test is one that can be run on each of a hundred students (such as a well-validated survey). An "organizational" test is one that can be used to preserve the integrity of organizations by validating individuals or small groups, even in the face of strong incentives to game the test. The strength of solution invented at each level will determine how far the craft of rationality can go in the real world.
The Tragedy of the Anticommons
Are You a Solar Deity?
In What Ways Have You Become Stronger?
Brainstorming verification tests: along what dimensions do you think you've improved due to "rationality"?
Taboo "rationality," please.
Science vs. art
What Do We Mean By "Rationality"?
When we talk about rationality, we're generally talking about either epistemic rationality (systematic methods of finding out the truth) or instrumental rationality (systematic methods of making the world more like we would like it to be). We can discuss these in the forms of probability theory and decision theory, but this doesn't fully cover the difficulty of being rational as a human. There is a lot more to rationality than just the formal theories.
Comments for "Rationality"
The "Spot the Fakes" Test
On Juvenile Fiction
Rational Me or We?
Dead Aid
Tarski Statements as Rationalist Exercise
The Pascal's Wager Fallacy Fallacy
People hear about a gamble involving a big payoff, and dismiss it as a form of Pascal's Wager. But the size of the payoff is not the flaw in Pascal's Wager. Just because an option has a very large potential payoff does not mean that the probability of getting that payoff is small, or that there are other possibilities that will cancel with it.
Never Leave Your Room
Rationalist Storybooks: A Challenge
A corpus of our community's knowledge
Little Johny Bayesian
How to Not Lose an Argument
Counterfactual Mugging
Rationalist Fiction
What works of fiction are out there that show characters who have acquired their skills at rationality through practice, and who we can watch in the act of employing those skills?
Rationalist Poetry Fans, Unite!
Precommitting to paying Omega.
Why Our Kind Can't Cooperate
The atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc crowd, aka "the nonconformist cluster", seems to be stunningly bad at coordinating group projects. There are a number of reasons for this, but one of them is that people are as reluctant to speak agreement out loud, as they are eager to voice disagreements - the exact opposite of the situation that obtains in more cohesive and powerful communities. This is not rational either! It is dangerous to be half a rationalist (in general), and this also applies to teaching only disagreement but not agreement, or only lonely defiance but not coordination. The pseudo-rationalist taboo against expressing strong feelings probably doesn't help either.
Just a reminder: Scientists are, technically, people.
Support That Sounds Like Dissent
Tolerate Tolerance
One of the likely characteristics of someone who sets out to be a "rationalist" is a lower-than-usual tolerance for flawed thinking. This makes it very important to tolerate other people's tolerance - to avoid rejecting them because they tolerate people you wouldn't - since otherwise we must all have exactly the same standards of tolerance in order to work together, which is unlikely. Even if someone has a nice word to say about complete lunatics and crackpots - so long as they don't literally believe the same ideas themselves - try to be nice to them? Intolerance of tolerance corresponds to punishment of non-punishers, a very dangerous game-theoretic idiom that can lock completely arbitrary systems in place even when they benefit no one at all.
Mind Control and Me
Individual Rationality Is a Matter of Life and Death
The Power of Positivist Thinking
Don't Revere The Bearer Of Good Info
You're Calling *Who* A Cult Leader?
Paul Graham gets exactly the same accusations about "cults" and "echo chambers" and "coteries" that I do, in exactly the same tone - e.g. comparing the long hours worked by Y Combinator startup founders to the sleep-deprivation tactic used in cults, or claiming that founders were asked to move to the Bay Area startup hub as a cult tactic of separation from friends and family. This is bizarre, considering our relative surface risk factors. It just seems to be a failure mode of the nonconformist community in general. By far the most cultish-looking behavior on Hacker News is people trying to show off how willing they are to disagree with Paul Graham, which, I can personally testify, feels really bizarre when you're the target. Admiring someone shouldn't be so scary - I don't hold back so much when praising e.g. Douglas Hofstadter; in this world there are people who have pulled off awesome feats and it is okay to admire them highly.
Cached Selves
Eliezer Yudkowsky Facts
When Truth Isn't Enough
BHTV: Yudkowsky & Adam Frank on "religious experience"
I'm confused. Could someone help?
Playing Video Games In Shuffle Mode
Book: Psychiatry and the Human Condition
Thoughts on status signals
Bogus Pipeline, Bona Fide Pipeline
On Things that are Awesome
Seven thoughts: I can list more than one thing that is awesome; when I think of "Douglas Hofstadter" I am really thinking of his all-time greatest work; the greatest work is not the person; when we imagine other people we are imagining their output, so the real Douglas Hofstadter is the source of "Douglas Hofstadter"; I most strongly get the sensation of awesomeness when I see someone outdoing me overwhelmingly, at some task I've actually tried; we tend to admire unique detailed awesome things and overlook common nondetailed awesome things; religion and its bastard child "spirituality" tends to make us overlook human awesomeness.
Hyakujo's Fox
Terrorism is not about Terror
The Implicit Association Test
Contests vs. Real World Problems
The Sacred Mundane
There are a lot of bad habits of thought that have developed to defend religious and spiritual experience. They aren't worth saving, even if we discard the original lie. Let's just admit that we were wrong, and enjoy the universe that's actually here.
Extreme updating: The devil is in the missing details
Spock's Dirty Little Secret
The Good Bayesian
Fight Biases, or Route Around Them?
Why *I* fail to act rationally
Open Thread: March 2009
Two Blegs
Your Price for Joining
The game-theoretical puzzle of the Ultimatum game has its reflection in a real-world dilemma: How much do you demand that an existing group adjust toward you, before you will adjust toward it? Our hunter-gatherer instincts will be tuned to groups of 40 with very minimal administrative demands and equal participation, meaning that we underestimate the inertia of larger and more specialized groups and demand too much before joining them. In other groups this resistance can be overcome by affective death spirals and conformity, but rationalists think themselves too good for this - with the result that people in the nonconformist cluster often set their joining prices way way way too high, like a 50-way split with each player demanding 20% of the money (demands totaling 1000% of the pot, so no agreement is possible). Nonconformists need to move in the direction of joining groups more easily, even in the face of annoyances and apparent unresponsiveness. If an issue isn't worth personally fixing by however much effort it takes, it's not worth a refusal to contribute.
Sleeping Beauty gets counterfactually mugged
The Mind Is Not Designed For Thinking
Crowley on Religious Experience
Can Humanism Match Religion's Output?
Anyone with a simple and obvious charitable project - responding with food and shelter to a tidal wave in Thailand, say - would be better off by far pleading with the Pope to mobilize the Catholics, rather than with Richard Dawkins to mobilize the atheists. For so long as this is true, any increase in atheism at the expense of Catholicism will be something of a hollow victory, regardless of all other benefits. Can no rationalist match the motivation that comes from the irrational fear of Hell? Or does the real story have more to do with the motivating power of physically meeting others who share your cause, and group norms of participating?
On Seeking a Shortening of the Way
Altruist Coordination -- Central Station
Less Wrong Facebook Page
The Hidden Origins of Ideas
Defense Against The Dark Arts: Case Study #1
Church vs. Taskforce
Churches serve a role of providing community - but they aren't explicitly optimized for this, because their nominal role is different. If we desire community without church, can we go one better in the course of deleting religion? There's a great deal of work to be done in the world; rationalist communities might potentially organize themselves around good causes, while explicitly optimizing for community.
When It's Not Right to be Rational
The Zombie Preacher of Somerset
Hygienic Anecdotes
Rationality: Common Interest of Many Causes
Many causes benefit particularly from the spread of rationality - because it takes a little more rationality than usual to see their case, as a supporter, or even just a supportive bystander. Not just the obvious causes like atheism, but things like marijuana legalization. In the case of my own work this effect was strong enough that after years of bogging down I threw up my hands and explicitly recursed on creating rationalists. If such causes can come to terms with not individually capturing all the rationalists they create, then they can mutually benefit from mutual effort on creating rationalists. This cooperation may require learning to shut up about disagreements between such causes, and not fight over priorities, except in specialized venues clearly marked.
Ask LW: What questions to test in our rationality questionnaire?
Requesting suggestions for an actual survey to be run.
Bay area OB/LW meetup, today, Sunday, March 29, at 5pm
Akrasia, hyperbolic discounting, and picoeconomics
Deliberate and spontaneous creativity
Most Rationalists Are Elsewhere
Framing Effects in Anthropology
Kling, Probability, and Economics
Helpless Individuals
When you consider that our grouping instincts are optimized for 50-person hunter-gatherer bands where everyone knows everyone else, it begins to seem miraculous that modern-day large institutions survive at all. And in fact, the vast majority of large modern-day institutions simply fail to exist in the first place. This is why funding of Science is largely through money thrown at Science rather than donations from individuals - research isn't a good emotional fit for the rare problems that individuals can manage to coordinate on. In fact very few things are, which is why e.g. 200 million adult Americans have such tremendous trouble supervising the 535 members of Congress. Modern humanity manages to put forth very little in the way of coordinated individual effort to serve our collective individual interests.
The Benefits of Rationality?
Money: The Unit of Caring
Omohundro's resource balance principle implies that the inside of any approximately rational system has a common currency of expected utilons. In our world, this common currency is called "money" and it is the unit of how much society cares about something - a brutal yet obvious point. Many people, seeing a good cause, would prefer to help it by donating a few volunteer hours. But this avoids the tremendous gains of comparative advantage, professional specialization, and economies of scale - the reason we're not still in caves, the only way anything ever gets done in this world, the tools grownups use when anyone really cares. Donating hours worked within a professional specialty and paying-customer priority, whether directly, or by donating the money earned to hire other professional specialists, is far more effective than volunteering unskilled hours.
Building Communities vs. Being Rational
Degrees of Radical Honesty
Introducing CADIE
Purchase Fuzzies and Utilons Separately
Wealthy philanthropists typically make the mistake of trying to purchase warm fuzzy feelings, status among friends, and actual utilitarian gains, simultaneously; this results in vague pushes along all three dimensions and a mediocre final result. It should be far more effective to spend some money/effort on buying altruistic fuzzies at maximum optimized efficiency (e.g. by helping people in person and seeing the results in person), buying status at maximum efficiency (e.g. by donating to something sexy that you can brag about, regardless of effectiveness), and spending most of your money on expected utilons (chosen through sheer cold-blooded shut-up-and-multiply calculation, without worrying about status or fuzzies).
Proverbs and Cached Judgments: the Rolling Stone
You don't need Kant
Accuracy Versus Winning
Wrong Tomorrow
Selecting Rationalist Groups
Trying to breed e.g. egg-laying chickens by individual selection can produce odd side effects on the farm level, since a more dominant hen can produce more egg mass at the expense of other hens. Group selection is nearly impossible in Nature, but easy to impose in the laboratory, and group-selecting hens produced substantial increases in efficiency. Though most of my essays are about individual rationality - and indeed, Traditional Rationality also praises the lone heretic more than evil Authority - the real effectiveness of "rationalists" may end up determined by their performance in groups.
Aumann voting; or, How to vote when you're ignorant
"Robot scientists can think for themselves"
Where are we?
The Brooklyn Society For Ethical Culture
Open Thread: April 2009
Rationality is Systematized Winning
The idea behind the statement "Rationalists should win" is not that rationality will make you invincible. It means that if someone who isn't behaving according to your idea of rationality is outcompeting you, predictably and consistently, you should consider that you're not the one being rational.
Another Call to End Aid to Africa
First London Rationalist Meeting upcoming
On dollars, utility, and crack cocaine
Incremental Progress and the Valley
The optimality theorems for probability theory and decision theory are for perfect probability theory and decision theory. There is no theorem that incremental changes toward the ideal, starting from a flawed initial form, must yield incremental progress at each step along the way. Since perfection is unattainable, why dare to try for improvement? But my limited experience with specialized applications suggests that given enough progress, one can achieve huge improvements over baseline - it just takes a lot of progress to get there.
The First London Rationalist Meetup
Why Support the Underdog?
Off-Topic Discussion Thread: April 2009
Voting etiquette
Formalizing Newcomb's
Supporting the underdog is explained by Hanson's Near/Far distinction.
Real-Life Anthropic Weirdness
Extremely rare events can create bizarre circumstances in which people may not be able to effectively communicate about improbability.
Rationalist Wiki
Rationality Toughness Tests
Heuristic is not a bad word
Rationalists should beware rationalism
Newcomb's Problem standard positions
Average utilitarianism must be correct?
Rationalist wiki, redux
What do fellow rationalists think about Mensa?
Extenuating Circumstances
You can excuse other people's shortcomings on the basis of extenuating circumstances, but you shouldn't do that with yourself.
On Comments, Voting, and Karma - Part I
Newcomb's Problem vs. One-Shot Prisoner's Dilemma
What isn't the wiki for?
Eternal Sunshine of the Rational Mind
Of Lies and Black Swan Blowups
Whining-Based Communities
Many communities feed emotional needs by offering their members someone or something to blame for failure - say, those looters who don't approve of your excellence. You can easily imagine some group of "rationalists" congratulating themselves on how reasonable they were, while blaming the surrounding unreasonable society for keeping them down. But this is not how real rationality works - there's no assumption that other agents are rational. We all face unfair tests (and yes, they are unfair to different degrees for different people); and how well you do with your unfair tests, is the test of your existence. Rationality is there to help you win anyway, not to provide a self-handicapping excuse for losing. There are no first-person extenuating circumstances. There is absolutely no point in going down the road of mutual bitterness and consolation, about anything, ever.
Help, help, I'm being oppressed!
Zero-based karma coming through
E-Prime
Mandatory Secret Identities
This post was not well-received, but the point was to suggest that a student must at some point leave the dojo and test their skills in the real world. The aspiration of an excellent student should not consist primarily of founding their own dojo and having their own students.
Rationality, Cryonics and Pascal's Wager
Less Wrong IRC Meetup
"Stuck In The Middle With Bruce"
Extreme Rationality: It's Not That Great
"Playing to Win"
The term "playing to win" comes from Sirlin's book and can be described as using every means necessary to win as long as those means are legal within the structure of the game being played.
Secret Identities vs. Groupthink
Silver Chairs, Paternalism, and Akrasia
Extreme Rationality: It Could Be Great
The uniquely awful example of theism
Beware of Other-Optimizing
Aspiring rationalists often vastly overestimate their own ability to optimize other people's lives. They read nineteen webpages offering productivity advice that doesn't work for them... and then encounter the twentieth page, or invent a new method themselves, and wow, it really works - they've discovered the true method. Actually, they've just discovered the one method in twenty that works for them, and their confident advice is no better than randomly selecting one of the twenty blog posts. Other-Optimizing is exceptionally dangerous when you have power over the other person - for then you'll just believe that they aren't trying hard enough.
How theism works
That Crisis thing seems pretty useful
Spay or Neuter Your Irrationalities
The Unfinished Mystery of the Shangri-La Diet
An intriguing dietary theory that appears to let some people lose substantial amounts of weight, but doesn't seem to work at all for others.
Akrasia and Shangri-La
The Shangri-La diet works amazingly well for some people, but completely fails for others, for no known reason. Since the diet has a metabolic rationale and is not supposed to require willpower, its failure in my case and others' is unambiguously mysterious. If it required a component of willpower, then we might be tempted to blame ourselves for not having willpower. The art of combating akrasia (willpower failure) has the same sort of mysteries and is in the same primitive state; we don't know the deeper rule that explains why a trick works for one person but not another.
Maybe Theism Is OK
Metauncertainty
Is masochism necessary?
Missed Distinctions
Toxic Truth
Too much feedback can be a bad thing
Twelve Virtues booklet printing?
How Much Thought
Awful Austrians
Sunk Cost Fallacy
It's okay to be (at least a little) irrational
Marketing rationalism
Bystander Apathy
The bystander effect is the phenomenon in which people in groups are less likely to take action than a lone individual. There are a few explanations for why this might be the case.
Persuasiveness vs Soundness
GroupThink, Theism ... and the Wiki
Collective Apathy and the Internet
The causes of bystander apathy are even worse on the Internet. There may be an opportunity here for a startup to deliberately try to avert bystander apathy in online group coordination.
Tell it to someone who doesn't care
Bayesians vs. Barbarians
Suppose that a country of rationalists is attacked by a country of Evil Barbarians who know nothing of probability theory or decision theory. There's a certain concept of "rationality" which says that the rationalists inevitably lose, because the Barbarians believe in a heavenly afterlife if they die in battle, while the rationalists would all individually prefer to stay out of harm's way. So the rationalist civilization is doomed; it is too elegant and civilized to fight the savage Barbarians... And then there's the idea that rationalists should be able to (a) solve group coordination problems, (b) care a lot about other people and (c) win...
Actions and Words: Akrasia and the Fruit of Self-Knowledge
Mechanics without wrenches
I Changed My Mind Today - Canned Laughter
Of Gender and Rationality
Analysis of the gender imbalance that appears in "rationalist" communities, suggesting nine possible causes of the effect, and possible corresponding solutions.
Welcome to Less Wrong!
Instrumental Rationality is a Chimera
Practical rationality questionnaire
My Way
I sometimes think of myself as being like the protagonist in a classic SF labyrinth story, wandering further and further into some alien artifact, trying to radio back a description of what I'm seeing, so that I can be followed. But what I'm finding is not just the Way, the thing that lies at the center of the labyrinth; it is also my Way, the path that I would take to come closer to the center, from whatever place I started out. And yet there is still a common thing we are all trying to find. We should be aware that others' shortest paths may not be the same as our own, but this is not the same as giving up the ability to judge or to share.
The Art of Critical Decision Making
The Trouble With "Good"
While we're on the subject of meta-ethics...
Chomsky on reason and science
Anti-rationality quotes
Two-Tier Rationalism
My main problem with utilitarianism
Just for fun - let's play a game.
Rationality Quotes - April 2009
The Epistemic Prisoner's Dilemma
How a pathological procrastinator can lose weight (Anti-akrasia)
Atheist or Agnostic?
Great Books of Failure
Weekly Wiki Workshop and suggested articles
The True Epistemic Prisoner's Dilemma
Spreading the word?
The ideas you're not ready to post
Evangelical Rationality
The Sin of Underconfidence
When subjects know about a bias or are warned about a bias, overcorrection is not unheard of as an experimental result. That's what makes a lot of cognitive subtasks so troublesome - you know you're biased but you're not sure how much, and if you keep tweaking you may overcorrect. The danger of underconfidence (overcorrecting for overconfidence) is that you pass up opportunities on which you could have been successful, fail to challenge difficult enough problems, lose forward momentum and adopt defensive postures, refuse to put the hypothesis of your inability to the test, and lose enough hope of triumph to try hard enough to win. You should ask yourself "Does this way of thinking make me stronger, or weaker?"
Masochism vs. Self-defeat
Well-Kept Gardens Die By Pacifism
Good online communities die primarily by refusing to defend themselves, and so it has been since the days of Eternal September. Anyone acculturated by academia knows that censorship is a very grave sin... in their walled gardens where it costs thousands and thousands of dollars to enter. A community with internal politics will treat any attempt to impose moderation as a coup attempt (since internal politics seem of far greater import than invading barbarians). In rationalist communities this is probably an instance of underconfidence - mildly competent moderators are probably quite trustworthy to wield the banhammer. On Less Wrong, the community is the moderator (via karma) and you will need to trust yourselves enough to wield the power and keep the garden clear.
UC Santa Barbara Rationalists Unite - Saturday, 6PM
LessWrong Boo Vote (Stochastic Downvoting)
Proposal: Use the Wiki for Concepts
Escaping Your Past
Go Forth and Create the Art!
I've developed primarily the art of epistemic rationality, in particular, the arts required for advanced cognitive reductionism... arts like distinguishing fake explanations from real ones and avoiding affective death spirals. There is much else that needs developing to create a craft of rationality - fighting akrasia; coordinating groups; teaching, training, verification, and becoming a proper experimental science; developing better introductory literature... And yet it seems to me that there is a beginning barrier to surpass before you can start creating high-quality craft of rationality, having to do with virtually everyone who tries to think lofty thoughts going instantly astray, or indeed even realizing that a craft of rationality exists and that you ought to be studying cognitive science literature to create it. It's my hope that my writings, as partial as they are, will serve to surpass this initial barrier. The rest I leave to you.
Fix it and tell us what you did
This Didn't Have To Happen
Just a bit of humor...
What's in a name? That which we call a rationalist...
Rational Groups Kick Ass
Instrumental vs. Epistemic -- A Bardic Perspective
Programmatic Prediction markets
Cached Procrastination
Practical Advice Backed By Deep Theories
Knowledge of this heuristic might be useful in fighting akrasia.
(alternate summary:)
Practical advice is genuinely much, much more useful when it's backed up by concrete experimental results, causal models that are actually true, or valid math that is validly interpreted. (Listed in increasing order of difficulty.) Stripping out the theories and giving the mere advice alone wouldn't have nearly the same impact or even the same message; and oddly enough, translating experiments and math into practical advice seems to be a rare niche activity relative to academia. If there's a distinctive LW style, this is it.
"Self-pretending" is not as useful as we think
Where's Your Sense of Mystery?
Less Meta
The fact that this final series was on the craft and the community seems to have delivered a push in something of the wrong direction, (a) steering toward conversation about conversation and (b) making present accomplishment pale in the light of grander dreams. Time to go back to practical advice and deep theories, then.
SIAI call for skilled volunteers and potential interns
The Craft and the Community
Excuse me, would you like to take a survey?
Should we be biased?
Theism, Wednesday, and Not Being Adopted
The End (of Sequences)
Final Words
The conclusion of the Beisutsukai series.
Bayesian Cabaret
Verbal Overshadowing and The Art of Rationality
How Not to be Stupid: Starting Up
How Not to be Stupid: Know What You Want, What You Really Really Want
Epistemic vs. Instrumental Rationality: Approximations
What is control theory, and why do you need to know about it?
Re-formalizing PD
Generalizing From One Example
Generalization From One Example is a tendency to pay too much attention to the few anecdotal pieces of evidence you experienced, and model some general phenomenon based on them. This is a special case of availability bias, and the way in which the mistake unfolds is closely related to the correspondence bias and the hindsight bias.
Wednesday depends on us.
How to come up with verbal probabilities
Fighting Akrasia: Incentivising Action
Fire and Motion
Fiction of interest
How Not to be Stupid: Adorable Maybes
Rationalistic Losing
Rationalist Role in the Information Age
Conventions and Confusing Continuity Conundrums
Open Thread: May 2009
Second London Rationalist Meeting upcoming - Sunday 14:00
TED Talks for Less Wrong
The mind-killer
What I Tell You Three Times Is True
Return of the Survey
Essay-Question Poll: Dietary Choices
Allais Hack -- Transform Your Decisions!
Without models
Bead Jar Guesses
Applied scenario about forming priors.
Special Status Needs Special Support
How David Beats Goliath
How to use "philosophical majoritarianism"
Off Topic Thread: May 2009
Introduction Thread: May 2009
Consider Representative Data Sets
No Universal Probability Space
Wiki.lesswrong.com Is Live
Hardened Problems Make Brittle Models
Beware Trivial Inconveniences
On the Fence? Major in CS
Rationality is winning - or is it?
The First Koan: Drinking the Hot Iron Ball
Epistemic vs. Instrumental Rationality: Case of the Leaky Agent
Replaying History
Framing Consciousness
A Request for Open Problems
How Not to be Stupid: Brewing a Nice Cup of Utilitea
Step Back
You Are A Brain
No One Knows Stuff
Willpower Hax #487: Execute by Default
Rationality in the Media: Don't (New Yorker, May 2009)
Survey Results
A Parable On Obsolete Ideologies
"Open-Mindedness" - the video
Religion, Mystery, and Warm, Soft Fuzzies
Cheerios: An "Untested New Drug"
Essay-Question Poll: Voting
Outward Change Drives Inward Change
Wanting to Want
"What Is Wrong With Our Thoughts"
Bad reasons for a rationalist to lose
Supernatural Math
Rationality quotes - May 2009
Positive Bias Test (C++ program)
Catchy Fallacy Name Fallacy (and Supporting Disagreement)
Inhibition and the Mind
Least Signaling Activities?
Brute-force Music Composition
Changing accepted public opinion and Skynet
Homogeneity vs. heterogeneity (or, What kind of sex is most moral?)
Saturation, Distillation, Improvisation: A Story About Procedural Knowledge And Cookies
This Failing Earth
The Wire versus Evolutionary Psychology
Dissenting Views
Eric Drexler on Learning About Everything
Anime Explains the Epimenides Paradox
Do Fandoms Need Awfulness?
Can we create a function that provably predicts the optimization power of intelligences?
Link: The Case for Working With Your Hands
Image vs. Impact: Can public commitment be counterproductive for achievement?
A social norm against unjustified opinions?
Taking Occam Seriously
The Onion Goes Inside The Biased Mind
The Frontal Syndrome
Open Thread: June 2009
Concrete vs Contextual values
Bioconservative and biomoderate singularitarian positions
Would You Slap Your Father? Article Linkage and Discussion
With whom shall I diavlog?
Mate selection for the men here
Third London Rationalist Meeting
Post Your Utility Function
Probability distributions and writing style
My concerns about the term 'rationalist'
Honesty: Beyond Internal Truth
Macroeconomics, The Lucas Critique, Microfoundations, and Modeling in General
indexical uncertainty and the Axiom of Independence
London Rationalist Meetups bikeshed painting thread
The Aumann's agreement theorem game (guess 2/3 of the average)
Expected futility for humans
You can't believe in Bayes
Less wrong economic policy
The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It
Let's reimplement EURISKO!
If it looks like utility maximizer and quacks like utility maximizer...
Typical Mind and Politics
Why safety is not safe
Rationality Quotes - June 2009
Readiness Heuristics
The two meanings of mathematical terms
The Laws of Magic
Intelligence enhancement as existential risk mitigation
Rationalists lose when others choose
Ask LessWrong: Human cognitive enhancement now?
Don't Count Your Chickens...
Applied Picoeconomics
Representative democracy awesomeness hypothesis
The Physiology of Willpower
Time to See If We Can Apply Anything We Have Learned
Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation
ESR's comments on some EY:OB/LW posts
Nonparametric Ethics
Shane Legg on prospect theory and computational finance
The Domain of Your Utility Function
The Monty Maul Problem
Guilt by Association
Lie to me?
Richard Dawkins TV - Baloney Detection Kit video
Coming Out
The Great Brain is Located Externally
People don't actually remember much of what they know; they only remember how to find it, and the fact that there is something to find. Thus, it's important to know about what's known in various domains, even without knowing the content.
Controlling your inner control circuits
What's In A Name?
Atheism = Untheism + Antitheism
Book Review: Complications
Open Thread: July 2009
Fourth London Rationalist Meeting?
Rationality Quotes - July 2009
Harnessing Your Biases
Avoiding Failure: Fallacy Finding
Not Technically Lying
The enemy within
Media bias
Can chess be a game of luck?
The Dangers of Partial Knowledge of the Way: Failing in School
An interesting speed dating study
Can self-help be bad for you?
Causality does not imply correlation
Recommended reading for new rationalists
Formalized math: dream vs reality
Causation as Bias (sort of)
Debate: Is short term planning in humans due to a short life or due to bias?
Jul 12 Bay Area meetup - Hanson, Vassar, Yudkowsky
Our society lacks good self-preservation mechanisms
Good Quality Heuristics
How likely is a failure of nuclear deterrence?
The Strangest Thing An AI Could Tell You
"Sex Is Always Well Worth Its Two-Fold Cost"
The Dirt on Depression
Fair Division of Black-Hole Negentropy: an Introduction to Cooperative Game Theory
Absolute denial for atheists
Causes of disagreements
The Popularization Bias
Zwicky's Trifecta of Illusions
Are You Anosognosic?
Article upvoting
Sayeth the Girl
Timeless Decision Theory: Problems I Can't Solve
An Akrasia Anecdote
Being saner about gender and rationality
Are you crazy?
Counterfactual Mugging v. Subjective Probability
Creating The Simple Math of Everything
Joint Distributions and the Slow Spread of Good Ideas
Chomsky, Sports Talk Radio, Media Bias, and Me
Outside Analysis and Blind Spots
Shut Up And Guess
Of Exclusionary Speech and Gender Politics
Missing the Trees for the Forest
Deciding on our rationality focus
Fairness and Geometry
It's all in your head-land
An observation on cryocrastination
The Price of Integrity
Are calibration and rational decisions mutually exclusive? (Part one)
The Nature of Offense
People are offended by grabs for status.