Rationality: From AI to Zombies

Internet celebrity Eliezer Yudkowsky drags us through an epic logical journey in his sprawling "Rationality". Drawing on mathematics, philosophy, history, made-up fables, and a deep obsession with science fiction, Yudkowsky lays out his case for the "Bayesian Conspiracy" and his view that real rationality is about "winning." This 1,000-page monster of a book tracks Yudkowsky's intellectual development as he struggles to build a "friendly" AI - a challenge which he believes is critical for our survival as a species. Of course, our self-appointed benefactor has a rather lofty assessment of his own intelligence, and his frequent references to how smart he is really grated on me as I slogged through this behemoth. Yudkowsky halfway redeems himself with snappy lines like "Science has heroes, but no gods" and some dry humor:

Back when the Greek philosophers were debating what this “real world” thingy might be made of, there were many positions. Heraclitus said, “All is fire.” Thales said, “All is water.” Pythagoras said, “All is number.” Score: Heraclitus 0, Thales 0, Pythagoras 1.

Yudkowsky is really down on religion, school, and academia, and really big on Occam's Razor, atheism, and Bayesian inference. Indeed, Bayesianism is really the thread that ties the whole book together. And I've got to admit, Yudkowsky is pretty convincing. Combined with the last chapters of Godfrey-Smith's "Theory and Reality", Yudkowsky's exposition of Bayesian epistemology left me thinking that perhaps Bayesianism is the one true philosophy of science. It's a bit of a let-down because you can never be certain of anything in Bayes-land, but it does offer a clear framework for deciding between competing options and maybe that's good enough. Certainly it's a useful set of ideas for my 2017 reading theme on "The Integrity of Western Science". I do actually quite like Yudkowsky's idea about the public perception of science:

I strongly suspect that a major part of science’s PR problem in the population at large is people who instinctively believe that if knowledge is given away for free, it cannot be important. If you had to undergo a fearsome initiation ritual to be told the truth about evolution, maybe people would be more satisfied with the answer.

Of course, Yudkowsky himself owes a great (and acknowledged) debt to his intellectual forebears. He quotes Orwell, Hofstadter, Cialdini, Bostrom (see "Superintelligence"), Tegmark, and many others. I even caught echoes of "Kindly Inquisitors" in his railing against postmodern epistemology.

And I would pay good money to see a Yudkowsky vs. Taleb smackdown (intellectual... or otherwise - does Yudkowsky even lift?!). Although they are similar in their contempt for academia and "experts," I suspect their belief systems actually diverge pretty sharply when we get into the specifics. For example, I recently finished Taleb's new "Skin in the Game" in which he talks about how the "intellectual yet idiot" class can't understand complex systems because they don't appreciate "emergent behavior." Yet Yudkowsky goes all-in against this dark magic:

It is far better to say “magic,” than “complexity” or “emergence”; the latter words create an illusion of understanding.

Well, one can hope. At least it'd be entertaining - Yudkowsky has the same flair for controversial statements as Taleb. I'm just waiting to slide this one into conversation at a dinner party with the comparative literature crowd:

I hold that everyone needs to learn at least one technical subject: physics, computer science, evolutionary biology, Bayesian probability theory, or something. Someone with no technical subjects under their belt has no referent for what it means to “explain” something.

My (copious) highlights below. Hey - cut me some slack. This spawn of Cthulhu was 1,000 pages and it did have plenty of good stuff in it.


Preface

A third huge mistake I made was to focus too much on rational belief, too little on rational action.

(I want to single out Scott Alexander in particular here, who is a nicer person than I am and an increasingly amazing writer on these topics, and may deserve part of the credit for making the culture of Less Wrong a healthy one.)

Biases: An Introduction by Rob Bensinger

A cognitive bias is a systematic way that your innate patterns of thought fall short of truth (or some other attainable goal, such as happiness).

There’s a completely different notion of “rationality” studied by mathematicians, psychologists, and social scientists. Roughly, it’s the idea of doing the best you can with what you’ve got.

And the bias blind spot, unlike many biases, is especially severe among people who are especially intelligent, thoughtful, and open-minded.

Book I - Map and Territory


Part A - Predictably Wrong

Epistemic rationality: systematically improving the accuracy of your beliefs.
Instrumental rationality: systematically achieving your values.

So rationality is about forming true beliefs and making winning decisions.

Experimental psychologists use two gold standards: probability theory, and decision theory.

I use the term “rationality” normatively, to pick out desirable patterns of thought.

So is rationality orthogonal to feeling? No; our emotions arise from our models of reality.

Becoming more rational—arriving at better estimates of how-the-world-is—can diminish feelings or intensify them.

Error is not an exceptional condition; it is success that is a priori so improbable that it requires an explanation.

conjunction fallacy occurs because we “substitute judgment of representativeness for judgment of probability.”

Adding detail can make a scenario SOUND MORE PLAUSIBLE, even though the event necessarily BECOMES LESS PROBABLE.
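
Since this is just the product rule, a two-line check makes it concrete. Here is a minimal sketch with made-up numbers (the flood/dam scenario and both probabilities are hypothetical, not from the book):

```python
# Hypothetical numbers: P(A) for "a flood next year", P(B|A) for "a dam
# failure, given the flood". The product rule forces the more detailed
# scenario to be less probable: P(A and B) = P(A) * P(B|A) <= P(A).
p_flood = 0.10
p_dam_given_flood = 0.30

p_flood_and_dam = p_flood * p_dam_given_flood
print(p_flood_and_dam)  # ~0.03, necessarily <= 0.10, however plausible it sounds
```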

The Sydney Opera House may be the most legendary construction overrun of all time, originally estimated to be completed in 1963 for $7 million, and finally completed in 1973 for $102 million.

More generally, this phenomenon is known as the “planning fallacy.” The planning fallacy is that people think they can plan, ha ha.

Reality, it turns out, usually delivers results somewhat worse than the “worst case.”

A similar finding is that experienced outsiders, who know less of the details, but who have relevant memory to draw upon, are often much less optimistic and much more accurate than the actual planners and implementers.

Be not too quick to blame those who misunderstand your perfectly clear sentences, spoken or written. Chances are, your words are more ambiguous than you think.

When I observe failures of explanation, I usually see the explainer taking one step back, when they need to take two or more steps back.

The whole idea of Science is, simply, reflective reasoning about a more reliable process for making the contents of your mind mirror the contents of the world.

Science makes sense, when you think about it. But mice can’t think about thinking, which is why they don’t have Science.

Part B - Fake Beliefs

It is a great strength of Homo sapiens that we can, better than any other species in the world, learn to model the unseen. It is also one of our great weak points. Humans often believe in things that are not only unseen but unreal.

Above all, don’t ask what to believe — ask what to anticipate. Every question of belief should flow from a question of anticipation, and that question of anticipation should be the center of the inquiry.

People don’t believe in belief in belief, they just believe in belief.

He said, “Well, um, I guess we may have to agree to disagree on this.” I said: “No, we can’t, actually. There’s a theorem of rationality called Aumann’s Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong.”

This I call “pretending to be Wise.” Of course there are many ways to try and signal wisdom. But trying to signal wisdom by refusing to make guesses — refusing to sum up evidence — refusing to pass judgment — refusing to take sides — staying above the fray and looking down with a lofty and condescending gaze — which is to say, signaling wisdom by saying and doing nothing — well, that I find particularly pretentious.

A playground is a great place to be a bully, and a terrible place to be a victim, if the teachers don’t care who started it.

On this point I’d advise remembering that neutrality is a definite judgment. It is not staying above anything. It is putting forth the definite and particular position that the balance of evidence in a particular case licenses only one summation, which happens to be neutral. This, too, can be wrong; propounding neutrality is just as attackable as propounding any particular side.

There’s a difference between: Passing neutral judgment; Declining to invest marginal resources; Pretending that either of the above is a mark of deep wisdom, maturity, and a superior vantage point; with the corresponding implication that the original sides occupy lower vantage points that are not importantly different from up there.

The orthogonality of religion and factual questions is a recent and strictly Western concept. The people who wrote the original scriptures didn’t even know the difference.

In contrast, the people who invented the Old Testament stories could make up pretty much anything they liked. Early Egyptologists were genuinely shocked to find no trace whatsoever of Hebrew tribes having ever been in Egypt — they weren’t expecting to find a record of the Ten Plagues, but they expected to find something. As it turned out, they did find something. They found out that, during the supposed time of the Exodus, Egypt ruled much of Canaan. That’s one huge historical error, but if there are no libraries, nobody can call you on it.

The modern concept of religion as purely ethical derives from every other area’s having been taken over by better institutions. Ethics is what’s left.

The idea that religion is a separate magisterium that cannot be proven or disproven is a Big Lie — a lie which is repeated over and over again, so that people will say it without thinking; yet which is, on critical examination, simply false.

On the other hand, it is very easy for a human being to genuinely, passionately, gut-level belong to a group, to cheer for their favorite sports team. (This is the foundation on which rests the swindle of “Republicans vs. Democrats” and analogous false dilemmas in other countries, but that’s a topic for another time.) Identifying with a tribe is a very strong emotional force. People will die for it. And once you get people to identify with a tribe, the beliefs which are attire of that tribe will be spoken with the full passion of belonging to that tribe.

What does it mean to call for a “democratic” solution if you don’t have a conflict-resolution mechanism in mind? I think it means that you have said the word “democracy,” so the audience is supposed to cheer. It’s not so much a propositional statement, as the equivalent of the “Applause” light that tells a studio audience when to clap.

But if no specifics follow, the sentence is probably an applause light.

Part C - Noticing Confusion

This is why rationalists put such a heavy premium on the paradoxical-seeming claim that a belief is only really worthwhile if you could, in principle, be persuaded to believe otherwise.

Therefore rational beliefs are contagious, among honest folk who believe each other to be honest. And it’s why a claim that your beliefs are not contagious — that you believe for private reasons which are not transmissible — is so suspicious. If your beliefs are entangled with reality, they should be contagious among honest folk.

Science is made up of generalizations which apply to many particular instances, so that you can run new real-world experiments which test the generalization, and thereby verify for yourself that the generalization is true, without having to trust anyone’s authority. Science is the publicly reproducible knowledge of humankind.

You begin to see, I hope, why I identify Science with generalizations, rather than the history of any one experiment. A historical event happens once; generalizations apply over many events. History is not reproducible; scientific generalizations are.

But should the closed-access journal be further canonized as “science”? Should we allow it into the special, protected belief pool? For myself, I think science would be better served by the dictum that only open knowledge counts as the public, reproducible knowledge pool of humankind.

It is convenient to measure evidence in bits — not like bits on a hard drive, but mathematician’s bits, which are conceptually different. Mathematician’s bits are the logarithms, base 1/2, of probabilities. For example, if there are four possible outcomes A, B, C, and D, whose probabilities are 50%, 25%, 12.5%, and 12.5%, and I tell you the outcome was “D,” then I have transmitted three bits of information to you, because I informed you of an outcome whose probability was 1/8.

Subconscious processes can’t find one out of a million targets using only 19 bits of entanglement any more than conscious processes can. Hunches can be mysterious to the huncher, but they can’t violate the laws of physics.
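
Both of these highlights lean on the same arithmetic, so here is a quick check of it in Python. The helper name `bits` is mine; the probabilities are the book's examples:

```python
import math

def bits(p):
    # Mathematician's bits: -log2(probability of the outcome)
    return -math.log2(p)

print(bits(1 / 8))          # 3.0 bits, the book's outcome "D"
print(bits(1 / 1_000_000))  # ~19.93 bits, so 19 bits falls just short of
                            # singling out one target in a million
```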

The more complex an explanation is, the more evidence you need just to find it in belief-space. (In Traditional Rationality this is often phrased misleadingly, as “The more complex a proposition is, the more evidence is required to argue for it.”)

Occam’s Razor is often phrased as “The simplest explanation that fits the facts.” Robert Heinlein replied that the simplest explanation is “The lady down the street is a witch; she did it.”

Why, exactly, is the length of an English sentence a poor measure of complexity? Because when you speak a sentence aloud, you are using labels for concepts that the listener shares — the receiver has already stored the complexity in them.

“Witch,” itself, is a label for some extraordinary assertions — just because we all know what it means doesn’t mean the concept is simple.

It’s enormously easier (as it turns out) to write a computer program that simulates Maxwell’s equations, compared to a computer program that simulates an intelligent emotional mind like Thor.

The formalism of Solomonoff induction measures the “complexity of a description” by the length of the shortest computer program which produces that description as an output.
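
Solomonoff induction itself is uncomputable, but compressed size gives a crude upper-bound stand-in for "length of the shortest program that outputs the data." A hedged illustration (zlib is only a proxy for the formalism, and the data here is my own):

```python
import random
import zlib

# Structured data has a short description; patternless data does not.
random.seed(0)
regular = b"0101" * 250  # 1,000 bytes produced by a simple generating rule
patternless = bytes(random.randrange(256) for _ in range(1000))

print(len(zlib.compress(regular)))      # small: the pattern compresses away
print(len(zlib.compress(patternless)))  # close to 1,000: nothing to exploit
```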

Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.

EITHER YOUR MODEL IS FALSE OR THIS STORY IS WRONG.

But in probability theory, absence of evidence is always evidence of absence.
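
The claim follows from conservation of expected evidence: if H makes E more likely, then failing to see E must count against H. A minimal numeric sketch, with all probabilities invented for illustration:

```python
# Conservation of expected evidence: P(H) = P(H|E)P(E) + P(H|~E)P(~E),
# so if seeing E raises P(H), not seeing E must lower it.
p_h = 0.5          # prior on hypothesis H
p_e_given_h = 0.8  # H says we usually see evidence E
p_e_given_not_h = 0.3

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

print(round(p_h_given_e, 3))      # 0.727: seeing E raises P(H)
print(round(p_h_given_not_e, 3))  # 0.222: absence of E is evidence of absence
```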

Hindsight will lead us to systematically undervalue the surprisingness of scientific findings, especially the discoveries we understand — the ones that seem real to us, the ones we can retrofit into our models of the world.

Part D - Mysterious Answers

“Magic!” you cry. “That’s not a scientific explanation!” Indeed, the phrases “because of heat conduction” and “because of magic” are readily recognized as belonging to different literary genres. “Heat conduction” is something that Spock might say on Star Trek, whereas “magic” would be said by Giles in Buffy the Vampire Slayer. However, as Bayesians, we take no notice of literary genres.

This is not a hypothesis about the metal plate. This is not even a proper belief. It is an attempt to guess the teacher’s password.

This is not school; we are not testing your memory to see if you can write down the diffusion equation. This is Bayescraft; we are scoring your anticipations of experience.

Fake explanations don’t feel fake. That’s what makes them dangerous.

Jonathan Wallace suggested that “God!” functions as a semantic stopsign — that it isn’t a propositional assertion, so much as a cognitive traffic signal: do not think past this point. Saying “God!” doesn’t so much resolve the paradox, as put up a cognitive traffic signal to halt the obvious continuation of the question-and-answer chain.

To worship a phenomenon because it seems so wonderfully mysterious is to worship your own ignorance.

Therefore I call theories such as vitalism mysterious answers to mysterious questions.

These are the signs of mysterious answers to mysterious questions:

  • First, the explanation acts as a curiosity-stopper rather than an anticipation-controller.
  • Second, the hypothesis has no moving parts—the model is not a specific complex mechanism, but a blankly solid substance or force. The mysterious substance or mysterious force may be said to be here or there, to cause this or that; but the reason why the mysterious force behaves thus is wrapped in a blank unity.
  • Third, those who proffer the explanation cherish their ignorance; they speak proudly of how the phenomenon defeats ordinary science or is unlike merely mundane phenomena.
  • Fourth, even after the answer is given, the phenomenon is still a mystery and possesses the same quality of wonderful inexplicability that it had at the start.

Dare I step out on a limb, and name some current theory which I deem analogously flawed? I name emergence or emergent phenomena — usually defined as the study of systems whose high-level behaviors arise or “emerge” from the interaction of many low-level elements.

Humans are still humans, even if they’ve taken a few science classes in college. Once they find a way to escape the shackles of settled science, they get up to the same shenanigans as their ancestors — dressed up in the literary genre of “science,” but humans are still humans, and human psychology is still human psychology.

What you must avoid is skipping over the mysterious part; you must linger at the mystery to confront it directly.

I suspect that in academia there is a huge pressure to sweep problems under the rug so that you can present a paper with the appearance of completeness. You’ll get more kudos for a seemingly complete model that includes some “emergent phenomena,” versus an explicitly incomplete map where the label says “I got no clue how this part works” or “then a miracle occurs.”

Marcello and I developed a convention in our AI work: when we ran into something we didn’t understand, which was often, we would say “magic” — as in, “X magically does Y” — to remind ourselves that here was an unsolved problem, a gap in our understanding. It is far better to say “magic,” than “complexity” or “emergence”; the latter words create an illusion of understanding. Wiser to say “magic,” and leave yourself a placeholder, a reminder of work you will have to do later.

So much of a rationalist’s skill is below the level of words. It makes for challenging work in trying to convey the Art through words.

It is a counterintuitive idea that, given incomplete information, the optimal betting strategy does not resemble a typical sequence of cards. It is a counterintuitive idea that the optimal strategy is to behave lawfully, even in an environment that has random elements. It seems like your behavior ought to be unpredictable, just like the environment — but no! A random key does not open a random lock just because they are “both random.”

When your knowledge is incomplete — meaning that the world will seem to you to have an element of randomness — randomizing your actions doesn’t solve the problem. Randomizing your actions takes you further from the target, not closer. In a world already foggy, throwing away your intelligence just makes things worse.
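
The classic illustration is a deck that is 70% blue cards: "probability matching" your guesses to the frequencies scores about 58%, while lawfully guessing blue every time scores 70%. A small simulation (the 70/30 setup is the standard textbook example, not quoted from this passage):

```python
import random

random.seed(0)
trials = 100_000
cards = [random.random() < 0.7 for _ in range(trials)]  # True = blue, 70%

# Deterministic strategy: always guess the majority color.
always_blue = sum(cards) / trials

# Randomized strategy: guess blue with 70% probability ("probability matching").
matching = sum(card == (random.random() < 0.7) for card in cards) / trials

print(round(always_blue, 3))  # ~0.700
print(round(matching, 3))     # ~0.580: randomizing moves you away from the target
```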

You need one whole hell of a lot of rationality before it does anything but lead you into new and interesting mistakes.

There is an inverse error to generalizing from fictional evidence: failing to be sufficiently moved by historical evidence. The trouble with generalizing from fictional evidence is that it is fiction — it never actually happened.

Remember how, century after century, the world changed in ways you did not guess. Maybe then you will be less shocked by what happens next.

I’m talking about the raw emotion of curiosity — the feeling of being intrigued. Why should your curiosity be diminished because someone else, not you, knows how the light bulb works? Is this not spite?

The world around you is full of puzzles. Prioritize, if you must. But do not complain that cruel Science has emptied the world of mystery. With reasoning such as that, I could get you to overlook an elephant in your living room.

A classic paper by Drew McDermott, “Artificial Intelligence Meets Natural Stupidity,” criticized AI programs that would try to represent notions like happiness is a state of mind using a semantic network:

HAPPINESS ---IS-A---> STATE-OF-MIND

I realized it would be a really good idea to always ask myself: “How would I regenerate this knowledge if it were deleted from my mind?”

That which you cannot make yourself, you cannot remake when the situation calls for it.

Strive to make yourself the source of every thought worth thinking. If the thought originally came from outside, make sure it comes from inside as well. Continually ask yourself: “How would I regenerate the thought if it were deleted?” When you have an answer, imagine that knowledge being deleted as well. And when you find a fountain, see what else it can pour.

Interlude - The Simple Truth

The one returns: “This notion of ‘truth’ is quite naive; what do you mean by ‘true’?” Many people, so questioned, don’t know how to answer in exquisitely rigorous detail. Nonetheless they would not be wise to abandon the concept of “truth.” There was a time when no one knew the equations of gravity in exquisitely rigorous detail, yet if you walked off a cliff, you would fall.

“. . . from mere shepherds. You probably believe that snow is white, don’t you.” “Um . . . yes?” says Autrey. “It doesn’t bother you that Joseph Stalin believed that snow is white?”

Mark draws himself up haughtily. “This mere shepherd,” he says, gesturing at me, “has claimed that there is such a thing as reality. This offends me, for I know with deep and abiding certainty that there is no truth. The concept of ‘truth’ is merely a stratagem for people to impose their own beliefs on others. Every culture has a different ‘truth,’ and no culture’s ‘truth’ is superior to any other. This that I have said holds at all times in all places, and I insist that you agree.”

“Hold on a second,” says Autrey. “If nothing is true, why should I believe you when you say that nothing is true?”

“I didn’t say that nothing is true—” says Mark.

“Yes, you did,” interjects Autrey, “I heard you.”

“—I said that ‘truth’ is an excuse used by some cultures to enforce their beliefs on others. So when you say something is ‘true,’ you mean only that it would be advantageous to your own social group to have it believed.”

“And this that you have said,” I say, “is it true?”

“Absolutely, positively true!” says Mark emphatically. “People create their own realities.”

“There you go again,” says Mark exasperatedly, “trying to apply your Western concepts of logic, rationality, reason, coherence, and self-consistency.”

“It’s not separate,” says Mark. “Look, you’re taking the wrong attitude by treating my statements as hypotheses, and carefully deriving their consequences. You need to think of them as fully general excuses, which I apply when anyone says something I don’t like. It’s not so much a model of how the universe works, as a Get Out of Jail Free card. The key is to apply the excuse selectively. When I say that there is no such thing as truth, that applies only to your claim that the magic bucket works whether or not I believe in it. It does not apply to my claim that there is no such thing as truth.”

Book II - How to Actually Change Your Mind


Rationality: An Introduction by Rob Bensinger

But, writes Robin Hanson:

You are never entitled to your opinion. Ever! You are not even entitled to “I don’t know.” You are entitled to your desires, and sometimes to your choices. You might own a choice, and if you can choose your preferences, you may have the right to do so. But your beliefs are not about you; beliefs are about the world. Your beliefs should be your best available estimate of the way things are; anything else is a lie.

One of the defining insights of 20th-century psychology, animating everyone from the disciples of Freud to present-day cognitive psychologists, is that human behavior is often driven by sophisticated unconscious processes, and the stories we tell ourselves about our motives and reasons are much more biased and confabulated than we realize.

Scott Alexander, “Why I Am Not Rene Descartes,” Slate Star Codex (blog) (2014), http://slatestarcodex.com/2014/11/27/why-i-am-not-rene-descartes/

Part E - Overly Convenient Excuses

“To be humble is to take specific actions in anticipation of your own errors. To confess your fallibility and then do nothing about it is not humble; it is boasting of your modesty.”

Beware when you find yourself arguing that a policy is defensible rather than optimal; or that it has some benefit compared to the null action, rather than the best benefit of any action. False dilemmas are often presented to justify unethical policies that are, by some vast coincidence, very convenient.

Did I spend five minutes with my eyes closed, brainstorming wild and creative options, trying to think of a better alternative? It has to be five minutes by the clock, because otherwise you blink — close your eyes and open them again — and say, “Why, yes, I searched for alternatives, but there weren’t any.” Blinking makes a good black hole down which to dump your duties. An actual, physical clock is recommended.

The fantasy is of wealth that arrives without effort — without conscientiousness, learning, charisma, or even patience. Which makes the lottery another kind of sink: a sink of emotional energy. It encourages people to invest their dreams, their hopes for a better future, into an infinitesimal probability. If not for the lottery, maybe they would fantasize about going to technical school, or opening their own business, or getting a promotion at work—things they might be able to actually do, hopes that would make them want to become stronger.

The process of overcoming bias requires:

  1. first noticing the bias,
  2. analyzing the bias in detail,
  3. deciding that the bias is bad,
  4. figuring out a workaround, and then
  5. implementing it.

Yep, offering people tempting daydreams that will not actually happen sure is a valuable service, all right. People are willing to pay; it must be valuable. The alternative is that consumers are making mistakes, and we all know that can’t happen.

It first occurred to me that human intuitions were making a qualitative distinction between “No chance” and “A very tiny chance, but worth keeping track of.”

The Sophisticate: “The world isn’t black and white. No one does pure good or pure bad. It’s all gray. Therefore, no one is better than anyone else.”
The Zetet: “Knowing only gray, you conclude that all grays are the same shade. You mock the simplicity of the two-color view, yet you replace it with a one-color view . . .”
—Marc Stiegler, David’s Sling

Years ago, one of the strange little formative moments in my career as a rationalist was reading this paragraph from The Player of Games by Iain M. Banks.

That which I cannot eliminate may be well worth reducing.

“Everyone is imperfect.” Mohandas Gandhi was imperfect and Joseph Stalin was imperfect, but they were not the same shade of imperfection. “Everyone is imperfect” is an excellent example of replacing a two-color view with a one-color view. If you say, “No one is perfect, but some people are less imperfect than others,” you may not gain applause; but for those who strive to do better, you have held out hope. No one is perfectly imperfect, after all.

Likewise the folly of those who say, “Every scientific paradigm imposes some of its assumptions on how it interprets experiments,” and then act like they’d proven science to occupy the same level with witchdoctoring. Every worldview imposes some of its structure on its observations, but the point is that there are worldviews which try to minimize that imposition, and worldviews which glory in it. There is no white, but there are shades of gray that are far lighter than others, and it is folly to treat them as if they were all on the same level.

It’s a most peculiar psychology — this business of “Science is based on faith too, so there!” Typically this is said by people who claim that faith is a good thing. Then why do they say “Science is based on faith too!” in that angry-triumphal tone, rather than as a compliment? And a rather dangerous compliment to give, one would think, from their perspective. If science is based on “faith,” then science is of the same kind as religion — directly comparable. If science is a religion, it is the religion that heals the sick and reveals the secrets of the stars. It would make sense to say, “The priests of science can blatantly, publicly, verifiably walk on the Moon as a faith-based miracle, and your priests’ faith can’t do the same.” Are you sure you wish to go there, oh faithist? Perhaps, on further reflection, you would prefer to retract this whole business of “Science is a religion too!”

G2 points us to Asimov’s “The Relativity of Wrong”: When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.

For every statement you can find of which someone is “absolutely certain,” you can probably find someone “absolutely certain” of its opposite, because such fanatic professions of belief do not arise in the absence of opposition.

A probability of 1.0 isn’t just certainty, it’s infinite certainty. In fact, it seems to me that to prevent public misunderstanding, maybe scientists should go around saying “We are not INFINITELY certain” rather than “We are not certain.” For the latter case, in ordinary discourse, suggests you know some specific reason for doubt.

So odds are more manageable for Bayesian updates — if you use probabilities, you’ve got to deploy Bayes’s Theorem in its complicated version.

Why does it matter that odds ratios are just as legitimate as probabilities? Probabilities as ordinarily written are between 0 and 1, and both 0 and 1 look like they ought to be readily reachable quantities — it’s easy to see 1 zebra or 0 unicorns. But when you transform probabilities onto odds ratios, 0 goes to 0, but 1 goes to positive infinity. Now absolute truth doesn’t look like it should be so easy to reach.

When you transform probabilities to log odds, 0 goes onto negative infinity and 1 goes onto positive infinity. Now both infinite certainty and infinite improbability seem a bit more out-of-reach.

When you work in log odds, the distance between any two degrees of uncertainty equals the amount of evidence you would need to go from one to the other. That is, the log odds gives us a natural measure of spacing among degrees of confidence.
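
A sketch of that bookkeeping in code: in odds form an update is one multiplication by the likelihood ratio, and in log odds it is one addition, so degrees of confidence are spaced by bits of evidence. The helper names and example numbers are mine:

```python
import math

def to_odds(p):
    # Note: p = 1.0 would blow up here (infinite odds), echoing the point above.
    return p / (1 - p)

def to_prob(odds):
    return odds / (1 + odds)

prior = 0.5
likelihood_ratio = 4.0  # evidence 4x more likely under H than under ~H (2 bits)

# Odds form: one multiplication.
posterior = to_prob(to_odds(prior) * likelihood_ratio)
print(posterior)  # 0.8

# Log-odds form: the same update is additive, so the "distance" between
# confidence levels is measured directly in bits of evidence.
log_odds = math.log2(to_odds(prior)) + math.log2(likelihood_ratio)
print(to_prob(2 ** log_odds))  # 0.8 again
```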

Furthermore, all sorts of standard theorems in probability have special cases if you try to plug 1s or 0s into them — like what happens if you try to do a Bayesian update on an observation to which you assigned probability 0. So I propose that it makes sense to say that 1 and 0 are not in the probabilities; just as negative and positive infinity, which do not obey the field axioms, are not in the real numbers.

The main reason this would upset probability theorists is that we would need to rederive theorems previously obtained by assuming that we can marginalize over a joint probability by adding up all the pieces and having them sum to 1. However, in the real world, when you roll a die, it doesn’t literally have infinite certainty of coming up some number between 1 and 6. The die might land on its edge; or get struck by a meteor; or the Dark Lords of the Matrix might reach in and write “37” on one side.

What business is it of mine, if someone else chooses to believe what is pleasant rather than what is true? Can’t we each choose for ourselves whether to care about the truth? An obvious snappy comeback is: “Why do you care whether I care whether someone else cares about the truth?” It is somewhat inconsistent for your utility function to contain a negative term for anyone else’s utility function having a term for someone else’s utility function. But that is only a snappy comeback, not an answer.

So here then is my answer: I believe that it is right and proper for me, as a human being, to have an interest in the future, and what human civilization becomes in the future. One of those interests is the human pursuit of truth, which has strengthened slowly over the generations (for there was not always Science). I wish to strengthen that pursuit further, in this generation. That is a wish of mine, for the Future. For we are all of us players upon that vast gameboard, whether we accept the responsibility or not. And that makes your rationality my business.

Let’s argue against bad ideas but not set their bearers on fire.

Part F - Politics and Rationality

Like it or not, there’s a birth lottery for intelligence — though this is one of the cases where the universe’s unfairness is so extreme that many people choose to deny the facts. The experimental evidence for a purely genetic component of 0.6–0.8 is overwhelming, but even if this were to be denied, you don’t choose your parental upbringing or your early schools either.

To understand why people act the way they do, we must first realize that everyone sees themselves as behaving normally.

Realistically, most people don’t construct their life stories with themselves as the villains. Everyone is the hero of their own story. The Enemy’s story, as seen by the Enemy, is not going to make the Enemy look bad. If you try to construe motivations that would make the Enemy look bad, you’ll end up flat wrong about what actually goes on in the Enemy’s mind.

A car with a broken engine cannot drive backward at 200 mph, even if the engine is really really broken.

The least convenient path is the only valid one.

A good technical argument is one that eliminates reliance on the personal authority of the speaker.

So it seems there’s an asymmetry between argument and authority. If we know authority we are still interested in hearing the arguments; but if we know the arguments fully, we have very little left to learn from authority.

In practice you can never completely eliminate reliance on authority. Good authorities are more likely to know about any counterevidence that exists and should be taken into account; a lesser authority is less likely to know this, which makes their arguments less reliable.

If you really want an artist’s perspective on rationality, then read Orwell; he is mandatory reading for rationalists as well as authors. Orwell was not a scientist, but a writer; his tools were not numbers, but words; his adversary was not Nature, but human evil. If you wish to imprison people for years without trial, you must think of some other way to say it than “I’m going to imprison Mr. Jennings for years without trial.” You must muddy the listener’s thinking, prevent clear images from outraging conscience.

With enough static noun phrases, you can keep anything unpleasant from actually happening.

Nonfiction conveys knowledge, fiction conveys experience.

What is above all needed is to let the meaning choose the word, and not the other way around. In prose, the worst thing one can do with words is surrender to them.

Orwell saw the destiny of the human species, and he put forth a convulsive effort to wrench it off its path. Orwell’s weapon was clear writing. Orwell knew that muddled language is muddled thinking; he knew that human evil and muddled thinking intertwine like conjugate strands of DNA.

Orwell was clear on the goal of his clarity: If you simplify your English, you are freed from the worst follies of orthodoxy. You cannot speak any of the necessary dialects, and when you make a stupid remark its stupidity will be obvious, even to yourself.

I am continually aghast at apparently intelligent folks — such as Robin Hanson’s colleague Tyler Cowen — who don’t think that overcoming bias is important.

The truth does have enemies. If Overcoming Bias were a newsletter in the old Soviet Union, every poster and commenter of Overcoming Bias would have been shipped off to labor camps.

In all human history, every great leap forward has been driven by a new clarity of thought. Except for a few natural catastrophes, every great woe has been driven by a stupidity. Our last enemy is ourselves; and this is a war, and we are soldiers.

Part G - Against Rationalization

Attitude polarization. Exposing subjects to an apparently balanced set of pro and con arguments will exaggerate their initial polarization.

Sophistication effect. Politically knowledgeable subjects, because they possess greater ammunition with which to counter-argue incongruent facts and arguments, will be more prone to the above biases.

He didn’t try to point out any particular sophisticated argument, any particular flaw—just shook his head and sighed sadly over how I was apparently using my own intelligence to defeat itself. He had acquired yet another Fully General Counterargument.

Rationality is not for winning debates, it is for deciding which side to join.

The problem with using black-and-white, binary, qualitative reasoning is that any single observation either destroys the theory or it does not. When not even a single contrary observation is allowed, it creates cognitive dissonance and has to be argued away. And this rules out incremental progress; it rules out correct integration of all the evidence. Reasoning probabilistically, we realize that on average, a correct theory will generate a greater weight of support than countersupport. And so you can, without fear, say to yourself: “This is gently contrary evidence, I will shift my belief downward.” Yes, down. It does not destroy your cherished theory.

What fool devised such confusingly similar words, “rationality” and “rationalization,” to describe such extraordinarily different mental processes? I would prefer terms that made the algorithmic difference obvious, like “rationality” versus “giant sucking cognitive black hole.”

In Orthodox Judaism you’re allowed to notice inconsistencies and contradictions, but only for purposes of explaining them away, and whoever comes up with the most complicated explanation gets a prize.

Gilovich’s distinction between motivated skepticism and motivated credulity highlights how conclusions a person does not want to believe are held to a higher standard than conclusions a person wants to believe. A motivated skeptic asks if the evidence compels them to accept the conclusion; a motivated credulist asks if the evidence allows them to accept the conclusion.

A major historical scandal in statistics was R. A. Fisher, an eminent founder of the field, insisting that no causal link had been established between smoking and lung cancer. “Correlation is not causation,” he testified to Congress. Perhaps smokers had a gene which both predisposed them to smoke and predisposed them to lung cancer. Or maybe Fisher’s being employed as a consultant for tobacco firms gave him a hidden motive to decide that the evidence already gathered was insufficient to come to a conclusion, and it was better to keep looking. Fisher was also a smoker himself, and died of colon cancer in 1962. (Ad hominem note: Fisher was a frequentist. Bayesians are more reasonable about inferring probable causality.)

Who can argue against gathering more evidence? I can. Evidence is often costly, and worse, slow, and there is certainly nothing virtuous about refusing to integrate the evidence you already have.

Similarly, one may try to insist that the Bible is valuable as a literary work. Then why not revere Lord of the Rings, a vastly superior literary work? And despite the standard criticisms of Tolkien’s morality, Lord of the Rings is at least superior to the Bible as a source of ethics. So why don’t people wear little rings around their neck, instead of crosses?

I suspect that, in general, if two rationalists set out to resolve a disagreement that persisted past the first exchange, they should expect to find that the true sources of the disagreement are either hard to communicate, or hard to expose. E.g.: Uncommon, but well-supported, scientific knowledge or math; Long inferential distances; Hard-to-verbalize intuitions, perhaps stemming from specific visualizations; Zeitgeists inherited from a profession (that may have good reason for it); Patterns perceptually recognized from experience; Sheer habits of thought; Emotional commitments to believing in a particular outcome; Fear of a past mistake being disproven; Deep self-deception for the sake of pride or other personal benefits.

Once you tell a lie, the truth is your enemy; and every truth connected to that truth, and every ally of truth in general; all of these you must oppose, to protect the lie. Whether you’re lying to others, or to yourself. You have to deny that beliefs require evidence, and then you have to deny that maps should reflect territories, and then you have to deny that truth is a good thing... Thus comes into being the Dark Side.

“Everyone has a right to their own opinion.” When you think about it, where was that proverb generated? Is it something that someone would say in the course of protecting a truth, or in the course of protecting from the truth? But people don’t perk up and say, “Aha! I sense the presence of the Dark Side!” As far as I can tell, it’s not widely realized that the Dark Side is out there.

Part H - Against Doublethink

One of the chief pieces of advice I give to aspiring rationalists is “Don’t try to be clever.” And, “Listen to those quiet, nagging doubts.” If you don’t know, you don’t know what you don’t know, you don’t know how much you don’t know, and you don’t know how much you needed to know. There is no second-order rationality. There is only a blind leap into what may or may not be a flaming lava pit. Once you know, it will be too late for blindness.

Also there is more to life than happiness; and other happinesses than your own may be at stake in your decisions. But that is moot. By the time you realize you have a choice, there is no choice. You cannot unsee what you see. The other way is closed.

She has taken the old idol off its throne, and replaced it with an explicit worship of the Dark Side Epistemology that was once invented to defend the idol; she worships her own attempt at self-deception. The attempt failed, but she is honestly unaware of this. And so humanity’s token guardians of sanity (motto: “pooping your deranged little party since Epicurus”) must now fight the active worship of self-deception — the worship of the supposed benefits of faith, in place of God.

Part I - Seeing with Fresh Eyes

One might naturally think that on being told a proposition, we would first comprehend what the proposition meant, then consider the proposition, and finally accept or reject it. This obvious-seeming model of cognitive process flow dates back to Descartes. But Descartes’s rival, Spinoza, disagreed; Spinoza suggested that we first passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve propositions which are rejected by consideration.

One of the single greatest puzzles about the human brain is how the damn thing works at all when most neurons fire 10–20 times per second, or 200Hz tops. In neurology, the “hundred-step rule” is that any postulated operation has to complete in at most 100 sequential steps — you can be as parallel as you like, but you can’t postulate more than 100 (preferably fewer) neural spikes one after the other.

It’s a good guess that the actual majority of human cognition consists of cache lookups.

He told her angrily, “Narrow it down to the front of one building on the main street of Bozeman. The Opera House. Start with the upper left-hand brick.” Her eyes, behind the thick-lensed glasses, opened wide. She came in the next class with a puzzled look and handed him a five-thousand-word essay on the front of the Opera House on the main street of Bozeman, Montana.

Any professional negotiator knows that to control the terms of a debate is very nearly to control the outcome of the debate.

Yet in my estimation, the most damaging aspect of using other authors’ imaginations is that it stops people from using their own.

When you understand things in detail, you can see how they are not alike, and start enthusiastically subtracting edges off your graph.

(“A fanatic is someone who can’t change his mind and won’t change the subject.” I endeavor to at least be capable of changing the subject.)

But my suspicion is that I came across as “deep” because I coherently violated the cached pattern for “deep wisdom” in a way that made immediate sense.

I suspect this is one reason Eastern philosophy seems deep to Westerners — it has nonstandard but coherent cache for Deep Wisdom. Symmetrically, in works of Japanese fiction, one sometimes finds Christians depicted as repositories of deep wisdom and/or mystical secrets.

To seem deep, study nonstandard philosophies. Seek out discussions on topics that will give you a chance to appear deep. Do your philosophical thinking in advance, so you can concentrate on explaining well. Above all, practice staying within the one-inferential-step bound.

To be deep, think for yourself about “wise” or important or emotionally fraught topics.

Maier enacted an edict to enhance group problem solving: “Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any.”

Traditional Rationality emphasizes falsification — the ability to relinquish an initial opinion when confronted by clear evidence against it. But once an idea gets into your head, it will probably require way too much evidence to get it out again. Worse, we don’t always have the luxury of overwhelming evidence. I suspect that a more powerful (and more difficult) method is to hold off on thinking of an answer.

“It ain’t a true crisis of faith unless things could just as easily go either way,” said Thor Shenkel.

You should be extremely suspicious if you have many ideas suggested by a source that you now know to be untrustworthy, but by golly, it seems that all the ideas still ended up being right — the Bible being the obvious archetypal example.

Part J - Death Spirals

According to Hsee — in a paper entitled “Less is better: When low-value options are valued more highly than high-value options” — if you buy someone a $45 scarf, you are more likely to be seen as generous than if you buy them a $55 coat.

Once upon a time, there was a man who was convinced that he possessed a Great Idea. Indeed, as the man thought upon the Great Idea more and more, he realized that it was not just a great idea, but the most wonderful idea ever. The Great Idea would unravel the mysteries of the universe, supersede the authority of the corrupt and error-ridden Establishment, confer nigh-magical powers upon its wielders, feed the hungry, heal the sick, make the whole world a better place, etc., etc., etc. The man was Francis Bacon, his Great Idea was the scientific method, and he was the only crackpot in all history to claim that level of benefit to humanity and turn out to be completely right.

Probably the single most reliable sign of a cult guru is that the guru claims expertise, not in one area, not even in a cluster of related areas, but in everything.

Cut up your Great Thingy into smaller independent ideas, and treat them as independent.

If your brother, the son of your father or of your mother, or your son or daughter, or the spouse whom you embrace, or your most intimate friend, tries to secretly seduce you, saying, “Let us go and serve other gods,” unknown to you or your ancestors before you, gods of the peoples surrounding you, whether near you or far away, anywhere throughout the world, you must not consent, you must not listen to him; you must show him no pity, you must not spare him or conceal his guilt. No, you must kill him, your hand must strike the first blow in putting him to death and the hands of the rest of the people following. You must stone him to death, since he has tried to divert you from Yahweh your God. —Deuteronomy 13:7–11

And it is triple ultra forbidden to respond to criticism with violence. There are a very few injunctions in the human art of rationality that have no ifs, ands, buts, or escape clauses. This is one of them. Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever.

In Festinger, Riecken, and Schachter’s classic When Prophecy Fails, one of the cult members walked out the door immediately after the flying saucer failed to land. Who gets fed up and leaves first? An average cult member? Or a relatively more skeptical member, who previously might have been acting as a voice of moderation, a brake on the more fanatic members?

When Ayn Rand’s long-running affair with Nathaniel Branden was revealed to the Objectivist membership, a substantial fraction of the Objectivist membership broke off and followed Branden into espousing an “open system” of Objectivism not bound so tightly to Ayn Rand. Who stayed with Ayn Rand even after the scandal broke? The ones who really, really believed in her — and perhaps some of the undecideds, who, after the voices of moderation left, heard arguments from only one side. This may account for how the Ayn Rand Institute is (reportedly) more fanatic after the breakup, than the original core group of Objectivists under Branden and Rand.

Yes, it matters that the 9/11 hijackers weren’t cowards. Not just for understanding the enemy’s realistic psychology. There is simply too much damage done by spirals of hate. It is just too dangerous for there to be any target in the world, whether it be the Jews or Adolf Hitler, about whom saying negative things trumps saying accurate things.

(Sometimes I think humanity’s second-greatest need is a supervillain. Maybe I’ll go into that line of work after I finish my current job.)

But the Inquisitors were not Truth-Seekers. They were Truth-Guardians.

I once read an argument (I can’t find the source) that a key component of a zeitgeist is whether it locates its ideals in its future or its past.

I don’t mean to provide a grand overarching single-factor view of history. I do mean to point out a deep psychological difference between seeing your grand cause in life as protecting, guarding, preserving, versus discovering, creating, improving. Does the “up” direction of time point to the past or the future? It’s a distinction that shades everything, casts tendrils everywhere.

I would also argue that this basic psychological difference is one of the reasons why an academic field that stops making active progress tends to turn mean. (At least by the refined standards of science. Reputational assassination is tame by historical standards; most defensive-posture belief systems went for the real thing.) If major shakeups don’t arrive often enough to regularly promote young scientists based on merit rather than conformity, the field stops resisting the standard degeneration into authority. When there’s not many discoveries being made, there’s nothing left to do all day but witch-hunt the heretics.

Max Gluckman once said: “A science is any discipline in which the fool of this generation can go beyond the point reached by the genius of the last generation.” Science moves forward by slaying its heroes, as Newton fell to Einstein.

Michael Shermer goes into detail on how he thinks that Rand’s philosophy ended up descending into cultishness. In particular, Shermer says (it seems) that Objectivism failed because Rand thought that certainty was possible, while science is never certain. I can’t back Shermer on that one. The atomic theory of chemistry is pretty damned certain. But chemists haven’t become a cult.

So where is the true art of rationality to be found? Studying up on the math of probability theory and decision theory. Absorbing the cognitive sciences like evolutionary psychology, or heuristics and biases. Reading history books... “Study science, not just me!” is probably the most important piece of advice Ayn Rand should’ve given her followers and didn’t.

Science has heroes, but no gods. The great Names are not our superiors, or even our rivals; they are passed milestones on our road. And the most important milestone is the hero yet to come.

Ever after, he would not allow his students to cite his words in their debates, saying, “Use the techniques and do not mention them.”

Being the first dissenter is a valuable (and costly!) social service, but you’ve got to keep it up.

The most fearsome possibility raised by Asch’s experiments on conformity is the specter of everyone agreeing with the group, swayed by the confident voices of others, careful not to let their own doubts show—not realizing that others are suppressing similar worries. This is known as “pluralistic ignorance.”

I think the most important lesson to take away from Asch’s experiments is to distinguish “expressing concern” from “disagreement.” Raising a point that others haven’t voiced is not a promise to disagree with the group at the end of its discussion.

If you perform the group service of being the one who gives voice to the obvious problems, don’t expect the group to thank you for it.

Individualism is easy, experiment shows, when you have company in your defiance.

As the case of cryonics testifies, the fear of thinking really different is stronger than the fear of death.

And there are islands of genuine tolerance in the world, such as science fiction conventions.

In the modern world, joining a cult is probably one of the worse things that can happen to you. The best-case scenario is that you’ll end up in a group of sincere but deluded people, making an honest mistake but otherwise well-behaved, and you’ll spend a lot of time and money but end up with nothing to show. Actually, that could describe any failed Silicon Valley startup. Which is supposed to be a hell of a harrowing experience, come to think. So yes, very scary.

The fear of strange ideas, the impulse to conformity, has no doubt warned many potential victims away from flying-saucer cults. When you’re out, it keeps you out. But when you’re in, it keeps you in. Conformity just glues you to wherever you are, whether that’s a good place or a bad place.

Living with doubt is not a virtue — the purpose of every doubt is to annihilate itself in success or failure, and a doubt that just hangs around accomplishes nothing.

People who talk about “rationality” also have an added risk factor. Giving people advice about how to think is an inherently dangerous business. But it is a risk factor, not a disease.

Part K - Letting Go

I just finished reading a history of Enron’s downfall, The Smartest Guys in the Room, which hereby wins my award for “Least Appropriate Book Title.”

After I had finally and fully admitted my mistake, I looked back upon the path that had led me to my Awful Realization. And I saw that I had made a series of small concessions, minimal concessions, grudgingly conceding each millimeter of ground, realizing as little as possible of my mistake on each occasion, admitting failure only in small tolerable nibbles. I could have moved so much faster, I realized, if I had simply screamed “OOPS!” And I thought: I must raise the level of my game. There is a powerful advantage to admitting you have made a large mistake. It’s painful. It can also change your whole life. It is important to have the watershed moment, the moment of humbling realization. To acknowledge a fundamental problem, not divide it into palatable bite-size mistakes.

While this behavior may seem to be merely stupid, it also puts me in mind of two Nobel-Prize-winning economists... namely Merton and Scholes of Long-Term Capital Management. While LTCM raked in giant profits over its first three years, in 1998 the inefficiencies that LTCM were exploiting had started to vanish — other people knew about the trick, so it stopped working.

Every profession has a different way to be smart — different skills to learn and rules to follow. You might therefore think that the study of “rationality,” as a general discipline, wouldn’t have much to contribute to real-life success. And yet it seems to me that how to not be stupid has a great deal in common across professions. If you set out to teach someone how to not turn little mistakes into big mistakes, it’s nearly the same art whether in hedge funds or romance, and one of the keys is this: Be ready to admit you lost.

To avoid professing doubts, remember: A rational doubt exists to destroy its target belief, and if it does not destroy its target it dies unfulfilled. A rational doubt arises from some specific reason the belief might be wrong. An unresolved doubt is a null-op. An uninvestigated doubt might as well not exist. You should not be proud of mere doubting, although you can justly be proud when you have just finished tearing a cherished belief to shreds. Though it may take courage to face your doubts, never forget that to an ideal mind doubt would not be scary in the first place.

A theory is obligated to make bold predictions for itself, not just steal predictions that other theories have labored to make. A theory is obligated to expose itself to falsification — if it tries to duck out, that’s like trying to duck out of a fearsome initiation ritual; you must pay your dues.

Before you try mapping an unseen territory, pour some water into a cup at room temperature and wait until it spontaneously freezes before proceeding. That way you can be sure the general trick — ignoring infinitesimally tiny probabilities of success — is working properly. You might not realize directly that your map is wrong, especially if you never visit New York; but you can see that water doesn’t freeze itself.

Maybe it’s just a question of not enough people reading Gödel, Escher, Bach at a sufficiently young age, but I’ve noticed that a large fraction of the population — even technical folk — have trouble following arguments that go this meta.

Or, “Try to think the thought that hurts the most.” And above all, the rule: “Put forth the same level of desperate effort that it would take for a theist to reject their religion.” Because, if you aren’t trying that hard, then — for all you know — your head could be stuffed full of nonsense as ridiculous as religion.

Not every doubt calls for staging an all-out Crisis of Faith. But you should consider it when:

  • The belief has long remained in your mind;
  • It is surrounded by a cloud of known arguments and refutations;
  • You have sunk costs in it (time, money, public declarations);
  • The belief has emotional consequences (note this does not make it wrong);
  • It has gotten mixed up in your personality generally.

A special case of motivated skepticism is fake humility, where you bashfully confess that no one can know something you would rather not know. Don’t selectively demand too much authority of counterarguments.

Book III - The Machine in the Ghost


Nick Bostrom’s book Superintelligence provides a big-picture summary of the many moral and strategic questions raised by smarter-than-human AI.

Disturbed by the possibility that future progress in AI, nanotechnology, biotechnology, and other fields could endanger human civilization, Bostrom and Ćirković compiled the first academic anthology on the topic, Global Catastrophic Risks.

Part L - The Simple Math of Evolution

In the days before Darwin, it seemed like a much more reasonable hypothesis. Find a watch in the desert, said William Paley, and you can infer the existence of a watchmaker.

In a lot of ways, evolution is like unto theology. “Gods are ontologically distinct from creatures,” said Damien Broderick, “or they’re not worth the paper they’re written on.” And indeed, the Shaper of Life is not itself a creature. Evolution is bodiless, like the Judeo-Christian deity. Omnipresent in Nature, immanent in the fall of every leaf. Vast as a planet’s surface. Billions of years old. Itself unmade, arising naturally from the structure of physics. Doesn’t that all sound like something that might have been said about God? And yet the Maker has no mind, as well as no body. In some ways, its handiwork is incredibly poor design by human standards. It is internally divided. Most of all, it isn’t nice. In a way, Darwin discovered God — a God that failed to match the preconceptions of theology, and so passed unheralded.

Well, more power to us humans. I like having a Creator I can outwit. Beats being a pet. I’m glad it was Azathoth and not Odin.

The notion that evolution should explain the origin of life is a pure strawman — more creationist misrepresentation.

Natural selection, though not simple, is simpler than a human brain; and correspondingly slower and less efficient, as befits the first optimization process ever to exist. In fact, evolutions are simple enough that we can calculate exactly how stupid they are.

As the eminent biologist Cynthia Kenyon once put it at a dinner I had the honor of attending, “One grad student can do things in an hour that evolution could not do in a billion years.” According to biologists’ best current knowledge, evolutions have invented a fully rotating wheel on a grand total of three occasions.

Many enlightenments may be attained by studying the different forms and derivations of Price’s Equation. For example, the final equation says that the average characteristic changes according to its covariance with relative fitness, rather than its absolute fitness. This means that if a Frodo gene saves its whole species from extinction, the average Frodo characteristic does not increase, since Frodo’s act benefited all genotypes equally and did not covary with relative fitness. It is said that Price became so disturbed with the implications of his equation for altruism that he committed suicide, though he may have had other issues.
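
For reference, the relative-fitness form of Price’s Equation (standard notation, my addition, not a quote from the book), where z_i is the characteristic, w_i the fitness, and \bar{w} the mean fitness:

    \Delta\bar{z} \;=\; \operatorname{Cov}\!\left(\frac{w_i}{\bar{w}},\, z_i\right) \;+\; \operatorname{E}\!\left[\frac{w_i}{\bar{w}}\,\Delta z_i\right]

The first term is exactly the “covariance with relative fitness” in the excerpt: a Frodo gene that raises everyone’s fitness equally contributes nothing to it.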

The mathematical conditions for group selection overcoming individual selection were too extreme to be found in Nature. Why not create them artificially, in the laboratory? Michael J. Wade proceeded to do just that, repeatedly selecting populations of insects for low numbers of adults per subpopulation. And what was the result? Did the insects restrain their breeding and live in quiet peace with enough food for all? No; the adults adapted to cannibalize eggs and larvae, especially female larvae.

If people have the right to be tempted — and that’s what free will is all about — the market is going to respond by supplying as much temptation as can be sold.

I leave you with a final argument from fictional evidence: Simon Funk’s online novel After Life depicts (among other plot points) the planned extermination of biological Homo sapiens — not by marching robot armies, but by artificial children that are much cuter and sweeter and more fun to raise than real children. Perhaps the demographic collapse of advanced societies happens because the market supplies ever-more-tempting alternatives to having children, while the attractiveness of changing diapers remains constant over time. Where are the advertising billboards that say “BREED”? Who will pay professional image consultants to make arguing with sullen teenagers seem more alluring than a vacation in Tahiti? “In the end,” Simon Funk wrote, “the human species was simply marketed out of existence.”

Part M - Fragile Purposes

It is proverbial in literary science fiction that the true test of an author is their ability to write Real Aliens. (And not just conveniently incomprehensible aliens who, for their own mysterious reasons, do whatever the plot happens to require.) Jack Vance was one of the great masters of this art. Vance’s humans, if they come from a different culture, are more alien than most “aliens.” (Never read any Vance? I would recommend starting with City of the Chasch.) Niven and Pournelle’s The Mote in God’s Eye also gets a standard mention here.

The present state of the art in rationality training is not sufficient to turn an arbitrarily selected mortal into Albert Einstein, which shows the power of a few minor genetic quirks of brain design compared to all the self-help books ever written in the twentieth century.

If you read Judea Pearl’s Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, then you will see that the basic insight behind graphical models is indispensable to problems that require it. (It’s not something that fits on a T-shirt, I’m afraid, so you’ll have to go and read the book yourself. I haven’t seen any online popularizations of Bayesian networks that adequately convey the reasons behind the principles, or the importance of the math being exactly the way it is, but Pearl’s book is wonderful.)
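
For what it’s worth, the core insight does compress to one line of standard notation (the reasons behind it, as Yudkowsky says, do not): a Bayesian network over variables x_1, ..., x_n encodes the joint distribution as a product of local conditional probabilities,

    P(x_1, \dots, x_n) \;=\; \prod_{i=1}^{n} P\bigl(x_i \mid \mathrm{pa}(x_i)\bigr),

where pa(x_i) denotes the parents of x_i in the graph. The conditional independencies implied by that factorization are what make inference tractable.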

When the basic problem is your ignorance, clever strategies for bypassing your ignorance lead to shooting yourself in the foot.

Even our simple formalism illustrates a sharp distinction between expected utility, which is something that actions have; and utility, which is something that outcomes have. Sure, you can map both utilities and expected utilities onto real numbers. But that’s like observing that you can map wind speed and temperature onto real numbers. It doesn’t make them the same thing.
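
In the standard notation (my addition), the distinction is visible in the types:

    \mathrm{EU}(a) \;=\; \sum_{o} P(o \mid a)\,U(o)

Utility U is a function of outcomes; expected utility EU is a function of actions, obtained by weighting U over the outcomes each action might produce.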

We don’t know what our own values are, or where they came from, and can’t find out except by undertaking error-prone projects of cognitive archaeology.

When you consider how many different ways we value outcomes, and how complicated are the paths we take to get there, it’s a wonder that there exists any such thing as helpful ethical advice. (Of which the strangest of all advices, and yet still helpful, is that “the end does not justify the means.”)

There is no safe wish smaller than an entire human morality.

It is now being suggested in several sources that an actual majority of published findings in medicine, though “statistically significant with p < 0.05,” are untrue. But so long as p < 0.05 remains the threshold for publication, why should anyone hold themselves to higher standards, when that requires bigger research grants for larger experimental groups, and decreases the likelihood of getting a publication? Everyone knows that the whole point of science is to publish lots of papers, just as the whole point of a university is to print certain pieces of parchment, and the whole point of a school is to pass the mandatory tests that guarantee the annual budget. You don’t get to set the rules of the game, and if you try to play by different rules, you’ll just lose. (Though for some reason, physics journals require a threshold of p < 0.0001. It’s as if they conceive of some other purpose to their existence than publishing physics papers.)

I look back, and I see that more than anything, my life has been driven by an exceptionally strong abhorrence to lost purposes. I hope it can be transformed to a learnable skill.

Part N - A Human’s Guide to Words

The jester opened the second box, and found a dagger. “How?!” cried the jester in horror, as he was dragged away. “It’s logically impossible!” “It is entirely possible,” replied the king. “I merely wrote those inscriptions on two boxes, and then I put the dagger in the second one.”

Syllogisms are valid in all possible worlds, and therefore, observing their validity never tells us anything about which possible world we actually live in.

Just remember the Litany Against Logic: Logic stays true, wherever you may go, So logic never tells you where you live.

“I’m told it works by Bayes’s Rule, but I don’t quite understand how,” says Susan. “I like to say it, though. Bayes Bayes Bayes Bayes Bayes.”

ALBERT: “Oh, yeah: If a tree falls in the forest, and no one hears it, does it make a sound?” BARRY: “It makes an alberzle but not a bargulum. What’s the next question?” This remedy doesn’t destroy every dispute over categorizations. But it destroys a substantial fraction.

You get a very different picture of what people agree or disagree about, depending on whether you take a label’s-eye-view (Albert says “sound” and Barry says “not sound,” so they must disagree) or a test’s-eye-view (Albert’s membership test is acoustic vibrations, Barry’s is auditory experience).

When you find yourself in philosophical difficulties, the first line of defense is not to define your problematic terms, but to see whether you can think without using those terms at all. Or any of their short synonyms. And be careful not to let yourself invent a new word to use instead. Describe outward observables and interior mechanisms; don’t use a single handle, whatever that handle may be.

Purpose is lost whenever the substance (learning, knowledge, health) is displaced by the symbol (a degree, a test score, medical care). To heal a lost purpose, or a lossy categorization, you must do the reverse: Replace the symbol with the substance; replace the signifier with the signified; replace the property with the membership test; replace the word with the meaning; replace the label with the concept; replace the summary with the details; replace the proxy question with the real question; dereference the pointer; drop into a lower level of organization; mentally simulate the process instead of naming it; zoom in on your map.

Hence the other saying: “The map is not the territory, but you can’t fold up the territory and put it in your glove compartment.”

Here it is the very act of creating two different buckets that is the stroke of genius insight. ’Tis easier to question one’s facts than one’s ontology.

Expanding your map is (I say again) a scientific challenge: part of the art of science, the skill of inquiring into the world.

But eyeballing suggests that using the phrase “by definition,” anywhere outside of math, is among the most alarming signals of flawed argument I’ve ever found. It’s right up there with “Hitler,” “God,” “absolutely certain,” and “can’t prove that.”

Or you could dispute my extension by saying, “Some of these things do belong together — I can see what you’re getting at — but the Python language shouldn’t be on the list, and Modern Art should be.” (This would mark you as a philistine, but you could argue it.)

The moral is that short words are a conserved resource.

When you take this art to its limit, the length of the message you need to describe something corresponds exactly or almost exactly to its probability. This is the Minimum Description Length or Minimum Message Length formalization of Occam’s Razor.
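
The correspondence is the standard one from coding theory (my addition): an optimal code assigns an event of probability P(x) a message of length

    L(x) \;=\; -\log_2 P(x) \ \text{bits},

so minimizing description length and maximizing probability are the same search.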

Frequent use goes along with short words; short words go along with frequent use. Or as Douglas Hofstadter put it, there’s a reason why the English language uses “the” to mean “the” and “antidisestablishmentarianism” to mean “antidisestablishmentarianism” instead of antidisestablishmentarianism other way around.

Having a word for a thing, rather than just listing its properties, is a more compact code precisely in those cases where we can infer some of those properties from the other properties.

That’s the problem with trying to build a “fully general” inductive learner: it can’t learn concepts until it has seen every possible example in the instance space.

A human mind — or the whole observable universe — is not nearly large enough to consider all the other hypotheses. From this perspective, learning doesn’t just rely on inductive bias, it is nearly all inductive bias — when you compare the number of concepts ruled out a priori, to those ruled out by mere evidence.
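
A quick back-of-the-envelope (my own illustration, not from the book) of just how much work inductive bias does: over n binary features there are 2^n possible instances, hence 2^(2^n) distinct boolean concepts, and an unbiased learner must see every instance before the concept is pinned down.

    # Counting concepts an unbiased learner must discriminate between.
    for n in [2, 3, 4, 5]:
        instances = 2 ** n
        concepts = 2 ** instances
        print(f"{n} features: {instances} instances, {concepts} possible concepts")
    # 5 features already allow 2**32 (about 4.3 billion) concepts; without
    # bias, nothing short of all 32 examples settles which one is true.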

I am just so unspeakably glad that you asked that question, because I was planning to tell you whether you liked it or not.

As a matter of fact, if you use the right kind of neural network units, this “neural network” ends up exactly, mathematically equivalent to Naive Bayes.
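
That equivalence is easy to check numerically. Here is a minimal sketch (my construction, with made-up toy parameters, not code from the book) showing that a single sigmoid unit with log-likelihood-ratio weights reproduces the Naive Bayes posterior exactly:

    import math

    # Hypothetical toy model: P(C=1) and P(f_i = 1 | class) for two classes.
    p_c = 0.3
    p_f_given_c1 = [0.9, 0.2]
    p_f_given_c0 = [0.5, 0.6]

    def naive_bayes_posterior(f):
        """Direct Naive Bayes: P(C=1 | f) via the joint likelihoods."""
        joint1, joint0 = p_c, 1 - p_c
        for fi, p1, p0 in zip(f, p_f_given_c1, p_f_given_c0):
            joint1 *= p1 if fi else (1 - p1)
            joint0 *= p0 if fi else (1 - p0)
        return joint1 / (joint1 + joint0)

    def sigmoid_unit(f):
        """The same computation as one 'neuron': weighted sum plus sigmoid.
        Weights are log-likelihood ratios; the bias absorbs the prior and
        the all-features-absent terms."""
        w = [math.log(p1 / p0) - math.log((1 - p1) / (1 - p0))
             for p1, p0 in zip(p_f_given_c1, p_f_given_c0)]
        b = math.log(p_c / (1 - p_c)) + sum(
            math.log((1 - p1) / (1 - p0))
            for p1, p0 in zip(p_f_given_c1, p_f_given_c0))
        a = b + sum(wi * fi for wi, fi in zip(w, f))
        return 1.0 / (1.0 + math.exp(-a))

    for f in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        assert abs(naive_bayes_posterior(f) - sigmoid_unit(f)) < 1e-9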

Just because someone is presenting you with an algorithm that they call a “neural network” with buzzwords like “scruffy” and “emergent” plastered all over it, disclaiming proudly that they have no idea how the learned network works — well, don’t assume that their little AI algorithm really is Beyond the Realms of Logic. For this paradigm of adhockery, if it works, will turn out to have Bayesian structure; it may even be exactly equivalent to an algorithm of the sort called “Bayesian.”

Today we can actually neuroimage the little pictures in the visual cortex. So, yes, your brain really does represent a detailed image of what it sees or imagines. See Stephen Kosslyn’s Image and Brain: The Resolution of the Imagery Debate.

(P or ¬P) is not always a reliable heuristic, if you substitute arbitrary English sentences for P. “This sentence is false” cannot be consistently viewed as true or false. And then there’s the old classic, “Have you stopped beating your wife?”

If you have a question with a hidden variable, that evaluates to different expressions in different contexts, it feels like reality itself is unstable — what your mind’s eye sees, shifts around depending on where it looks. This often confuses undergraduates (and postmodernist professors) who discover a sentence with more than one interpretation; they think they have discovered an unstable portion of reality. “Oh my gosh! ‘The Sun goes around the Earth’ is true for Hunga Huntergatherer, but for Amara Astronomer, ‘The Sun goes around the Earth’ is false! There is no fixed truth!” The deconstruction of this sophomoric nitwittery is left as an exercise to the reader.

Interlude - An Intuitive Explanation of Bayes’s Theorem

Why does a mathematical concept generate this strange enthusiasm in its students? What is the so-called Bayesian Revolution now sweeping through the sciences, which claims to subsume even the experimental method itself as a special case? What is the secret that the adherents of Bayes know? What is the light that they have seen? Soon you will know. Soon you will be one of us.

The probability that a test gives a true positive divided by the probability that a test gives a false positive is known as the likelihood ratio of that test.
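
As a worked example, here is a sketch in Python using the classic mammography numbers from the essay, if I have remembered them right (1% prevalence, 80% true-positive rate, 9.6% false-positive rate):

    import math

    prior = 0.01                   # P(cancer)
    p_pos_given_cancer = 0.80      # true-positive rate
    p_pos_given_healthy = 0.096    # false-positive rate

    likelihood_ratio = p_pos_given_cancer / p_pos_given_healthy
    posterior_odds = (prior / (1 - prior)) * likelihood_ratio
    posterior = posterior_odds / (1 + posterior_odds)

    print(f"likelihood ratio: {likelihood_ratio:.2f}")              # 8.33
    print(f"evidence: {10 * math.log10(likelihood_ratio):.1f} dB")  # ~9.2 dB
    print(f"P(cancer | positive) = {posterior:.3f}")                # ~0.078

A positive result multiplies the prior odds by the likelihood ratio, and even 8-to-1 evidence only lifts a 1% prior to about 7.8%.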

Statistical models are judged by comparison to the Bayesian method because, in statistics, the Bayesian method is as good as it gets — the Bayesian method defines the maximum amount of mileage you can get out of a given piece of evidence.

“Pay more attention to the prior frequency!” is one of the many things that humans need to bear in mind to partially compensate for our built-in inadequacies.

The Bayesian revolution in the sciences is fueled, not only by more and more cognitive scientists suddenly noticing that mental phenomena have Bayesian structure in them; not only by scientists in every field learning to judge their statistical methods by comparison with the Bayesian method; but also by the idea that science itself is a special case of Bayes’s Theorem; experimental evidence is Bayesian evidence.

Previously, the most popular philosophy of science was probably Karl Popper’s falsificationism — this is the old philosophy that the Bayesian revolution is currently dethroning.

You can even formalize Popper’s philosophy mathematically. The likelihood ratio for X, the quantity P(X|A)/P(X|¬A), determines how much observing X slides the probability for A; the likelihood ratio is what says how strong X is as evidence. Well, in your theory A, you can predict X with probability 1, if you like; but you can’t control the denominator of the likelihood ratio, P(X|¬A) — there will always be some alternative theories that also predict X, and while we go with the simplest theory that fits the current evidence, you may someday encounter some evidence that an alternative theory predicts but your theory does not. That’s the hidden gotcha that toppled Newton’s theory of gravity. So there’s a limit on how much mileage you can get from successful predictions; there’s a limit on how high the likelihood ratio goes for confirmatory evidence. On the other hand, if you encounter some piece of evidence Y that is definitely not predicted by your theory, this is enormously strong evidence against your theory. If P(Y|A) is infinitesimal, then the likelihood ratio will also be infinitesimal. For example, if P(Y|A) is 0.0001%, and P(Y|¬A) is 1%, then the likelihood ratio P(Y|A)/P(Y|¬A) will be 1:10,000. That’s -40 decibels of evidence!

Falsification is much stronger than confirmation. This is a consequence of the earlier point that very strong evidence is not the product of a very high probability that A leads to X, but the product of a very low probability that not-A could have led to X. This is the precise Bayesian rule that underlies the heuristic value of Popper’s falsificationism.

On the other hand, Popper’s idea that there is only falsification and no such thing as confirmation turns out to be incorrect. Bayes’s Theorem shows that falsification is very strong evidence compared to confirmation, but falsification is still probabilistic in nature; it is not governed by fundamentally different rules from confirmation, as Popper argued.

You are now an initiate of the Bayesian Conspiracy.

Book IV - Mere Reality


As Giulio Giorello put the point in an interview with Daniel Dennett: “Yes, we have a soul. But it’s made of lots of tiny robots.”

Tegmark’s Our Mathematical Universe discusses a number of relevant ideas in philosophy and physics. Among Tegmark’s more novel ideas is his argument that all consistent mathematical structures exist, including worlds with physical laws and boundary conditions entirely unlike our own. He distinguishes these Tegmark worlds from multiverses in more scientifically mainstream hypotheses—e.g., worlds in stochastic eternal inflationary models of the Big Bang and in Hugh Everett’s many-worlds interpretation of quantum physics.

See also the Stanford Encyclopedia of Philosophy’s introduction to “Measurement in Quantum Theory,” and their introduction to several of the views associated with “many worlds” in “Everett’s Relative-State Formulation” and “Many-Worlds Interpretation.”

Part O - Lawful Truth

Matches catch fire because of phosphorus — “safety matches” have phosphorus on the ignition strip; strike-anywhere matches have phosphorus in the match heads. Phosphorus is highly reactive; pure phosphorus glows in the dark and may spontaneously combust. (Henning Brand, who purified phosphorus in 1669, announced that he had discovered Elemental Fire.)

If you would learn to think like reality, then here is the Tao: Since the beginning not one unusual thing has ever happened.

Back when the Greek philosophers were debating what this “real world” thingy might be made of, there were many positions. Heraclitus said, “All is fire.” Thales said, “All is water.” Pythagoras said, “All is number.” Score: Heraclitus: 0 Thales: 0 Pythagoras: 1

Even the huge morass of the blogosphere is embedded in this perfect physics, which is ultimately as orderly as {1, 8, 27, 64, 125, ... }. So the Internet is not a big truck... it’s a series of cubes.

That’s the fundamental difference in mindset. Old School statisticians thought in terms of tools, tricks to throw at particular problems. Bayesians — at least this Bayesian, though I don’t think I’m speaking only for myself — we think in terms of laws.

But when you can use the exact Bayesian calculation that uses every scrap of available knowledge, you are done. You will never find a statistical method that yields a better answer.

“Outside the laboratory, scientists are no wiser than anyone else.” Sometimes this proverb is spoken by scientists, humbly, sadly, to remind themselves of their own fallibility.

In modern society there is a prevalent notion that spiritual matters can’t be settled by logic or observation, and therefore you can have whatever religious beliefs you like. If a scientist falls for this, and decides to live their extralaboratorial life accordingly, then this, to me, says that they only understand the experimental principle as a social convention. They know when they are expected to do experiments and test the results for statistical significance. But put them in a context where it is socially conventional to make up wacky beliefs without looking, and they just as happily do that instead.

Different buildings on a university campus do not belong to different universes, though it may sometimes seem that way. The universe is not divided into mind and matter, or life and nonlife; the atoms in our heads interact seamlessly with the atoms of the surrounding air. Nor is Bayes’s Theorem different from one place to another.

An ambition like that lacks the comfortable modesty of being able to confess that, outside your specialty, you’re no better than anyone else. But if our theories of rationality don’t generalize to everyday life, we’re doing something wrong. It’s not a different universe inside and outside the laboratory.

The second law is a bit harder to understand, as it is essentially Bayesian in nature. Yes, really.

And don’t tell me that knowledge is “subjective.” Knowledge has to be represented in a brain, and that makes it as physical as anything else. For M to physically represent an accurate picture of the state of Y, it must be that M’s physical state correlates with the state of Y. You can take thermodynamic advantage of that — it’s called a Szilárd engine. Or as E. T. Jaynes put it, “The old adage ‘knowledge is power’ is a very cogent truth, both in human relations and in thermodynamics.”
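
The quantitative version of Jaynes’s adage is a standard result (my addition, not a quote): one bit of knowledge about a system in contact with a heat bath at temperature T lets a Szilárd engine extract at most

    W \;=\; k_B T \ln 2

of work, where k_B is Boltzmann’s constant. Knowledge literally is thermodynamic power.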

“Forming accurate beliefs requires a corresponding amount of evidence” is a very cogent truth both in human relations and in thermodynamics: if blind faith actually worked as a method of investigation, you could turn warm water into electricity and ice cubes. Just build a Maxwell’s Demon that has blind faith in molecule velocities.

The exact state of a glass of boiling-hot water may be unknown to you — indeed, your ignorance of its exact state is what makes the molecules’ kinetic energy “heat,” rather than work waiting to be extracted like the momentum of a spinning flywheel.

Part P - Reductionism 101

Many philosophers — particularly amateur philosophers, and ancient philosophers — share a dangerous instinct: If you give them a question, they try to answer it. Like, say, “Do we have free will?”

It is a fact about human psychology that people think they have free will. Finding a more defensible philosophical position doesn’t change, or explain, that psychological fact. Philosophy may lead you to reject the concept, but rejecting a concept is not the same as understanding the cognitive algorithms behind it.

I say all this, because it sometimes seems to me that at least 20% of the real-world effectiveness of a skilled rationalist comes from not stopping too early. If you keep asking questions, you’ll get to your destination eventually. If you decide too early that you’ve found an answer, you won’t.

The term “Mind Projection Fallacy” was coined by the late great Bayesian Master E. T. Jaynes, as part of his long and hard-fought battle against the accursèd frequentists. Jaynes was of the opinion that probabilities were in the mind, not in the environment — that probabilities express ignorance, states of partial information; and if I am ignorant of a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon.

I believe there was a lawsuit where someone alleged that the draft lottery was unfair, because the slips with names on them were not being mixed thoroughly enough; and the judge replied, “To whom is it unfair?”

The only reason for seeing a “paradox” is thinking as though the probability of holding a pair of aces is a property of cards that have at least one ace, or a property of cards that happen to contain the ace of spades.

That’s what happens when you start thinking as if probabilities are in things, rather than probabilities being states of partial information about things. Probabilities express uncertainty, and it is only agents who can be uncertain. A blank map does not correspond to a blank territory.

Consider the archetypal postmodernist attempt to be clever: “The Sun goes around the Earth” is true for Hunga Huntergatherer, but “The Earth goes around the Sun” is true for Amara Astronomer! Different societies have different truths! No, different societies have different beliefs. Belief is of a different type than truth; it’s like comparing apples and probabilities.

When talking about the correspondence between a probability assignment and reality, a better word than “truth” would be “accuracy.” “Accuracy” sounds more quantitative, like an archer shooting an arrow: how close did your probability assignment strike to the center of the target? To make a long story short, it turns out that there’s a very natural way of scoring the accuracy of a probability assignment, as compared to reality: just take the logarithm of the probability assigned to the real state of affairs.
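
In symbols (my gloss): if o_* is the state of affairs that actually occurs, score the probability assignment P by

    \mathrm{score}(P) \;=\; \log P(o_*).

This logarithmic score is a proper scoring rule: your expected score is maximized by reporting the probabilities you actually believe, so there is no way to game it.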

I now try to avoid using the English idiom “I just don’t understand how...” to express indignation. If I genuinely don’t understand how, then my model is being surprised by the facts, and I should discard it and find a better model.

My usual reply ends with the phrase: “If we cannot learn to take joy in the merely real, our lives will be empty indeed.”

But the Great Stories in their current forms have already been told, over and over. I do not think it ill if some of them should change their forms, or diversify their endings. “And they lived happily ever after” seems worth trying at least once.

The border between science fiction and space opera was once drawn as follows: If you can take the plot of a story and put it back in the Old West, or the Middle Ages, without changing it, then it is not real science fiction. In real science fiction, the science is intrinsically part of the plot — you can’t move the story from space to the savanna, not without losing something.

Richard Feynman asked: “What men are poets who can speak of Jupiter if he were like a man, but if he is an immense spinning sphere of methane and ammonia must be silent?” They are savanna poets, who can only tell stories that would have made sense around a campfire ten thousand years ago. Savanna poets, who can tell only the Great Stories in their classic forms, and nothing more.

This puts quite a different complexion on the bizarre habit indulged by those strange folk called scientists, wherein they suddenly become fascinated by pocket lint or bird droppings or rainbows, or some other ordinary thing which world-weary and sophisticated folk would never give a second glance. You might say that scientists — at least some scientists — are those folk who are in principle capable of enjoying life in the real universe.

Part Q - Joy in the Merely Real

I strongly suspect that a major part of science’s PR problem in the population at large is people who instinctively believe that if knowledge is given away for free, it cannot be important. If you had to undergo a fearsome initiation ritual to be told the truth about evolution, maybe people would be more satisfied with the answer.

The consistent solution which maintains the possibility of fun is to stop worrying about what other people know. If you don’t know the answer, it’s a mystery to you.

Born into a world of science, they did not become scientists. What makes them think that, in a world of magic, they would act any differently?

So remember the Litany Against Being Transported Into An Alternate Universe: If I’m going to be happy anywhere, Or achieve greatness anywhere, Or learn true secrets anywhere, Or save the world anywhere, Or feel strongly anywhere, Or help people anywhere, I may as well do it in reality.

If you only care about scientific issues that are controversial, you will end up with a head stuffed full of garbage.

On the Internet, a good new explanation of old science is news and it spreads like news. Why couldn’t the science sections of newspapers work the same way? Why isn’t a new explanation worth reporting on?

There are atheists who have religion-shaped holes in their minds. I have seen attempts to substitute atheism or even transhumanism for religion. And the result is invariably awful. Utterly awful. Absolutely abjectly awful. I call such efforts, “hymns to the nonexistence of God.”

There is an acid test of attempts at post-theism. The acid test is: “If religion had never existed among the human species—if we had never made the original mistake—would this song, this art, this ritual, this way of thinking, still make sense?”

What follows is taken primarily from Robert Cialdini’s Influence: The Psychology of Persuasion. I own three copies of this book: one for myself, and two for loaning to friends.

The conventional theory for explaining this is “psychological reactance,” social-psychology-speak for “When you tell people they can’t do something, they’ll just try even harder.” The fundamental instincts involved appear to be preservation of status and preservation of options. We resist dominance, when any human agency tries to restrict our freedom. And when options seem to be in danger of disappearing, even from natural causes, we try to leap on the option before it’s gone.

The light came on when I realized that I was looking at a trick of Dark Side Epistemology — if you make something private, that shields it from criticism. You can say, “You can’t criticize me, because this is my private, inner experience that you can never access to question it.” But the price of shielding yourself from criticism is that you are cast into solitude — the solitude that William James admired as the core of religious experience, as if loneliness were a good thing.

Religion is a poisoned chalice, from which we had best not even sip. Spirituality is the same cup after the original pellet of poison has been taken out, and only the dissolved portion remains — a little less directly lethal, but still not good for you.

If scientific knowledge were hidden in ancient vaults (rather than hidden in inconvenient pay-for-access journals), at least then people would try to get into the vaults. They’d be desperate to learn science. Especially when they saw the power that Eighth Level Physicists could wield, and were told that they weren’t allowed to know the explanation.

Right now, we’ve got the worst of both worlds. Science isn’t really free, because the courses are expensive and the textbooks are expensive. But the public thinks that anyone is allowed to know, so it must not be important. Ideally, you would want to arrange things the other way around.

Part R - Physicalism 201

If you clear your mind of justification, of argument, then it seems obvious why Occam’s Razor works in practice: we live in a simple world, a low-entropy universe in which there are short explanations to be found. “But,” you cry, “why is the universe itself orderly?” This I do not know, but it is what I see as the next mystery to be explained.

Truth can’t be evaluated just by looking inside your own head — if you want to know, for example, whether “the morning star = the evening star,” you need a telescope; it’s not enough just to look at the beliefs themselves. This is the point missed by the postmodernist folks screaming, “But how do you know your beliefs are true?” When you do an experiment, you actually are going outside your own head. You’re engaging in a complex interaction whose outcome is causally determined by the thing you’re reasoning about, not just your beliefs about it. I once defined “reality” as follows: Even when I have a simple hypothesis, strongly supported by all the evidence I know, sometimes I’m still surprised. So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies “belief,” and the latter thingy “reality.”

(For those unfamiliar with zombies, I emphasize that this is not a strawman. See, for example, the Stanford Encyclopedia of Philosophy entry on Zombies. The “possibility” of zombies is accepted by a substantial fraction, possibly a majority, of academic philosophers of consciousness.)

The sound of these words is probably represented in your auditory cortex, as though you’d heard someone else say it. (Why do I think this? Because native Chinese speakers can remember longer digit sequences than English speakers. Chinese digits are all single syllables, and so Chinese speakers can remember around ten digits, versus the famous “seven plus or minus two” for English speakers. There appears to be a loop of repeating sounds back to yourself, a size limit on working memory in the auditory cortex, which is genuinely phoneme-based.)

The technical term for the belief that consciousness is there, but has no effect on the physical world, is epiphenomenalism.

So you can’t say that the philosopher is writing about consciousness because of consciousness, while the zombie twin is writing about consciousness because of a Zombie Master or AI chatbot. When you trace back the chain of causality behind the keyboard, to the internal narrative echoed in the auditory cortex, to the cause of the narrative, you must find the same physical explanation in our world as in the zombie world.

SIR ROGER PENROSE: “The thought experiment you propose is impossible. You can’t duplicate the behavior of neurons without tapping into quantum gravity. That said, there’s not much point in me taking further part in this conversation.” (Wanders away.)

The moral of this story is that when you follow back discourse about “consciousness,” you generally find consciousness. It’s not always right in front of you. Sometimes it’s very cleverly hidden. But it’s there. Hence the Generalized Anti-Zombie Principle.

There are two Bayesian formalizations of Occam’s Razor: Solomonoff induction, and Minimum Message Length.

So... is the idea here, that creationism could be true, but even if it were true, you wouldn’t be allowed to teach it in science class, because science is only about “natural” things? It seems clear enough that this notion stems from the desire to avoid a confrontation between science and religion. You don’t want to come right out and say that science doesn’t teach Religious Claim X because X has been tested by the scientific method and found false. So instead, you can... um... claim that science is excluding hypothesis X a priori. That way you don’t have to discuss how experiment has falsified X a posteriori.

By far the best definition I’ve ever heard of the supernatural is Richard Carrier’s: A “supernatural” explanation appeals to ontologically basic mental things, mental entities that cannot be reduced to nonmental entities.

Part S - Quantum Physics and Many Worlds

The universe is not wavering between using particles and waves, unable to make up its mind. It’s only human intuitions about quantum mechanics that swap back and forth.

The order in which humanity discovered things is not necessarily the best order in which to teach them.

Configurations don’t keep track of where particles come from. A configuration’s identity is just, “a photon here, a photon there; an electron here, an electron there.” No matter how you get into that situation, so long as there are the same species of particles in the same places, it counts as the same configuration.

But I had read in Feynman’s popular books that if you really understood physics, you ought to be able to explain it to a nonphysicist. I believed Feynman instead of my father, because Feynman had won the Nobel Prize and my father had not. It was not until later — when I was reading the Feynman Lectures, in fact — that I realized that my father had given me the simple and honest truth. No math = no physics.

This is Bayes’s Theorem. I own at least two distinct items of clothing printed with this theorem, so it must be important.
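
Since the excerpt doesn’t reproduce it, here is the theorem in the standard form the essay builds up to:

    P(A \mid X) \;=\; \frac{P(X \mid A)\,P(A)}{P(X \mid A)\,P(A) \;+\; P(X \mid \neg A)\,P(\neg A)}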

I sometimes go around saying that the fundamental question of rationality is Why do you believe what you believe?

Macroscopic decoherence, a.k.a. many-worlds, was first proposed in a 1957 paper by Hugh Everett III. The paper was ignored. John Wheeler told Everett to see Niels Bohr. Bohr didn’t take him seriously. Crushed, Everett left academic physics, invented the general use of Lagrange multipliers in optimization problems, and became a multimillionaire.

Looking back, it may seem like one meta-lesson to learn from history, is that philosophy really matters in science — it’s not just some adjunct of a separate academic field. After all, the early quantum scientists were doing all the right experiments. It was their interpretation that was off. And the problems of interpretation were not the result of their getting the statistics wrong. Looking back, it seems like the errors they made were errors in the kind of thinking that we would describe as, well, “philosophical.”

It was once said that every science begins as philosophy, but then grows up and leaves the philosophical womb, so that at any given time, “Philosophy” is what we haven’t turned into science yet.

The only reason why many-worlds is not universally acknowledged as a direct prediction of physics which requires magic to violate, is that a contingent accident of our Earth’s scientific history gave an entrenched academic position to a phlogiston-like theory that had an unobservable faster-than-light magical “collapse” devouring all other worlds.

I am not in academia. I am not constrained to bow and scrape to some senior physicist who hasn’t grasped the obvious, but who will be reviewing my journal articles. I need have no fear that I will be rejected for tenure on account of scaring my students with “science-fiction tales of other Earths.” If I can’t speak plainly, who can? So let me state then, very clearly, on behalf of any and all physicists out there who dare not say it themselves: Many-worlds wins outright given our current state of evidence.

Part T - Science and Rationality

I like having lots of hidden motives. It’s the closest I can ethically get to being a supervillain.

Scott Aaronson suggests that many-worlds and libertarianism are similar in that they are both cases of bullet-swallowing, rather than bullet-dodging: Libertarianism and MWI are both grand philosophical theories that start from premises that almost all educated people accept (quantum mechanics in the one case, Econ 101 in the other), and claim to reach conclusions that most educated people reject, or are at least puzzled by (the existence of parallel universes / the desirability of eliminating fire departments).

The core argument for libertarianism is not that libertarianism would work in a perfect world, but that it degrades gracefully into real life.

Libertarianism secretly relies on most individuals being prosocial enough to tip at a restaurant they won’t ever visit again. An economy of genuinely selfish human-level agents would implode. Similarly, Science relies on most scientists not committing sins so egregious that they can’t rationalize them away.

Science is built around the assumption that you’re too stupid and self-deceiving to just use Solomonoff induction. After all, if it was that simple, we wouldn’t need a social process of science... right?

How could Science say which theory was right, in advance of the experimental test? Science doesn’t care where your theory comes from — it just says, “Go test it.” This is the great strength of Science, and also its great weakness.

Not everything with future consequences is cheap to test now.

If we were all perfect Bayesians, we wouldn’t need a social process of science.

I did not generalize the concept of “mysterious answers to mysterious questions,” in that many words, until I was writing a Bayesian analysis of what distinguishes technical, nontechnical and semitechnical scientific explanations. Now, the final output of that analysis can be phrased nontechnically in terms of four danger signs:

  • First, the explanation acts as a curiosity-stopper rather than an anticipation-controller.
  • Second, the hypothesis has no moving parts—the secret sauce is not a specific complex mechanism, but a blankly solid substance or force.
  • Third, those who proffer the explanation cherish their ignorance; they speak proudly of how the phenomenon defeats ordinary science or is unlike merely mundane phenomena.
  • Fourth, even after the answer is given, the phenomenon is still a mystery and possesses the same quality of wonderful inexplicability that it had at the start.

Once upon a time, there was something I trusted. Eliezer18 trusted Science. Eliezer18 dutifully acknowledged that the social process of science was flawed. Eliezer18 dutifully acknowledged that academia was slow, and misallocated resources, and played favorites, and mistreated its precious heretics. That’s the convenient thing about acknowledging flaws in people who failed to live up to your ideal; you don’t have to question the ideal itself. But who could possibly be foolish enough to question, “The experimental method shall decide which hypothesis wins”?

Not even Science can save you. The ideals of Science were born centuries ago, in a time when no one knew anything about probability theory or cognitive biases. Science demands too little of you, it blesses your good intentions too easily, it is not strict enough, it only makes those injunctions that an average scientist can follow, it accepts slowness as a fact of life.

No, not even if you turn to Bayescraft. It’s much harder to use and you’ll never be sure that you’re doing it right. The discipline of Bayescraft is younger by far than the discipline of Science. You will find no textbooks, no elderly mentors, no histories written of success and failure, no hard-and-fast rules laid down. You will have to study cognitive biases, and probability theory, and evolutionary psychology, and social psychology, and other cognitive sciences, and Artificial Intelligence—and think through for yourself how to apply all this knowledge to the case of correcting yourself, since that isn’t yet in the textbooks.

It was the notion that you could actually in real life follow Science and fail miserably that Eliezer18 didn’t really, emotionally believe was possible. Oh, of course he said it was possible. Eliezer18 dutifully acknowledged the possibility of error, saying, “I could be wrong, but...” But he didn’t think failure could happen in, you know, real life. You were supposed to look for flaws, not actually find them.

No one begins to truly search for the Way until their parents have failed them, their gods are dead, and their tools have shattered in their hand.

I’m a good deal less of a lonely iconoclast than I seem. Maybe it’s just the way I talk. The points of departure between myself and mainstream let’s-reformulate-Science-as-Bayesianism are that: (1) I’m not in academia and can censor myself a lot less when it comes to saying “extreme” things that others might well already be thinking. (2) I think that just teaching probability theory won’t be nearly enough. We’ll have to synthesize lessons from multiple sciences, like cognitive biases and social psychology, forming a new coherent Art of Bayescraft, before we are actually going to do any better in the real world than modern science.

It is a priority bias: Some scientist who successfully reasoned from the smallest amount of experimental evidence got to the truth first.

But my final moral is that the frontier where the individual scientist rationally knows something that Science has not yet confirmed is not always some innocently data-driven matter of spotting a strong regularity in a mountain of experiments. Sometimes the scientist gets there by thinking great high-minded thoughts that Science does not trust you to think.

Once I unthinkingly thought this way too, with respect to Einstein in particular, until reading Julian Barbour’s The End of Time cured me of it.

This is what Bayesians do instead of taking the squared error of things; we require invariances.

Humans are very fond of making their predictions afterward, so the social process of science requires an advance prediction before we say that a result confirms a theory.

If injecting randomness results in a reliable improvement, then some aspect of the algorithm must do reliably worse than random. Only in AI would people devise algorithms literally dumber than a bag of bricks, boost the results slightly back toward ignorance, and then argue for the healing power of noise.

I hold that everyone needs to learn at least one technical subject: physics, computer science, evolutionary biology, Bayesian probability theory, or something. Someone with no technical subjects under their belt has no referent for what it means to “explain” something.

It is written nowhere in the math of probability theory that one may have no fun.

A successful theory can embrace many models for different domains, so long as the models are acknowledged as approximations, and in each case the model is compatible with (or ideally mandated by) the underlying theory.

That is also a hazard of a semitechnical theory. Even after the flash of genius insight is confirmed, merely average scientists may fail to apply the insights properly in the absence of formal models. As late as the 1960s biologists spoke of evolution working “for the good of the species,” or suggested that individuals would restrain their reproduction to prevent species overpopulation of a habitat. The best evolutionary theorists knew better, but average theorists did not.

One of the classic signs of a poor hypothesis is that it must expend great effort in avoiding falsification — elaborating reasons why the hypothesis is compatible with the phenomenon, even though the phenomenon didn’t behave as expected.

Only after General Relativity precisely produced the perihelion advance of Mercury did we know Newtonian gravitation would never explain it.

Popper erred in thinking that falsification was qualitatively different from confirmation; both are governed by the same Bayesian rules. But Popper’s philosophy reflected an important truth about a quantitative difference between falsification and confirmation.

On Popper’s philosophy, the strength of a scientific theory is not how much it explains, but how much it doesn’t explain. The virtue of a scientific theory lies not in the outcomes it permits, but in the outcomes it prohibits. Freud’s theories, which seemed to explain everything, prohibited nothing. Translating this into Bayesian terms, we find that the more outcomes a model prohibits, the more probability density the model concentrates in the remaining, permitted outcomes. The more outcomes a theory prohibits, the greater the knowledge-content of the theory. The more daringly a theory exposes itself to falsification, the more definitely it tells you which experiences to anticipate.
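
The Bayesian translation is just conservation of probability mass (my gloss): a model’s predictions must satisfy

    \sum_{o} P(o \mid T) \;=\; 1,

so every outcome the theory prohibits frees up probability mass that must land on the outcomes it permits, sharpening what the theory tells you to anticipate.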

Book V - Mere Goodness


Ends: An Introduction

Utopia-planning has become rather passé—partly because it smacks of naiveté, and partly because we’re empirically terrible at translating utopias into realities. Even the word utopia reflects this cynicism; it is derived from the Greek for “non-place.”

Part U - Fake Preferences

Drew McDermott’s “Artificial Intelligence Meets Natural Stupidity.”

Culture is not nearly so powerful as a good many Marxist academics once liked to think. For more on this I refer you to Tooby and Cosmides’s “The Psychological Foundations of Culture” or Steven Pinker’s The Blank Slate.

E. T. Jaynes’s Probability Theory: The Logic of Science.

ATP synthase is a molecular machine — one of three known occasions when evolution has invented the freely rotating wheel — that is essentially the same in animal mitochondria, plant chloroplasts, and bacteria.

Part V - Value Theory

Now, one lesson you might derive from this is “Don’t be born with a stupid prior.” This is an amazingly helpful principle on many real-world problems, but I doubt it will satisfy philosophers.

We want to have a coherent causal story about how our mind comes to know something, a story that explains how the process we used to arrive at our beliefs is itself trustworthy. This is the essential demand behind the rationalist’s fundamental question, “Why do you believe what you believe?”

At this point I feel obliged to drag up the point that rationalists are not out to win arguments with ideal philosophers of perfect emptiness; we are simply out to win. For which purpose we want to get as close to the truth as we can possibly manage.

Lewis Carroll, who was also a mathematician, once wrote a short dialogue called “What the Tortoise said to Achilles.” If you have not yet read this ancient classic, consider doing so now.

You have to train yourself to be deliberately aware of the distinction between the curried and uncurried forms of concepts.
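
A toy sketch of that distinction (my example, with made-up names; Python happens to make currying concrete):

    def attractiveness(observer: str, entity: str) -> float:
        """Uncurried, two-place form: a relation between observer and thing."""
        scores = {("Alice", "sunset"): 0.9, ("Bob", "sunset"): 0.4}
        return scores.get((observer, entity), 0.5)

    def attractiveness_to(observer: str):
        """Curried, one-place form: fix the observer, and what remains looks
        like a property of the thing alone -- which is how two-place concepts
        get mistaken for one-place ones."""
        return lambda entity: attractiveness(observer, entity)

    alice_finds = attractiveness_to("Alice")
    print(alice_finds("sunset"))  # 0.9 -- same number, different conceptual type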

To those who say “Nothing is real,” I once replied, “That’s great, but how does the nothing work?”

To paraphrase Piers Anthony, only those who have moralities worry over whether or not they have them.

Why, Friendly AI isn’t hard at all! All you need is an AI that does what’s good! Oh, sure, not every possible mind does what’s good — but in this case, we just program the superintelligence to do what’s good. All you need is a neural network that sees a few instances of good things and not-good things, and you’ve got a classifier. Hook that up to an expected utility maximizer and you’re done!

And without a complicated effort backed up by considerable knowledge, a neurologically intact human being cannot pretend to be genuinely, truly selfish. We’re born with a sense of fairness, honor, empathy, sympathy, and even altruism — the result of our ancestors’ adapting to play the iterated Prisoner’s Dilemma.

Who is the most formidable, among the human kind? The strongest? The smartest? More often than either of these, I think, it is the one who can call upon the most friends.

(To understand unsympathetic optimization processes, I would suggest studying natural selection, which doesn’t bother to anesthetize fatally wounded and dying creatures, even when their pain no longer serves any reproductive purpose, because the anesthetic would serve no reproductive purpose either.)

I sometimes think that futuristic ideals phrased in terms of “getting rid of work” would be better reformulated as “removing low-quality work to make way for high-quality work.”

To look at it another way, if we’re looking for a suitable long-run meaning of life, we should look for goals that are good to pursue and not just good to satisfy.

There must be the true effort, the true victory, and the true experience — the journey, the destination and the traveler.

Every Utopia ever constructed — in philosophy, fiction, or religion — has been, to one degree or another, a place where you wouldn’t actually want to live. I am not alone in this important observation: George Orwell said much the same thing in “Why Socialists Don’t Believe In Fun,” and I expect that many others said it earlier.

When I was a child I couldn’t write fiction because I wrote things to go well for my characters — just like I wanted things to go well in real life. Which I was cured of by Orson Scott Card: Oh, I said to myself, that’s what I’ve been doing wrong, my characters aren’t hurting. Even then, I didn’t realize that the microstructure of a plot works the same way — until Jack Bickham said that every scene must end in disaster. Here I’d been trying to set up problems and resolve them, instead of making them worse . . . You simply don’t optimize a story the way you optimize a real life. The best story and the best life will be produced by different criteria.

There is another rule of writing which states that stories have to shout. A human brain is a long way off those printed letters. Every event and feeling needs to take place at ten times natural volume in order to have any impact at all. You must not try to make your characters behave or feel realistically — especially, you must not faithfully reproduce your own past experiences — because without exaggeration, they’ll be too quiet to rise from the page.

"There is never any mystery-in-the-world. Mystery is a property of questions, not answers."

Part W - Quantified Humanism

It probably helps in interpreting the Allais Paradox to have absorbed more of the gestalt of the field of heuristics and biases, such as: Experimental subjects tend to defend incoherent preferences even when they’re really silly. People put very high values on small shifts in probability away from 0 or 1 (the certainty effect).
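
If I’m remembering the essay’s numbers right (Experiment 1: $24,000 with certainty, versus a 33/34 chance of $27,000; Experiment 2: a 34% chance of $24,000, versus a 33% chance of $27,000), the incoherence of the typical pattern of preferences takes two lines to exhibit, taking U($0) = 0:

    \text{1A} \succ \text{1B} \;\Rightarrow\; U(\$24\mathrm{k}) \;>\; \tfrac{33}{34}\,U(\$27\mathrm{k})

    \text{2B} \succ \text{2A} \;\Rightarrow\; 0.34\,U(\$24\mathrm{k}) \;<\; 0.33\,U(\$27\mathrm{k}) \;\Rightarrow\; U(\$24\mathrm{k}) \;<\; \tfrac{33}{34}\,U(\$27\mathrm{k})

No utility function whatsoever satisfies both inequalities; preferring 1A and 2B is what the certainty effect buys you.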

As Peter Norvig once pointed out, if Asimov’s robots had strict priority for the First Law of Robotics (“A robot shall not harm a human being, nor through inaction allow a human being to come to harm”) then no robot’s behavior would ever show any sign of the other two Laws; there would always be some tiny First Law factor that would be sufficient to determine the decision.
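
Norvig’s point is just lexicographic ordering, which a toy sketch makes vivid (my illustration; the numbers are made up):

    # Each candidate action scored as (harm_to_humans, disobedience, self_risk);
    # lower is better on every axis. Python compares tuples lexicographically,
    # which is exactly "strict priority for the First Law."
    actions = {
        "rescue the human":       (0.0000001, 0.9, 0.9),
        "obey and preserve self": (0.0000002, 0.0, 0.0),
    }
    best = min(actions, key=actions.get)
    print(best)  # "rescue the human": a 1e-7 First Law edge settles it,
                 # so the Second and Third Laws never influence behavior.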

Where music is concerned, I care about the journey. When lives are at stake, I shut up and multiply.

I have seen many people struggling to excuse themselves from their ethics. Always the modification is toward lenience, never to be more strict. And I am stunned by the speed and the lightness with which they strive to abandon their protections. Hobbes said, “I don’t know what’s worse, the fact that everyone’s got a price, or the fact that their price is so low.”

(The essential difficulty in becoming a master rationalist is that you need quite a bit of rationality to bootstrap the learning process.)

Only when you become more wedded to success than to any of your beloved techniques of rationality do you begin to appreciate these words of Miyamoto Musashi: You can win with a long weapon, and yet you can also win with a short weapon. In short, the Way of the Ichi school is the spirit of winning, whatever the weapon and whatever its size. —Miyamoto Musashi, The Book of Five Rings

I object to the air of authority given to these numbers pulled out of thin air.

First, foremost, fundamentally, above all else: Rational agents should WIN.

It is precisely the notion that Nature does not care about our algorithm that frees us up to pursue the winning Way — without attachment to any particular ritual of cognition, apart from our belief that it wins. Every rule is up for grabs, except the rule of winning.

You shouldn’t claim to be more rational than someone and simultaneously envy them their choice — only their choice. Just do the act you envy.

Interlude - Twelve Virtues of Rationality

The first virtue is curiosity. A burning itch to know is higher than a solemn vow to pursue truth.

The second virtue is relinquishment. P. C. Hodgell said: “That which can be destroyed by the truth should be.”

The third virtue is lightness. Let the winds of evidence blow you about as though you are a leaf, with no direction of your own.

The fourth virtue is evenness.

The fifth virtue is argument.

The sixth virtue is empiricism. The roots of knowledge are in observation and its fruit is prediction.

The seventh virtue is simplicity.

The eighth virtue is humility. To be humble is to take specific actions in anticipation of your own errors. To confess your fallibility and then do nothing about it is not humble; it is boasting of your modesty.

The ninth virtue is perfectionism.

The tenth virtue is precision.

The eleventh virtue is scholarship. Study many sciences and absorb their power as your own. Each field that you consume makes you larger. If you swallow enough sciences the gaps between them will diminish and your knowledge will become a unified whole. If you are gluttonous you will become vaster than mountains.

Musashi wrote: “When you appreciate the power of nature, knowing the rhythm of any situation, you will be able to hit the enemy naturally and strike naturally. All this is the Way of the Void.” These then are twelve virtues of rationality: Curiosity, relinquishment, lightness, evenness, argument, empiricism, simplicity, humility, perfectionism, precision, scholarship, and the void.

Book VI - Becoming Stronger

These writings also sparked the establishment of the Center for Applied Rationality, a nonprofit organization that attempts to translate results from the science of rationality into usable techniques for self-improvement.

Part X - Yudkowsky’s Coming of Age

My parents always used to downplay the value of intelligence. And play up the value of — effort, as recommended by the latest research? No, not effort. Experience. A nicely unattainable hammer with which to smack down a bright young child, to be sure.

It was dysrationalia that did them in; they used their intelligence only to defeat itself.

My youthful disbelief in a mathematics of general intelligence was simultaneously one of my all-time worst mistakes, and one of my all-time best mistakes.

One of my major childhood influences was reading Jerry Pournelle’s A Step Farther Out, at the age of nine. It was Pournelle’s reply to Paul Ehrlich and the Club of Rome, who were saying, in the 1960s and 1970s, that the Earth was running out of resources and massive famines were only years away. It was a reply to Jeremy Rifkin’s so-called fourth law of thermodynamics; it was a reply to all the people scared of nuclear power and trying to regulate it into oblivion.

According to this source, the FDA’s longer approval process prevents 5,000 casualties per year by screening off medications found to be harmful, and causes at least 20,000–120,000 casualties per year just by delaying approval of those beneficial medications that are still developed and eventually approved.
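
Taking those figures at face value, the net arithmetic is stark: (20,000 to 120,000) - 5,000 = 15,000 to 115,000 excess casualties per year, i.e., the toll of delay runs somewhere between four and twenty-four times the benefit of screening.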

Don’t ask me to read my old writings; that’s too much pain.

Thus (he said) there are three “hard problems”: the hard problem of conscious experience, in which we see that qualia cannot arise from computable processes; the hard problem of existence, in which we ask how any existence enters apparently from nothingness; and the hard problem of morality, which is to get to an “ought.”

Eliezer2000 lives by the rule that you should always be ready to have your thoughts broadcast to the whole world at any time, without embarrassment. Otherwise, clearly, you’ve fallen from grace: either you’re thinking something you shouldn’t be thinking, or you’re embarrassed by something that shouldn’t embarrass you. (These days, I don’t espouse quite such an extreme viewpoint, mostly for reasons of Fun Theory. I see a role for continued social competition between intelligent life-forms, at least as far as my near-term vision stretches. I admit, these days, that it might be all right for human beings to have a self; as John McCarthy put it, “If everyone were to live for others all the time, life would be like a procession of ants following each other around in a circle.” If you’re going to have a self, you may as well have secrets, and maybe even conspiracies. But I do still try to abide by the principle of being able to pass a future lie detector test, with anyone else who’s also willing to go under the lie detector, if the topic is a professional one. Fun Theory needs a commonsense exception for global catastrophic risk management.)

If your actions don’t look good when they’re stripped of all their justifications and presented as mere brute facts... then maybe you should re-examine them.

I once lent Xiaoguang “Mike” Li my copy of Probability Theory: The Logic of Science.

I’d enjoyed math proofs before I encountered Jaynes. But E. T. Jaynes was the first time I picked up a sense of formidability from mathematical arguments. Maybe because Jaynes was lining up “paradoxes” that had been used to object to Bayesianism, and then blasting them to pieces with overwhelming firepower — power being used to overcome others.

When Marcello Herreshoff had known me for long enough, I asked him if he knew of anyone who struck him as substantially more natively intelligent than myself. Marcello thought for a moment and said “John Conway — I met him at a summer math camp.”

They would lay out arguments for why World War II was inevitable and would have happened in more or less the same way, even if Hitler had become an architect. But in sober historical fact, this is an unreasonable belief; I chose the example of World War II because from my reading, it seems that events were mostly driven by Hitler’s personality, often in defiance of his generals and advisors. There is no particular empirical justification that I happen to have heard of for doubting this. The main reason to doubt would be refusal to accept that the universe could make so little sense — that horrible things could happen so lightly, for no more reason than a roll of the dice. But why not? What prohibits it?

Part Y - Challenging the Difficult

Tsuyoku naritai is Japanese. Tsuyoku is “strong”; naru is “becoming,” and the form naritai is “want to become.” Together it means “I want to become stronger,” and it expresses a sentiment embodied more intensely in Japanese works than in any Western literature I’ve read.

When there’s a will to fail, obstacles can be found. —John McCarthy

Richard Hamming used to go around asking his fellow scientists two questions: “What are the important problems in your field?,” and, “Why aren’t you working on them?”

Trying to do the impossible is definitely not for everyone. Exceptional talent is only the ante to sit down at the table. The chips are the years of your life. If wagering those chips and losing seems like an unbearable possibility to you, then go do something else. Seriously. Because you can lose.

This is hardly an original observation on my part: but entrepreneurship, risk-taking, leaving the herd, are still advantages the West has over the East. And since Japanese scientists are not yet preeminent over American ones, this would seem to count for at least as much as desperate efforts.

Part Z - The Craft and the Community

When you consider it — these are all rather basic matters of study, as such things go. A quick introduction to all of them (well, except naturalistic metaethics) would be... a four-credit undergraduate course with no prerequisites? But there are Nobel laureates who haven’t taken that course! Richard Smalley if you’re looking for a cheap shot, or Robert Aumann if you’re looking for a scary shot.

That’s what the dead canary, religion, is telling us: that the general sanity waterline is currently really ridiculously low. Even in the highest halls of science.

Yet the Rorschach is still in use. It’s just such a good story that psychotherapists simply can’t bring themselves to believe the vast mounds of experimental evidence saying it doesn’t work — which tells you what sort of field we’re dealing with here.

In the entire absence of the slightest experimental evidence for their effectiveness, psychotherapists became licensed by states, their testimony accepted in court, their teaching schools accredited, and their bills paid by health insurance.

If you give colleges the power to grant degrees, then do they have an incentive not to fail people? (I consider it drop-dead obvious that the task of verifying acquired skills and hence the power to grant degrees should be separated from the institutions that do the teaching, but let’s not go into that.)

So I have a problem with the idea that the Dark Side, thanks to their pluralistic ignorance and affective death spirals, will always win because they are better coordinated than us.

If I were setting forth to systematically train rationalists, there would be lessons on how to disagree and lessons on how to agree, lessons intended to make the trainee more comfortable with dissent, and lessons intended to make them more comfortable with conformity.

Our culture puts all the emphasis on heroic disagreement and heroic defiance, and none on heroic agreement or heroic group consensus. We signal our superior intelligence and our membership in the nonconformist community by inventing clever objections to others’ arguments. Perhaps that is why the technophile / Silicon Valley crowd stays marginalized, losing battles with less nonconformist factions in larger society. No, we’re not losing because we’re so superior, we’re losing because our exclusively individualist traditions sabotage our ability to cooperate.

If the issue isn’t worth your personally fixing by however much effort it takes, and it doesn’t arise from outright bad faith, it’s not worth refusing to contribute your efforts to a cause you deem worthwhile.

To help break the mold to start with — the straitjacket of cached thoughts on how to do this sort of thing — consider that some modern offices may also fill the same role as a church. By which I mean that some people are fortunate to receive community from their workplaces: friendly coworkers who bake brownies for the office, whose teenagers can be safely hired for babysitting, and maybe even help in times of catastrophe...? But certainly not everyone is lucky enough to find a community at the office.

Looking at a typical religious church, for example, you could suspect — although all of these things would be better tested experimentally, than just suspected —

- That getting up early on a Sunday morning is not optimal;
- That wearing formal clothes is not optimal, especially for children;
- That listening to the same person give sermons on the same theme every week (“religion”) is not optimal;
- That the cost of supporting a church and a pastor is expensive, compared to the number of different communities who could time-share the same building for their gatherings;
- That they probably don’t serve nearly enough of a matchmaking purpose, because churches think they’re supposed to enforce their medieval moralities;
- That the whole thing ought to be subject to experimental data-gathering to find out what works and what doesn’t.

Conversely, maybe keeping current on some insurance policies should be a requirement for membership, lest you rely too much on the community... But again, to the extent that churches provide community, they’re trying to do it without actually admitting that this is nearly all of what people get out of it.

But if you’re explicitly setting out to build community — then right after a move is when someone most lacks community, when they most need your help. It’s also an opportunity for the band to grow. If anything, tribes ought to be competing at quarterly exhibitions to capture newcomers.

But it’s still an interesting point that Science manages to survive not because it is in our collective individual interest to see Science get done, but rather, because Science has fastened itself as a parasite onto the few forms of large organization that can exist in our world. There are plenty of other projects that simply fail to exist in the first place.

There is this very, very old puzzle/observation in economics about the lawyer who spends an hour volunteering at the soup kitchen, instead of working an extra hour and donating the money to hire someone to work for five hours at the soup kitchen.

But if you find that you are like me in this aspect — that selfish good deeds still work — then I recommend that you purchase warm fuzzies and utilons separately. Not at the same time. Trying to do both at the same time just means that neither ends up done well. If status matters to you, purchase status separately too!

Cialdini suggests that if you’re ever in emergency need of help, you point to one single bystander and ask them for help — making it very clear to whom you’re referring. Remember that the total group, combined, may have less chance of helping than one individual.

There are three great besetting sins of rationalists in particular, and the third of these is underconfidence.

And when it comes to the particular questions of confidence, overconfidence, and underconfidence — being interpreted now in the broader sense, not just calibrated confidence intervals — then there is a natural tendency to cast overconfidence as the sin of pride, out of that other list which never warned against the improper use of humility or the abuse of doubt.

One of the chief ways that smart people end up stupid is by getting so used to winning that they stick to places where they know they can win — meaning that they never stretch their abilities, they never try anything difficult.

As David Stove observes, most “great thinkers” in philosophy, e.g., Hegel, are properly objects of pity. That’s what happens by default to anyone who sets out to develop the art of thinking; they develop fake answers.

To the best of my knowledge there is no true science that draws its strength from only one person. To the best of my knowledge that is strictly an idiom of cults. A true science may have its heroes, it may even have its lonely defiant heroes, but it will have more than one.