# The Signal and the Noise, and Other Readings

The Signal and the Noise

Since last year’s presidential election, everyone has heard of the legendary Nate Silver, who predicted the outcomes of all 50 states correctly. Given that he also correctly predicted 49 out of 50 states in the 2008 election, this repeat feat seemed like clairvoyance, not coincidence. So the question is, what did Silver do right that so many pollsters and pundits did wrong?

Statistics.

The Signal and the Noise (2012) is basically a popular applied statistics book, with more history, philosophy, and psychology than formulas. The first half illustrates failures of prediction, covering the 2007–8 financial crisis, elections, sports, and natural disasters; the second half explains how to predict correctly, using Bayesian probability. Overall it does an excellent job of explaining the concepts without going into mathematical detail (probably a plus for most readers; even as a math person, I know where to look up the details).
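
As a taste of the Bayesian approach the book advocates, here is a minimal Bayes’-rule update; the function and the poll numbers are my own illustration, not an example from the book:

```python
def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Posterior probability of a hypothesis after observing evidence,
    by Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Hypothetical numbers: a candidate starts with a 50% prior chance of
# leading; a poll showing them ahead appears 80% of the time when they
# truly lead and 30% of the time when they don't.
posterior = bayes_update(prior=0.5, p_evidence_if_true=0.8,
                         p_evidence_if_false=0.3)  # ~0.727
```

Each new poll’s posterior then becomes the prior for the next update, which is roughly how a forecast accumulates evidence over time.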

Sidenote: While I was reading the chess section, my mind literally blanked for about 10 seconds upon seeing the following:

My chess intuition immediately told me that something was wrong: there is no way this position could have occurred “after Kasparov’s 3rd move.” Since Kasparov was white, the white side must have made three moves, yet there are clearly only two: the knight on f3 (from g1) and the pawn on b3 (from b2). But the book was written by Nate Silver, so surely he couldn’t have gotten something so simple wrong. Once I accepted that it must be a mistake, I looked up the game and found that at this point, the g2 pawn should be on g3. I thought it was an interesting mind lapse.

Breaking the Spell

This book argues that scientific analysis should be applied to religion. The title refers to the taboo against rational discussion of religion; to “break the spell” is to break that taboo. The book also discusses theories of how religion arose; fittingly, these are themselves evolutionary theories, as they concern how modern religion evolved over time from ancient spiritual beliefs (e.g., which specific doctrines maximize a belief system’s chances of survival).

Reading this means I have now read at least one book from each of the four “horsemen”: Dawkins, Dennett, Harris, and Hitchens. Of the four, Dennett is by far the least provocative. While the other three outright apply logical analysis to religion, Dennett carefully argues that one should be allowed to analyze religion just as one can any other phenomenon. This book should be nowhere near as controversial as The God Delusion or The End of Faith.

Overall the book makes good points but is quite slow, hedges with overly cautious caveats, and has a very formal tone. If someone like Dawkins had written it, it would be much more readable. I wouldn’t really recommend it to anyone without a strong interest in philosophy.

CEO Material

The main competitive advantage of this book over the typical leadership book is that it quotes liberally from 100+ real CEOs. These first-hand experiences supplement the author’s main points quite well. However, presumably for the sake of privacy, the quotations are not attributed, so it is sometimes difficult to tell how a particular passage applies to a given situation. For example, do I want to take a food company CEO’s advice on a particular issue and apply it to running a tech company? Perhaps the overall message is similar, but clearly the details matter. Some say that context is everything, and without the context of who said it, each quote has much less power.

Most of the points seemed like common sense, although that is to be expected: the system is efficient enough that if the most effective behavior for a CEO were radically different from what we already do, we would (hopefully) have adapted to it already. Even so, some interesting points come with real justifications, though again it would help to know who said each quote, even just for a few of them. In all, Benton did make points that changed the way I look at things, so it was worth reading.

The Blind Watchmaker

While The Selfish Gene focuses on how genes propagate themselves and how they dynamically compete over time (evolutionary game theory), The Blind Watchmaker covers an entirely different issue: How did complexity arise?

Some of its answers, written at an earlier time (1986), seem somewhat outdated now, ironically more so than The Selfish Gene which was written even earlier in 1976. This is probably due to The Selfish Gene being more of “Here’s the progress we made in the last decade” when it was written, while The Blind Watchmaker is more along the lines of “Here’s why this work from 1802 is nonsense” and that this counter-argument doesn’t particularly need to invoke the most up-to-date findings.

But anyway, we don’t judge books by how outdated they seem 30 years later, so let’s move on to the content. Due to its premise, The Blind Watchmaker is more philosophical than The Selfish Gene, which is more strictly scientific and hardly addresses the conflict between evolution and religion at all. While The Blind Watchmaker still has a formidable amount of science, it takes on some philosophical questions as well and confronts the conflict head-on. I would recommend it to those looking to question philosophical beliefs, whether others’ or their own.

Mortality

Of the books in this post, Mortality is the answer choice that doesn’t belong with the others. While the other four are strictly nonfiction works that try to explain or teach something, Mortality comes off more as a dramatic story, the story of coming to terms with terminal illness. Hitchens opens with the stark statement, “I have more than once in my life woken up feeling like death.” As usual, Christopher Hitchens’ signature writing style and tone are apparent.

“What do I hope for? If not a cure, then a remission. And what do I want back? In the most beautiful apposition of two of the simplest words in our language: the freedom of speech.”

“It’s probably a merciful thing that pain is impossible to describe from memory.”

“The politicized sponsors of this pseudoscientific nonsense should be ashamed to live, let alone die. If you want to take part in the ‘war’ against cancer, and other terrible maladies, too, then join the battle against their lethal stupidity.”

“The man who prays is the one who thinks that god has arranged matters all wrong, but who also thinks that he can instruct god how to put them right.”

“I have been taunting the Reaper into taking a free scythe in my direction and have now succumbed to something so predictable and banal that it bores even me.”

“Myself, I love the imagery of struggle. I sometimes wish I were suffering in a good cause, or risking my life for the good of others, instead of just being a gravely endangered patient.”

“To the dumb question ‘Why me?’ the cosmos barely bothers to return the reply: why not?”

# Rationality vs Irrationality

This article is based on several conversations I’ve had recently on rationality, and it is supposed to be an overview-type post that explores different areas of the subject. In fact, since this is a pretty heated topic that comes with misunderstandings by the handful, I will be going very slowly and throwing out as many caveats as possible to make sure I’m not misunderstood, though of course this is bound to happen. Because of this, the tone for this article will be rather informal.

Rationality vs Irrationality

It is obvious (to anyone who follows this blog or knows me in real life) that I stand on the side of rationality (though I often intentionally do things that would be considered irrational). Heck, even the blog name is “A Reasoner’s Miscellany.” Note that the title is not “A Reasoner’s Manifesto” or “A Reasoner’s Main Ideas.” Rather, it is a “miscellany” of various ideas in various subjects and of various degrees of significance. The main purpose of this blog is to jot down random ideas, serve as a diary of thoughts, and also just to satisfy my urge to write. It is not to try to start a revolution or to promote any particular ideology.

Discussing this topic obviously depends on precise definitions of rationality and irrationality, but as soon as I lay down definitions, some of you will start arguing about the definitions rather than the actual concepts. And without this disclaimer, some of you would argue “Well, it depends on the definitions” as if that refuted my overall argument. It turns out you’re in luck: in this post I’m not trying to make any grand overarching arguments, just laying down a bunch of thoughts, which might be followed up in later posts with more fully fleshed-out arguments.

Now that many of the meta-caveats are out of the way, I suppose I can finally begin talking about rationality. Of course, even without giving detailed definitions, I feel as if I must give some overall definition to anchor the discussion. Basically, when I refer to rational thinking, I refer to thinking involving logic, facts, evidence, and reason. This is opposed to irrational thinking, which I consider to be thinking involving emotion, faith, or just not thinking (or even the refusal to think). These characterizations don’t exactly match the conventional philosophical terms (which are themselves sometimes disagreed upon), but I think this captures what is generally meant when someone says “That thought process is rational” or “That thought process is irrational.”

Biases are one of the primary obstructions to reason. Two perfectly rational agents using perfect logic and starting with the same information should theoretically arrive at the same conclusion. However, the “perfect logic” assumption is ruined if one of the agents is biased towards one side from the beginning and uses that bias in their “logic,” at which point it is no longer logic. Of course, one of the most important biases is the belief that you are less biased than other people. Thus I must try my best to account for the major personal influences in my life that pushed me towards rationality.

The main event influencing my choice towards reason was learning about astronomy in first grade, in South Carolina of all places. We visited an observatory and I quickly became interested in space. Even then, I realized that knowing all these things about space must have come from some systematic method of observation, experimentation, and reasoning (though I wouldn’t have put it in those words). We knew there were nine planets (back then, Pluto was a planet) because we saw them through our telescopes and reasoned out their existence from their movements and gravitational effects, not because we wished there were nine, or because it would be totally awesome if there were nine, or because it was divinely revealed to us that there were nine.

Religion and Tradition Both Oppose Rationality

Because of my early interest in space, I learned by first grade about the Galileo incident with the Church (and, to a lesser degree, about Copernicus). What bothered me was not just that the vast majority of people were so ludicrously wrong about whether the Earth revolves around the Sun or the Sun around the Earth, but that the Church refused to believe the truth and instead demonized the bringer of truth, all because their holy book said the Sun orbits the Earth. From the moment I learned about this, I could never take “religious logic” seriously (i.e., X is true because it says so in the Bible/Quran/etc.).

My views on religion have changed a lot since first grade. For instance, my main objection to religion now is not so much that it is fictional as that it causes vast social harm through its irrationality. In fact, throughout most of my life I subscribed to multiculturalism (the view that you must respect religious ideas no matter how insane they are), and so I wasn’t an antitheist. It was only a year ago that I went from (agnostic) atheist to (agnostic) atheist antitheist.

Another great opponent of rationality is tradition. Like religion, tradition in principle stifles new ideas and is very bad at providing reasonable justification for doing something, i.e., “Because it says so in the Bible” or “Because that’s how it has always been done.” Again on the subject of biases, I should warn that I am probably personally invested in this topic of tradition vs rationality, as I deeply resented how I was treated in my childhood by my Asian parents, and also because of my view of Chinese culture in general. For an explanation, see this post and this one. In the context of this post: even at a young age I was capable of making logical arguments, and it always frustrated me that whenever I argued with my parents, they could never actually refute what I said, only justify their actions through tradition, superstition, and authority. I’ve never mentioned it on this blog before, and only to a few people in real life, but in my childhood I was driven by my parents to near suicide. These anti-tradition, anti-superstition, and anti-authority sentiments have persisted.

Intentional vs Unintentional Irrationality

This summer I probably thought about rationality more than I ever have before, as my work had to do with making rational decisions. The book Thinking, Fast and Slow, by Daniel Kahneman (a Nobel Prize winner), made a significant impact. The primary reason I wrote the post “Pride in Things Out of Your Control” was that I found the sentiment deeply irrational even though it was being expressed by a number of highly rational people. The fact that it was posted on July 4th, given the subject, was pure coincidence.

But that topic is something most people probably never think about. Because of this, it’s much harder to call someone with that kind of view “irrational,” as they probably aren’t aware of it. On the other hand, if someone, say, read that post and thought about pride in randomness, and afterwards still thought it rational to be proud of their race, then it is much easier to consider them irrational. Similarly, I don’t find most religious people irrational, since most religious people (at least the ones I know) never talk about religion, and thus are probably never in a serious state of questioning it. On the other hand, some religious people read science books (particularly on evolution) and still believe in creationism, so it is much easier to consider them irrational. Just as refusing to accept that the Earth orbits the Sun (based on religious texts) is worse than simply not knowing about it, refusing to learn about evolution (based on religious texts) is worse than simply not knowing about evolution. See willful ignorance.

Rationality vs Irrationality in the Media

The distinction between rationality and irrationality is related to many others: Enlightenment vs Romanticism, future utopia vs past utopia, objective truth vs subjective truth, science vs religion. If anything, support of irrationality is significantly overrepresented in the media. Does the following movie setting sound (overly) familiar? The future: technology is advanced, but social inequality is rampant, quality of life is terrible, what it means to be “human” has been lost, nature is destroyed, and evil technologists or even machines rule as the result of the rise of the “rational”; the day is saved by someone with an old-fashioned, “irrational” mentality, often involving some mythical power. Nah, that sounds like a completely original idea. What about the one where nature overcomes technology? Or the religious guy no one believes who is right the whole time? Or the evil scientist showing that science is bad? Or the society that claims to know how to treat the “irrational,” using nefarious tactics?

Sure, these are just movies, mostly for entertainment purposes, and any societal warnings are a secondary effect. Perhaps I’m way overreacting. A movie or a novel has to have dramatic conflict, and a movie about the future being an awesome place would be really boring to watch. But that does not mean the framing of which side is “good” and which is “bad” should be so one-sided. One of the only shows that takes the pro-rational side is Star Trek (the [earlier] TV shows, not so much the recent movies). Characters like Spock and Data are as logical as you can possibly get, yet they are protagonists. Technology is shown as overall beneficial, and religion has almost disappeared from humanity (though some of the aliens they encounter have their own religions). In fact, it seems that if a show like Star Trek (The Original Series or The Next Generation) were released today, in 2013, it would be canned and deemed far too political and “anti-religious,” as American society is far more anti-science than before. (I find it hard to imagine the modern US giving a warm reception to a hypothetical modern-day Albert Einstein.)

The only other type of show I can think of that is pro-reason is the crime investigation show, where the protagonists rationally deduce facts from clues and suspects, many of whom committed crimes for highly irrational reasons. But the main theme of these shows is normally justice, not rationality vs irrationality.

The Rationality of Irrationality

In the second paragraph, I mentioned that I sometimes intentionally act “irrationally.” However, many of these irrationalities still stem from an overall rational decision. In the post “Spontaneous Decision Making,” I talked about how I generally “…don’t plan ahead details ahead of time, as I abhor fixed schedules or fixed paths.” I will re-quote here an interesting behavior from my Fall 2010 semester:

For example, last semester, to get to one of my classes from my dorm I had two main paths, one going over the Thurston Bridge and the other over a smaller bridge that went by a waterfall. For the first couple weeks I took the Thurston Bridge path exclusively, as I thought it was shorter than the waterfall path. But then one day I went the other path and timed it, with about the same time, maybe a minute slower (out of a total of 15 minutes). So I started taking the waterfall path exclusively. But eventually that got boring too, so I started alternating every time. You might think that’s how it ended.

But a consistent change like that is still… consistent. Still the same. It was still repetitive, and still very predictable. Perhaps the mathematical side of me started running pattern-search algorithms or something. Eventually, I ended up on a random schedule, not repeating the same pattern in any given span of 3 or 4 days.

But as I later reasoned in the “Spontaneous Decisions” post, there was a method in the madness. I go against patterns on purpose, and all of this increases versatility. I try to be prepared for anything; if I always follow the same pattern or plan everything out ahead of time, I may not be able to adapt quickly to a new situation.

Another set of examples comes from video games. I tend to play extremely flexible classes/builds that have multiple purposes, and I try to have multiple characters or styles to be able to adapt quickly and to know what other people are thinking…

To have a quick response, I try to be accustomed to every scenario, and moreover, practice responding quickly. It is a sort of planned spontaneity. Intentionally making spontaneous decisions is like handicapping yourself during practice. But then when you get to the real thing, you remove the handicap and perform much better. If you can make a good assessment of a situation in 10 seconds, imagine how much better it would be with 10 hours.

In addition, the planned spontaneity is very much like preparing for a later event. Comedians spend a bunch of time preparing content so that it seems spontaneous when they perform it. In speed chess, when you don’t have time to think, the only thing that helps is prior experience. To quote Oscar Wilde: “To be natural is such a very difficult pose to keep up.”

Is Art Irrational?

Anti-rationalists often point to art, implying that to be rational is to see art as pointless. Art is indeed a more subjective experience, but is it totally subjective? Many great artists and novelists created works that expressed the style or discontent of their times. In the same way I see history as useful because it provides us with a context with which to view the modern world and the future, I see art as useful to see not just the time period of the artist, but also the lives of the artists themselves. To say “art is subjective” and end discussion with that is a very naive move that shows either a shallow understanding of art or a participation card in the “all truth is subjective” movement.

I can have rational discussions of art, novels, films, TV shows, video games, etc. When you want another’s opinion on a new painting from a famous artist and you have artist friends, who do you consult? Do you go on the streets and find a hobo or crack dealer and ask him about the art? Do you ask your favorite 6-year old relative? Do you consult a physics professor? No, probably not. Even though “art is subjective” and beauty is in the eye of the beholder, you go to the fellow artist or art critic to hear their professional, trained opinion. If the art critic’s opinion is worth more than that of the average person, then there must be some part of art that is objective. If you met someone at a formal event who said, “I hate the Mona Lisa, it’s a terrible piece of art!” you would probably think this person is uncultured and has an inferior art opinion despite your belief that art is subjective.

Ordinary Faith vs Religious Faith

It is perfectly rational to have faith in the conventional sense, but it is almost always irrational to have faith of the religious variety. I am okay with believing something without proof if I still consider it a reasonable decision. Do I have absolute proof that the Sun will come up tomorrow? No, but I’ll bet anyone 10,000 to 1 odds that it will (if it doesn’t, I’ll give you \$10,000; if it does, you owe me \$1). For me to make this bet, I have to believe the probability of the Sun coming up tomorrow is >99.99%, given certain risk-aversion preferences. If a billionaire who was my best friend and a homeless beggar both asked me for \$100 as investment money and promised to pay me \$50 a year for the next 10 years, then (given that I trust the billionaire sufficiently, and that inflation/interest rates are as they are now) I would give it to the billionaire (i.e., I would have faith in him), but would obviously not give any money to the beggar. Rationally, anything with a high enough probability of happening and a low enough maximum cost is reasonable to believe.
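
The betting logic above can be sketched as a quick expected-value check; the helper function and the risk-neutral assumption are my own illustration:

```python
def breakeven_probability(win_amount: float, lose_amount: float) -> float:
    """Minimum win probability at which a bet has non-negative
    expected value, assuming risk neutrality:
    p * win - (1 - p) * lose >= 0  =>  p >= lose / (win + lose)."""
    return lose_amount / (win_amount + lose_amount)

# Offering 10,000-to-1 odds on the sunrise: risk $10,000 to win $1.
p = breakeven_probability(win_amount=1.0, lose_amount=10_000.0)
# p = 10000/10001, i.e. just over 0.9999, matching the ">99.99%" figure.
```

Actual risk aversion would push the required probability even higher, which is why the odds I am willing to offer are a lower bound on my confidence.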

Religious faith corrupts the usual concept of faith. Instead of having strong evidence (the Sun has come up every single day since recorded history and according to science there is nothing to suggest a high probability of the Sun not coming up tomorrow; or this person is a self-made billionaire and so must know how to invest money, and is also a good friend) and therefore believing something, I am given ZERO evidence and expected to believe something. Not even a speck of evidence.

Conclusion

This article wasn’t really written in a way that lends itself to a conclusion, but given the length, I find it necessary to include a “Conclusion” section nonetheless. The post came out much longer than I expected (around 2,900 words), but I think I gained a more organized view of these ideas. The topic is, of course, open to rational debate.

# Ethical Dilemmas and Human Morality, part 2

For the full explanation, see Ethical Dilemmas and Human Morality, part 1, written almost exactly a year ago.

Moral Consistency

We recently had a debate on consistency in moral dilemmas. In particular, we went over two variants of the Trolley Problem: the fat man and the transplant. One side argued that you must give the same answer in both variants, while the other argued that it is rational to give opposite answers in the two cases. I argued for the latter.

Here are Wikipedia’s formulations of the two variants:

Fat man:

As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by dropping a heavy weight in front of it. As it happens, there is a very fat man next to you – your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?

Transplant:

A brilliant transplant surgeon has five patients, each in need of a different organ, each of whom will die without that organ. Unfortunately, there are no organs available to perform any of these five transplant operations. A healthy young traveler, just passing through the city the doctor works in, comes in for a routine checkup. In the course of doing the checkup, the doctor discovers that his organs are compatible with all five of his dying patients. Suppose further that if the young man were to disappear, no one would suspect the doctor.

In the original trolley problem, most people would sacrifice one person to save five. However, in the fat man variation, not as many people are willing to take the action. And in transplant, very few people agree that harvesting the healthy traveler’s organs is the correct move.

This is quite inconsistent. Why would you be willing to sacrifice one person to save five in some cases, but not in others? Shouldn’t the results be the same?

I argued that it is morally defensible to give different answers to this question, in particular saying yes in the original or fat man case and no in the transplant case.

From a utilitarian perspective, these scenarios are not the same, namely because people contribute different value to society. In the standard trolley example, there is no reason to suspect that the one person lying on one track is any different from the five people lying on the other. Since we are given no other information about who these people are (of course, the situation changes if we have more information), the best bet is to save the five. The fat man scenario is similar.

In the transplant case, however, we are given additional information: given that someone is about to die due to the failure of some vital organ, they are probably contributing less to society than the healthy traveler undergoing a routine checkup. Now, this effect may not be strong enough to warrant the sacrifice of 5 people, but it clearly makes the transplant scenario different from the trolley or the fat man.

Now, if the transplant case were replaced with sacrificing one life to save a million, then the problem is entirely changed as well. Similarly, in the trolley problem, if we said the five people were all serial killers and the one person on the other track was a normal hard-working person, that changes the situation.

Since we can change around the answers so easily, there doesn’t seem to be a fundamental one life versus five lives struggle at hand, but rather, a combination of other factors. We can answer the question based on what information we have about the people involved, and since these situations imply different types of people, we are not morally obliged to answer the same for all variants of the problem.

# The Moral Landscape

“She: What makes you think that science will ever be able to say that forcing women to wear burqas is wrong?
Me: Because I think that right and wrong are a matter of increasing or decreasing well-being—and it is obvious that forcing half the population to live in cloth bags, and beating or killing them if they refuse, is not a good strategy for maximizing human wellbeing.
She: But that’s only your opinion.
Me: Okay … Let’s make it even simpler. What if we found a culture that ritually blinded every third child by literally plucking out his or her eyes at birth, would you then agree that we had found a culture that was needlessly diminishing human well-being?
She: It would depend on why they were doing it.
Me [slowly returning my eyebrows from the back of my head]: Let’s say they were doing it on the basis of religious superstition. In their scripture, God says, ‘Every third must walk in darkness.’
She: Then you could never say that they were wrong.

This is a passage from Sam Harris’s The Moral Landscape (2011). The book is controversial and very thought-provoking, both philosophically and practically, especially to the liberal notions of the West. It has certainly changed my views of morality.

Namely, Harris argues that moral relativism has gone too far in our current world, and that it has allowed morally inferior practices (such as the burqa) to persist without serious criticism. He notes that several of these practices are especially difficult to criticize, because to criticize them would be considered offensive to religion. Moreover, because many people associate morals with religion, it is difficult to seriously argue about what is right or wrong, again out of fear of being labeled offensive or intolerant. As a result, many moral issues are left unresolved because debating them is considered wrong.

Can One Culture Be Inferior?

Consider two societies that had the same moral code in all ways except, as in the example earlier, one society required removing the eyes of every third-born, while the other did not. Can we say that the former has an inferior culture? Maybe, maybe not. But this question has an answer, according to Harris, although most of the world would think that it does not. In our world, the tendency is to say that all cultures are equal, that they deserve the same respect, or something along those lines. We would be viewed as supremely intolerant if we were to say otherwise.

And yet, there are issues with this: Can we really view a culture that plucks out the eyes of third-borns out of tradition as an equal culture? What about a culture that condones slavery, or one that requires the burqa, or one that isn’t taken aback by suicide bombing? In the back of my mind, at least, I think such cultures can be viewed as wrong in those areas, but of course, it is an entirely different thing to say it publicly. (See what I did there?)

In the section “Moral Blindness in the Name of ‘Tolerance’”:

There are very practical concerns that follow from the glib idea that anyone is free to value anything—the most consequential being that it is precisely what allows highly educated, secular, and otherwise well-intentioned people to pause thoughtfully, and often interminably, before condemning practices like compulsory veiling, genital excision, bride burning, forced marriage, and the other cheerful products of alternative “morality” found elsewhere in the world. Fanciers of Hume’s is/ought distinction never seem to realize what the stakes are, and they do not see how abject failures of compassion are enabled by this intellectual “tolerance” of moral difference. While much of the debate on these issues must be had in academic terms, this is not merely an academic debate. There are girls getting their faces burned off with acid at this moment for daring to learn to read, or for not consenting to marry men they have never met, or even for the “crime” of getting raped. The amazing thing is that some Western intellectuals won’t even blink when asked to defend these practices on philosophical grounds. I once spoke at an academic conference on themes similar to those discussed here. Near the end of my lecture, I made what I thought would be a quite incontestable assertion: We already have good reason to believe that certain cultures are less suited to maximizing well-being than others. I cited the ruthless misogyny and religious bamboozlement of the Taliban as an example of a worldview that seems less than perfectly conducive to human flourishing.

As it turns out, to denigrate the Taliban at a scientific meeting is to court controversy. At the conclusion of my talk, I fell into debate with another invited speaker, who seemed, at first glance, to be very well positioned to reason effectively about the implications of science for our understanding of morality. In fact, this person has since been appointed to the President’s Commission for the Study of Bioethical Issues…. Here is a snippet of our conversation, more or less verbatim:

She: What makes you think that science will ever be able to say that forcing women to wear burqas is wrong?
…”

# Free Will

When I choose a book to read, am I really making a choice, or do the events leading up to my choice already determine which book I am about to read? According to the book I ended up reading, Free Will (2012) by neuroscientist Sam Harris, the answer is the latter.

Sam Harris argues that free will is simply an illusion. Our decisions arise from background causes that our consciousness often does not notice. For instance, he asks: if the presence of a brain tumor in a criminal affects our perception of his crime, then what about other neurological disorders? And even non-neurological ones?

If a man’s choice to shoot the president is determined by a certain pattern of neural activity, which is in turn the product of prior causes—perhaps an unfortunate coincidence of bad genes, an unhappy childhood, lost sleep, and cosmic-ray bombardment—what can it possibly mean to say that his will is “free”? (3)

In fact, the strength of this book is that its argument is grounded in well-researched neuroscience. Granted, Harris brings up the more speculative conjectures of philosophy, but only after discussing brain research at length.

The physiologist Benjamin Libet famously used EEG to show that activity in the brain’s motor cortex can be detected some 300 milliseconds before a person feels that he has decided to move…. More recently, direct recordings from the cortex showed that the activity of merely 256 neurons was sufficient to predict with 80 percent accuracy a person’s decision to move 700 milliseconds before he became aware of it. (8)

In fact, the science seems well established; it is the public perception that needs to catch up. Before reading this book and subsequently researching what neuroscientists and philosophers think of free will and determinism, I expected there to be serious debate, with the sides roughly equal in size. But as it turns out, only 14.9% of philosophers surveyed did not lean towards one of compatibilism, libertarianism, or no free will. The majority of them actually know what is going on. Neuroscience is even more strongly against free will, as its experiments directly contradict it.

It reminds me of a post I wrote called On Giving Too Much Legitimacy to the Inferior Position, where I argued that on certain issues, even pointing out that there is “debate” over something can distract or even draw people away from the truth. This is a case in point: I had always thought I was in the minority when I argued for determinism instead of free will, but it turns out I was in the academic majority.

In addition, as an atheist and humanist, I must applaud Harris for the following passage:

Despite our attachment to the notion of free will, most of us know that disorders of the brain can trump the best intentions of the mind. This shift in understanding represents progress toward a deeper, more consistent, and more compassionate view of our common humanity—and we should note that this is progress away from religious metaphysics. Few concepts have offered greater scope for human cruelty than the idea of an immortal soul that stands independent of all material influences, ranging from genes to economic systems. Within a religious framework, a belief in free will supports the notion of sin—which seems to justify not only harsh punishment in this life but eternal punishment in the next. And yet, ironically, one of the fears attending our progress in science is that a more complete understanding of ourselves will dehumanize us. (55)

Indeed, the concept of free will is closely tied to religion and its morally abhorrent idea of sin. Dispelling mythological concepts such as the soul or sin is a necessary step in the advancement of the human species. And at some point, free will too must go.

# Are We in a Simulation? A Scientific Test

According to a recent article, scientists are planning a test to determine whether our universe is a computer simulation. This is pretty relevant to my blog as I have discussed this idea a number of times before [1] [2] [3] [4].

Of course, the must-read paper on this subject is philosopher Nick Bostrom’s article, “Are You Living in a Computer Simulation?” The implication, given a couple of premises, is that we are almost certainly living in a computer simulation. Not only that, but the argument posits that our simulators are themselves extremely likely to be in a simulation, and those simulators in turn are likely to be in a simulation, and so on.
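The counting at the heart of Bostrom’s paper can be sketched numerically. His argument uses the fraction f_sim = f_p·N / (f_p·N + 1), where f_p is the fraction of civilizations that reach a posthuman stage and run ancestor simulations, and N is the average number of such simulations each runs. The input numbers below are made up purely for illustration:

```python
def simulated_fraction(f_p: float, n_sims: float) -> float:
    """Fraction of all observers with human-type experiences who are simulated,
    per Bostrom's formula: f_p * N / (f_p * N + 1)."""
    return (f_p * n_sims) / (f_p * n_sims + 1)

# Even a tiny probability of simulators, combined with many simulations
# each, pushes the fraction of simulated observers close to 1.
print(round(simulated_fraction(0.001, 1_000_000), 3))  # → 0.999
```

This is why the conclusion is so robust: unless f_p is essentially zero or N is tiny, the fraction of simulated observers dominates.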

Indeed, how will scientists test for signs of a simulation?

“Currently, computer simulations are decades away from creating even a primitive working model of the universe. In fact, scientists are able to accurately model only a 100 trillionth of a metre, with work to create a model of a full human being still out of reach.”

Even so, there are limitations beyond technical ones that should be considered. If a test does not find any evidence that we are in a simulation, that does not rule out the possibility; in fact, a well-designed simulation would be very difficult, if not impossible, for its inhabitants to tell apart from “reality.”

Conversely, suppose a test did find “evidence” that we are in a simulation. How would we judge that evidence? How can we know which way the evidence is supposed to point? After all, even if we find “glitches,” they could turn out to be part of a larger set of natural laws.

As Richard Feynman once put it, suppose we are observing a chess game without being told the rules. After looking at various snapshots of a game, we can piece together some of the rules, and eventually we will learn that a Bishop must stay on the same color when it moves. But then, one snapshot later, we find that the only Bishop in the game is now on a different-colored square. Without looking at many more games, there would be no way of knowing that a Pawn can promote into another piece, such as a Bishop, and that the old Bishop was captured. Without this knowledge, we might have thought the Bishop’s change of color was a glitch.

Now back to the article.

“By testing the behaviour of cosmic rays on underlying ‘lattice’ frameworks governing rules of physics that could exist in future models of the universe, the researchers could find patterns that could point to a simulation.”

Many disciplines would have to come together here to prove something fundamentally “wrong” with our universe. It would be the junction point of computer science, physics, philosophy, mathematics, neuroscience, and astronomy.

The plan given in the article is a noble one, but I do not expect it to yield any important experimental data soon. Rather, it is the tip of an immense iceberg that will be explored not in years or decades, but in millennia to come.

# Orwell, Chomsky, and the Power of Twisting Language

Choosing the right word is very important, but I’ve recently found it to be far more important than I previously thought. Influences: George Orwell, Noam Chomsky.

An Experiment

Consider the 1974 Loftus and Palmer experiment [1][2][3]. Participants were shown identical short videos of car crashes, and were then asked one of the following questions:

1. About how fast were the cars going when they smashed into each other?
2. About how fast were the cars going when they collided into each other?
3. About how fast were the cars going when they bumped into each other?
4. About how fast were the cars going when they hit each other?
5. About how fast were the cars going when they contacted each other?

The only difference is the wording. Yet that alone produced a statistically significant difference in the speed estimates: the stronger the verb, the higher the estimate, with “smashed” yielding the highest mean estimate (about 41 mph) and “contacted” the lowest (about 32 mph).

People will believe what they hear.
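The mean speed estimates, as commonly cited from the 1974 paper, make the effect concrete. A quick tabulation (the numbers are the published group means, not my own data):

```python
# Mean speed estimates (mph) commonly cited from Loftus & Palmer (1974).
# The video was identical in every condition; only the verb varied.
mean_estimate_mph = {
    "smashed": 40.5,
    "collided": 39.3,
    "bumped": 38.1,
    "hit": 34.0,
    "contacted": 31.8,
}

# Roughly a 27% spread between the strongest and weakest verb,
# from a one-word change in the question.
spread = (mean_estimate_mph["smashed"] - mean_estimate_mph["contacted"]) \
    / mean_estimate_mph["contacted"]
print(f"{spread:.0%}")  # → 27%
```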

Framing the Question: Politics and Religion

There are many issues today in America that suffer similar biases from wording.

Take immigration, for example. Most people would probably be against illegal aliens, yet more sympathetic towards undocumented workers. With this phrasing, the same person might support giving rights to undocumented workers, yet vote the opposite way when the media or a political party calls them illegal aliens. Even though the two terms refer to the same people, one focuses on the illegality, while the other focuses on their work. Of course, when you call them illegal aliens, you’re going to have a biased discussion.

Abortion suffers the same bias. It is the termination of a pregnancy, yet those who oppose it label it as killing babies.

Or if you are not a Muslim, you are a non-Muslim; however, Islamist extremists label you as an infidel.

And don’t think Christianity gets off the hook here. A non-Christian is similarly labeled by extremists as a blasphemer (or infidel or heretic as well). And since one can’t be both Muslim and Christian at the same time, every person on Earth is an infidel or a blasphemer. That’s just the logical truth.

Framing the Question: Science and Religion

The power of twisting language is nowhere more important than in the evolution vs creationism “debate.” The reason I put the word “debate” in quotes is that it’s really not a debate where both sides use logic, reason, and facts. Yet, as long as the creationists manage to convince people there is still “debate” by labeling the whole thing as a “debate,” then they are winning their “debate.”

So far, every debate I’ve seen between evolution and creationism, and between logic and religion in general, is more of a lecture to a stubborn adolescent who still believes in fairy tales. The power of language is so strong that in labeling the conflict as a “debate” in the first place, the creationists are creating the false presumption that there even is a debate.

They use completely wrong and misleading words to describe the theory of evolution. Even calling it a theory or hypothesis in the first place is misleading, because the word theory in everyday speech emphasizes the possibility of being uncertain or wrong (as in “My theory about why the grades were lower on this test…”), whereas the word theory in science implies strong logical mechanisms and claims that can be confirmed or denied through evidence (such as the theory of gravity).

To adapt this “debate” to everyday speech, we should really call it the fact of evolution. One is of course allowed to call it a theory, but only seriously if one actually understands it scientifically. Most of those who claim “it’s just a theory” don’t actually understand it at all.

A debate would imply both sides are using reason. That is hardly the case. It is really more of a clearing of misunderstandings than the use of any higher cognitive skills.

The following words are widely misunderstood: random, chance, selection, adapt, and purpose. Consider the following dialogue, which more or less actually happened (I am putting quotes around the word “Evolutionist,” as it is really just a label that shouldn’t have to exist, just as you don’t call people who believe the world is round “Round-Earthers”):

Creationist: It’s hard to believe that the eye happened by accident.

“Evolutionist”: Evolution doesn’t say it happened by accident.

Creationist: Then it has to have a purpose.

What’s going on here is not a debate at all, but an abuse of language. The eye does not have any intrinsic purpose, but it is also not an accident. Creationists often create this false dichotomy of purpose vs accident. And when they show it is preposterous for life to have developed by accident, they think they have shown it must have been done on purpose.

Randomness implies neither purpose nor accident. Why is a cheetah fast? Because in a larger pool of animals in an ecosystem, a slower cheetah would not be able to catch its prey and would die off; and that would have happened millions of years ago, so we wouldn’t see it today. That’s the simple logic. No accident or purpose is implied.
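The logic above can be sketched as a toy simulation (the numbers are made up for illustration, not taken from any of the books): variation in speed is random, but survival is not, so the population drifts toward speed with neither purpose nor accident involved.

```python
import random

random.seed(0)  # deterministic for reproducibility

def generation(speeds, survival_fraction=0.5, mutation=2.0):
    """Keep the fastest fraction; each survivor leaves two offspring
    whose speeds inherit the parent's plus random noise."""
    cutoff = int(len(speeds) * survival_fraction)
    survivors = sorted(speeds, reverse=True)[:cutoff]
    return [s + random.gauss(0, mutation) for s in survivors for _ in (0, 1)]

# Start with 100 animals whose speeds are random around 50.
speeds = [random.gauss(50, 10) for _ in range(100)]
for _ in range(20):
    speeds = generation(speeds)

# Mean speed rises over generations even though each individual
# variation was purely random.
print(sum(speeds) / len(speeds))
```

No offspring is fast “on purpose,” and no generation’s improvement is an “accident”; the ratchet comes entirely from non-random survival acting on random variation.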

So many other words—good, evil, salvation, sin, faith, and I’m sure I’m missing a ton more—are all heavily loaded, ill-defined, ambiguous concepts that are twisted around by religion to suit its needs depending on the situation. This is Orwellian Doublespeak at its strongest.

Words and the Future

It is imperative that the American public understand how loaded words affect its choices and decisions. The election process should depend on rational discussion of real issues, not on a massive popularity contest shrouded in mutual insults and loaded words that oversimplify the situation and vilify the other party. News should be news, not political indoctrination. Language should be the way we voice our concerns to the government, not the way political parties usher us like pawns to certain death.

In addition to math and science education, which should most certainly be improved, we really do need to keep our English and history classes in able hands. But instead of teaching only books written long in the past, English classes should occasionally have students read current news articles and think critically about them. Then maybe people will realize that English class is not pointless. And once this happens, the government will be afraid, and it will be forced to listen to the educated American people, as history perhaps once intended.