Noam Chomsky on Postmodernism

Since I’ve been thinking about postmodernism recently, I thought I’d share this fascinating interview from the youtubes. The most striking point that Chomsky brings up is the story about Bruno Latour and the ancient Egyptian tuberculosis death (read more about it here). Basically, Latour argued that since tuberculosis was not constructed until the era of modern medicine, it could not have existed in ancient Egypt! (Starts at 3:48 in the video.)

5 Historical Documents on Universal Truths

A couple of weeks ago, I wrote a post criticizing the strong form of moral relativism, namely the idea that no person or culture is right or wrong. In this post, to continue the objective vs subjective truth discussion, I will look at five historical documents that explicitly acknowledge universal truths. Moreover, all of these documents proclaim non-empirical truths, i.e. they are not documents of science that can be tested by the scientific method. (I include this caveat because it’s easy for a relativist to acknowledge that science can have universal truths but then claim arbitrarily that other subjects work differently from science and shouldn’t have universal or objective truths. So, I am addressing the claim that nonscientific truths cannot be universal.)

1. Euclid’s Elements (~300 BC)


The Elements is one of the most influential books of all time, not just in mathematics but in the entire Western way of thinking. For this post, math is considered separate from science, in that math does not operate by the scientific method. It instead operates by a strictly logical method that was largely formalized by the Elements. This deductive method, in contrast with the inductive scientific method, consists of:

  1. Listing axioms, or self-evident truths.
  2. Listing basic assertions, which also should be self-evident.
  3. Combining the axioms and assertions to obtain theorems, which are the end result.

(For a list of the axioms and assertions, see the wiki page.)
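To make the flavor of this method concrete, here is a toy sketch in the Lean proof assistant; the declarations are my own paraphrased stand-ins, not Euclid’s actual formal system:

```lean
-- Axioms, taken as self-evident. Postulate 1 (paraphrased): a straight
-- line can be drawn from any point to any other point.
axiom Point : Type
axiom Line  : Type
axiom connects : Line → Point → Point → Prop
axiom postulate1 : ∀ p q : Point, ∃ l : Line, connects l p q

-- A (trivial) theorem obtained by combining the axioms:
-- every point can be connected to itself.
theorem self_connect (p : Point) : ∃ l : Line, connects l p p :=
  postulate1 p p
```

The point of the sketch is only the shape of the argument: everything after the axioms follows by deduction alone, with no appeal to experiment.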

In Elements, the first “postulate,” or axiom, is that a straight line can be drawn from one point to any other point. This seems obvious enough. Clearly if we imagine two points, we can also imagine a straight line between them. Another seemingly obvious claim is the last “common notion,” or assertion, which states that the whole is greater than the part.

But to what extent are these axioms really self-evident or universal? On what basis can we judge their universality or objectivity? The last axiom, for instance, known as the parallel postulate, is not even true in certain (non-Euclidean) geometries. These are questions that have been debated for centuries.

2. The Declaration of Independence (1776)


“We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.”

Note that “We hold these truths to be self-evident” sounds like something Euclid would have written two thousand years earlier. In fact, the similarity is likely more than coincidence. Thomas Jefferson was a reader of Euclid, as evidenced in a letter to John Adams: “I have given up newspapers in exchange for Tacitus and Thucydides, for Newton and Euclid; and I find myself much the happier.” Furthermore, the Declaration reads much like a mathematical proof in the style of Euclid:

  1. The introduction (“When in the Course of human events… a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation”) establishes the desire for the “dissolution of political bands” and then acknowledges the need to declare the causes for it, i.e. the need for a proof.
  2. The preamble establishes the self-evident truths.
  3. The indictment contains the various violations by the King of the self-evident truths.
  4. The denunciation gathers the above together and draws the “therefore,” showing that the proof has been concluded: “We must, therefore, acquiesce in the necessity, which denounces our Separation, and hold them, as we hold the rest of mankind, Enemies in War, in Peace Friends.”
  5. The conclusion notes that the proof has been completed; therefore, they will act on the result of the proof: “That these united Colonies are, and of Right ought to be Free and Independent States; that they are Absolved from all Allegiance to the British Crown.”

More can be found in a talk given by Noam Elkies. The interesting thing is to ask how universal these self-evident truths really are. Is it objectively true, for example, that all men are created equal? Is this view just a Western and/or Enlightenment construction? I would argue it is not (a topic for a different post).

3. Pride and Prejudice (1813)


The reason I have included Pride and Prejudice over any other work of literature is the opening sentence: “It is a truth universally acknowledged, that a single man in possession of a good fortune must be in want of a wife.”

Yet again, we have a declaration of universal truth, though this time used in fiction to establish the setting for the story. In contrast with the Elements and the Declaration of Independence, however, Austen uses universal truth in a more sarcastic manner.

Indeed, literature in general tends to question truths that are universally held. In this context, Pride and Prejudice is special because it acknowledges this explicitly. The statement, of course, is patently false, but it raises the question of whether there are any universal truths in social relations. And what would “universal” even mean? If something applied to a certain group in early 19th-century England but not to anyone else, is it still universal?

4. The Communist Manifesto (1848)


Back to serious documents, we have the strong claim by Marx and Engels that “The history of all hitherto existing society is the history of class struggles.” The key word is “all,” which again proclaims a universal truth, at least across a sufficiently large scope (“hitherto existing society”). By the nature of their argument, it cannot be an absolute universal that applies to all time: success would mean a classless society, in which class struggles would no longer exist.

This example and Austen’s example are both social/historical universals. Marx argues that history can be understood by looking at class struggles, but again, on what basis can we support this? The modern view is that history is complex and can be partially understood through many different lenses, not just modes of production.

On the other hand, Euclid’s is a mathematical universal, and Jefferson’s is a moral universal, in acknowledging the rights of man.

5. The Universal Declaration of Human Rights (1948)


The United Nations Universal Declaration of Human Rights is among the most significant documents of the twentieth century, and it too is based on presumed universal truths. Its preamble consists of seven “whereas” clauses that establish several self-evident assertions, much like the introduction to the US Declaration of Independence. These are:

“Whereas recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world,

Whereas disregard and contempt for human rights have resulted in barbarous acts which have outraged the conscience of mankind, and the advent of a world in which human beings shall enjoy freedom of speech and belief and freedom from fear and want has been proclaimed as the highest aspiration of the common people,

Whereas it is essential, if man is not to be compelled to have recourse, as a last resort, to rebellion against tyranny and oppression, that human rights should be protected by the rule of law,

Whereas it is essential to promote the development of friendly relations between nations,

Whereas the peoples of the United Nations have in the Charter reaffirmed their faith in fundamental human rights, in the dignity and worth of the human person and in the equal rights of men and women and have determined to promote social progress and better standards of life in larger freedom,

Whereas Member States have pledged themselves to achieve, in co-operation with the United Nations, the promotion of universal respect for and observance of human rights and fundamental freedoms,

Whereas a common understanding of these rights and freedoms is of the greatest importance for the full realization of this pledge….”

These set up the basis for the 30 articles, which are the “self-evident” truths or axioms. The first three articles, for example, are:

“Article 1. All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.

Article 2. Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status. Furthermore, no distinction shall be made on the basis of the political, jurisdictional or international status of the country or territory to which a person belongs, whether it be independent, trust, non-self-governing or under any other limitation of sovereignty.

Article 3. Everyone has the right to life, liberty and security of person.”

Note that the UN did not feel the need to prove any of these. They were simply obvious or self-evident. The theorems, however, are all implicit. It is implied that if these axioms are violated, the UN has the authority to intervene on behalf of human rights.

We could spend a long time debating which particular articles are true or false, but the big picture question is, Can any of them be objectively true? Is the discussion of them even meaningful? The intuitive answer is yes.

To be continued…

The Flaming Laser Sword

I recently stumbled upon Mike Alder’s article “Newton’s Flaming Laser Sword, Or: Why Mathematicians and Scientists don’t like Philosophy but do it anyway.” It was quite relevant to my view of philosophy. From a mathematical and scientific perspective, plenty of philosophical issues seem strange.

Alder uses the example of when an unstoppable force meets an immovable object. From a scientific perspective, the thought process is something like, “We’ll test it: if the object moves, then it wasn’t immovable, and if it doesn’t, then the force wasn’t unstoppable.” Anyway, this is something I might talk about more later on.

The Construction of Social Progress: Can Civilization Move Forward?


In the past year, I have used the term “social progress” in 6 different blog posts. It referred to various topics, including LGBT rights, women’s rights, and views on race, not to mention advances in medicine and technology. Implicit were the assumptions that civilization can move forward, and that having a more equal society does constitute social progress.

Progress and Postmodernism

As it turns out, this type of thinking is not a given. Under postmodernist thought (whatever this phrase means), the idea of social progress is treated with skepticism and questioned. Granted, the questioning is done with the noblest intentions. Postmodernists argue that metanarratives of progress have, in the past, led to the cruelties of European colonialism, Fascism, and Communism. In each case, those who thought they were more civilized, or who thought they could bring about a more civilized society, ended up being brutal tyrants. Progress was thus a tool by which rulers ruled the oppressed. Progress was and is, in the extreme view, nothing more than a social construct.

I wonder if this fervent skepticism toward social progress is an overreaction. While I could write an entire post (or more) specifically about this, I reject postmodernism overall and consider myself under post-postmodernism, remodernism, metamodernism, or whatever word you prefer for the cultural state after postmodernism. Admittedly, I recognize that my own thoughts cannot be fully disentangled from postmodernist thought (which is itself a postmodernist way of thinking), but I can try to move forward.

The reason I bring this up is that postmodernism and progress are more intricately tied than just a loose sentiment that progress doesn’t exist. Postmodernism also rejects objective truth (to some degree, or often all-out); if you have been in an English class, you’ve probably learned that all truth is subjective. Herein lies another issue: the concept of progress entails that society is objectively moving forward, i.e. that there is some objective truth, which conflicts with postmodernism.

To add one more grain to the heap, there is a modernist vs postmodernist dichotomy between prescription and description. The significance of this is that modernism and progress are inherently compatible: modernism tried not only to describe the world, but also to prescribe that we should try to achieve social progress (even if it did not reveal how). Postmodernism, however, as a purely descriptive framework, is incompatible with the concept of progress; it could not advocate for social progress even if it were not a social construction. (This leads to a chicken and egg problem: Does postmodernism reject progress because it rejects prescription, or does it reject prescription because it rejects progress?)

The Existence of Social Progress

Despite the postmodern rejection of progress, it is very easy to show that progress does exist. Ask any postmodernist whether they would rather contract polio or measles or chicken pox right now, or not contract any of them. Clearly, everyone agrees there is some objective truth and an objective scale of progress in health and medicine. “But that’s falling into the technology trap,” one might object; “you cannot tie together technology and progress because of nukes.” But this is like saying Einstein’s 1905 paper on special relativity was the cause of the Cold War. This type of thinking misses the big picture, and it misses the fact that technological advancements have made the world a much better place.

Even then, supposing you are still unconvinced by medical and other technological advances, say you are not a heterosexual white male. Would you rather live in the United States of 2014 or of 1814? Does your answer not signify the existence of progress?

And even if you are a heterosexual white male, would you rather live in the England of 2014 or of 1314? That is, would you rather live in a society with the homicide rate of 1314, or in a society with a 95% lower homicide rate? (p. 61 of this book)

Here is the Social Progress Index, which ranks countries based on aggregate scores on Basic Human Needs, Foundations of Wellbeing, and Opportunity:

[Image: Social Progress Index country rankings]
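As a rough sketch of how an aggregate index like this can be computed (the SPI’s actual methodology is more involved, and the countries and figures below are invented for illustration):

```python
# Toy aggregate index: each dimension is scored 0-100, and the overall
# index is taken as the unweighted mean of the three dimension scores.
# All numbers here are made up.
scores = {
    "Country A": {"Basic Human Needs": 94.1,
                  "Foundations of Wellbeing": 88.6,
                  "Opportunity": 81.3},
    "Country B": {"Basic Human Needs": 52.4,
                  "Foundations of Wellbeing": 61.0,
                  "Opportunity": 43.7},
}

for country, dims in scores.items():
    index = sum(dims.values()) / len(dims)
    print(f"{country}: {index:.1f}")
```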

Using numerical data is a modernist approach, and a caricature postmodernist might flinch upon seeing the United Kingdom considered more “progressed” than Nigeria. Of course, we must be very careful about how we interpret this data. For instance, the UK’s higher position than Nigeria does not constitute grounds for invasion and colonization, as it might have in the modernist era. But these numbers do form grounds for critical analysis.

Yes, much of progress is socially constructed. Many of the earlier (i.e. modern) approaches were naive and led to atrocious results. But the solution is not to forsake progress altogether; rather, it is to gain a more mature understanding of it. This first step towards true progress requires the acceptance of progress, and thus the rejection of postmodernism.

Making Use of the Armchair: The Rise of the Non-Expert

As with all news, when I heard about the Sochi skating controversy last week, I read multiple sources on it and let it simmer. From the comments I saw on Facebook, Reddit, and the news websites themselves, however, one thing struck me—nearly everyone seemed to have extensive knowledge of Olympic figure skating, from the names of the spins to the exact scoring rubric.

How could this be? Was I the only person who had no idea who Yuna Kim was, or that Russia had not won in the category before?

Much of this “everyone is an expert” phenomenon is explained by selection bias, in that those with more knowledge of skating were more likely to comment in the first place; therefore, most of the comments that we see are from those who are the most knowledgeable.
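To see how strong this effect can be, here is a minimal simulation; all the numbers are invented purely for illustration:

```python
import random

# Each person has a "knowledge" score in [0, 1]; the chance that they
# bother to comment rises steeply with that score (here, knowledge cubed).
random.seed(0)
population = [random.random() for _ in range(100_000)]
commenters = [k for k in population if random.random() < k ** 3]

avg_all = sum(population) / len(population)
avg_commenters = sum(commenters) / len(commenters)

print(f"average knowledge of everyone:   {avg_all:.2f}")         # ~0.50
print(f"average knowledge of commenters: {avg_commenters:.2f}")  # ~0.80
```

The commenters we actually see look far more knowledgeable than the average reader, even though nobody in the simulation learned anything.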

But it’s unlikely that there would be hundreds of figure skating experts all commenting at once. Moreover, when you look at the commenting history of the people in the discussion, they seem to be experts on every other subject too, not just figure skating. So another effect is in play.

Namely, the Wikipedia effect (courtesy of xkcd):

[Image: xkcd “Extended Mind”]

Of course, this effect is not limited to skating in the Olympics. When Newtown occurred, masses of people were able to rattle off stats on gun deaths and recount the global history of gun violence in the late 20th and early 21st centuries.

Even so, not everyone does their research. There are still the “where iz ukrane????” comments, but undoubtedly the average knowledge of Ukrainian politics in the United States has increased drastically in the past few days. If you polled Americans on the capital of Ukraine, many more would be able to answer “Kiev” today than one week prior. For every conceivable subject, the Internet has allowed us all to become non-expert experts.

Non-Expert Knowledge

The consequences of non-expert knowledge vary from subject to subject. The main issue is that we all start with an intuition about something, but with experience or training comes a better intuition, one that can correct naive errors and uncover counterintuitive truths.

  • An armchair doctor might know a few bits of genuine medical practice, but might also throw superstitious remedies into the mix and possibly harm the patient more than help. Or they might google the symptoms but come up with the wrong diagnosis and a useless or damaging prescription.
  • Armchair psychologists are more common, as it is easier to make up things that sound legitimate in this field. It is possible that an armchair psychologist will help a patient, even if through empathy rather than psychiatric training.
  • An armchair economist might say some insightful things about one economic trend that they read about, but could completely miss other trends that any grad student would see.
  • An armchair physicist might profess to have discovered a perpetual motion machine, only to be dismissed by a real physicist because the machine actually has a positive energy input and is hence not perpetual. Or they might read about the latest invisibility cloak and impress friends by explaining how materials with negative refractive index bend electromagnetic waves around an object, while having no idea that it works only for a particular wavelength, making it practically useless (for now).
  • The armchair philosopher is perhaps the most common; they notice the things that happen in life and take note of them. The article that you are currently reading is armchair philosophy, as I basically talk about abstract ideas using almost zero cited sources, occasionally referencing real-world events but only to further an abstract discussion.

Going back to the physics example, we normal people might observe the drinking bird working continuously for hours and conclude that it is a perpetual motion machine. An armchair physicist might go further and claim that if we attach a motor to it, we could generate free energy.

[Image: drinking bird toy]

A real physicist, however, would eventually identify the evaporation and the temperature differential that drive the bird, and conclude that it is not a perpetual motion machine.

Five minutes of reading Wikipedia will not allow you to match an expert’s knowledge. But having non-expert knowledge sometimes does help. It opens up the door to new information and ideas. If everyone spoke only about what they were experts in, the world would become boring very quickly.

Talking About Topics Outside of Your Expertise

In everyday speech, any topic is fair game except for, ironically, the one topic that everyone is deemed to be an expert in even without Wikipedia—(their) religion. But I digress. The point is, the way we talk about things on a day-to-day basis is very different from the way experts talk about them in a serious setting.

Some differences are very minor and just a matter of terminology. For instance, I was once discussing the statistics of voter turnout in the 2012 election, and I had phrased it as “percentage of eligible people who voted.” At the time, I did not know that “turnout” was a technical term that meant precisely what I had just said; I thought it was a loose term that didn’t necessarily account for the difference between the electorate and the total population, hence why I phrased it so specifically. In this example, the statistics I presented were correct, and thus the conclusion was valid, but the terminology was off.
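To make the distinction concrete, here is the arithmetic with round, invented numbers (not actual 2012 figures):

```python
# "Turnout" in the technical sense: ballots cast over the eligible
# electorate, not over the total population.
ballots_cast     = 130_000_000
eligible_voters  = 220_000_000
total_population = 315_000_000

turnout             = ballots_cast / eligible_voters    # the technical term
share_of_population = ballots_cast / total_population   # the looser reading

print(f"turnout:             {turnout:.1%}")              # 59.1%
print(f"share of population: {share_of_population:.1%}")  # 41.3%
```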

Other differences are more significant. In the case of medical practice, a lack of formal understanding could seriously affect someone’s health. Using Wikipedia knowledge from your smartphone to treat an unexpected snake bite in real time is probably better than letting it fester before help arrives. But it’s probably safest to see a doctor afterwards.

A non-expert discussion in a casual setting is fine, as is an expert discussion in a serious setting. But what about a non-expert discussion in a serious setting? Is there anything to be gained? If two non-physicists talk about physics, can any meaning be found?

My answer is yes, but you need to discuss the right things. For example, my training is in math, so it would be pretty futile for me to discuss the chemical reactions that occur when snake venom is injected into the human body. However, provided I had done my research properly, I might be able to talk about the statistics of snake bites with as much authority as a snake expert. Of course, it would depend on the context of my bringing up the statistics. If we were comparing the rise in snake deaths to the rise in automobile deaths, I might be on equal footing. But if we were comparing snake bite deaths between different species of snakes, a snake expert probably has the intellectual high ground.

But even this example still requires you to take some area of expertise you already have and relate it to the one in question. In fact, you can have a legitimate discussion of something outside your expertise even without relating it to an area you already know. You only need to make a claim broad enough, abstract enough, or convincing enough to have an effect.

Among all groups of people, writers (and artists in general) have a unique position in being able to say things with intellectual authority as non-experts. Politicians are next, being able to say anything with political power as non-experts. However, I’m interested in the truth and not what politicians say, so let’s get back to writers. F. Scott Fitzgerald was not a formal historian of the 1920s, but The Great Gatsby really captures the decade in a way no history textbook could. George Orwell was not a political scientist, but Nineteen Eighty-Four was very effective at convincing people that totalitarian control is something to protect against.

The Internet and the Non-Expert

On the other hand, Nineteen Eighty-Four was not crafted in a medium limited to 140 characters or to one-paragraph expectations. If George Orwell were alive today and, instead of writing Nineteen Eighty-Four, wrote a two-sentence anti-totalitarian comment on a news story about North Korea, I doubt he would have the same effect.

It is usually hard to distinguish an expert from a non-expert online. Often, an expert will introduce themselves by explicitly saying, “I am an expert on [this topic],” but even this is to be taken skeptically. I could give a rant on the times people claiming to have a Ph.D. in economics had no grasp of even the most basic concepts.

In addition to putting the sum total of human knowledge just a click away (well, maybe not all knowledge), the Internet allows us to post knowledge instantaneously and share it with millions of other users. We have not only the public appearance of non-expert knowledge, but also its virus-like proliferation. Since the dawn of the Internet, people have been able to acquire knowledge about anything, but there was long a great divide between the few content providers and the many consumers. Only recently have we become the content makers ourselves. What is the role of armchair philosophy in the age of information?

Conclusion

It is now more important than ever to be an armchair philosopher, or an armchair thinker, precisely because of the overwhelming amount of information available to us. Dealing with the data overload requires an abstract way to categorize information, to filter the useless from the useful, the wrong from the less wrong, the less true from the true.

We are expected to deal with areas outside of our expertise, and as our knowledge of these areas grows in the age of mass information, our responsibility to use it correctly becomes greater. Forming opinions even on issues that you have no authority to form opinions on is now an imperative. We learned the capital of Ukraine in one week, and our googling of Kiev might prove useful in the future. To keep up with a quickly changing world, we need to handle all information, not just the data we are comfortable with, as effectively as possible.

The Signal and the Noise, and Other Readings

The Signal and the Noise


Since last year’s presidential election, everyone has heard of the legendary Nate Silver, who predicted the outcomes of all 50 states correctly. Given that he also correctly predicted 49 out of 50 states in the 2008 election, this repeat feat seemed like clairvoyance, not coincidence. So the question is, what did Silver do right that so many polls and pundits did wrong?

Statistics.

The Signal and the Noise (2012) is basically a popular applied statistics book, with more history, philosophy, and psychology than formulas. The first half of the book illustrates failures of prediction in areas including the 2007/8 financial crisis, elections, sports, and natural disasters; the second half explains how to predict the correct way, using Bayesian probability. Overall it does an excellent job of explaining the concepts without going into mathematical detail (which is probably a plus for most people; even as a math person, I know where to look up the details).
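For the curious, here is a minimal sketch of Bayesian updating; the election scenario and all the probabilities are my own invented illustration, not an example taken from the book:

```python
def bayes_update(prior: float, p_obs_if_true: float, p_obs_if_false: float) -> float:
    """Return P(hypothesis | observation) by Bayes' rule."""
    numerator = p_obs_if_true * prior
    return numerator / (numerator + p_obs_if_false * (1.0 - prior))

belief = 0.50  # prior: a coin-flip chance the candidate is truly ahead
# Suppose a favorable poll appears 70% of the time when she is ahead and
# 30% of the time when she is not. We then observe three favorable polls.
for _ in range(3):
    belief = bayes_update(belief, 0.70, 0.30)

print(f"posterior after three favorable polls: {belief:.3f}")  # ~0.927
```

Each new poll shifts the posterior, which then serves as the prior for the next update; this is the core of the approach Silver advocates.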

Sidenote: While I was reading the chess section, my mind literally blanked for about 10 seconds upon seeing the following:

[Image: chess diagram printed in The Signal and the Noise]

My chess intuition immediately told me that something was wrong: there is no way this position could have occurred “after Kasparov’s 3rd move.” Since Kasparov was White, the white side should show three moves, but clearly there are only two: the knight on f3 (from g1) and the pawn on b3 (from b2). Yet this book was written by Nate Silver, so he couldn’t have gotten something so simple wrong. Once I realized it must have been a misprint, I looked up the game and found that at this point of the game, the g2 pawn should be on g3. I thought it was an interesting mind lapse.

Breaking the Spell


This book argues that scientific analysis should be applied to religion. The title refers to the taboo against rational discussion of religion: to “break the spell” is to break the taboo. In addition, it discusses theories of how religion arose; ironically, such theories are themselves evolutionary theories, as they concern how modern religion evolved over time from ancient spiritual beliefs (e.g. which specific doctrines maximize a belief system’s chances of survival).

Reading this means I have now read at least one book from each of the four “horsemen”: Dawkins, Dennett, Harris, and Hitchens. Of the four, Dennett is by far the least provocative. While the other three outright apply logical analysis to religion, in this book Dennett carefully argues that one should be allowed to analyze religion just as one can analyze any other phenomenon. This book should be nowhere near as controversial as The God Delusion or The End of Faith.

Overall the book makes good points but is quite slow, makes overly cautious caveats, and has a very formal tone. I think if someone like Dawkins had written this, it would be much more readable. I wouldn’t really recommend this to anyone who doesn’t have a lot of interest in philosophy.

CEO Material


The main competitive advantage of this book over the typical leadership book is that it quotes very often from 100+ real CEOs. Overall, these first-hand experiences supplement the author’s main points quite well. However, for the sake of privacy, I presume, the quotations are not labeled with the speaker, so it is sometimes difficult to tell how any particular passage applies to a given situation. For example, do I want to take a food company CEO’s advice on a particular issue and apply it to running a tech company? Perhaps the overall message is similar, but clearly the details matter. Some say that context is everything, and without the context of who said it, each quote has much less power.

Most of the points seemed like common sense, although that is to be expected—the system is efficient enough that if the most effective behavior for a CEO were radically different from what we already do, then we would have adapted to that already (hopefully). Even so, there are still some interesting points made with real justifications, though again it would be helpful if we knew who said each quote, even for a few of them. In all, Benton did make points that changed the way I look at things, so it was worth reading.

The Blind Watchmaker


While The Selfish Gene focuses on how genes propagate themselves and how they dynamically compete over time (evolutionary game theory), The Blind Watchmaker covers an entirely different issue: How did complexity arise?

Some of its answers, written at an earlier time (1986), seem somewhat outdated now, ironically more so than those of The Selfish Gene, which was written even earlier, in 1976. This is probably because The Selfish Gene was more of a “here’s the progress we made in the last decade” book when it was written, while The Blind Watchmaker is more along the lines of “here’s why this work from 1802 is nonsense,” a counter-argument that doesn’t particularly need to invoke the most up-to-date findings.

But anyway, we don’t judge books by how outdated they seem 30 years later, so let’s move on to the content. Due to its premise, the book is more philosophical than The Selfish Gene, which is more strictly scientific and hardly addresses the conflict between evolution and religion at all. While The Blind Watchmaker still contains a formidable amount of science, it takes on some philosophical questions as well and confronts the conflict head-on. I would recommend it to those looking to question philosophical beliefs, whether others’ or their own.

Mortality


Of the books in this post, Mortality is the answer choice that doesn’t belong with the others. While the other four are strictly nonfiction works that try to explain or teach something, Mortality comes off more as a dramatic story, the story of coming to terms with terminal illness. Hitchens opens with the stark statement, “I have more than once in my life woken up feeling like death.” As usual, Christopher Hitchens’ signature writing style and tone are apparent.

“What do I hope for? If not a cure, then a remission. And what do I want back? In the most beautiful apposition of two of the simplest words in our language: the freedom of speech.”

“It’s probably a merciful thing that pain is impossible to describe from memory.”

“The politicized sponsors of this pseudoscientific nonsense should be ashamed to live, let alone die. If you want to take part in the ‘war’ against cancer, and other terrible maladies, too, then join the battle against their lethal stupidity.”

“The man who prays is the one who thinks that god has arranged matters all wrong, but who also thinks that he can instruct god how to put them right.”

“I have been taunting the Reaper into taking a free scythe in my direction and have now succumbed to something so predictable and banal that it bores even me.”

“Myself, I love the imagery of struggle. I sometimes wish I were suffering in a good cause, or risking my life for the good of others, instead of just being a gravely endangered patient.”

“To the dumb question ‘Why me?’ the cosmos barely bothers to return the reply: why not?”

Free Will

When I choose a book to read, am I really making a choice, or do the events that led up to my choosing a book already determine which book I am about to read? According to the book that I ended up reading, Free Will (2012) by neuroscientist Sam Harris, the answer is the second one.


Sam Harris argues that free will is simply an illusion. Our decisions arise from background causes that our consciousness often does not notice. For instance, he asks: if the presence of a brain tumor in a criminal affects our perception of his crime, then what about other neurological disorders? And even non-neurological ones?

If a man’s choice to shoot the president is determined by a certain pattern of neural activity, which is in turn the product of prior causes—perhaps an unfortunate coincidence of bad genes, an unhappy childhood, lost sleep, and cosmic-ray bombardment—what can it possibly mean to say that his will is “free”? (3)

In fact, the strength of this book is that its argument is based on well-researched neuroscience. Granted, Harris brings up the more speculative conjectures of philosophy, but only after discussing research on the brain at length.

The physiologist Benjamin Libet famously used EEG to show that activity in the brain’s motor cortex can be detected some 300 milliseconds before a person feels that he has decided to move…. More recently, direct recordings from the cortex showed that the activity of merely 256 neurons was sufficient to predict with 80 percent accuracy a person’s decision to move 700 milliseconds before he became aware of it. (8)

In fact, the science seems very well established, and it is the public perception that needs to catch up. Before reading this book and subsequently researching what neuroscientists and philosophers think of free will and determinism, I expected there to be serious debate, with the sides roughly equal in size. But as it turns out, only 14.9% of philosophers did not lean towards one of compatibilism, libertarianism, or no free will. The majority of them actually know what is going on. Neuroscience is even more strongly against free will, as its experiments directly contradict it.

It reminds me of a post I wrote called On Giving Too Much Legitimacy to the Inferior Position, in which I argued that on certain issues, even pointing out that there is “debate” over something sometimes distracts or even draws people away from the truth. This is a case in point, as I had always thought I was in the minority when I argued for determinism instead of free will, but it turns out I was in the academic majority.

In addition, as an atheist and humanist, I must applaud Harris for the following passage:

Despite our attachment to the notion of free will, most of us know that disorders of the brain can trump the best intentions of the mind. This shift in understanding represents progress toward a deeper, more consistent, and more compassionate view of our common humanity—and we should note that this is progress away from religious metaphysics. Few concepts have offered greater scope for human cruelty than the idea of an immortal soul that stands independent of all material influences, ranging from genes to economic systems. Within a religious framework, a belief in free will supports the notion of sin—which seems to justify not only harsh punishment in this life but eternal punishment in the next. And yet, ironically, one of the fears attending our progress in science is that a more complete understanding of ourselves will dehumanize us. (55)

Indeed, the concept of free will is closely tied to religion and its morally abhorrent idea of sin. Dispelling mythological concepts such as the soul or sin is a necessary step in the advancement of the human species. And at some point, free will too must go.