Confirmation Bias and the Illuminati

Check out this hilarious Buzzfeed article, “28 Shocking Pictures That Prove That The Illuminati Is All Around Us.”

[Image: Beyoncé triangle example from the Buzzfeed article]

While it may seem merely comical, it is the only sustained visual depiction of confirmation bias I have seen, satirical or not. The article’s popularity demonstrates that most people can recognize confirmation bias when it is made this obvious. Unfortunately, people tend to think they are less biased than everyone else (which is itself a bias), so they can simultaneously enjoy this Buzzfeed article, mock conspiracy theorists and superstitious worshipers, and yet believe in equally ridiculous things.

For instance, if you changed the title to “28 Shocking Pictures That Prove That God Does Good Things All Around Us,” I have a feeling it would be taken much less satirically, and if it were meant as satire, people would call to burn the writer at the stake. Of course, the punchline of the Illuminati images is that the criterion for spotting the Illuminati, i.e., being a triangle, is so vague that it can appear literally anywhere. Sound familiar?

(To be fair, at least there is definitive evidence that the Illuminati existed.)

Dismissing Things Without Evidence


Superstition

In middle school, I used to stay up late and listen to a radio talk show called Coast to Coast AM. The show dealt with many topics, focusing on the supernatural or paranormal. While the occasional episode covered real science (they brought on Michio Kaku as a guest), the vast majority consisted of things like psychic powers, auras, numerology, UFOs, alien abductions, crop circles, Bigfoot, astrology, conspiracy theories, the Illuminati, the New World Order, collective consciousness, spoon bending, ghosts, near-death experiences, quantum healing, astral projection, clairvoyance, and other wacky phenomena.

Of course, I have no problem with the expression of unpopular views, and I have written several times in support of it. It’s not as if Coast to Coast AM were being promoted in the school curriculum, at which point I would take issue. However, this particular category of beliefs, namely superstition, is generally harmful because it promotes thinking in a highly irrational and naive way. Especially in the social media age, we cannot afford as a society to succumb to believing whatever pops up in our newsfeeds.

But surely this is just a tiny minority of people, right? This is the typical response I get when I speak out against superstition, and it seems sensible because I usually discuss this with highly educated people who automatically dismiss such claims. The numbers for the general populace, however, are discouraging. In a December 2013 Harris poll (the link is broken, so here is a Google cache link), the percentages of believers were: 42% in ghosts, 36% in UFOs, 29% in astrology, 26% in witches, and 24% in reincarnation. This does not include religion-based superstitious beliefs, which poll much higher: 72% in miracles, 68% in angels, 58% in the devil, and 57% in the virgin birth.

I would usually criticize religion more than superstition, but in this post I make an exception. Even as religious belief declines (see the numbers in the Harris poll or in a Pew Research poll), superstition is on the rise. According to the Harris poll, only 24% of matures (68+) believe in ghosts, but 44% of echo boomers (18-36) do. Belief in astrology rises from 23% to 33%, and belief in witches from 18% to 27%, as you go from the oldest generation to the youngest.

The Role of Evidence

Every belief mentioned in the previous section has something in common: zero credible evidence supports it. Of course, those who believe such things often think they have evidence, and this is almost always explained by confirmation bias, selection bias, or simply being misled. Only with some forms of religious belief do you run into people who claim they do not need evidence at all (“I don’t need evidence, I have faith”). Fortunately, when debating superstitious people as opposed to religious people, you at least agree that evidence is needed, though you might differ as to what constitutes evidence. (A conspiracy theorist will shower you with evidence.)

In the paranormal, it is especially easy to construct signal out of noise, or beliefs out of nothing. Take astrology: someone writes an extremely vague, all-encompassing description of life, and it loosely matches almost anyone. The reason it seems to fit you specifically is that the vague wording (“something important recently happened in your life”) triggers several biases:

  • you are selectively looking for things that fit the description (selection bias),
  • you ignore things that don’t fit (confirmation bias),
  • you find something that you didn’t originally view as important, but now it must be important because of the prediction (circular reasoning), and
  • you note the importance of something long after the fact (hindsight bias).

The rational person is not immune to biases, but is at least aware of them and tries to look at evidence from a more objective perspective. After all, the systematic analysis of evidence is the main criterion that separates real science from pseudoscience.

A Priori Dismissal

Suppose you read a story in the news today about a new Bigfoot sighting. How much evidence would you need to dismiss it? I would claim almost none. The prior probability of Bigfoot’s existence is so low (well under 1%, possibly 0%) that it would take a significant amount of evidence to convince you otherwise. The burden of proof is on the sighter. With smartphone ownership now nearly universal, it should be easy to simply snap a picture of Bigfoot upon seeing one, yet a clear photograph never materializes. In this case, you do not need evidence against the claim to dismiss it.
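To make the arithmetic behind this concrete, here is a minimal Bayesian sketch in Python. The prior and the two likelihoods are numbers I made up purely for illustration, not measured quantities:

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: how probable the claim is after seeing the evidence."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Made-up numbers for illustration only.
prior = 1e-5             # prior probability that Bigfoot exists
p_report_if_real = 0.9   # chance of a sighting report, given Bigfoot is real
p_report_if_not = 0.5    # chance of a report anyway (hoax, misidentification)

print(posterior(prior, p_report_if_real, p_report_if_not))  # ~1.8e-05
```

Even evidence that is nearly twice as likely under “Bigfoot is real” barely budges a tiny prior, which is exactly why a lone news story can be dismissed without further investigation.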

The point is, if a Bigfoot article appeared in the news today, then without reading any of it and without any evidence of its being a hoax, you could safely dismiss it as a hoax, as every such story has turned out to be. Again, I am not saying that every person who has sighted Bigfoot did so to perpetuate a hoax—I think some people genuinely saw something they personally couldn’t explain. However, it is quite a leap of logic to go from “I don’t know what I saw” to “It was Bigfoot.”

Imagine that you find the following title in today’s paper: “Scientists find conclusive proof of Flat Earth theory.” Without having to be a scientist yourself, you have enough intelligence (hopefully) to conclude that the article is wrong, even without reading a word of it.

Some friends I talk to have actually pointed out that I am perhaps too dismissive. For instance, last year this Carrie promo video made the rounds on YouTube:

If you don’t want to watch it: basically, a hoax is staged so that someone appears to be using telekinetic powers in a cafe, leaving onlookers shocked and fearful.

We discussed what we would have done in that situation. Everyone else said they would have been scared !@#&less in that scenario, but I said I would have known it was a hoax and thus have stayed calm. Of course, nobody believed me. Given this post, judge for yourself.

Hitchens’ Razor

“What can be asserted without evidence can be dismissed without evidence.”

This philosophical tool allows you to dismiss many kinds of statements. If someone just claims, “There is a leprechaun in my backyard,” you can dismiss it even if you have never met this person before and have never been to their backyard.

Hitchens’ Razor differs slightly from the idea in the previous section: the aversion to believing in Bigfoot, even given “evidence” in the form of extremely shaky, blurry camera footage, comes more from statistical improbability than from philosophical concern. Christopher Hitchens’ statement applies more to abstract claims that often cannot be tested in the physical world at all, e.g., religious claims.

The section title refers to both readings of “without evidence”: dismissing a claim that has no evidence for it, and dismissing it without needing evidence against it. That is, if there is no evidence for, you do not need evidence against.

More relevant to purely superstitious claims that can be tested is Carl Sagan’s “razor”:

“Extraordinary claims require extraordinary evidence.”

It is generally true in real life that the more absurd a claim is, the more justification it requires. If you claim that Malaysia Airlines Flight 370 is on Mars, you had better have some very convincing evidence to support it.
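Sagan’s maxim has a natural Bayesian reading. As a sketch (my gloss, not anything Sagan wrote out), Bayes’ theorem in odds form says

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(E \mid H)}{P(E \mid \neg H)} \times \frac{P(H)}{P(\neg H)},$$

i.e., posterior odds equal the strength of the evidence times the prior odds. The more absurd the claim H, the smaller its prior odds, so for the posterior odds to reach even 1:1, the evidence ratio must be roughly the reciprocal of the prior odds. A claim a million times less likely needs evidence roughly a million times more discriminating.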

Overall, I just ask that we think more rationally, especially in response to the media and to questionable stories. We simply cannot afford to slip back into an age of superstition.

5 Historical Documents on Universal Truths

A couple of weeks ago, I wrote a post criticizing the strong form of moral relativism, namely the idea that no person or culture can be right or wrong. In this post, to continue the objective-vs-subjective-truth discussion, I will look at five historical documents that explicitly acknowledge universal truths. Moreover, all of these documents proclaim non-empirical truths, i.e., they are not documents of science that can be tested by the scientific method. (I include this caveat because it is easy for a relativist to acknowledge that science can have universal truths but then claim, arbitrarily, that other subjects work differently from science and cannot have universal or objective truths. So I am addressing the claim that nonscientific truths cannot be universal.)

1. Euclid’s Elements (~300 BC)

[Image: Euclid’s Elements]

The Elements is one of the most influential books of all time, not just in mathematics but in the entire Western way of thinking. For this post, math is considered separate from science, in that math does not operate by the scientific method. It operates instead by a strictly logical method that was largely formalized by the Elements. The steps of this deductive method, in contrast with the inductive scientific method, consist of:

  1. Listing axioms, or self-evident truths.
  2. Listing basic assertions, which also should be self-evident.
  3. Combining the axioms and assertions to obtain theorems, which are the end result.

(For a list of the axioms and assertions, see the wiki page.)
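To see the deductive method in miniature, here is a toy sketch in Lean. The names and axioms are mine, loosely inspired by Postulate 1, and not Euclid’s actual formulation:

```lean
-- Step 1: primitives and axioms, asserted without proof.
axiom Point : Type
axiom Line : Type
axiom onLine : Point → Line → Prop

-- Step 2: a basic assertion taken as self-evident (loosely Postulate 1:
-- a straight line can be drawn between any two points).
axiom lineThrough : Point → Point → Line
axiom lineThrough_incident :
    ∀ p q : Point, onLine p (lineThrough p q) ∧ onLine q (lineThrough p q)

-- Step 3: a theorem, true purely because the axioms force it to be.
theorem exists_line_through (p q : Point) :
    ∃ l : Line, onLine p l ∧ onLine q l :=
  ⟨lineThrough p q, lineThrough_incident p q⟩
```

The theorem’s truth is inherited entirely from the axioms, which is precisely why the question of whether the axioms themselves are “self-evident” matters so much.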

In Elements, the first “postulate,” or axiom, is that a straight line can be drawn from one point to any other point. This seems obvious enough. Clearly if we imagine two points, we can also imagine a straight line between them. Another seemingly obvious claim is the last “common notion,” or assertion, which states that the whole is greater than the part.

But to what extent are these axioms really self-evident or universal? On what basis can we judge their universality or objectivity? The last axiom, for instance, known as the parallel postulate, is not even true in certain geometries. These are questions that have been debated for centuries.

2. The Declaration of Independence (1776)

[Image: John Trumbull’s Declaration of Independence]

“We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.”

Note that “We hold these truths to be self-evident” sounds like something Euclid would have written two thousand years earlier. In fact, the similarity is likely more than coincidence. Thomas Jefferson was a reader of Euclid, as evidenced in a letter to John Adams: “I have given up newspapers in exchange for Tacitus and Thucydides, for Newton and Euclid; and I find myself much the happier.” Furthermore, the Declaration reads much like a mathematical proof in the style of Euclid:

  1. The introduction (“When in the Course of human events… a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation”) establishes the desire for the “dissolution of political bands” and then acknowledges the need to declare the causes for it, i.e., the need for a proof.
  2. The preamble establishes the self-evident truths.
  3. The indictment contains the various violations by the King of the self-evident truths.
  4. The denunciation draws the above together with a “therefore,” signaling that the proof is complete: “We must, therefore, acquiesce in the necessity, which denounces our Separation, and hold them, as we hold the rest of mankind, Enemies in War, in Peace Friends.”
  5. The conclusion notes that the proof has been completed; therefore, they will act on the result of the proof: “That these united Colonies are, and of Right ought to be Free and Independent States; that they are Absolved from all Allegiance to the British Crown.”

More can be found in a talk given by Noam Elkies. The interesting thing is how universal these self-evident truths really are. Is it objectively true, for example, that all men are created equal? Is this view just a Western and/or Enlightenment construction? I would argue it is not (that is for a different post).

3. Pride and Prejudice (1813)

[Image: Pride and Prejudice]

The reason I have included Pride and Prejudice over any other work of literature is the opening sentence: “It is a truth universally acknowledged, that a single man in possession of a good fortune must be in want of a wife.”

Yet again, we have a declaration of universal truth, though this time used in fiction to establish the setting for the story. In contrast with its use in the Elements and the Declaration of Independence, universal truth is deployed by Austen in a more ironic, satirical manner.

Indeed, literature in general tends to question truths that are universally held. In this context, Pride and Prejudice is special because it acknowledges this explicitly. The statement, of course, is patently false, but it raises the question of whether there are any universal truths in social relations. And what would “universal” even mean? If something applied to a certain group in early 19th century England but not to anyone else, is it still universal?

4. The Communist Manifesto (1848)

[Image: Karl Marx]

Back to serious documents, we have the strong claim by Marx and Engels that “The history of all hitherto existing society is the history of class struggles.” The key word is “all,” which again proclaims a universal truth, at least over a sufficiently large scope (“hitherto existing society”). By the nature of their argument, it could not be a universal applying to all time: success would mean a classless society, in which class struggles would no longer exist.

This example and Austen’s are both social/historical universals. Marx argues that history can be understood by looking at class struggles, but again, on what basis can we support this? The modern view is that history is complex and can be partially understood through many different lenses, not just modes of production.

On the other hand, Euclid’s is a mathematical universal, and Jefferson’s is a moral universal, in acknowledging the rights of man.

5. The Universal Declaration of Human Rights (1948)

[Image: Flag of the United Nations]

The United Nations Universal Declaration of Human Rights is among the most significant documents of the twentieth century, and it too is based on presumed universal truths. Its preamble consists of seven “whereas” clauses that establish several self-evident assertions, much like the introduction to the US Declaration of Independence. These are:

“Whereas recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world,

Whereas disregard and contempt for human rights have resulted in barbarous acts which have outraged the conscience of mankind, and the advent of a world in which human beings shall enjoy freedom of speech and belief and freedom from fear and want has been proclaimed as the highest aspiration of the common people,

Whereas it is essential, if man is not to be compelled to have recourse, as a last resort, to rebellion against tyranny and oppression, that human rights should be protected by the rule of law,

Whereas it is essential to promote the development of friendly relations between nations,

Whereas the peoples of the United Nations have in the Charter reaffirmed their faith in fundamental human rights, in the dignity and worth of the human person and in the equal rights of men and women and have determined to promote social progress and better standards of life in larger freedom,

Whereas Member States have pledged themselves to achieve, in co-operation with the United Nations, the promotion of universal respect for and observance of human rights and fundamental freedoms,

Whereas a common understanding of these rights and freedoms is of the greatest importance for the full realization of this pledge….”

These set up the basis for the 30 articles, which are the “self-evident” truths or axioms. The first three articles, for example, are:

“Article 1. All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.

Article 2. Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status. Furthermore, no distinction shall be made on the basis of the political, jurisdictional or international status of the country or territory to which a person belongs, whether it be independent, trust, non-self-governing or under any other limitation of sovereignty.

Article 3. Everyone has the right to life, liberty and security of person.”

Note that the UN did not feel the need to prove any of these. They were simply obvious or self-evident. The theorems, however, are all implicit. It is implied that if these axioms are violated, the UN has the authority to intervene on behalf of human rights.

We could spend a long time debating which particular articles are true or false, but the big picture question is, Can any of them be objectively true? Is the discussion of them even meaningful? The intuitive answer is yes.

To be continued…

Culture, Biases, and Empathy

A few disclaimers before I start:

  1. This is a complicated issue. While I may simplify definitions or arguments for the sake of making a point, I realize the truth is more complex than that.
  2. I’m not completely sure about the conclusions, and this is not a topic that I am an authority on. Still, there are some things that I find so disturbing that I feel the need to say something, even if it is just armchairing.
  3. Culture can be taboo, especially to criticism. I realize this.
  4. I am going to throw in more caveats than usual, particularly because of the first three reasons. The last post I wrote in this area, on the social construction of progress, seemed to strike a nerve even among some of my friends, so I’ll be extra careful here. I feel that I shouldn’t need to make such disclaimers, and hopefully they will aid understanding rather than confound it.

The topic for today is the criticism of other cultures. In particular, we are very reluctant to criticize even a tiny facet of another culture, and while this reluctance exists for good reason, given the ugly history of claims to cultural superiority, I think we have overcompensated in the direction of moral relativism and ended up shielding even the worst culture-specific behaviors from criticism.

Wariness in Criticizing Cultures

As noted in the social progress post, much of our (post-)modern reluctance to proclaim objective truths is well intentioned: it aims to prevent future atrocities from arising out of feelings of cultural superiority. The Holocaust comes to mind immediately, and European colonialism is another example.

However, to renounce objective truth altogether, even theoretically, goes too far. What grounds would we then have to say that stoning someone for adultery is wrong? Or rather, how could we criticize a culture that practices stoning as punishment for adultery? Or a culture that punishes the crime of being raped with 200 lashes? (Yes, you read that right—200 lashes not for the perpetrator, but for the victim.) If we subscribe to extreme moral relativism, we have no grounds for such criticism at all.

Of course, this is an extreme scenario. The average person doesn’t watch a video of a woman being stoned to death and then say, “That’s okay because it’s okay in their culture and we have to respect that.” The reaction is outrage, as it should be.

Cultural Anthropic Principle

I want to take a step back and talk about a peculiarity in the logic of cultural critique: a selection effect on who is doing the critiquing. It is similar to an effect in cosmology called the anthropic principle: given that we are observing the universe, the universe must have properties that support intelligent life. That is, it addresses the question “Why is our universe suitable for life?” by noting that if our universe were not suitable for life, we wouldn’t be here making that observation. The alternative question, “Why is our universe not suitable for life?”, cannot physically be asked: we must observe a universe compatible with intelligent life.

A similar effect is found in some areas of cultural analysis. We have, for instance, many critiques of democracy written by people living in democracies. One might ask, what kind of criticisms do people make within a totalitarian state? The answer might be none: given that a writer is in a totalitarian system, their critique of the totalitarian government may never be published or even written in the first place for fear of imprisonment by the state. The net result is, given that we are critiquing our own political system, we are most likely in an open political system. This seems to answer the question, “Why is political analysis democracy-centric?”

The same principle applies to the criticism of cultures. Intellectually sophisticated cultures tend to be more open to self-criticism and more wary of criticizing other cultures. So a culture that is wary of criticizing other cultures tends to be intellectually sophisticated, is often already concerned with the epistemological questions of cultural analysis, and can often give a better answer to them than a culture that is less self-aware.

Cultural Exclusion, Bias

In any discussion where one person criticizes another culture, the go-to defense is, “You are not from culture X, so you cannot possibly understand X.” This is a deeply exclusionary argument that implicitly denies the role of empathy. By saying “you cannot possibly understand,” one implies that there is something mysterious that cannot be shared with anyone outside the group.

I’m all for people of different cultures communicating and getting along with one another, but the mindset of “you cannot possibly understand” seems to reinforce cultural divisions and deny the possibility of mutual understanding.

Along the lines of “you cannot possibly understand,” a related argument is, “You are from culture X, therefore your opinion is biased,” where X usually equals Western culture.

Of course opinions are biased! But it’s not as simple as biased vs unbiased (and does an unbiased person even exist?)—there is a whole range of biases along different dimensions. To reiterate my favorite Isaac Asimov quote:

When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.

Interestingly enough, the context of this quote (source) is that it was a response to an English major who “…went on to lecture me severely on the fact that in every century people have thought they understood the universe at last, and in every century they were proved to be wrong. It follows that the one thing we can say about our modern ‘knowledge’ is that it is wrong.” Asimov’s response makes the point that wrongness is not a dichotomy but a scale. (It is kind of ironic that it was Asimov, in 1989, arguing to an English major that wrongness is relative.)

So yes, we are biased, but that does not mean we should abandon cultural analysis. As we understand biases better, we get better at working around them and minimizing their impact. One example is the anchoring bias: if you are trying to estimate a number but are exposed to some other number beforehand, your estimate will drift toward that other number. For example, in situation (1), I ask you, “What is 1000 plus 1000?” and then ask you to estimate the price of a car; in situation (2), I ask you, “What is a million plus a million?” and then ask you to estimate the price of the same car. You will give a lower estimate in the first case and a higher one in the second, even though it is the same car! One workaround: if you want an honest estimate from someone, avoid exposing them to arbitrary numbers beforehand. (For more on biases, see Daniel Kahneman’s Thinking, Fast and Slow.)
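Here is a toy simulation of that car-price scenario in Python. The anchoring pull and the noise level are invented for illustration, not estimates from the psychology literature:

```python
import random

def estimate_price(true_value, anchor, pull=0.1, noise=0.15):
    """Toy model: a noisy guess that drifts a fraction `pull`
    of the way toward whatever number was seen beforehand."""
    base = true_value * random.gauss(1.0, noise)
    return base + pull * (anchor - base)

random.seed(0)
true_price = 30_000  # hypothetical price of the car
low_anchor = [estimate_price(true_price, anchor=2_000) for _ in range(10_000)]
high_anchor = [estimate_price(true_price, anchor=2_000_000) for _ in range(10_000)]

print(sum(low_anchor) / 10_000)   # ~27,200: pulled below the true price
print(sum(high_anchor) / 10_000)  # ~227,000: pulled far above it
```

The same car produces wildly different average estimates depending only on the irrelevant number mentioned first, which is why controlling what numbers people see beforehand matters.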

Probably, we cannot eliminate all biases from our minds. But in regard to cultural criticism, bias cannot be used as a disqualifier. In 12th grade history, we had an essay where one of the tasks was to analyze and contextualize sources, e.g., by looking for bias. Some of my classmates apparently applied the “you cannot possibly understand” mentality to the source analysis, and our teacher had to announce in class that “This author is not from country X and therefore must be biased when talking about country X” is not a valid scholarly argument. In my college experience, professors explicitly warn against this as well. So, to be clear, my argument on cultural criticism is not targeted at academics (who I think are approaching things correctly), but at a popular/cultural sentiment.

This recent Buzzfeed article, “Why Muslim Americans Are Giving ‘Alice In Arabia’ Major Side-Eye,” is an apt example of this sentiment. It’s interesting that the criticisms are not of the content but of the context—that the writer is a white woman and therefore must be racist and cannot possibly understand Muslims. I won’t say much more about it here, but it solidly demonstrates the point of this post. The objection is not even to criticism of a culture so much as to the mere portrayal of, or writing about, another culture. Which leads me to…

Personal Investment and Empathy

“You cannot possibly understand” as an argument seems to deny empathy, whose whole point is that you can understand someone else. More specifically, we are concerned here with intercultural empathy: trying to understand another culture. Plenty of people come from multicultural backgrounds or have adapted from one culture to another, so such understanding happens all the time.

Recently, I also ran into the argument that “you are not personally invested in X, therefore you have no business talking about X,” which is again a denial of empathy and an affirmation of total self-interest. This argument was made in a comment on the social progress blog post, and the commenter ended with the following:

Your stakes in this critical project are low, and you’re yelling that from your desk chair for some reason.

I think the implication was that since I’m not a humanities major, I shouldn’t be interested in talking about the humanities. Really? In addition, this sentiment is simply historically wrong. From a previous blog post:

It is important to keep in mind that when groups do agitate for rights, their practical purpose is to convince whomever is in charge to give them rights. Just looking at American history, we see that every time there is a major social revolution granting rights to a previously discriminated group, the government itself contained extremely few, if any, members of that group.

Abraham Lincoln was white, and so was the rest of the US government when the Civil War occurred. When Congress granted women the right to vote, there were no women in Congress. And when the LGBT community first agitated for rights, no member of Congress of such an orientation had openly declared it.

According to the commenter’s logic, these rights revolutions should never have happened because there was no personal investment for any white member of Congress to support rights for racial minorities, or for any male Congressperson to support rights for women, or for the straight Congress to support LGBT rights, etc.

And according to the commenter’s logic, pretty much everything I talk about should not be talked about. I’ve spoken in the past about LGBT rights and perceptions, women’s rights, and the wealth gap, even though I’m straight, male, and will be working on Wall Street. So why do I write on these topics? One word: empathy. (Arguably, even my atheism-related posts are not really personally invested: I’ve never felt discriminated against due to my atheism. It’s sometimes more of giving a voice to those who are prevented from having one.)

“You are not personally invested in X” is not as common as the other sentiments, but I feel that it needs an explanation. Maybe we are so well conditioned to look for biases that we assume everyone must have some personal investment, some personal reason, for doing anything. Perhaps it stems from lines of thinking similar to “you cannot possibly understand.” If you assume that everyone is purely self-interested, then this argument is not as ridiculous, but it’s still shaky at best.

In all, we must be careful in analyzing other cultures, minimize the impact of our biases, and use empathy to even try to understand those whom we don’t normally associate with. And most of all, we need to move beyond “you cannot possibly understand.”

What Is the Best Superpower?

[Comic from SMBC Comics: negative wishes]

We often have discussions in our apartment on the most arbitrary topics. One time, we debated the question: What is the best superpower?

Despite the catchy title, this post is not really about the best superpower. Sure, it talks about that a lot, but that’s not the main point. The main point is about how messy a debate can be when the rules and terms are ill-defined.

What Is a Superpower?

From the start, it was unclear what was meant by “superpower.” It was implicitly understood that something completely all-encompassing like omnipotence was invalid because it was too broad, but this was never formally forbidden. The only thing formally forbidden was any superpower that entailed having multiple other superpowers, like wishing for more wishes (though it gets fuzzy as to what counts as one superpower and what counts as multiple).

Being a smart-ass, instead of answering with the usual answers like telekinesis or mind control or invisibility or flying, I suggested the power to move subatomic particles. Let’s just call this particle manipulation for short.

From a naturalist perspective, i.e., physics, particle manipulation encompasses most other plausible powers (hold on; I’ll get to what “plausible” means):

  • To move a large object, you just make quadrillions of quadrillions of particles move in the same direction.
  • To start a fire, you make the particles move faster.
  • To create something out of thin air, or to regenerate any injury, you rearrange particles from the air into atoms and molecules to get what you want.
  • To control someone’s mind, you manipulate the neurons directly and make certain connections fire and others not fire.
  • To defuse a world war, you could just vaporize every nuke into air.
  • To become infinitely rich, you could just turn lead, or any other material, into gold, or into dollar bills.

However, my friend who initiated this discussion, and whose own answer was mind control, thought this answer I gave was “implausible” or “unrealistic.” So what is plausible and implausible? What is realistic and unrealistic?

Doesn’t the word “superpower” imply that it is NOT real? Why does moving a nearby object with your mind seem “realistic”? Does it take a lot of mental power or concentration? Are you limited in the number of objects you can control? Do I always write blog posts that have 7 questions in a row?

Much of our intuition of superpowers comes from the film industry (and thus indirectly from the comic book industry). Before getting bogged down with more philosophical questions, let’s appreciate some good old superpower usage in X-Men: First Class!

Observe the amount of concentration required in the first scene, compared to the relative ease in the second.

The second act is arguably more difficult: it requires control of a scattered collection of objects rather than just one, the control is required at far range, and the change in velocity is much greater. It’s hard to say which is more valid or realistic.

What Powers Are Valid?

Because the particle manipulation power was considered too strong, we decided to forbid it and use only well-known superpowers, to avoid some of the questions about what counted as a superpower. But this clarification did not come at the beginning; it was more of a rule change halfway in.

Even so, if you look at the comics, some powers are significantly stronger than portrayed in film. It’s arguable that Jean Grey’s powers, especially as the Phoenix, are valid and much stronger than most of the ones we discussed later. And do we count such powers separately? Are telepathy and telekinesis separate, or are they bundled together, as in Jean’s case?

Magneto, for instance, is mostly known for his namesake, magnetism. But according to science, electricity and magnetism are really the same force, so does control of magnetism also come with control of electricity? According to Wikipedia:

The primary application of his power is control over magnetism and the manipulation of ferrous and nonferrous metal. While the maximum amount of mass he can manipulate at one time is unknown, he has moved large asteroids several times and effortlessly levitated a 30,000 ton nuclear submarine. His powers extend into the subatomic level (insofar as the electromagnetic force is responsible for chemical bonding), allowing him to manipulate chemical structures and rearrange matter, although this is often a strenuous task. He can manipulate a large number of individual objects simultaneously and has assembled complex machinery with his powers. He can also affect non-metallic and non-magnetic objects to a lesser extent and frequently levitates himself and others. He can also generate electromagnetic pulses of great strength and generate and manipulate electromagnetic energy down to photons. He can turn invisible by warping visible light around his body. […] On occasion he has altered the behavior of gravitational fields around him, which has been suggested as evidence of the existence of a unified field which he can manipulate. He has demonstrated the capacity to produce a wormhole and to safely teleport himself and others via the wormhole.

Thus, from a logic and consistency perspective, I found it difficult to reject the validity of powers such as these. We essentially watered telekinesis down to the ability to move objects within X meters and within sight range.

Telekinesis vs Mind Control

Among the remaining, weaker powers, the debate ended up being between telekinesis and mind control. More and more rules were made up on the spot. Once it was established that one power was generally stronger, the other side would invoke some technicality to limit that power and bring the two back to equal levels. At this point, I thought the debate was pointless: we had already conceded so many of the better powers, and we kept limiting the remaining ones for arbitrary, subjective reasons such as being “unrealistic,” which was the main counterpoint. This seems absurd, because you are debating superpowers in the first place—they’re not supposed to be realistic!

It seemed like a debate over “What is the highest whole number?” At first we got rid of infinity (omnipotence was not allowed). Getting rid of really strong powers turned it into “What is the highest whole number less than 100?” Then when one side says 99, the other side uses a limiting argument, basically saying, “The same way numbers over 100 are not allowed, 99 is absurdly high and should not be allowed either.” It then becomes “What is the highest whole number less than 99?” And so on.

While there was some semblance of rational debate, it was clear that, on the big-picture scale, essentially no logical points were being discussed. It was a matter of imposed fairness: “It’s unfair that your superpower gets to do X and ours does not, so yours is invalid.” But this defeats the purpose of the question in the first place, which was to determine which power was the best. It devolved into the question, “Given that a superpower does not exceed some power level N, what is the best superpower?” Of course, the answer will just be ANY sufficiently good superpower, restricted enough to sit at level N. In this case, making up rules on the spot completely defeated the purpose of the question.

Conclusion

There were a bunch of other complications in the debate, but overall it was pretty fruitless. The rules of the debate, namely allowing one to make up rules spontaneously, defeated the purpose of the debate in the first place. It was not completely pointless, however, as it showed the need for setting clear guidelines at the start, and for being consistent.

Making Use of the Armchair: The Rise of the Non-Expert

As with all news, when I heard about the Sochi skating controversy last week, I read multiple sources on it and let it simmer. One thing struck me, however, about the comments I saw on Facebook, Reddit, and the news websites themselves: nearly everyone seemed to have extensive knowledge of Olympic figure skating, from the names of the spins to the exact scoring rubric.

How could this be? Was I the only person who had no idea who Yuna Kim was, or that Russia had not won in the category before?

Much of this “everyone is an expert” phenomenon is explained by selection bias, in that those with more knowledge of skating were more likely to comment in the first place; therefore, most of the comments that we see are from those who are the most knowledgeable.
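Selection bias of this sort is easy to demonstrate with a quick simulation. All of the distributions and numbers below are invented for illustration:

```python
import random

random.seed(42)

# Hypothetical population: skating knowledge on a 0-10 scale, mostly near zero.
population = [min(10.0, random.expovariate(1.0)) for _ in range(100_000)]

# Assume the chance of leaving a comment rises steeply with knowledge.
commenters = [k for k in population if random.random() < (k / 10) ** 2]

print(sum(population) / len(population))  # ~1.0: the average reader
print(sum(commenters) / len(commenters))  # ~3.0: the average commenter
```

Nobody in this toy model is faking anything; the comment section simply over-samples the knowledgeable tail of the population.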

But it’s unlikely that there would be hundreds of figure skating experts all commenting at once. Moreover, when you look at the commenting history of the people in the discussion, they seem to be experts on every other subject too, not just figure skating. So another effect is in play.

Namely, the Wikipedia effect (courtesy of xkcd):

[Comic: xkcd, “Extended Mind”]

Of course, this effect is not limited to skating in the Olympics. When Newtown occurred, masses of people were able to rattle off stats on gun deaths and recount the global history of gun violence in the late 20th and early 21st centuries.

Even so, not everyone does their research. There are still the “where iz ukrane????” comments, but undoubtedly the average knowledge of Ukrainian politics in the United States has increased drastically in the past few days. If you polled Americans on the capital of Ukraine, many more would be able to answer “Kiev” today than one week prior. For every conceivable subject, the Internet has allowed us all to become non-expert experts.

Non-Expert Knowledge

The consequences of non-expert knowledge vary from subject to subject. The main issue is that we all start with an intuition about something, but with experience or training comes a better intuition, one that can correct naive errors and uncover counterintuitive truths.

  • An armchair doctor might know a few bits of genuine medical practice, but might also throw superstitious remedies into the mix and possibly harm the patient more than help. Or they might google the symptoms but come up with the wrong diagnosis and a useless or damaging prescription.
  • Armchair psychologists are more common, and it is easier to make up things that sound legitimate in this field. It is possible that an armchair psychiatrist will help a patient, even if due to empathy and not from psychiatric training.
  • Armchair economist. Might say some insightful things about one trend that they read about in the economy, but could completely miss other trends that any grad student would see.
  • Armchair physicist. Might profess to have discovered a perpetual motion machine, to be dismissed by a real physicist because the machine actually has positive energy input and is hence not perpetual. Or, might read about the latest invisibility cloak and be able to impress friends by talking about the bending of electromagnetic waves around an object by using materials with negative refractive index, but has no idea that it only works for a particular wavelength, thus making it practically useless (for now).
  • Armchair philosopher. Perhaps the most common, the armchair philosopher notices the things that happen in life and takes note of them. The article that you are currently reading is armchair philosophy, as I basically talk about abstract stuff using almost zero cited sources, occasionally referencing real-world events but only to further an abstract discussion.

Going back to the physics example, we normal people might observe the drinking bird working continuously for hours and conclude that it is a perpetual motion machine. An armchair physicist might go further and claim that if we attach a motor to it, we could generate free energy.

[Image: a drinking bird toy]

A real physicist, however, would eventually figure out the evaporation and temperature differential, and then conclude that it is not a perpetual motion machine.

Five minutes of reading Wikipedia will not allow you to match an expert’s knowledge. But having non-expert knowledge sometimes does help. It opens up the door to new information and ideas. If everyone spoke only about what they were experts in, the world would become boring very quickly.

Talking About Topics Outside of Your Expertise

In everyday speech, any topic is fair game except for, ironically, the one topic that everyone is deemed to be an expert in even without Wikipedia—(their) religion. But I digress. The point is, the way we talk about things on a day-to-day basis is very different from the way experts talk about them in a serious setting.

Some differences are very minor and just a matter of terminology. For instance, I was once discussing the statistics of voter turnout in the 2012 election, and I phrased it as “the percentage of eligible people who voted.” At the time, I did not know that “turnout” is a technical term meaning precisely what I had just said; I thought it was a loose term that didn’t necessarily distinguish between the electorate and the total population, which is why I phrased it so specifically. In this example, the statistics I presented were correct, and thus the conclusion was valid, but the terminology was off.

Other differences are more significant. In the case of medical practice, a lack of formal understanding could seriously affect someone’s health. Using Wikipedia knowledge from your smartphone to treat an unexpected snake bite in real time is probably better than letting it fester before help arrives. But it’s probably safest to see a doctor afterwards.

A non-expert discussion in a casual setting is fine, as is an expert discussion in a serious setting. But what about a non-expert discussion in a serious setting? Is there anything to be gained? If two non-physicists talk about physics, can any meaning be found?

My answer is yes, but you need to discuss the right things. For example, my training is in math, so it would be pretty futile for me to discuss the chemical reactions that occur when snake venom is injected into the human body. However, provided I had done my research properly, I might be able to talk about the statistics of snake bites with as much authority as a snake expert. Of course, it would depend on the context in which I brought up the statistics. If we were comparing the rise in snake-bite deaths to the rise in automobile deaths, I might be on equal footing. But if we were comparing snake-bite deaths between different species of snakes, a snake expert probably has the intellectual high ground.

But even this example still requires you to relate some existing area of expertise to the one in question. Yet you can have a legitimate discussion of something outside your expertise even without such a bridge. You only need to make a claim broad enough, abstract enough, or convincing enough to have an effect.

Among all groups of people, writers (and artists in general) have a unique position in being able to say things with intellectual authority as non-experts. Politicians are next, being able to say anything with political power as non-experts. However, I’m interested in the truth and not what politicians say, so let’s get back to writers. F. Scott Fitzgerald was not a formal historian of the 1920s, but The Great Gatsby really captures the decade in a way no history textbook could. George Orwell was not a political scientist, but Nineteen Eighty-Four was very effective at convincing people that totalitarian control is something to protect against.

The Internet and the Non-Expert

On the other hand, Nineteen Eighty-Four was not crafted in a medium limited by 140 characters or by one-paragraph expectancy. If George Orwell were alive today and, instead of writing Nineteen Eighty-Four, wrote a two-sentence anti-totalitarian comment on a news story on North Korea, I doubt he would have the same effect.

It is usually hard to distinguish an expert from a non-expert online. Often, an expert will preface their remarks by explicitly saying, “I am an expert on [this topic],” but even this is to be taken skeptically. I could give a rant on the times people claiming to have a Ph.D. in economics had no grasp of even the most basic concepts.

In addition to putting the sum total of human knowledge just a click away (well, maybe not all knowledge), the Internet allows us to post knowledge instantaneously and share it with millions of other users. We have not only the public appearance of non-expert knowledge, but also its virus-like proliferation. Since the dawn of the Internet, people have been able to acquire knowledge about anything, but for a long time there was a great divide between the few content providers and the many consumers. Only recently have we become the content makers ourselves. What is the role of armchair philosophy in the age of information?

Conclusion

Now is a more important time than ever to be an armchair philosopher, or an armchair thinker, precisely because of the overwhelming amount of information available to us. To deal with the data overload requires an abstract way to categorize information, to filter out the useless from the useful, the wrong from the less wrong, the less true from the true.

We are expected to deal with areas outside our expertise, and as our knowledge of those areas grows in the age of mass information, our responsibility to use it correctly grows with it. Forming opinions even on issues where you have no formal authority is now an imperative. We learned the capital of Ukraine in one week, and our googling of Kiev might prove useful in the future. To deal with a quickly changing world, we need to handle all information, not just the data we are comfortable with, as effectively as possible.

The Spectrum of Choice

One concept that I wanted to develop further was the idea of being proud of something that happened entirely by chance. In the original post, I argued that this is irrational. Being proud of something that you have no control over, such as your race or gender or eye color, is nonsensical.

For this reason and many others, our society looks down on racial or gender supremacy. To a lesser degree, we also look down on economic supremacy: we accept that the rich have better circumstances than the poor, but we would be appalled if someone said that the rich are better people than the poor. We think the US is number one, but we don’t say that Americans are better than those of other nationalities. We think everyone should be entitled to their own political or religious beliefs, but we find it hard to sympathize with those who think their beliefs are superior to those of others.

But at some point, we do start condemning. We condemn murderers and thieves, rapists and kidnappers, drunks and drug dealers. We condemn those who live extravagant lifestyles and don’t care at all about the common person. We condemn those whom we perceive, for whatever reason, to have done wrong. Where is the line drawn? There are many ways of looking at this problem, and the perspective I will analyze it from is that of personal choice.

The Spectrum of Choice

Looking at the degree of personal choice helps to resolve a few questions, such as

  • What should we be proud of?
  • What should we condemn or not condemn?
  • What defines us?

Basically, this approach is to look at what degree of choice we have in some property of ourselves. A very simplified spectrum is given below.

[Diagram: a simplified spectrum of choice, running from No Control to Full Control]

The first category consists of properties over which we have absolutely no control, i.e., properties arising from pure chance. The two examples above, race and sex, are for the most part the most important ones. (By race, I am referring to biological race, not an ethnicity acquired through cultural experience. Ethnicity would, in fact, not belong in this section, as you do have some control over it.) Since race is something a person is born with and cannot change, it should not be used to label or criticize. Similarly, sex is determined before birth and unchangeable, and thus should not be used for condemnation.

The second category could also be called “Little Control.” It consists of properties over which we have some control, but not very much. Socioeconomic status is included because, while it is possible to move up the ladder (economic mobility), this does not happen often, and significant barriers impede someone of lower status from advancing to a higher one. I have also included nationality in this category, and ethnicity could belong here as well. For most people, their country of residence is not so much a choice as simply remaining where they were born. Even for many immigrants, the move may be job-related, in which case it is debatable how much genuine choice was involved, or education-related, in which case the residence may be only temporary. Moreover, it may be difficult for someone to afford international travel or to part with family and friends.

The third section is for things you generally feel you have control over. Political views, while theoretically changeable at will, are rarely changed. Moreover, many people seem to inherit the political views of their parents or friends without questioning them much themselves. Hence I would not say that one has full control over one’s political views. The same applies to religious views. Conversion to another religion is not a common occurrence, and many people’s religious views are suspiciously similar to those of their parents or friends. There are also numerous social and cultural pressures to profess certain religions over others. So while religion is something people probably think they have full control over (perhaps having free will in the matter), I would not classify it under full control.

The last category is for things over which you have full control, things that you can change on a whim (well, most of the time). Unless you suffer from a condition like epilepsy, you normally have full control over your actions. This is why it is permissible to condemn criminals for their actions: the actions were chosen. Sure, someone may have been under the influence of alcohol, but the act of drinking was itself a conscious action. Hobbies are included as well. As with actions, we generally don’t care what hobbies people have, but when they involve excessive drinking or drug use, we recognize that it is not a “just your opinion” decision; there is an objectively better and an objectively worse choice. Nonpolitical and nonreligious beliefs probably fit under full control, since they are less distorted by vested interests. Yes, your views are colored by society and culture, but you still have autonomy over them.

Ambiguous Properties

Some things are difficult to categorize. Intelligence, for instance, is part biological and part environmental (this relates to the nature vs nurture debate). Is intelligence something we have control over? We generally don’t condemn people for not being super intelligent, so it cannot be Full Control; on the other hand, we know people clearly have ways to enhance their intelligence, so it cannot be No Control. For the sake of this post, I will put intelligence in Some Control. Keep in mind that even if intelligence is almost purely the result of environment, i.e., nurture, this says more about the parents, society, or school than about the child, who had little choice in determining his or her own intelligence during the years that mattered most.

The perception of the spectrum may also shift for each individual depending on personal circumstance. For someone who is very rich and can live in whatever country for whatever reason, place of residence would indeed fall under Full Control (though nationality may remain the same). For someone without the financial or educational means, socioeconomic status might seem to be under No Control. For myself, since I don’t view atheism as a religion, I consider my “religious” views to be non-religious (a better term might be philosophical views) and would categorize them under Full Control. Finally, this is a spectrum, not a set of four discrete points on a line. The categorizations above are for convenience; in actuality, each property may sit anywhere on the line, including between the categories.

Conclusions

To answer the three questions: What should we be proud of? Well, since it is absurd to be proud of luck, it seems we should be most proud of the things in which we had the most choice. Our actions, our hobbies, and our general interests are legitimate things to take pride in. One way this differs from the common usage of the word “pride” is that it is inherently method-driven rather than results-driven: what matters is the decision-making, not the outcome. Thus, against an evenly matched opponent at chess, I can be proud of the moment I thought five moves ahead to checkmate, but not proud of winning the game (which was basically a coin flip before the game started).

As for condemnation, we are not justified in condemning people for something in which they had no choice. The more choice they had in the matter, the more it is possible to criticize (of course, due to social norms, this doesn’t mean we generally should). Related is the debate over the treatment of other religions, for instance. Some might decry criticism of Islam as “racist,” but Islam is not a race; it is a changeable religious belief. Sure, actually converting from Islam in some countries may be difficult, if not impossible, due to capital punishment for apostasy and shunning from the group. But in general, there is some degree of choice involved in being an extremist Christian or Muslim, hence equating religious criticism with racism or misogyny is very wrong. It is justified to criticize political or religious beliefs; it is unjustified to criticize race or gender. (I am not saying it is justified as social norm, but that it is justified in intellectual discussion.)

Lastly, given the degree of personal choice, what defines us is not the random and artificial labels that society gives us, but it is the choices we make and the actions we take in response. It should not be determined by what we don’t have a choice in, but rather by what we do end up choosing.