Culture, Biases, and Empathy

A few disclaimers before I start:

  1. This is a complicated issue. While I may simplify definitions or arguments for the sake of making a point, I realize the truth is more complex than that.
  2. I’m not completely sure about the conclusions, and this is not a topic that I am an authority on. Still, there are some things that I find so disturbing that I feel the need to say something, even if it is just armchairing.
  3. Culture can be taboo, especially to criticism. I realize this.
  4. I am going to throw in more caveats than usual, particularly because of the first three reasons. The last post I wrote in this area, on the social construction of progress, seemed to strike the wrong nerve even among some of my friends, so I’ll be extra careful here. I feel that I shouldn’t need to make such disclaimers, and hopefully this will clarify understanding rather than confound it.

The topic for today is the criticism of other cultures. In particular, we are very reluctant to criticize even a tiny facet of another culture, and while this is for good reason due to the not-so-friendly history of cultural superiority, I think we have overcompensated in the moral relativism direction and have ended up shielding even the worst culture-specific behaviors from criticism.

Wariness in Criticizing Cultures

As noted in the social progress post, much of our (post-)modern reservation about proclaiming objective truths is well intentioned: it aims to prevent future atrocities from arising out of feelings of cultural superiority. The Holocaust comes to mind immediately, and European colonialism is another example.

However, to renounce objective truth altogether (even just theoretically) would go too far. On what grounds, then, can we say that stoning someone for adultery is wrong? Or rather, how can we criticize a culture that practices stoning as punishment for adultery? Or a culture that punishes the crime of being raped with 200 lashes? (Yes, you read that right—200 lashes not for the perpetrator, but for the victim.) If we subscribe to extreme moral relativism, we have no grounds at all to make such criticisms.

Of course, this is an extreme scenario. The average person doesn’t watch a video of a woman being stoned to death and then say, “That’s okay because it’s okay in their culture and we have to respect that.” The reaction is outrage, as it should be.

Cultural Anthropic Principle

I want to take one step back and talk about a peculiarity in the logic of cultural critique: a selection effect on what we are able to say. It is similar to an effect in cosmology called the anthropic principle: given that we are observing the universe, the universe must have properties that support intelligent life. It addresses the question “Why is our universe suitable for life?” by noting that if our universe were not suitable for life, we wouldn’t be here to make that observation. The alternative question, “Why is our universe not suitable for life?”, cannot physically be asked. We must observe a universe compatible with intelligent life.

A similar effect is found in some areas of cultural analysis. We have, for instance, many critiques of democracy written by people living in democracies. One might ask, what kind of criticisms do people make within a totalitarian state? The answer might be none: given that a writer is in a totalitarian system, their critique of the totalitarian government may never be published or even written in the first place for fear of imprisonment by the state. The net result is, given that we are critiquing our own political system, we are most likely in an open political system. This seems to answer the question, “Why is political analysis democracy-centric?”

The same principle applies to the criticism of cultures. More intellectually advanced cultures tend to be more open to self-criticism and more wary of criticizing other cultures. So a culture that is wary of criticizing other cultures tends to be more intellectually sophisticated, is thus more likely to be concerned with the epistemological questions of cultural analysis in the first place, and can often give a better answer than one that is less self-aware.

Cultural Exclusion, Bias

In any discussion where one person criticizes another culture, the go-to defense is, “You are not from culture X, so you cannot possibly understand X.” This is a very exclusionary argument that implicitly denies the role of empathy. By saying “you cannot possibly understand,” one implies that there is something mysterious that cannot be shared with anyone outside the group.

I’m all for people of different cultures communicating and getting along with one another, but the mindset of “you cannot possibly understand” seems to reinforce cultural divisions and deny the possibility of mutual understanding.

Along the lines of “you cannot possibly understand,” a related argument is, “You are from culture X, therefore your opinion is biased,” where X usually equals Western culture.

Of course opinions are biased! But it’s not as simple as biased vs unbiased (and does an unbiased person even exist?)—there is a whole range of biases along different dimensions. To reiterate my favorite Isaac Asimov quote:

When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.

Interestingly enough, the context of this quote (source) is that it was written in response to an English major who “…went on to lecture me severely on the fact that in every century people have thought they understood the universe at last, and in every century they were proved to be wrong. It follows that the one thing we can say about our modern ‘knowledge’ is that it is wrong.” Asimov’s response signifies that wrongness is not a dichotomy but a scale. (It is kind of ironic that it was Asimov, in 1989, arguing to an English major that wrongness is relative.)

So yes, we are biased, but that does not mean we should abandon cultural analysis. As we understand biases better, we get better at working around them and minimizing their impact. One example is the anchoring bias: if you are trying to estimate a number but are exposed to some other number beforehand, your estimate will drift toward that other number. For example, in situation (1), I ask you, “What is 1000 plus 1000?” and then ask you to estimate the price of a car; in situation (2), I ask you, “What is a million plus a million?” and then ask you to estimate the price of the same car. You will give a lower estimate in the first case and a higher estimate in the second, even though it is the same car! To work around this, for instance, avoid exposing someone to arbitrary numbers beforehand if you want an honest estimate from them. (For more on biases, see Daniel Kahneman’s Thinking, Fast and Slow.)
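To make the effect concrete, here is a minimal simulation sketch under a toy assumption I am making up purely for illustration: each person’s estimate is their own noisy guess pulled a small, fixed fraction of the way toward whatever number they saw beforehand. The prices, weights, and noise level are arbitrary, not measured values.

```python
import random

def anchored_estimate(true_value, anchor, anchor_weight=0.15, noise=0.2):
    # Toy model: the estimate is a noisy unanchored guess pulled
    # partway toward the anchor. The 15% weight is an arbitrary choice.
    raw_guess = true_value * random.gauss(1.0, noise)
    return (1 - anchor_weight) * raw_guess + anchor_weight * anchor

random.seed(0)
car_price = 30_000                              # hypothetical "true" price of the car
low_anchor, high_anchor = 2_000, 2_000_000      # "1000 + 1000" vs "a million + a million"

low = [anchored_estimate(car_price, low_anchor) for _ in range(10_000)]
high = [anchored_estimate(car_price, high_anchor) for _ in range(10_000)]

print(f"mean estimate after low anchor:  ${sum(low) / len(low):,.0f}")
print(f"mean estimate after high anchor: ${sum(high) / len(high):,.0f}")
```

Under this (made-up) model, the group primed with the small sum systematically quotes a lower price than the group primed with the large sum, which is the qualitative pattern the anchoring literature describes.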

We probably cannot eliminate all biases from our minds. But when it comes to cultural criticism, bias cannot be used as a disqualifier. In 12th grade history, we wrote an essay where one of the tasks was to analyze and contextualize sources, e.g., to look for bias. Some of my classmates apparently applied the “you cannot possibly understand” mentality to the source analysis. Our teacher had to announce in class that “This author is not from country X and therefore must be biased when talking about country X” is not a valid scholarly argument. In my college experience, professors explicitly warn against this as well. So, to be clear, my argument on cultural criticism is not targeted at academics (who I think are approaching things correctly), but at a popular/cultural sentiment.

This recent Buzzfeed article, “Why Muslim Americans Are Giving ‘Alice In Arabia’ Major Side-Eye,” is an apt example of this sentiment. It’s interesting that the criticisms are not of the content but of the context—that the writer is a white woman and therefore must be racist and cannot possibly understand Muslims. I won’t say too much more about it here, but it solidly demonstrates the point of this post. It isn’t even criticism of another culture so much as portrayal of, or writing about, another culture. Which leads me to…

Personal Investment and Empathy

“You cannot possibly understand” as an argument seems to deny empathy. The point of empathy is that you can understand someone else. More specifically, we are concerned here with intercultural empathy, trying to understand another culture. There are plenty of people from multicultural backgrounds who have adapted from one culture to another, so it happens all the time.

Recently, I also ran into the argument of “you are not personally invested in X, therefore you have no point in talking about X,” which is again a denial of empathy and an affirmation of total self interest. This argument was made in a comment to the social progress blog post, and the commenter ended with the following:

Your stakes in this critical project are low, and you’re yelling that from your desk chair for some reason.

I think the implication was that since I’m not a humanities major, I shouldn’t be interested in talking about the humanities. Really? In addition, this sentiment is simply historically wrong. From a previous blog post:

It is important to keep in mind that when groups do agitate for rights, their practical purpose is to convince whomever is in charge to give them rights. Just looking at American history, we see that every time there is a major social revolution granting rights to a previously discriminated group, the government itself contained extremely few, if any, members of that group.

Abraham Lincoln was white, and so was the rest of the US government when the Civil War occurred. When Congress granted women the right to vote, there were no women in Congress. And when the LGBT community first agitated for rights, no member of Congress of such an orientation had openly declared it.

According to the commenter’s logic, these rights revolutions should never have happened because there was no personal investment for any white member of Congress to support rights for racial minorities, or for any male Congressperson to support rights for women, or for the straight Congress to support LGBT rights, etc.

And according to the commenter’s logic, pretty much everything I talk about should not be talked about. I’ve spoken in the past about LGBT rights and perceptions, women’s rights, and the wealth gap, even though I’m straight, male, and will be working on Wall Street. So why do I write on these topics? One word: empathy. (Arguably, even my atheism-related posts are not really personally invested: I’ve never felt discriminated against due to my atheism. It’s sometimes more of giving a voice to those who are prevented from having one.)

“You are not personally invested in X” is not as common as the other sentiments, but I feel that it needs an explanation. Maybe we are so well conditioned to look for biases that we assume everyone must have some personal investment or personal reason for doing something. Perhaps it stems from similar lines of thinking as “you cannot possibly understand.” If you assume that everyone is purely self-interested, then this argument is not as ridiculous, but it’s still shaky at best.

In all, we must be careful in analyzing other cultures, minimize the impact of our biases, and use empathy to even try to understand those whom we don’t normally associate with. And most of all, we need to move beyond “you cannot possibly understand.”

What Is the Best Superpower?

[Image: SMBC comic, from SMBC Comics]

We often have discussions in our apartment on the most arbitrary topics. One time, we debated the question: What is the best superpower?

Despite the catchy title, this post is not really about the best superpower. Sure, it talks about that a lot, but that’s not the main point. The main point is about how messy a debate can be when the rules and terms are ill-defined.

What Is a Superpower?

From the start, it was unclear what was meant by “superpower.” It was implicitly understood that something all-encompassing like omnipotence was invalid because it is too broad, but this wasn’t formally forbidden. The only thing formally forbidden was any superpower that entailed having multiple other superpowers, like wishing for more wishes (though it gets fuzzy as to what counts as one superpower and what counts as multiple).

Being a smart-ass, instead of answering with the usual answers like telekinesis or mind control or invisibility or flying, I suggested the power to move subatomic particles. Let’s just call this particle manipulation for short.

From a naturalist perspective, i.e., physics, particle manipulation encompasses most other plausible powers (hold on for what “plausible” means):

  • To move a large object, you just make quadrillions of quadrillions of particles move in the same direction.
  • To start a fire, you make the particles move faster.
  • To create something out of thin air, or to regenerate any injury, you rearrange particles from the air into atoms and molecules to get what you want.
  • To control someone’s mind, you manipulate the neurons directly and make certain connections fire and others not fire.
  • To defuse a world war, you could just vaporize every nuke into air.
  • To become infinitely rich, you could just turn lead, or any other material, into gold, or into dollar bills.

However, my friend who initiated this discussion, and whose own answer was mind control, thought this answer I gave was “implausible” or “unrealistic.” So what is plausible and implausible? What is realistic and unrealistic?

Doesn’t the word “superpower” imply that it is NOT real? Why does moving a nearby object with your mind seem “realistic”? Does it take a lot of mental power or concentration? Are you limited in the number of objects you can control? Do I always write blog posts that have 7 questions in a row?

Much of our intuition of superpowers comes from the film industry (and thus indirectly from the comic book industry). Before getting bogged down with more philosophical questions, let’s appreciate some good old superpower usage in X-Men: First Class!

Observe the amount of concentration required in the first scene, compared to the relative ease in the second.

The second act is arguably more difficult: it requires control of a scattered collection of objects rather than just one, the control is required at far range, and the change in velocity is much greater. It’s hard to say which is more valid or realistic.

What Powers Are Valid?

Because the particle manipulation power was considered too strong, we decided to forbid it and use only well-known superpowers, to avoid some of the questions about what counts as a superpower. But this clarification did not come at the beginning; it was more a change of rules halfway in.

Even so, if you look at the comics, some powers are significantly stronger than portrayed in film. It’s arguable that Jean Grey’s powers, especially as the Phoenix, are valid and much stronger than most of the ones we talked about later in the discussion. And do we count these powers separately? Are telepathy and telekinesis separate, or are they bundled together as in Jean’s case?

Magneto, for instance, is mostly known for his namesake, magnetism. But according to science, electricity and magnetism are really the same force, so does control of magnetism also come with control of electricity? According to Wikipedia:

The primary application of his power is control over magnetism and the manipulation of ferrous and nonferrous metal. While the maximum amount of mass he can manipulate at one time is unknown, he has moved large asteroids several times and effortlessly levitated a 30,000 ton nuclear submarine. His powers extend into the subatomic level (insofar as the electromagnetic force is responsible for chemical bonding), allowing him to manipulate chemical structures and rearrange matter, although this is often a strenuous task. He can manipulate a large number of individual objects simultaneously and has assembled complex machinery with his powers. He can also affect non-metallic and non-magnetic objects to a lesser extent and frequently levitates himself and others. He can also generate electromagnetic pulses of great strength and generate and manipulate electromagnetic energy down to photons. He can turn invisible by warping visible light around his body. […] On occasion he has altered the behavior of gravitational fields around him, which has been suggested as evidence of the existence of a unified field which he can manipulate. He has demonstrated the capacity to produce a wormhole and to safely teleport himself and others via the wormhole.

Thus, from a logical and consistency perspective, I found it difficult to reject the validity of powers such as these. We essentially watered down telekinesis to being able to move objects within X meters and within sight range.

Telekinesis vs Mind Control

Among the remaining, weaker powers, the debate ended up being between telekinesis and mind control. More and more rules were made up on the spot. Once it was established that one power was generally stronger, the other side tried to state some technicality that would limit the power, and thus bring both back to equal levels. At this point, I thought the debate was pointless because we already conceded so many of the better powers, and then kept limiting the remaining powers because of arbitrary, subjective reasons such as being “unrealistic,” which was the main counterpoint. This seems absurd, because you are debating superpowers in the first place—they’re not supposed to be realistic!

It seemed like a debate over “What is the highest whole number?” At first we got rid of infinity (omnipotence was not allowed). Getting rid of really strong powers turned it into “What is the highest whole number less than 100?” Then when one side says 99, the other side uses a limiting argument: “The same way numbers over 100 are not allowed, 99 is absurdly high and should not be allowed either.” It then becomes “What is the highest whole number less than 99?” And so on.

While there was some semblance of rational debate, it was clear that, on the big-picture scale, there were essentially no logical points being discussed. It was a matter of imposed fairness: “It’s unfair that your superpower gets to do X and ours does not, so yours is invalid.” But this defeats the purpose of the question, which was to determine which power was the best. It devolved into the question, “Given that a superpower does not exceed some power level N, what is the best superpower?” Of course, the answer will just be ANY sufficiently good superpower, restricted enough to sit at level N. In this case, making up rules on the spot completely defeated the purpose of the question.

Conclusion

There were a bunch of other complications in the debate, but overall it was pretty fruitless. The rules of the debate, namely allowing one to make up rules spontaneously, defeated the purpose of the debate in the first place. It was not completely pointless, however, as it showed the need for setting clear guidelines at the start, and for being consistent.

Making Use of the Armchair: The Rise of the Non-Expert

As with all news, when I heard about the Sochi skating controversy last week, I read multiple sources on it and let it simmer. But from the comments I saw on Facebook, Reddit, and the news websites themselves, one thing struck me—nearly everyone seemed to have extensive knowledge of Olympic figure skating, from the names of the spins to the exact scoring rubric.

How could this be? Was I the only person who had no idea who Yuna Kim was, or that Russia had not won in the category before?

Much of this “everyone is an expert” phenomenon is explained by selection bias, in that those with more knowledge of skating were more likely to comment in the first place; therefore, most of the comments that we see are from those who are the most knowledgeable.
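As a concrete illustration of that selection effect, here is a minimal sketch under an assumption I am inventing purely for illustration: each person has a knowledge score, and the chance that they bother to comment grows sharply with it. The population size and the cubic relationship are arbitrary choices, not data.

```python
import random

random.seed(1)

# Toy model: each person has a "figure skating knowledge" score in [0, 1],
# and the probability that they comment grows sharply (cubically) with it.
population = [random.random() for _ in range(100_000)]
commenters = [k for k in population if random.random() < k ** 3]

print(f"average knowledge of everyone:       {sum(population) / len(population):.2f}")
print(f"average knowledge of the commenters: {sum(commenters) / len(commenters):.2f}")
# The comment section looks like a room full of experts even though
# the underlying population is thoroughly average.
```

Under these assumptions, the average commenter scores far above the average person, even though nobody’s actual knowledge changed; only who chose to speak did.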

But it’s unlikely that there would be hundreds of figure skating experts all commenting at once. Moreover, when you look at the commenting history of the people in the discussion, they seem to be experts on every other subject too, not just figure skating. So another effect is in play.

Namely, the Wikipedia effect (courtesy of xkcd):

[Image: xkcd, “Extended Mind”]

Of course, this effect is not limited to skating in the Olympics. When Newtown occurred, masses of people were able to rattle off statistics on gun deaths and recount the global history of gun violence in the late 20th and early 21st centuries.

Even so, not everyone does their research. There are still the “where iz ukrane????” comments, but undoubtedly the average knowledge of Ukrainian politics in the United States has increased drastically in the past few days. If you polled Americans on the capital of Ukraine, many more would be able to answer “Kiev” today than one week prior. For every conceivable subject, the Internet has allowed us all to become non-expert experts.

Non-Expert Knowledge

The consequences of non-expert knowledge range from subject to subject. The main issue is that we all start with an intuition about something, but with experience or training comes a better intuition that can correct naive errors and uncover counterintuitive truths.

  • An armchair doctor might know a few bits of genuine medical practice, but might also throw in superstitious remedies into the mix and possibly harm the patient more than helping. Or they might google the symptoms but come up with the wrong diagnosis and a useless or damaging prescription.
Armchair psychologists are more common, and it is easier to make up things that sound legitimate in this field. It is possible that an armchair psychologist will help a patient, even if through empathy rather than clinical training.
  • Armchair economist. Might say some insightful things about one trend that they read about in the economy, but could completely miss other trends that any grad student would see.
  • Armchair physicist. Might profess to have discovered a perpetual motion machine, to be dismissed by a real physicist because the machine actually has positive energy input and is hence not perpetual. Or, might read about the latest invisibility cloak and be able to impress friends by talking about the bending of electromagnetic waves around an object by using materials with negative refractive index, but has no idea that it only works for a particular wavelength, thus making it practically useless (for now).
  • Armchair philosopher. Perhaps the most common, the armchair philosopher notices the things that happen in life and takes note of them. The article that you are currently reading is armchair philosophy, as I basically talk about abstract stuff using almost zero cited sources, occasionally referencing real-world events but only to further an abstract discussion.

Going back to the physics example, we normal people might observe the drinking bird working continuously for hours and conclude that it is a perpetual motion machine. An armchair physicist might go further and claim that if we attach a motor to it, we could generate free energy.

[Image: drinking bird]

A real physicist, however, would eventually figure out the evaporation and temperature differential, and then conclude that it is not a perpetual motion machine.

Five minutes of reading Wikipedia will not allow you to match an expert’s knowledge. But having non-expert knowledge sometimes does help. It opens up the door to new information and ideas. If everyone spoke only about what they were experts in, the world would become boring very quickly.

Talking About Topics Outside of Your Expertise

In everyday speech, any topic is fair game except for, ironically, the one topic that everyone is deemed to be an expert in even without Wikipedia—(their) religion. But I digress. The point is, the way we talk about things on a day-to-day basis is very different from the way experts talk about them in a serious setting.

Some differences are very minor and just a matter of terminology. For instance, I was once discussing the statistics of voter turnout in the 2012 election, and I had phrased it as “the percentage of eligible people who voted.” At the time, I did not know that “turnout” was a technical term meaning precisely what I had just said; I thought it was a loose term that didn’t necessarily distinguish the electorate from the total population, hence why I phrased it so specifically. In this example, the statistics I presented were correct, and thus the conclusion was valid, but the terminology was off.

Other differences are more significant. In the case of medical practice, a lack of formal understanding could seriously affect someone’s health. Using Wikipedia knowledge from your smartphone to treat an unexpected snake bite in real time is probably better than letting it fester before help arrives. But it’s probably safest to see a doctor afterwards.

A non-expert discussion in a casual setting is fine, as is an expert discussion in a serious setting. But what about a non-expert discussion in a serious setting? Is there anything to be gained? If two non-physicists talk about physics, can any meaning be found?

My answer is yes, but you need to discuss the right things. For example, my training is in math, so it would be pretty futile for me to discuss the chemical reactions that occur when snake venom is injected into the human body. However, given that I had done my research properly, I might be able to talk about the statistics of snake bites with as much authority as a snake expert. Of course, it would depend on the context in which I brought up the statistics. If we were comparing the rise in snake bite deaths to the rise in automobile deaths, I might be on equal footing. But if we were comparing snake bite deaths between different species of snakes, a snake expert probably has the intellectual high ground.

But even this example still requires you to use an area of expertise you already have and relate it to the one in question. Beyond that, you can still have a legitimate discussion of something outside your expertise without relating it to any expertise you already have. You only need to make a claim broad enough, abstract enough, or convincing enough to have an effect.

Among all groups of people, writers (and artists in general) have a unique position in being able to say things with intellectual authority as non-experts. Politicians are next, being able to say anything with political power as non-experts. However, I’m interested in the truth and not what politicians say, so let’s get back to writers. F. Scott Fitzgerald was not a formal historian of the 1920s, but The Great Gatsby really captures the decade in a way no history textbook could. George Orwell was not a political scientist, but Nineteen Eighty-Four was very effective at convincing people that totalitarian control is something to protect against.

The Internet and the Non-Expert

On the other hand, Nineteen Eighty-Four was not crafted in a medium limited by 140 characters or by one-paragraph expectancy. If George Orwell were alive today and, instead of writing Nineteen Eighty-Four, wrote a two-sentence anti-totalitarian comment on a news story on North Korea, I doubt he would have the same effect.

It is usually hard to distinguish an expert from a non-expert online. Often, an expert prefaces themselves by explicitly saying, “I am an expert on [this topic],” but even this should be taken skeptically. I could give a rant on the times people claiming to have a Ph.D. in economics had no grasp of even the most basic concepts.

In addition to putting the sum total of human knowledge just a click away (well, maybe not all knowledge), the Internet allows us to post knowledge instantaneously and share it with millions of other users. We have not only the public appearance of non-expert knowledge, but also its virus-like proliferation. Since the dawn of the Internet, people have been able to acquire knowledge about anything, but there was long a great divide between the few content providers and the many consumers. Only recently have we become the content makers ourselves. What is the role of armchair philosophy in the age of information?

Conclusion

Now is a more important time than ever to be an armchair philosopher, or an armchair thinker, precisely because of the overwhelming amount of information available to us. To deal with the data overload requires an abstract way to categorize information, to filter out the useless from the useful, the wrong from the less wrong, the less true from the true.

We are expected to deal with areas outside of our expertise, and as our knowledge of these areas grows from the age of mass information, our responsibility to use it correctly becomes greater. Forming opinions even on issues that you have no authority to form opinions on is now an imperative. We learned the capital of Ukraine in one week, and our googling of Kiev might prove useful in the future. To deal with a quickly changing world, we need to deal with all information, not just data that we are comfortable with, as effectively as possible.

The Spectrum of Choice

One concept that I wanted to develop further was the idea of being proud of something that happened entirely by chance. In the original post, I argued that this is irrational. Being proud of something that you have no control over, such as your race or gender or eye color, is nonsensical.

For this reason and many others, our society looks down on racial or gender supremacy. To a lesser degree, we also look down on economic supremacy: we accept that the rich have better circumstances than the poor, but we would be appalled if someone said that the rich are better people than the poor. We think the US is number one, but we don’t say that Americans are better than those of other nationalities. We think everyone should be entitled to their own political or religious beliefs, but we find it hard to sympathize with those who think their beliefs are superior to those of others.

But at some point, we do start condemning. We condemn murderers and thieves, rapists and kidnappers, drunks and drug dealers. We condemn those who live extravagant lifestyles and don’t care at all about the common person. We condemn those whom we perceive to have done wrong, for whatever reason. Where is the line drawn? There are many ways of looking at this problem, and the perspective I will analyze it from is that of personal choice.

The Spectrum of Choice

Looking at the degree of personal choice helps to resolve a few questions, such as

  • What should we be proud of?
  • What should we condemn or not condemn?
  • What defines us?

Basically, this approach is to look at what degree of choice we have in some property of ourselves. A very simplified spectrum is given below.

[Image: the spectrum of choice diagram]

The first category consists of properties over which we have absolutely no control, i.e., properties arising from pure chance. The two examples above, race and sex, are for the most part the most important ones. (By race, I am referring to the original biological race, not an acquired ethnicity from cultural experiences. Ethnicity would, in fact, not belong in this section, as you do have some control over it.) Since race is something a person is born with and cannot change, it should not be used to label or criticize. Similarly, sex is determined before birth and unchangeable, and thus should not be used for condemnation.

The second category could also be called “Little Control.” It consists of properties over which we have some control, but not very much. Socioeconomic status is included because, while it is possible to move up the ladder (economic mobility), it does not happen often, and there are significant barriers that impede someone from a lower status from advancing to a higher one. I have also included nationality in this category, and ethnicity could belong here as well. For most people, their country of residence is not so much a choice as it is simply remaining where they were born. Even for many immigrants, the move may be job-related, in which case it is debatable whether there was legitimate choice involved, or education-related, in which case the residence might only be temporary. Moreover, it may be difficult for someone to afford international travel or to part with family and friends.

The third section is for things you generally feel that you have control over. Political views, while theoretically changeable at will, are rarely changed. Moreover, many seem to inherit the political views of their parents or friends without questioning it much themselves. Hence I would not consider that one has full control over their political views. The same applies to religious views. Conversion to another religion is not a common occurrence, and many people’s religious views are suspiciously similar to those of their parents or friends. There are numerous social and cultural pressures as well for one to profess certain religions over others. Hence, while religion is something that people probably think they have full control over (perhaps having free will in the matter), I would not classify it under full control.

The last category is for things over which you have full control, things that you can change on a whim (well, most of the time). Unless you suffer from epilepsy, you normally have full control over your actions. This is why it is permissible to condemn criminals for their actions: their actions are something they chose. Sure, someone may have been under the influence of alcohol, but the act of drinking was itself a conscious action. Hobbies are included as well. Just as with actions, we generally don’t care what hobbies people have, but when they involve excessive drinking or drug use, we recognize that it is not a “just your opinion” matter: there is an objectively better and an objectively worse choice. Nonpolitical and nonreligious beliefs probably fit under full control, since they are less distorted by vested interests. Yes, your views are colored by society and culture, but you still have autonomy over them.

Ambiguous Properties

Some things are difficult to categorize. Intelligence, for instance, is part biological and part environmental (this relates to the nature vs. nurture debate). Is intelligence something we have control over? We generally don’t condemn people for not being super intelligent, so it cannot be Full Control; on the other hand, we know people clearly have ways to enhance their intelligence, so it cannot be No Control. For the sake of this post, I will put intelligence in Some Control. Keep in mind that even if intelligence is almost purely the result of environment, i.e., nurture, this says more about the parents or society or school than about the child, who had little choice in determining his or her own intelligence in the years that mattered most.

The perception of the spectrum may also shift for each individual depending on personal circumstance. For someone who is very rich and just wants to live in whatever country for whatever reason, place of residence would indeed fit under Full Control (though nationality may still remain the same). For someone who doesn’t have the financial or educational means, socioeconomic status might seem to be under No Control. For myself, since I don’t view atheism as a religion, I consider my “religious” views as non-religious (perhaps a better term would just be philosophical views) and would categorize it under Full Control. Finally, this is a spectrum, not a set of four discrete points on a line. The categorizations above are for convenience. In actuality, each property may occupy locations on the line that fit between the categories.

Conclusions

To answer the three questions: What should we be proud of? Since it is absurd to be proud of luck, it seems we should be most proud of the things we had the most choice in. Our actions, our hobbies, and our general interests are legitimate things to take pride in. One way this differs from the common usage of the word “pride” is that it is inherently method-driven rather than results-driven. Relying on choice makes coming up with the decision the key step. Thus, against an evenly matched opponent at chess, I can be proud of the step where I thought five moves ahead to checkmate, but not of winning the game (which was basically a coin flip before the game started).

As for condemnation, we are not justified in condemning people for something in which they had no choice. The more choice they had in the matter, the more it is possible to criticize (of course, due to social norms, this doesn’t mean we generally should). Related is the debate over the treatment of other religions, for instance. Some might decry criticism of Islam as “racist,” but Islam is not a race; it is a changeable religious belief. Sure, actually converting from Islam in some countries may be difficult, if not impossible, due to capital punishment for apostasy and shunning from the group. But in general, there is some degree of choice involved in being an extremist Christian or Muslim, hence equating religious criticism with racism or misogyny is very wrong. It is justified to criticize political or religious beliefs; it is unjustified to criticize race or gender. (I am not saying it is justified as social norm, but that it is justified in intellectual discussion.)

Lastly, given the degree of personal choice, what defines us is not the random and artificial labels that society gives us, but it is the choices we make and the actions we take in response. It should not be determined by what we don’t have a choice in, but rather by what we do end up choosing.

When Principles Collide

One of the things about growing up with a sheltered life is that you rarely ever have to stand up for your principles. This could be due to several reasons: maybe they’re not really your principles, but your parents’; maybe you’re just not placed into situations where conflicts occur; maybe your principles themselves seek to avoid confrontation. I recall so many times when I was younger that I had some well thought-out idea for something but then instead went along with someone else’s idea without question, in the interest of avoiding conflict. I’m not saying that you should always insist what you’re doing is correct, but I think on the spectrum I was too far on the side of passivity.

Throughout college (and perhaps starting senior year of high school), I found myself more often at points where I needed to disagree. It wasn’t conflict for the sake of conflict, but rather a way to get at the truth or to make a situation better by challenging faulty ideas or plans. I think this change is evident on my blog: in the past, most of the topics I wrote about were very non-controversial, but recently they have been more questioning of commonly held ideas. Granted, my online persona (including on Facebook) and my real-life character are still quite different—in real life I don’t go around seeking to criticize people’s religious beliefs, an activity that is reserved for the internet. But that’s another topic.

Contradictory Principles

For a really simple example, consider the principles “be honest” and “don’t be a jerk.” Everyone follows these principles, and most of the time they support each other. You’d be quite a jerk if you lied to your friends about so many things that nothing you say has any credibility. However, when you find minor fault in something someone did, you could be honest and tell them, but most of the time it’s better to stay silent. Of course, the best choice depends entirely on the situation.

[Image: contradictory signs]
I respect both ownership rights and aesthetic cleanliness—do I pollute whitespace by citing the image source, especially if the image isn’t all that special?

Perhaps a more pertinent contradiction is that between tolerance of others and… tolerance of others. For example, most of my audience probably tolerates the LGBT community. Yet there are many people in America who do not. This leads to a tolerance paradox (one I think many of us don’t think about): Is it possible to be simultaneously tolerant of LGBT individuals and tolerant of people who are intolerant of them? Is a hypothetical all-tolerant person also tolerant of intolerance?

This depends somewhat on how you define tolerance, but it points to a deeper issue: simply using the principle “tolerate others” is insufficient in these fringe cases. There must be some overriding principle that tells you what to be tolerant of and what not to be tolerant of. I think that being intolerant of intolerance is still tolerant.

In chess, one of the most important principles, among the first taught to new players, is never to lose your queen unless you can take the opponent’s queen as well. While this is a great principle 99.9% of the time, there are cases where losing your queen (for no pieces, or just a pawn, in return) is the best move, and there are even cases where it is the only non-losing move. That is because the principle of “win the game” overrides the principle of “don’t lose your queen.”

Interestingly enough, even meta-principles can contradict one another. For me, “stand up for your principles” is a good principle, and so is “be open-minded about your principles.” Often blindly standing up for principles is a very bad idea (in the typical novel/movie, the antagonist may have good intentions but focuses on one idea or principle to the exclusion of all others, thus causing more overall harm than good; on the other hand, this principle seems required to become a politician).

Throughout my first two years of college, I wanted to go into academia, and I naively shunned finance because I thought people went into it just for money. Of course, once you start thinking about what to do after college and the need for money comes closer, you realize that you need money to live(!) and that despite the negative outside perception, the industry is not all evil people trying to figure out how to suck away all your money. Of course, on the “stand up for your principles” front, this change fails pretty hard, but it follows “being open-minded about your principles,” which I consider to override the first in this case. After all (to add one more layer of contradiction), it is standing up for the principle of being open-minded.

Rationality vs Irrationality

This article is based on several conversations I’ve had recently on rationality, and it is supposed to be an overview-type post that explores different areas of the subject. In fact, since this is a pretty heated topic that comes with misunderstandings by the handful, I will be going very slowly and throwing out as many caveats as possible to make sure I’m not misunderstood, though of course this is bound to happen. Because of this, the tone for this article will be rather informal.

Rationality vs Irrationality

It is obvious (to anyone who follows this blog or knows me in real life) that I stand on the side of rationality (though I often intentionally do things that would be considered irrational). Heck, even the blog name is “A Reasoner’s Miscellany.” Note that the title is not “A Reasoner’s Manifesto” or “A Reasoner’s Main Ideas.” Rather, it is a “miscellany” of various ideas in various subjects and of various degrees of significance. The main purpose of this blog is to jot down random ideas, serve as a diary of thoughts, and satisfy my urge to write. It is not to start a revolution or to promote any particular ideology.

Comparing rationality and irrationality obviously depends on having precise definitions of the two, but as soon as I lay down definitions, some of you will start arguing about the definitions rather than the actual concepts. And without this disclaimer, some of you would argue “Well, it depends on the definitions” as if that refuted my overall argument. It turns out you’re in luck, because in this post I’m not trying to make any grand overarching arguments; I’m just laying down a bunch of thoughts, which might be followed up in later blog posts with more fully fleshed-out arguments.

Now that many of the meta-caveats are out of the way, I suppose I can finally begin talking about rationality. Of course, even without giving detailed definitions, I feel as if I must give some overall definition to anchor the discussion. Basically, when I refer to rational thinking, I refer to thinking involving logic, facts, evidence, and reason. This is opposed to irrational thinking, which I consider to be thinking involving emotion, faith, or just not thinking (or even the refusal to think). These characterizations don’t exactly match the conventional philosophical terms (which are themselves sometimes disagreed upon), but I think this captures what is generally meant when someone says “That thought process is rational” or “That thought process is irrational.”

Biases are one of the primary obstructions to reason. Two perfectly rational agents using perfect logic and starting with the same information should theoretically arrive at the same conclusion. However, the “perfect logic” assumption is ruined if one of the agents is biased toward one side from the beginning and uses that bias in their “logic,” at which point it is no longer logic. Of course, one of the most important biases is the belief that you are less biased than other people. Thus I must try my best to account for the major personal influences in my life that push me towards rationality.

The main event influencing my choice towards reason was when I started learning about astronomy in first grade, in South Carolina of all places. We visited an observatory and I quickly became interested in space. Even then, I realized that knowing all these things about space must have come from some systematic method of observation, experimentation, and reasoning (though I didn’t think of it in those terms). We knew there were nine planets (back then, Pluto was a planet) because we saw them through our telescopes and reasoned their existence through their movements and gravitational effects, not because we wished there were nine, or because it would be totally awesome if there were nine, or because it was divinely revealed to us that there were nine.

Religion and Tradition Both Oppose Rationality

Because of my early interest in space, I learned by first grade about the Galileo incident with the Church (and also about Copernicus to a lesser degree). It didn’t just bother me that the vast majority of people were so ludicrously wrong about something like whether the Earth revolves around the Sun or the Sun revolves around the Earth; what bothered me was that the Church refused to believe the truth and instead demonized the bringer of truth, adamantly insisting that the Sun orbits the Earth because their holy book said so. From the moment I learned about this, I could never take “religious logic” seriously (i.e., X is true because it says so in the Bible/Quran/etc.).

My views on religion have changed a lot since first grade. For instance, my main objection to religion now is not so much that it is fictional, but rather the vast social harm it causes through its irrationality. In fact, throughout most of my life I subscribed to multiculturalism (regarding religion: you have to respect religious ideas no matter how insane they are), and so I wasn’t an antitheist. It was only a year ago that I went from (agnostic) atheist to (agnostic) atheist antitheist.

Another great opponent of rationality is tradition. Similarly to religion, tradition in principle stifles new ideas and is very bad at providing reasonable justification for doing something, i.e., “Because it says so in the Bible” or “Because that’s how it has always been done.” Again, along the lines of biases, I have to warn that I am probably personally invested in this topic of tradition vs rationality, as I deeply resented how I was treated in my childhood by my Asian parents, and also because of my view of Chinese culture in general. For an explanation, see this post and this one. In the context of this post: even at a young age I was capable of making logical arguments, and it always frustrated me that whenever I argued with my parents, they could never actually refute what I said, only justify their actions through tradition, superstition, and authority. I’ve never mentioned it on this blog before, and only to a few people in real life, but in my childhood I was driven by my parents to near suicide. These anti-tradition, anti-superstition, and anti-authority sentiments have persisted.

Intentional vs Unintentional Irrationality

This summer I probably thought about rationality more than I ever have in the past, as my work had to do with making rational decisions. The book Thinking, Fast and Slow, by Daniel Kahneman (Nobel Prize winner), made a significant impact. The primary reason I wrote the post “Pride in Things Out of Your Control” was that it was something I found deeply irrational even though it was being expressed by a number of highly rational people. Given the subject, the fact that it was posted on July 4th was pure coincidence.

But that topic was something most people probably never think about. Because of this, it’s much harder to call someone with that kind of view “irrational,” as they probably aren’t aware of it. On the other hand, if someone, say, read that post and thought about pride in randomness, and afterwards still thought it was rational to be proud of one’s race, then it is much easier to consider them irrational. Similarly, I don’t find most religious people irrational, since most religious people (at least the ones I know) never talk about religion, and thus probably are never in a serious state of questioning religion. On the other hand, some religious people read science books (particularly on evolution) and still believe in creationism, and it is much easier to consider these people irrational. Just as refusing to accept that the Earth orbits the Sun (based on religious texts) is worse than simply not knowing about it, refusing to learn about evolution (based on religious texts) is worse than not knowing about evolution. See willful ignorance.

Rationality vs Irrationality in the Media

The distinction between rationality and irrationality is related to many others, like Enlightenment vs Romanticism, future utopia vs past utopia, objective truth vs subjective truth, or science vs religion. If anything, support of irrationality is significantly overrepresented in the media. Does the following movie setting sound (overly) familiar? The future: advanced technology, but with social inequality, terrible quality of life, whatever it means to be “human” is gone, nature is destroyed, and evil technologists or even machines rule as the result of the rise of the “rational”; the day is saved by someone with an old-fashioned, “irrational” mentality, often involving some mythical power. Nah, that sounds like a completely original idea. What about the one where nature overcomes technology? Or the religious guy whom no one believes but who is right the whole time? Or the evil scientist showing that science is bad? Or the society that claims to know how to treat the “irrational,” using nefarious tactics?

Sure, these are just movies, mostly for entertainment purposes, and any societal warnings are a secondary effect. Perhaps I’m way overreacting. A movie or a novel has to have dramatic conflict, and a movie about the future being an awesome place would be really boring to watch. But this does not mean the framing of which side is “good” and which side is “bad” should be so one-sided. One of the only shows that takes the pro-rational side is Star Trek (the [earlier] TV shows, not so much the recent movies). Characters like Spock and Data are as logical as you can possibly get, yet they are protagonists. Technology is shown as beneficial overall, and even religion has almost disappeared from humanity (though some of the aliens they encounter have their own religions). In fact, it seems like if a show such as Star Trek: The Original Series or The Next Generation were released today, in 2013, it would be canned and deemed far too political and “anti-religious,” as American society is far more anti-science than before. (I find it hard to imagine the modern US having a warm reaction to a hypothetical modern-day version of Albert Einstein.)

The only other type of show I can think of that is pro-reason is the crime investigation show, where the protagonists try to rationally deduce facts from clues and from suspects, many of whom committed crimes for highly irrational reasons. But these shows are normally concerned with justice, not rationality vs irrationality.

The Rationality of Irrationality

In the second paragraph, I mentioned that I sometimes intentionally act “irrationally.” However, many of these irrationalities are still made from an overall rational decision. In the post “Spontaneous Decision Making,” I talked about how I generally “…don’t plan ahead details ahead of time, as I abhor fixed schedules or fixed paths.” I will re-quote here an interesting behavior from my Fall 2010 semester:

For example, last semester, to get to one of my classes from my dorm I had two main paths, one going over the Thurston Bridge and the other over a smaller bridge that went by a waterfall. For the first couple weeks I took the Thurston Bridge path exclusively, as I thought it was shorter than the waterfall path. But then one day I went the other path and timed it, with about the same time, maybe a minute slower (out of a total of 15 minutes). So I started taking the waterfall path exclusively. But eventually that got boring too, so I started alternating every time. You might think that’s how it ended.

But a consistent change like that is still… consistent. Still the same. It was still repetitive, and still very predictable. Perhaps the mathematical side of me started running pattern-search algorithms or something. Eventually, I ended up on a random schedule, not repeating the same pattern in any given span of 3 or 4 days.

But as I later reasoned in the “Spontaneous Decisions” post, there was a method to the madness. I go against patterns on purpose, and all this increases versatility. I try to be prepared for anything; if I always follow the same pattern or plan everything out ahead of time, then I may not be able to adapt quickly to a new situation.

Another set of examples comes from video games. I tend to play extremely flexible classes/builds that have multiple purposes, and I try to have multiple characters or styles to be able to adapt quickly and to know what other people are thinking…

To have a quick response, I try to be accustomed to every scenario, and moreover, practice responding quickly. It is a sort of planned spontaneity. Intentionally making spontaneous decisions is like handicapping yourself during practice. But then when you get to the real thing, you remove the handicap and perform much better. If you can make a good assessment of a situation in 10 seconds, imagine how much better it would be with 10 hours.

In addition, the planned spontaneity is very much like preparing for a later event. Comedians spend a bunch of time preparing content so that it seems spontaneous when they perform it. In speed chess, when you don’t have time to think, the only thing that helps is prior experience. To quote Oscar Wilde: “To be natural is such a very difficult pose to keep up.”

Is Art Irrational?

Anti-rationalists often point to art, implying that to be rational is to see art as pointless. Art is indeed a more subjective experience, but is it totally subjective? Many great artists and novelists created works that expressed the style or discontent of their times. In the same way that I see history as useful because it provides context for viewing the modern world and the future, I see art as useful for seeing not just the time period of the artist, but also the lives of the artists themselves. To say “art is subjective” and end the discussion there is a very naive move that shows either a shallow understanding of art or a participation card in the “all truth is subjective” movement.

I can have rational discussions of art, novels, films, TV shows, video games, etc. When you want another’s opinion on a new painting from a famous artist and you have artist friends, whom do you consult? Do you go out on the street and ask a random passerby? Do you ask your favorite 6-year-old relative? Do you consult a physics professor? No, probably not. Even though “art is subjective” and beauty is in the eye of the beholder, you go to the fellow artist or the art critic for a professional, trained opinion. If the art critic’s opinion is worth more than that of the average person, then there must be some part of art that is objective. And if you met someone at a formal event who said, “I hate the Mona Lisa, it’s a terrible piece of art!” you would probably think this person uncultured, with poor taste in art, despite your belief that art is subjective.

Ordinary Faith vs Religious Faith

It is perfectly rational to have faith in the conventional sense, but it is almost always irrational to have faith of the religious variety. I am okay with believing something without absolute proof if the belief is still a reasonable decision. Do I have absolute proof that the Sun will come up tomorrow? No, but I’ll bet anyone 10,000-to-1 odds that it will (if it doesn’t, I’ll give you $10,000; if it does, you owe me $1). For me to make this bet, I have to believe the probability of the Sun coming up tomorrow is greater than 99.99%, given certain risk-aversion preferences. Or suppose a billionaire who is my best friend and a homeless beggar both ask me for $100 as investment money, each promising to pay me $50 a year for the next 10 years. Given that I trust the billionaire sufficiently (and that inflation/interest rates are as they are now), I would give the money to the billionaire (i.e., I would have faith in this billionaire), but would obviously not give any money to the beggar. Rationally, anything with a high enough probability of happening and a low enough maximum cost is reasonable to believe.
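For concreteness, here is a minimal sketch of the break-even arithmetic behind both examples; the helper function is my own, and it ignores risk aversion and (for the investment) the time value of money:

```python
def break_even_probability(win_amount, loss_amount):
    """Smallest probability of winning at which a bet has non-negative
    expected value: p * win - (1 - p) * loss >= 0."""
    return loss_amount / (win_amount + loss_amount)

# Sunrise bet: win $1 if the Sun comes up, pay $10,000 if it doesn't.
p_sun = break_even_probability(win_amount=1, loss_amount=10_000)
print(f"Need P(sunrise) > {p_sun:.4%}")       # > 99.99%

# Investment: pay $100 now, receive $50/year for 10 years ($400 net gain)
# if the promise is kept, and lose the $100 otherwise.
p_invest = break_even_probability(win_amount=400, loss_amount=100)
print(f"Need P(repayment) > {p_invest:.0%}")  # > 20%
```

So even a modest chance that the billionaire keeps the promise clears the 20% threshold, while for the beggar the estimated probability presumably falls well below it; risk aversion only raises these thresholds.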

Religious faith corrupts the usual concept of faith. Instead of having strong evidence (the Sun has come up every single day in recorded history, and nothing in science suggests a high probability of it not coming up tomorrow; or, this person is a self-made billionaire who must know how to invest money and is also a good friend) and therefore believing something, I am given ZERO evidence and expected to believe something. Not even a speck of evidence.

Conclusion

This article wasn’t really written in a way that lends itself to a conclusion, but given the length, I find it necessary to include a “Conclusion” section nonetheless. The post was much longer than I expected (around 2900 words), but I think I gained a more organized view of these ideas. The topic is, of course, open to rational debate.

When Does Not Deciding Count as a Decision?


This week’s topic is whether not deciding is itself a decision. Let us start by escalating things quickly: consider the classic trolley problem.

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. Unfortunately, you notice that there is one person on the side track. You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the correct choice?

While there are many interesting aspects of the trolley problem, and many variants that may cause one to reconsider one’s views, this article is concerned with one particular question: Is actively choosing option (1), i.e. doing nothing, equivalent to passively not making a decision at all? (It turns out this question has real-world consequences, as will be evident below.)

That is, is there a difference between

  • A) Considering (1) and (2), and deciding that (1) is morally superior; and
  • B) Ignoring the decision, and thus passively allowing (1) to occur?

For one difference, consider the same trolley problem except that the trolley is initially headed for the 1 person, and you have to pull the lever to turn it toward the 5 people. In such a case, someone using thought process (A), having judged the outcome of (1) to be morally superior, would STILL choose that outcome and pull the lever so that the trolley hits the 5 people, whereas someone using thought process (B) would now “choose” the other outcome by default, allowing the 1 person to be killed.
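To see the divergence concretely, here is a minimal sketch; the scenario encoding and the two procedures are my own illustration of thought processes (A) and (B), where (A) is committed to the outcome it judged superior in the original problem and (B) simply never pulls the lever:

```python
# Each scenario lists how many people are on the track the trolley is currently
# headed toward ("default") and on the track reached by pulling the lever ("diverted").
ORIGINAL = {"default": 5, "diverted": 1}
REVERSED = {"default": 1, "diverted": 5}

def process_a(scenario, preferred_deaths=5):
    """(A): committed to the outcome judged superior in the original problem
    (here, the outcome in which the 5 are hit), and acts to bring it about."""
    pull = (scenario["diverted"] == preferred_deaths)
    deaths = scenario["diverted"] if pull else scenario["default"]
    return pull, deaths

def process_b(scenario):
    """(B): ignores the decision entirely, so the lever is never pulled."""
    return False, scenario["default"]

for name, scenario in [("original", ORIGINAL), ("reversed", REVERSED)]:
    print(name, "A:", process_a(scenario), "B:", process_b(scenario))
# original -> A and B coincide: neither pulls the lever, and the 5 die.
# reversed -> A pulls the lever (the 5 die); B does nothing (the 1 dies).
```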

In the original case it is difficult to justify non-decision; still, one would most likely be viewed as innocent for making no decision and allowing the trolley to kill the 5, because the legal system generally punishes only decisions, not non-decisions. So the real question is: are the following equivalent?

  • I) The trolley is already headed towards the 5 people, and you allow it to continue on course.
  • II) The trolley is headed towards the 1 person, and you divert it to head towards the 5.

The outcome of both situations is the same: the 5 people die and the 1 person survives. Yet it seems that if this were considered a wrong action, we could legally punish (II) but not (I), since (I) could have been the result of not deciding. Should the two be viewed the same legally? That is, should someone be held accountable for not deciding?

Speed Chess

One of the interesting examples of decision vs non-decision in a non-legal, non-moral context is blitz chess. When you have only a few minutes for the whole game, you cannot afford to think carefully about every move. Instead, you must ration your time as a resource, and in some cases choose not to think about a particular move at all. Speed chess is based primarily on intuition, less on cold calculation.

Thus in speed chess, not thinking about a move can itself be a decision. Once you have a lot of experience, you gain an intuition for which types of positions require calculation and which do not, and it becomes possible to say when it is “correct” to not decide. In this case, not deciding is clearly a decision.
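As a toy illustration of “not thinking” being a deliberate choice, here is a sketch of a time-rationing rule; the threshold, the 0-to-1 “sharpness” score, and the function name are all hypothetical, standing in for a player’s intuition rather than for any real chess-engine logic:

```python
def choose_thinking_time(sharpness, clock_seconds, moves_left_estimate=20):
    """Decide how long to think on the current move in a blitz game.

    sharpness: rough 0-1 score for how tactical/volatile the position feels.
    clock_seconds: time remaining on the clock.
    """
    budget = clock_seconds / max(moves_left_estimate, 1)
    if sharpness < 0.3 or clock_seconds < 10:
        # Deliberately decide NOT to calculate: play the intuitive move instantly.
        return 0.0
    # Otherwise, spend more of the per-move budget the sharper the position is.
    return min(budget * (1 + 2 * sharpness), clock_seconds * 0.2)

# Quiet position, plenty of time: think 0 seconds -- a decision in itself.
print(choose_thinking_time(sharpness=0.1, clock_seconds=120))
# Sharp position, plenty of time: spend several seconds calculating.
print(choose_thinking_time(sharpness=0.8, clock_seconds=120))
```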

Willful Ignorance

Decisions are based on available information, so a natural question, relevant to whether one can be held accountable for not deciding, is whether one can be held accountable for not knowing. More importantly, can someone be held accountable for intentionally refusing to know? After all, no one would blame a child for thinking that the Earth is flat. But when adults believe the world is flat, that is an entirely different issue, because most likely they have intentionally refused to hear the case for a round Earth.

The same goes for evolution, except that significant national and state policy decisions are made based on the refusal to learn about it. Of course, we wouldn’t hold a child responsible for their beliefs, but for an adult to use willful ignorance in decision making is inexcusable.

That willful ignorance is problematic even in principle can be seen in a trolley variant. Suppose the person with the power to pull the lever believes the situation is as in the original trolley problem, but the side that supposedly has 1 person actually has 100.

The operator pulls the lever, diverting the trolley from the side with 5 people to the side with 100, killing all 100. Note that the operator cannot be blamed here, because the ignorance was genuine.

Now consider an alternative scenario. The situation is the same as above: the operator believes it is a matter of 5 lives vs 1 life, when it is actually a matter of 5 vs 100. Before the decision is made, someone else runs in, screaming that there are actually 100 people on the second track. It would be extremely easy to verify this, but instead the operator refuses to listen to the new information and diverts the trolley toward the 100 anyway, still clinging to the belief that there is only 1 person. In this case, the operator is being willfully ignorant.

(Can a lawyer explain whether the law actually treats these situations differently?)

There are countless other examples where an intentional lack of information should not be a valid excuse for a bad decision. Suppose someone is about to receive the death penalty for a crime, and a piece of evidence shows up that could provide reasonable doubt about the conviction. It would be absurd to refuse to see this evidence, especially because such a refusal would most likely mean that people really want this person to receive the death penalty and fear that the extra information could disturb their beliefs.

A similar example: a nation has borderline-quality intel justifying a war and decides to launch the war before looking at newer intel that could negate the earlier intel. Then, even if the nation is later found to be wrong, it can plead ignorance, claiming it didn’t know better at the time, even though it knew it might be wrong. There is a difference between genuinely believing there is no contradictory information and intentionally refusing to look at (possibly) contradictory information.

The line separating non-decision from an active decision that leads to the same result is not a sharp one. Similarly, the line between genuine ignorance and willful ignorance is blurry. But even without a perfectly clear demarcation, the differences are real, and these cases can and should be treated differently.