Culture, Biases, and Empathy

A few disclaimers before I start:

  1. This is a complicated issue. While I may simplify definitions or arguments for the sake of making a point, I realize the truth is more complex than that.
  2. I’m not completely sure about the conclusions, and this is not a topic that I am an authority on. Still, there are some things that I find so disturbing that I feel the need to say something, even if it is just armchairing.
  3. Culture can be taboo, especially to criticism. I realize this.
  4. I am going to throw in more caveats than usual, particularly because of the first three reasons. The last post I wrote in this area, on the social construction of progress, seemed to strike a nerve even among some of my friends, so I’ll be extra careful here. I feel that I shouldn’t need to make such disclaimers, but hopefully they will clarify understanding rather than confound it.

The topic for today is the criticism of other cultures. In particular, we are very reluctant to criticize even a tiny facet of another culture. While there is good reason for this, given the ugly history of claims of cultural superiority, I think we have overcompensated in the direction of moral relativism and ended up shielding even the worst culture-specific behaviors from criticism.

Wariness in Criticizing Cultures

As noted in the social progress post, much of our (post-)modern reluctance to proclaim objective truths is well intentioned: it aims to prevent future atrocities committed out of feelings of cultural superiority. The Holocaust comes to mind immediately, and European colonialism is another example.

However, to (theoretically) renounce objective truth altogether would go too far. What grounds would we then have to say that stoning someone for adultery is wrong? Or rather, how could we criticize a culture that practices stoning as punishment for adultery? Or a culture with the punishment of 200 lashes for the crime of being raped? (Yes, you read that right—200 lashes not for the perpetrator, but for the victim.) If we subscribe to extreme moral relativism, we have no grounds for such criticism at all.

Of course, this is an extreme scenario. The average person doesn’t watch a video of a woman being stoned to death and then say, “That’s okay because it’s okay in their culture and we have to respect that.” The reaction is outrage, as it should be.

Cultural Anthropic Principle

I want to take one step back and talk about a peculiarity in the logic of cultural critique: a selection effect on who is doing the critiquing. It is similar to an effect in cosmology called the anthropic principle: given that we are observing the universe, the universe must have properties that support intelligent life. That is, it addresses the question “Why is our universe suitable for life?” by noting that if our universe were not suitable for life, then we wouldn’t be here making that observation. The alternative question, “Why is our universe not suitable for life?”, cannot physically be asked: we must observe a universe compatible with intelligent life.

A similar effect is found in some areas of cultural analysis. We have, for instance, many critiques of democracy written by people living in democracies. One might ask: what kind of criticisms do people make within a totalitarian state? The answer might be none: given that a writer lives under a totalitarian system, their critique of the totalitarian government may never be published, or even written in the first place, for fear of imprisonment by the state. The net result is that, given that we are critiquing our own political system, we are most likely in an open political system. This seems to answer the question, “Why is political analysis democracy-centric?”

The same principle applies to the criticism of cultures. More intellectually sophisticated cultures tend to be more open to self-criticism and more wary of criticizing other cultures. So a culture that is wary of criticizing other cultures tends to be more intellectually sophisticated, is more likely to be concerned with the epistemological questions of cultural analysis in the first place, and can often give a better answer than one that is less self-aware.

Cultural Exclusion, Bias

In any discussion where one person criticizes another culture, the go-to defense is, “You are not from culture X, so you cannot possibly understand X.” This is a very exclusionary argument that implicitly denies the role of empathy. By saying “you cannot possibly understand,” one implies that there is something mysterious that cannot be shared with someone outside the group.

I’m all for people of different cultures communicating and getting along with one another, but the mindset of “you cannot possibly understand” seems to reinforce cultural divisions and deny the possibility of mutual understanding.

Along the lines of “you cannot possibly understand,” a related argument is, “You are from culture X, therefore your opinion is biased,” where X usually equals Western culture.

Of course opinions are biased! But it’s not as simple as biased vs. unbiased (does an unbiased person even exist?)—there is a whole range of biases along different dimensions. To reiterate my favorite Isaac Asimov quote:

When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.

Interestingly enough, the context of this quote (source) is that it was in response to an English major who “…went on to lecture me severely on the fact that in every century people have thought they understood the universe at last, and in every century they were proved to be wrong. It follows that the one thing we can say about our modern ‘knowledge’ is that it is wrong.” Asimov’s response signifies that wrongness exists not as a dichotomy, but as a scale. (It is kind of ironic that it was Asimov arguing that wrongness is relative, to an English major, in 1989.)

So yes, we are biased, but that does not mean we should just abandon cultural analysis. As we understand biases better, we get better at working around them and minimizing their impact. One example is the anchoring bias: if you are trying to guess a number but are exposed to some other number beforehand, your guess will move closer to that other number. For example, in situation (1), I ask you, “What is 1000 plus 1000?” and then ask you to estimate the price of a car; in situation (2), I ask you, “What is a million plus a million?” and then ask for the same estimate. You will give a lower estimate in the first case and a higher estimate in the second, even though it is the same car! To work around this, avoid exposing someone to arbitrary numbers beforehand if you want an honest estimate from them. (For more on biases, see Daniel Kahneman’s Thinking, Fast and Slow.)
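
To make the effect concrete, here is a toy sketch in Python. This is not a real psychological model—the 15% anchor weight and the noise level are numbers I made up for illustration—but it shows the shape of the effect: estimates drift toward whatever number was seen beforehand.

```python
import random

def estimate(true_belief, anchor, anchor_weight=0.15, noise=0.10):
    """Toy model: a stated estimate drifts partway toward a previously
    seen number (the anchor), plus some random noise."""
    blended = (1 - anchor_weight) * true_belief + anchor_weight * anchor
    return blended * random.uniform(1 - noise, 1 + noise)

random.seed(0)
belief = 25_000  # what the person actually thinks the car is worth

# Case (1): primed with "1000 + 1000"; case (2): primed with
# "a million plus a million", matching the two situations above.
low = sum(estimate(belief, 2_000) for _ in range(10_000)) / 10_000
high = sum(estimate(belief, 2_000_000) for _ in range(10_000)) / 10_000

print(f"estimate after low anchor:  ~{low:,.0f}")   # below 25,000
print(f"estimate after high anchor: ~{high:,.0f}")  # far above 25,000
```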

Probably, we cannot eliminate all biases from our minds. But in regard to cultural criticism, bias cannot be used as a disqualifier. In 12th grade history, we had an essay where one of the tasks was to analyze and contextualize sources, e.g., looking for bias. Some of my classmates had apparently applied the “you cannot possibly understand” mentality to the source analysis. Our teacher had to announce in class that “This author is not from country X and therefore must be biased when talking about country X” is not a valid scholarly argument. From my college experience, professors explicitly warn against doing this as well. So to be clear, my argument on cultural criticism is not targeted at academics (who I think are approaching things correctly), but at a popular/cultural sentiment.

This recent Buzzfeed article, “Why Muslim Americans Are Giving ‘Alice In Arabia’ Major Side-Eye,” is an apt example of this sentiment. It’s interesting that the criticisms are not of the content but of the context—that the writer is a white woman and therefore must be racist and cannot possibly understand Muslims. I won’t say much more about it here, but it solidly demonstrates the point of this post. And it isn’t even criticism of another culture so much as portrayal of, and writing about, another culture. Which leads me to…

Personal Investment and Empathy

“You cannot possibly understand,” as an argument, denies empathy. The whole point of empathy is that you can understand someone else. More specifically, we are concerned here with intercultural empathy: trying to understand another culture. There are plenty of people from multicultural backgrounds who have adapted from one culture to another; it happens all the time.

Recently, I also ran into the argument of “you are not personally invested in X, therefore you have no point in talking about X,” which is again a denial of empathy and an affirmation of total self-interest. This argument was made in a comment on the social progress blog post, and the commenter ended with the following:

Your stakes in this critical project are low, and you’re yelling that from your desk chair for some reason.

I think the implication was that since I’m not a humanities major, I shouldn’t be interested in talking about the humanities. Really? In addition, this sentiment is simply historically wrong. From a previous blog post:

It is important to keep in mind that when groups do agitate for rights, their practical purpose is to convince whomever is in charge to give them rights. Just looking at American history, we see that every time there is a major social revolution granting rights to a previously discriminated group, the government itself contained extremely few, if any, members of that group.

Abraham Lincoln was white, and so was the rest of the US government when the Civil War occurred. When Congress granted women the right to vote, there were no women in Congress. And when the LGBT community first agitated for rights, no member of Congress of such an orientation had openly declared it.

According to the commenter’s logic, these rights revolutions should never have happened because there was no personal investment for any white member of Congress to support rights for racial minorities, or for any male Congressperson to support rights for women, or for the straight Congress to support LGBT rights, etc.

And according to the commenter’s logic, pretty much everything I talk about should not be talked about. I’ve spoken in the past about LGBT rights and perceptions, women’s rights, and the wealth gap, even though I’m straight, male, and will be working on Wall Street. So why do I write on these topics? One word: empathy. (Arguably, even my atheism-related posts are not really personally invested: I’ve never felt discriminated against due to my atheism. It’s sometimes more of giving a voice to those who are prevented from having one.)

“You are not personally invested in X” is not as common as the other sentiments, but I feel it needs an explanation. Maybe we are so well conditioned to look for biases that we assume everyone must have some personal investment or personal reason for doing something. Perhaps it stems from the same lines of thinking as “you cannot possibly understand.” If you assume that everyone is purely self-interested, then this argument is not as ridiculous, but it’s still shaky at best.

In all, we must be careful in analyzing other cultures, minimize the impact of our biases, and use empathy to even try to understand those whom we don’t normally associate with. And most of all, we need to move beyond “you cannot possibly understand.”

What Is the Best Superpower?

[Comic from SMBC Comics]

We often have discussions in our apartment on the most arbitrary topics. One time, we debated the question: What is the best superpower?

Despite the catchy title, this post is not really about the best superpower. Sure, it talks about that a lot, but that’s not the main point. The main point is about how messy a debate can be when the rules and terms are ill-defined.

What Is a Superpower?

From the start, it was unclear what was meant by “superpower.” It was implicitly understood that something all-encompassing like omnipotence was invalid because it was too broad, but this wasn’t formally forbidden. The only thing formally forbidden was any superpower that entailed having multiple other superpowers, like wishing for more wishes (though it gets fuzzy as to what counts as one superpower and what counts as multiple).

Being a smart-ass, instead of giving the usual answers like telekinesis, mind control, invisibility, or flying, I suggested the power to move subatomic particles. Let’s just call this particle manipulation for short.

From a naturalist perspective, i.e., physics, particle manipulation encompasses most other plausible powers (hold on for what “plausible” means):

  • To move a large object, you just make quadrillions of quadrillions of particles move in the same direction.
  • To start a fire, you make the particles move faster.
  • To create something out of thin air, or to regenerate any injury, you rearrange particles from the air into atoms and molecules to get what you want.
  • To control someone’s mind, you manipulate the neurons directly and make certain connections fire and others not fire.
  • To defuse a world war, you could just vaporize every nuke into air.
  • To become infinitely rich, you could just turn lead, or any other material, into gold, or into dollar bills.

However, my friend who initiated this discussion, and whose own answer was mind control, thought the answer I gave was “implausible” or “unrealistic.” So what is plausible and implausible? What is realistic and unrealistic?

Doesn’t the word “superpower” imply that it is NOT real? Why does moving a nearby object with your mind seem “realistic”? Does it take a lot of mental power or concentration? Are you limited in the number of objects you can control? Do I always write blog posts that have 7 questions in a row?

Much of our intuition of superpowers comes from the film industry (and thus indirectly from the comic book industry). Before getting bogged down with more philosophical questions, let’s appreciate some good old superpower usage in X-Men: First Class!

Observe the amount of concentration required in the first scene, compared to the relative ease in the second.

The second act is arguably more difficult: it requires control of a scattered collection of objects rather than just one, the control is required at far range, and the change in velocity is much greater. It’s hard to say which is more valid or realistic.

What Powers Are Valid?

Because the particle manipulation power was considered too strong, we decided to forbid it and use only well-known superpowers, to sidestep some of the questions about what counted as a superpower. But this clarification did not come at the beginning; it was more of a rule change halfway in.

Even so, if you look at the comics, some powers are significantly stronger than portrayed in film. It’s arguable that Jean Grey’s powers, especially as the Phoenix, are valid and much stronger than most of the ones we talked about later in the discussion. And do we count these powers separately? Are telepathy and telekinesis separate, or are they included together, as in Jean’s case?

Magneto, for instance, is mostly known for his namesake, magnetism. But according to science, electricity and magnetism are really the same force, so does control of magnetism also come with control of electricity? According to Wikipedia:

The primary application of his power is control over magnetism and the manipulation of ferrous and nonferrous metal. While the maximum amount of mass he can manipulate at one time is unknown, he has moved large asteroids several times and effortlessly levitated a 30,000 ton nuclear submarine. His powers extend into the subatomic level (insofar as the electromagnetic force is responsible for chemical bonding), allowing him to manipulate chemical structures and rearrange matter, although this is often a strenuous task. He can manipulate a large number of individual objects simultaneously and has assembled complex machinery with his powers. He can also affect non-metallic and non-magnetic objects to a lesser extent and frequently levitates himself and others. He can also generate electromagnetic pulses of great strength and generate and manipulate electromagnetic energy down to photons. He can turn invisible by warping visible light around his body. […] On occasion he has altered the behavior of gravitational fields around him, which has been suggested as evidence of the existence of a unified field which he can manipulate. He has demonstrated the capacity to produce a wormhole and to safely teleport himself and others via the wormhole.

Thus, from a consistency perspective, I found it difficult to reject the validity of powers such as these. We essentially watered telekinesis down to being able to move objects within X meters and within sight range.

Telekinesis vs Mind Control

Among the remaining, weaker powers, the debate ended up being between telekinesis and mind control. More and more rules were made up on the spot. Once it was established that one power was generally stronger, the other side would state some technicality that limited the power, bringing both back to equal levels. At this point, I thought the debate was pointless: we had already conceded so many of the better powers, and then kept limiting the remaining ones for arbitrary, subjective reasons such as being “unrealistic,” which was the main counterpoint. This seems absurd, because you are debating superpowers in the first place—they’re not supposed to be realistic!

It seemed like a debate over “What is the highest whole number?” At first we got rid of infinity (omnipotence was not allowed). Getting rid of really strong powers turned it into “What is the highest whole number less than 100?” Then when one side says 99, the other side uses a limiting argument, essentially saying, “The same way numbers over 100 are not allowed, 99 is absurdly high and should not be allowed either.” It then becomes “What is the highest whole number less than 99?” And so on.

While there was some semblance of rational debate, it was clear that at the big-picture level, there were essentially no logical points being discussed. It was a matter of imposed fairness: “It’s unfair that your superpower gets to do X and ours does not, so yours is invalid.” But this defeats the purpose of the question in the first place, which was to determine which power is the best. It devolved into the question, “Given that a superpower does not exceed some power level N, what is the best superpower?” Of course, the answer will just be ANY sufficiently good superpower, restricted enough to be at level N. In this case, making up rules on the spot completely defeated the purpose of the question.

Conclusion

There were a bunch of other complications in the debate, but overall it was pretty fruitless. Allowing rules to be made up spontaneously defeated the purpose of the debate in the first place. It was not completely pointless, however: it showed the need to set clear guidelines at the start and to apply them consistently.

2013 in Review

This is a societal and personal reflection on 2013. I’ll start with the societal, and I’ll keep the personal pretty short.

2013 in Review

2013 was a year in which not much unusual happened, and that was perhaps the most unusual thing about it. Snowden leaked NSA data, but this didn’t seem all that surprising, given the power of the government and the precedent set by Julian Assange and others. The Boston marathon bombings happened unexpectedly, but it didn’t come as a shock that the perpetrators held extreme Islamist views; it was not as if the US suddenly gained new political enemies. The Obamacare website didn’t quite roll out as planned, but for people used to seeing servers crash when millions of users access new content at once, it was again not a huge surprise. The Supreme Court struck down DOMA, but this was more a delayed result of a larger trend: support for marriage equality had already been increasing for years. Overall, long-term trends in society, the economy, and technology continued without much interruption; no one made a smartphone killer… yet.

Perhaps the signature event of 2013, therefore, was the government shutdown. I don’t even want to describe it. But it did signify the occurrence of nothing, which seemed to be the main idea of the year.

This doesn’t mean 2013 was a bad year, only that it was a relatively uneventful year. The markets rose a lot:

[Chart: S&P 500, 2013]

Perhaps compared to 2012, which had a widely-believed but failed doomsday attached to its legacy, 2013 seemed like a bore.

2013 in Life

It was the first year (plus or minus a few days) that I was 21, which was enough by itself to make 2013 a very eventful year. I had an epic summer experience, which is the reason I now care about graphs like the one above. As a senior, I attended 10/23 for the first time. My classes this year were all in math or computer science; it was a year of specialization. My blogging schedule was the most consistent since 2010. And finally, this year I sorted out my plans for after college.

So, it was a great year. Here’s to 2014!

When Principles Collide

One of the things about growing up with a sheltered life is that you rarely ever have to stand up for your principles. This could be due to several reasons: maybe they’re not really your principles, but your parents’; maybe you’re just not placed into situations where conflicts occur; maybe your principles themselves seek to avoid confrontation. I recall many times when I was younger that I had some well thought-out idea but went along with someone else’s idea without question, in the interest of avoiding conflict. I’m not saying that you should always insist that what you’re doing is correct, but I think on the spectrum I was too far on the side of passivity.

Throughout college (and perhaps starting senior year of high school), I found myself more often at points where I needed to disagree. It wasn’t conflict for the sake of conflict, but rather a way to get at the truth or to make a situation better by challenging faulty ideas or plans. I think this change is evident on my blog: in the past, most of the topics I wrote about were very non-controversial, but recently they have been more questioning of commonly held ideas. Granted, my online persona (including on Facebook) and my real-life character are still quite different—in real life I don’t go around criticizing people’s religious beliefs, an activity that is reserved for the internet. But that’s another topic.

Contradictory Principles

For a really simple example, consider the principles “be honest” and “don’t be a jerk.” Everyone follows these principles, and most of the time they support each other. You’d be quite a jerk if you lied to your friends about so many things that nothing you say has any credibility. However, when you find minor fault in something someone did, you could be honest and tell them, but most of the time it’s better to stay silent. Of course, the best choice depends entirely on the situation.

[Image: contradictory signs]
I respect both ownership rights and aesthetic cleanliness—do I pollute whitespace by citing the image source, especially if the image isn’t all that special?

Perhaps a more pertinent contradiction is that between tolerance of others and… tolerance of others. For example, most of my audience probably tolerates the LGBT community. Yet there are many people in America who do not. This leads to a tolerance paradox that I think many of us don’t think about: Is it possible to simultaneously be tolerant of LGBT individuals and tolerant of people who are intolerant of them? Is a hypothetical all-tolerant person also tolerant of intolerance?

This depends somewhat on how you define tolerance, but it points to a deeper issue: simply invoking the principle “tolerate others” is insufficient in these fringe cases. There must be some overriding principle that tells you what to be tolerant of and what not to be tolerant of. I think that being intolerant of intolerance is still tolerant.

In chess, one of the most important principles, among the first taught to new players, is never to lose your queen unless you can get the opponent’s queen as well. While this is a great principle 99.9% of the time, there are cases where losing your queen (for no pieces, or just a pawn, in return) is the best move, and there are even cases where it is the only non-losing move. That is because the principle of “win the game” overrides the principle of “don’t lose your queen.”

Interestingly enough, even meta-principles can contradict one another. For me, “stand up for your principles” is a good principle, and so is “be open-minded about your principles.” Blindly standing up for principles is often a very bad idea (in the typical novel or movie, the antagonist may have good intentions but focuses on one idea or principle to the exclusion of all others, causing more overall harm than good; on the other hand, this kind of single-mindedness seems required to become a politician).

Throughout my first two years of college, I wanted to go into academia, and I naively shunned finance because I thought people went into it just for money. Of course, once you start thinking about what to do after college and the need for money comes closer, you realize that you need money to live(!) and that, despite the negative outside perception, the industry is not all evil people trying to figure out how to suck away your money. On the “stand up for your principles” front, this change fails pretty hard, but it follows “be open-minded about your principles,” which I consider to override the first in this case. After all (to add one more layer of contradiction), it is standing up for the principle of being open-minded.

Spontaneous Decision Making

This post is about my own decision-making habits. In particular, I don’t plan details ahead of time, as I abhor fixed schedules and fixed paths. An illustrative case is from a 2011 post:

For example, last semester, to get to one of my classes from my dorm I had two main paths, one going over the Thurston Bridge and the other over a smaller bridge that went by a waterfall. For the first couple weeks I took the Thurston Bridge path exclusively, as I thought it was shorter than the waterfall path. But then one day I went the other path and timed it, with about the same time, maybe a minute slower (out of a total of 15 minutes). So I started taking the waterfall path exclusively. But eventually that got boring too, so I started alternating every time. You might think that’s how it ended.

But a consistent change like that is still… consistent. Still the same. It was still repetitive, and still very predictable. Perhaps the mathematical side of me started running pattern-search algorithms or something. Eventually, I ended up on a random schedule, not repeating the same pattern in any given span of 3 or 4 days.

This example involved physical paths, but it is true for figurative paths as well. I can’t stand any repetitive task for a long time, including for things that I might like.

Another set of examples comes from video games. I tend to play extremely flexible classes/builds that have multiple purposes, and I try to have multiple characters or styles to be able to adapt quickly and to know what other people are thinking:

  • World of Warcraft: 8 (out of 11) classes at level 85+; raided as tank, dps, and heal.
  • Diablo 3: all 5 classes at level 60.
  • Path of Exile: all 6 classes at level 60+.
  • DotA: every hero played (up to a certain version).
  • Starcraft 2: all 3 races to level 30.

In WoW, the game I have definitely spent the most time on, my two main raiding characters were a Priest (disc/shadow) and a Paladin (prot/holy), covering all three roles. Even within one specialization, I switched strategies all the time: one day I would stack haste, the next day crit, and so on. Even then, I was usually very indecisive about what to do until the last moment.

My blogging follows a similar pattern. I find it hard to focus on one topic to write about in consecutive posts, and I generally cover whatever topic comes to mind. Yes, I set a schedule of one post per week. However, I usually don’t come up with a topic until the last day. The topic for this post did not arise until yesterday, from the suggestion of a friend (whom we were visiting also as a result of a spontaneous decision).

Being too spontaneous, however, also didn’t work well. In 2011 I decided to blog spontaneously (see the first link). Largely due to indecision, I ended up writing only 33 posts the entire year, 20 of which were written in the first two months. By contrast, in December 2010 alone, I wrote 38 posts. The current system of sticking to a posting schedule but not a topic schedule is working much better, as every week it forces me to make a decision and choose some topic to write about. This removes indecision from the equation.

(Edit: Due to an inordinate amount of spam on this page, the comments are disabled.)

Math or Computer Science?

Well, this is an interesting situation. Just a month ago I announced that I was adding a computer science degree, so that I am now double majoring in math and computer science. The title of that post, after all, is “Computer Science AND Math.” Given the circumstances at the time, I think it was a good decision: my work experience had been mostly in software, a CS degree from Cornell should look pretty good, and I wanted a more practical skillset.

In the past week, however, things have changed. I received and accepted an internship offer from my dream workplace, based on my background in mathematics rather than CS (though my prior CS experience was a plus). Given this new situation, I have considered dropping the CS major (next semester) and taking more advanced math:

  • The CS degree has some strict course requirements, and I am afraid that if I go for the degree, I may be forced to skip certain math classes that I really want to take. For instance, I may have to take a required CS class next semester that has a time conflict with graduate Dynamical Systems, or with Combinatorics II. And given that I am currently a second-semester junior, I don’t have that much time left at college.
  • Even this semester, I am taking Algorithms, which meets at the same time as graduate Algebraic Topology. While Algorithms is pretty interesting and the professor is excellent, I am already very familiar with many if not most of the algorithms, extremely familiar with the methods of proof, and I feel that the experience is not as rewarding as possibly taking Algebraic Topology with Allen Hatcher, who wrote the textbook on the subject. I feel that I could learn algorithms at virtually any time I want. But learning algebraic topology with Allen Hatcher is a once-in-a-lifetime opportunity that I am afraid I am missing just because I want to get a CS degree to look good.
  • Even not being a CS major, I will still be taking some CS classes out of curiosity. However, these classes will no longer feel forced, and will not restrict me from taking the higher level math courses that I want to take.
  • My risk strategy for grad school is different now because of the internship. In the past, I would have been willing to settle for a decent math grad school or a really good job. (I would prefer grad school over getting a job, but of course, a good job is better than a mediocre grad school.) However, now that I have my dream internship, I am willing to play the grad school game with more risk.
  • But whether for grad school, trading, or just curiosity, I would prefer taking advanced (graduate) math classes over undergraduate CS classes. In a sense, pursuing the CS degree was a hedge, meant to reduce the cost of the worst-case scenario. I knew it would directly inhibit my ability to take advanced math classes via time conflicts, but the thought was that if I couldn’t get into a good math grad school or get a good job using math, at least I would have a CS degree from Cornell. In this new situation, the risk is significantly reduced and the hedge is no longer necessary.

Interestingly enough, the primary motivation for dropping CS wouldn’t be to slack off, but to be able to explore more advanced math. (At least, that’s what I tell myself.)

I think this might be the second time in my life that I have had to make an important decision. (The first was deciding where to go to college, and I certainly think I made the right choice there.) Unfortunately, I can’t both take as many interesting math courses as I can and pursue a CS degree at the same time. As much overlap as there is, I can’t do both. In an ideal world this might be possible, but not currently at Cornell.

So instead of thinking in terms of math and computer science, I now have to think in terms of math or computer science. I am currently in favor of going with math, but I am not completely sure.

Edit: Thanks for the discussion on Facebook.

Making Mistakes—And Quickly Correcting Them

A couple of days ago on my math blog, I talked about my interview experiences with a certain trading firm. I would normally write about job and life experiences on this blog, but given the amount of mathematics in those interviews, I wrote it over there instead.

An Interview Mistake

One of the things I did not mention in that post was a particular chip-betting situation during one of the on-site interviews. I do not want to give away their on-site questions on the web, but I can say enough to make a point here.

The situation was a game where I had positive expected value. That is, if I played it again and again with my strategy, then over the long run I would gain chips.

My interviewer added a new rule which did not affect the game’s expected payoff, given that I kept the same strategy. However, the new rule was psychologically intimidating enough that I changed my strategy, and after a couple of plays, I realized I was now losing chips on average instead of gaining.

My old strategy would have kept gaining chips, but the new strategy I switched to was losing them. I only realized this after three rounds; just before the interviewer started the fourth, I interjected, said that my new strategy was a bad one, and stated its new (negative) expected value.
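
Since the actual game has to stay unsaid, here is a minimal hypothetical stand-in in Python: a coin that lands heads 60% of the time, with a one-chip stake per round. All the numbers are invented; the point is just the expected-value check I should have redone before switching strategies.

```python
def expected_value(p_win, win_amount=1, lose_amount=1):
    """Expected chips gained per round for a strategy that wins
    the round with probability p_win."""
    return p_win * win_amount - (1 - p_win) * lose_amount

# Hypothetical game: the coin lands heads with probability 0.6.
old_strategy = expected_value(0.6)  # bet heads: +0.2 chips per round
new_strategy = expected_value(0.4)  # bet tails: -0.2 chips per round

print(old_strategy, new_strategy)
```

A rule change that touches neither the win probability nor the payoffs leaves these numbers alone, which is exactly why abandoning the old strategy, rather than the rule change itself, was the real mistake.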

At this moment I felt I had made a fatal error that would be reflected in the hiring decision. But instead of giving me a stern look, my interviewer suddenly became really happy that I had corrected it! In fact, he said that almost everyone they interviewed had done the same thing, switching a good strategy to a bad one when that new rule was added.

Acknowledging Mistakes

The first and most important part of correcting a mistake—and eventually benefiting from one—is to acknowledge the mistake. This most likely sounds trite, but acknowledging a mistake really is the most significant step of this process.

In matters involving numbers, it can usually be very easy to acknowledge a mistake. In my interview, all I had to do was to sense something fishy about the bet, and then recalculate the expected value to see that I had made a mistake. Since numbers don’t lie (and since I had chips on the table), I acknowledged the mistake as quickly as possible.

It can be much tougher, however, when the mistake is on some emotionally vested or less clear-cut issue. We’ve all had arguments with someone where we were totally sure we were correct, and only much later, we realized we were flat-out wrong.

And then sometimes we still maintain our original position even though we know we are completely wrong. This can lead to strange effects; often, a person in such a state of mind is difficult to convince otherwise. Anyone who has tried arguing on the internet can attest to this phenomenon.

Looking at the Evidence

Someone in such a state exhibits several cognitive biases:

  • Refusal to look at opposing evidence.
  • Cherry-picking evidence to only consider supporting evidence.
  • Blaming something else for opposing evidence, and waving it off.
  • Etc.

Suppose that during my interview, I had been adamant that my new strategy was good. After losing chips for a while, I might explain away losing streaks as bad luck, while at the same time explaining winning streaks by superior choice of strategy. I might complain that the coin was unevenly weighted, that the die was rigged, or that the deck had been stacked.

While these are somewhat reasonable conclusions to draw, the problem arises when I am confronted with the fact that my strategy is bad. For instance, suppose I knew I was losing chips (say I had lost 20% of them) but still believed my strategy was winning them, and then the interviewer informed me that my strategy was losing chips. My first reaction, in that state of mind, would be to reject this information and maintain that my loss of chips was due to bad luck or unfair conditions. Of course, this behavior would be disastrous in an interview, and I would probably be rightfully rejected right there.

In the real scenario, I had some intuition about the probabilities involved, so I realized after three rounds that my strategy was flawed. But even if I had no intuition about the probabilities, after playing, say, 10 rounds, I would have seen the evidence that I was losing chips and would have begun questioning my strategy.
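
Here is that idea as a quick Monte Carlo sketch, using the same invented losing strategy as above (win one chip with probability 0.4, lose one otherwise):

```python
import random

random.seed(1)

def play_round(p_win=0.4):
    """One round of the hypothetical losing strategy:
    win 1 chip with probability 0.4, lose 1 chip otherwise."""
    return 1 if random.random() < p_win else -1

chips = 100
history = []
for _ in range(50):
    chips += play_round()
    history.append(chips)

print(history[:10])  # a few rounds can still be masked by luck...
print(history[-1])   # ...but the drift compounds: ~90 chips expected after 50
```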

Catching Mistakes and Learning From Them

Sometimes you are not afforded enough time to think something through completely. In this case, you need to give a most-likely answer, but the important part is to keep thinking about the answer even after you have stated it. Sometimes you might be given additional time to reanalyze it; other times, what you state is final. This can be the worst feeling: catching a mistake only after making a final decision.

I used to play chess competitively, and while at the high levels winning often requires outsmarting your opponent, at the lower levels a win is typically achieved simply by making fewer mistakes than your opponent. If I were ever to get back into chess, my #1 area of improvement would be to reduce the number of blatant mistakes. I have turned many equal or favored positions into hopelessly lost positions by accidentally dropping a piece.

Chess can be psychologically damaging because sometimes you know you’ve made a mistake after you’ve made your move but before your opponent responds. At that point you could hope your opponent doesn’t see your mistake, or you could think about how to avoid that mistake in the future. In the latter part of my chess-playing days, I dwelt too long on the first option and didn’t spend enough time on the second, and as a result, my rating hit a plateau.

Lies, Damned Lies, and Statistics

Another example: in 9th and 10th grade, I went through a phase where I thought global warming was not a well-founded theory. I subscribed to the solar-cycle explanation for the “recent” warming and thought it was more significant than the greenhouse-effect contribution. I do have to add one caveat for the record: even with that position on global warming, I still considered myself an environmentalist—I thought there were many issues with the environment, some far more urgent than global warming, and that global warming shouldn’t have eaten up all the priority and public interest. However, as debates go, my opposition was always able to label me a “denier” of a sort, even though I never really denied it.

Anyway, I think the evidence since then has put a nail in the coffin. I knew the burden of proof was on the solar-cycle model, and I waited to see if the temperature would drop back down. But it kept going up (in fact, even if it had just stayed constant, that would have contradicted the solar-cycle model). Moreover, one of the leading advocates of the solar-cycle model abandoned it a couple of years later. As a result, sometime during 12th grade, I went back to the scientific consensus view.

The Portals of Discovery

Realizing a mistake can be a rewarding experience. There is a quote by Donald Foster:

No one who cannot rejoice in the discovery of his own mistakes deserves to be called a scholar.

And a good one by James Joyce:

Mistakes are the portals of discovery.

[Image: Wile E. Coyote]
(Well, not if you keep making the same mistakes.)