The Mathematician’s Answer is a meta-joke about how mathematicians usually behave in jokes. From TV Tropes:

If you ask someone a question, and he gives you an entirely accurate answer that is of no practical use whatsoever, he has just given you a Mathematician’s Answer.

It goes further on to say: “A common form of giving a Mathematician’s Answer is to fully evaluate the logic of the question and give a logically correct answer. Such a response may prove confusing for someone who interpreted what they said colloquially.”

Perhaps the most famous example is the hot-air balloon joke, where a man in a hot-air balloon asks someone where he is, to which the response is, “You’re in a hot-air balloon!” The rider concludes that the responder must be a mathematician, because the answer given was absolutely correct but utterly useless.

The TV Tropes page lists plenty of examples of the Mathematician’s Answer in dialogue. But this kind of joke sometimes pokes fun at actions as well as words. My favorite is the hotel joke (this version from the Cherkaev “Math Jokes” collection):

An engineer, a physicist and a mathematician are staying in a hotel.

The engineer wakes up and smells smoke. He goes out into the hallway and sees a fire, so he fills a trash can from his room with water and douses the fire. He goes back to bed.

Later, the physicist wakes up and smells smoke. He opens his door and sees a fire in the hallway. He walks down the hall to a fire hose and after calculating the flame velocity, distance, water pressure, trajectory, etc. extinguishes the fire with the minimum amount of water and energy needed.

Later, the mathematician wakes up and smells smoke. He goes to the hall, sees the fire and then the fire hose. He thinks for a moment and then exclaims, “Ah, a solution exists!” and then goes back to bed.

In line with the engineer/physicist/mathematician trio, another great one is the Scottish sheep joke:

A mathematician, a physicist, and an engineer were traveling through Scotland when they saw a black sheep through the window of the train.

“Aha,” says the engineer, “I see that Scottish sheep are black.”

“Hmm,” says the physicist, “You mean that some Scottish sheep are black.”

“No,” says the mathematician, “All we know is that there is at least one sheep in Scotland, and that at least one side of that one sheep is black!”

And then there are the infamous exam answers in which, ironically, it was the students who used the Mathematician’s Answer on their math teachers.

Now, aside from the meta-joke status of the Mathematician’s Answer, is there any truth to it? Do math-minded people really say, “You’re in a hot air balloon,” in real life?

In all the math classes I’ve taken in college, I have never seen a professor give a Mathematician’s Answer unwittingly. Every time it was used, it was clearly meant as a joke. Sure, some professors live up to the mathematician archetype, but they’re all normal people, not John Nashes.

In high school, my favorite form of humor was the pun. Around junior or senior year of college, however, the Mathematician’s Answer somehow became my go-to response whenever I can’t think of anything else to say. It is extremely easy to use and remarkably versatile: almost any situation can set up this kind of joke.

It doesn’t even need to be used in response to a question. Just yesterday, someone remarked that it was March 1st already. Immediately, I added, “Oh yeah, that’s exactly one month away from April 1st.” The same person later asked how far 10 yards was, and, like a true mathematician, I answered by saying it was like 5 yards but double that.

Our campus Internet has one network called “RedRover” and another called “RedRover-Secure.” Someone asked what the difference between these was, and I quickly responded, “Well, they’re the same, except one of them is secure.”

I think it interests me because I’m generally fond of logical and tautological humor. The only downside of the Mathematician’s Answer is that it doesn’t really work in anything related to mathematics. The language of math is designed to minimize ambiguity, and even when a statement does admit two interpretations, it is much harder to distinguish between a literal and a figurative meaning. One of the few mathematical ambiguities I know of is when someone writes

$1 \leq x, y \leq 10$,

do we choose x and y such that x is at least 1 and y is at most 10, or is it that both x and y are between 1 and 10? On the other hand, Mathematician’s Answer works really well in areas as far removed from mathematics as possible. Anyway, here is one last example:

An engineer, a physicist and a mathematician find themselves in an anecdote, indeed an anecdote quite similar to many that you have no doubt already heard. After some observations and rough calculations the engineer realizes the situation and starts laughing. A few minutes later the physicist understands too and chuckles to himself happily as he now has enough experimental evidence to publish a paper.

This leaves the mathematician somewhat perplexed, as he had observed right away that he was the subject of an anecdote, and deduced quite rapidly the presence of humor from similar anecdotes, but considers this anecdote to be too trivial a corollary to be significant, let alone funny.

# The Signal and the Noise, and Other Readings

The Signal and the Noise

Since last year’s presidential election, everyone has heard of the legendary Nate Silver, who predicted the outcomes of all 50 states correctly. Given that he also correctly predicted 49 out of 50 states in the 2008 election, this repeat feat seemed like clairvoyance, not coincidence. So the question is, what did Silver do right that so many polls and pundits did wrong?

Statistics.

The Signal and the Noise (2012) is essentially a popular applied statistics book, with more history, philosophy, and psychology than formulas. The first half illustrates failures of prediction, including the 2007–8 financial crisis, elections, sports, and natural disasters; the second half explains how to make predictions the right way, using Bayesian probability. Overall it does an excellent job of explaining the concepts without going into mathematical detail (probably a plus for most readers; and even a math person like me knows where to look up the details).
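For the curious, the Bayesian updating at the heart of the book’s second half is a one-liner. Here is a minimal sketch with made-up numbers (not an example from the book):

```python
from fractions import Fraction

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after seeing the evidence (Bayes' rule)."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Made-up numbers: a 50% prior that a candidate is winning, and a poll result
# that is three times as likely to occur if she really is winning.
posterior = bayes_update(Fraction(1, 2), Fraction(3, 4), Fraction(1, 4))
print(posterior)  # 3/4
```

The point Silver hammers on is that the prior matters as much as the evidence: rerunning the same update with a skeptical prior of 10% yields a posterior of only 25%.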

Sidenote: While I was reading the chess section, my mind literally blanked for about 10 seconds at one of the position diagrams.

My chess intuition immediately told me that something was wrong: there is no way the position could have occurred “after Kasparov’s 3rd move.” Since Kasparov was playing White, White must have made 3 moves, yet the diagram showed only two: the knight on f3 (from g1) and the pawn on b3 (from b2). But this book was written by Nate Silver, so surely he couldn’t have gotten something so simple wrong. Once I accepted that it must be a mistake, I looked up the game and found that at that point, the g2 pawn should have been on g3. I thought it was an interesting mind lapse.

Breaking the Spell

This book argues that scientific analysis should be applied to religion. The title refers to the taboo against rational discussion of religion; to “break the spell” is to break that taboo. The book also discusses theories of how religion arose; fittingly, these are themselves evolutionary theories, as they concern how modern religion evolved over time from ancient spiritual beliefs (e.g., which specific doctrines maximize a belief system’s chances of survival).

Reading this means I have now read at least one book from each of the four “horsemen”: Dawkins, Dennett, Harris, and Hitchens. Of the four, Dennett is by far the least provocative. While the other three outright apply logical analysis to religion, Dennett carefully argues that one should be allowed to analyze religion just as one can analyze any other phenomenon. This book should be nowhere near as controversial as The God Delusion or The End of Faith.

Overall the book makes good points but is quite slow, makes overly cautious caveats, and has a very formal tone. I think if someone like Dawkins had written this, it would be much more readable. I wouldn’t really recommend this to anyone who doesn’t have a lot of interest in philosophy.

CEO Material

The main competitive advantage of this book over the typical leadership book is that it quotes liberally from 100+ real CEOs. These first-hand experiences supplement the author’s main points quite well. However, presumably for the sake of privacy, the quotations are not attributed, so it is sometimes difficult to tell how a particular passage applies to a given situation. For example, do I want to take a food-company CEO’s advice on a particular issue and apply it to running a tech company? The overall message may be similar, but clearly the details matter. Some say that context is everything, and without the context of who said it, each quote carries much less weight.

Most of the points seemed like common sense, although that is to be expected—the system is efficient enough that if the most effective behavior for a CEO were radically different from what we already do, then we would have adapted to that already (hopefully). Even so, there are still some interesting points made with real justifications, though again it would be helpful if we knew who said each quote, even for a few of them. In all, Benton did make points that changed the way I look at things, so it was worth reading.

The Blind Watchmaker

While The Selfish Gene focuses on how genes propagate themselves and how they dynamically compete over time (evolutionary game theory), The Blind Watchmaker covers an entirely different issue: How did complexity arise?

Some of its answers, written at an earlier time (1986), seem somewhat outdated now, ironically more so than The Selfish Gene which was written even earlier in 1976. This is probably due to The Selfish Gene being more of “Here’s the progress we made in the last decade” when it was written, while The Blind Watchmaker is more along the lines of “Here’s why this work from 1802 is nonsense” and that this counter-argument doesn’t particularly need to invoke the most up-to-date findings.

But anyway, we don’t judge books by how outdated they seem 30 years later, so let’s move on to the content. Due to its premise, the book is more philosophical than The Selfish Gene, which is more strictly scientific and hardly addresses the conflict between evolution and religion at all. While The Blind Watchmaker still contains a formidable amount of science, it takes on some philosophical questions as well and confronts the conflict head-on. I would recommend it to those looking to question philosophical beliefs, whether others’ or their own.

Mortality

Of the books in this post, Mortality is the answer choice that doesn’t belong with the others. While the other four are strict nonfiction works that try to explain or teach something, Mortality reads more like a dramatic story: the story of coming to terms with terminal illness. Hitchens opens with the stark statement, “I have more than once in my life woken up feeling like death.” As usual, Christopher Hitchens’ signature writing style and tone are apparent.

“What do I hope for? If not a cure, then a remission. And what do I want back? In the most beautiful apposition of two of the simplest words in our language: the freedom of speech.”

“It’s probably a merciful thing that pain is impossible to describe from memory.”

“The politicized sponsors of this pseudoscientific nonsense should be ashamed to live, let alone die. If you want to take part in the ‘war’ against cancer, and other terrible maladies, too, then join the battle against their lethal stupidity.”

“The man who prays is the one who thinks that god has arranged matters all wrong, but who also thinks that he can instruct god how to put them right.”

“I have been taunting the Reaper into taking a free scythe in my direction and have now succumbed to something so predictable and banal that it bores even me.”

“Myself, I love the imagery of struggle. I sometimes wish I were suffering in a good cause, or risking my life for the good of others, instead of just being a gravely endangered patient.”

“To the dumb question ‘Why me?’ the cosmos barely bothers to return the reply: why not?”

# Why Are College Students Not Choosing Math/Science?

From the Wall Street Journal in 2011:

Although the number of college graduates increased about 29% between 2001 and 2009, the number graduating with engineering degrees only increased 19%, according to the most recent statistics from the U.S. Dept. of Education. The number with computer and information-sciences degrees decreased 14%.

After coming up with the topic for the post, I found this article from 2011 with a similar title and citing the same WSJ story. It argued that the high school teaching environment was not adequate in preparing students for rigorous classes in college.

In addition, the article includes the argument that in the math and sciences, answers are plain right or wrong, unlike in the humanities and social sciences.

I can agree with these two points, but I want to add a few more, from the perspective of 2013. Also, I am going to narrow the STEM group down a bit, to just math and science. The main reason is that in the past few years, the number of CS majors has actually increased rapidly. At Cornell, engineering classes can be massive, and there does not seem to be a shortage of engineers. Walk into a non-introductory, non-engineering-oriented math class, however, and you can often count the students on your fingers. So even though STEM as a whole is in a suboptimal situation, engineering and technology (especially computer science) seem to be doing fine. So the question remains.

Why Is America Leaving Math and Science Behind?

I mean this especially with regards to theoretical aspects of math and science, including academia and research.

In this situation, money is probably a big factor. The salary of a post-grad scientist (from one article, \$37,000 to \$45,000) is pitiful compared to that in industry (which can have a median early-career salary of up to \$95,000, depending on the subject, according to the same article). Essentially, there is a lack of a tangible goal.

There are other factors besides money. Modern math and science can be quite intimidating. All major results that could be “easily” discovered have already been discovered. In modern theoretical physics, for instance, the only questions that remain are in the very large or the very small—there is little left to discover of “tabletop” physics, the physics that operates at our scale. Most remaining tasks are not problems in physics, but puzzles in engineering.

Modern mathematics is very similar. While there are many open questions in many fields, the important ones are highly abstract. Even stating a problem takes a tremendous amount of explanation. That is, it takes a long time to convey to someone what exactly it is you are trying to figure out. The math and science taught in high school is tremendously unhelpful in preparing someone to actually figure out new math and science, and it is thus difficult for an entering college student to adjust their views of what math/science are.

Even the reasons for going to college have changed. More than ever, students list their top reason for going to college as getting better job prospects rather than for personal or intellectual growth.

In addition, society seems more focused than before on immediate gain rather than long-term investment. Academia’s contribution to society, especially in math and science, is often not felt until decades or even centuries after something is invented. Einstein’s theories of relativity had no practical application when he developed them, but our gadgets now use relativity all the time. Classical Greece knew about prime numbers, but prime numbers were not useful until modern data encryption required them. Even a prolific academic might receive very little recognition in their own lifetime.

However, with the rise of online social networks in the last several years, you can now see what your friends are up to and what they are accomplishing in real-time. This should at least have some psychological effect on pushing people towards a career where real, meaningful progress can be tracked in real-time. Doing something that will only possibly have an impact decades later seems to be the same as doing nothing.

Considering the sentiment of the last few paragraphs, it might sound like I am talking about the decline in humanities and liberal arts majors. Indeed, while the number of math and science majors is increasing (though not as much as in engineering/technology), it almost seems like the theoretical sides of math and science are closer in spirit to the humanities and liberal arts than they are to STEM. The point is not for immediate application of knowledge, but to make contributions to the overall human pool of knowledge, to make this knowledge available to future generations.

Is this just a consequence of the decline of education, or of the fall of academia in general? STEM is not really education in the traditional sense. It is more like technical training.

In all, the decline of interest in theoretical math/science is closely correlated with the decline of interest in the humanities/liberal arts. Our culture is fundamentally changing to one that values practicality far more than discovery. (For instance, when is NASA going to land a human on Mars? 2037. JFK might have had a different opinion.) Overall this is a good change, mainly in the sense of re-adjusting the educational demographics of the workforce to keep America relevant in the global economy. But we should still place some value on theory and discovery.

• National Science Foundation statistics – [link]
• National Center for Education Statistics – [link]
• Pew social trends – [link]

# Survival of the Selfish Gene

After reading The God Delusion, I decided to study some of Richard Dawkins’ earlier works. For this post, I read The Selfish Gene (and among the books on my queue are The Blind Watchmaker and The Greatest Show on Earth).

Published in 1976, The Selfish Gene explores the phenomena at play regarding the behavior of replicators, namely genes and memes. I was expecting to see lots of biological arguments, and while there are many, I was shocked at what I found was the main tool used in the book: game theory.

Of course, once you think about it, it makes perfect sense that game theory is extremely important when talking about genes and how they spread from one generation to the next. And by game theory, I do not mean board games or video games, but economic game theory, applied to biology in what is now known as evolutionary game theory. In fact, this book would be an excellent read for people interested in mathematics or economics, in addition to the obvious audience of those interested in biology. Dawkins uses concepts like Nash equilibria (though, given the book’s date, not by that name) and the Prisoner’s Dilemma, to name a couple, to explain many behaviors found in various animals, including humans. This kind of game-theoretic analysis followed largely from the work of John Maynard Smith.
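To get a flavor of the kind of analysis Maynard Smith introduced, here is a minimal sketch of his classic hawk-dove game with made-up payoff numbers (my illustration, not an example taken from the book):

```python
from fractions import Fraction

# Hawk-dove game: V = value of the contested resource, C = cost of an
# escalated fight, with C > V so that all-out aggression doesn't pay.
V, C = Fraction(2), Fraction(4)
payoff = {
    ("H", "H"): (V - C) / 2,   # two hawks escalate: share the resource and the injuries
    ("H", "D"): V,             # hawk takes everything from a retreating dove
    ("D", "H"): Fraction(0),   # dove retreats and gets nothing
    ("D", "D"): V / 2,         # two doves share peacefully
}

def fitness(strategy, p_hawk):
    """Expected payoff of `strategy` against a population playing Hawk with probability p_hawk."""
    return p_hawk * payoff[(strategy, "H")] + (1 - p_hawk) * payoff[(strategy, "D")]

# The evolutionarily stable mix equalizes hawk and dove fitness: p* = V / C.
p_star = V / C
print(p_star, fitness("H", p_star) == fitness("D", p_star))  # 1/2 True
```

Neither pure strategy is stable (an all-dove population is invaded by hawks, and vice versa); the population settles at the mixed equilibrium, which is exactly the kind of result Dawkins leans on to explain observed ratios of aggressive and docile behavior.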

In addition to a bit of game theory, I have also studied dynamical systems, though from the perspective of pure math rather than biology. Even so, the concepts in the book felt very familiar. I do not think The Selfish Gene is controversial from an academic standpoint. The now 40-year-old ideas are still relevant today, and they are really not that difficult to understand, given a sufficient mathematical and scientific background.

Instead, the controversy around the book seems to come solely from the title itself, and perhaps the attached stigma to writing anything about evolution, which seems to be more of an issue today than it was in 1976. Dawkins notes this years later in the preface to the second edition:

This is paradoxical, but not in the obvious way. It is not one of those books that was reviled as revolutionary when published, then steadily won converts until it ended up so orthodox that we now wonder what the fuss was about. Quite the contrary. From the outset the reviews were gratifyingly favourable and it was not seen, initially, as a controversial book. Its reputation for contentiousness took years to grow until, by now, it is widely regarded as a work of radical extremism.

I do find this amusing. It seems to have to do not with the theory of evolution itself, but with the unfortunate anti-intellectual sector of the US. (Of course, Dawkins is from the UK, but I am talking about American opinion of these kinds of books.)

In current society it seems like a fad to wear one’s ignorance on one’s sleeve, as if boastfully declaring, “My ignorance is just as good as your knowledge.” Of course I am not advocating that we should go the opposite direction and be ashamed for not learning, but we should be able to come together and agree that ignorance is not a virtue, especially not in the most scientifically advanced country in the world. I am not really sure how the United States is supposed to recover from this, other than that we become more reasonable over time. And that will take education, not ignorance.

The title of the book is misleading only if one does not understand what the word “selfish” describes. The “selfish gene” is not a gene that causes selfishness in individuals (an ambiguous notion in itself); rather, “selfish” modifies “gene” directly: genes propagate themselves in a manner that appears selfish. The individual is merely a “survival machine” for the gene. There is a critical difference between the two notions.

The selfish gene is merely a gene that, for practical reasons, has a higher chance of being passed on. It does not really contradict any current notion of evolution, and in fact, at the time of publication, it became the new and improved theory of evolution that is now the textbook standard. In any case, the message is that evolution works not by the survival of the fittest individuals, but by the survival of the fittest, or most selfish, genes.

When we look at the selfish gene, there are situations (as demonstrated in the book) where the intrinsically selfish thought appears on the outside as altruistic. Mutual back-scratching benefits both individuals, and moreover, benefits the gene for it, thus making the gene more likely to spread. So while the behavior of back-scratching seems altruistic, it may be nothing more than concealed selfishness. This idea can be extrapolated to many phenomena. Often people put on acts and fake displays of kindness only for the selfish benefit of “seeming” nice. Or they are so “humble” that they announce their humbleness everywhere they please and make you feel bad for not being as humble as they are. The list goes on. However, I will not comment too much on this as this goes under cultural behavior and not strictly genetic behavior, although they are related.

The controversy around this book also seems to stem from perceived personal offense. Included in The Selfish Gene is an interesting quote from Simpson regarding historical developments in explaining how the current species on Earth came to be:

Is there a meaning to life? What are we for? What is man? After posing the last of these questions, the eminent zoologist G. G. Simpson put it thus: ‘The point I want to make now is that all attempts to answer that question before 1859 are worthless and that we will be better off if we ignore them completely.’

While this statement is perfectly true when it comes to understanding biology, I can see how religious people might take offense. To declare that all mythological ideas in this area before Darwin’s The Origin of Species are worthless is a bold claim, even when it is correct.

Regarding the actual content, I have already mentioned that Dawkins makes extensive use of game theory. There are many numbers in some of the more technical chapters, which may make the book difficult to read in real time unless the reader is versed in mental arithmetic. Still, with some deliberate thought, any reader should be able to get through those chapters.

The Selfish Gene is a remarkable book, giving clear explanations of basic biology and evolutionary game theory for the layman. It is a shame that such educational material is viewed as controversial. I wish I could succinctly summarize the fascinating interplay of evolutionary game theory in a single post, but it would be better to leave it to you to pick up this book and think about it for yourself. If you do not like evolution, however, you have been warned.

# Math or Computer Science?

Well, this is an interesting situation. Just a month ago I announced that I was adding a computer science degree, so that I am now double majoring in math and computer science. The title of that post, after all, was “Computer Science AND Math.” Given the circumstances at the time, I think it was a good decision. My work experience had been mostly in software, and a CS degree from Cornell should look pretty good. In addition, I wanted a more practical skill set.

In the past week, however, things have changed. I received and accepted an internship offer from my dream workplace, based on my background in mathematics and not in CS (though my prior CS experience was a plus). Based on this new situation, I have considered dropping the CS major (next semester) and taking more advanced math:

• The CS degree has some strict course requirements, and I am afraid that if I go for the degree, I may be forced to skip certain math classes that I really want to take. For instance, I may have to take a required CS class next semester that has a time conflict with graduate Dynamical Systems, or with Combinatorics II. And given that I am currently a second-semester junior, I don’t have that much time left at college.
• Even this semester, I am taking Algorithms, which meets at the same time as graduate Algebraic Topology. While Algorithms is pretty interesting and the professor is excellent, I am already very familiar with many if not most of the algorithms, and extremely familiar with the methods of proof, so the experience is not as rewarding as taking Algebraic Topology with Allen Hatcher, who wrote the textbook on the subject, would be. I feel that I could learn algorithms at virtually any time I want, but learning algebraic topology with Allen Hatcher is a once-in-a-lifetime opportunity that I am afraid I am missing just because I want a CS degree to look good.
• Even not being a CS major, I will still be taking some CS classes out of curiosity. However, these classes will no longer feel forced, and will not restrict me from taking the higher level math courses that I want to take.
• My risk strategy for grad school is different now because of the internship. In the past, I would have been happy with either a decent math grad school or a really good job. (I would prefer grad school over getting a job, but of course, a good job is better than a mediocre grad school.) However, now that I have my dream internship, I am willing to play the grad school game with more risk.
• But whether for grad school, trading, or just for curiosity, I would prefer taking advanced (graduate) math classes over undergraduate CS classes. In a sense, my taking of the CS degree was a hedge bet, as I wanted to reduce the possible cost of the worst case scenario. I knew that it would directly inhibit my ability to take advanced math classes via class time conflicts, but the thought was that if I couldn’t get into a good math grad school or get a good job using math, at least I would have a CS degree from Cornell. But, in this new situation, I think the risk is significantly reduced and the hedge is no longer necessary.

Interestingly enough, the primary motivation for dropping CS wouldn’t be to slack off, but to be able to explore more advanced math. (At least, that’s what I tell myself.)

I think this might be the second time in my life that I have had to make an important decision. (The first was deciding where to go to college, and I certainly think I made the right choice there.) Unfortunately, I can’t take as many interesting math courses as I want while also pursuing a CS degree. As much overlap as there is, I can’t do both. In an ideal world this might be possible, but not currently at Cornell.

So instead of the idea of having math and computer science, I am now having to think in terms of math or computer science. I am currently in favor of going with math, but I am not completely sure.

Edit: Thanks for the discussion on Facebook.

# Making Mistakes—And Quickly Correcting Them

A couple days ago on my math blog, I talked about my interview experiences with a certain trading firm. I would normally write about job or life experiences on this current blog, but given the amount of mathematics in those interviews, I wrote it over there instead.

An Interview Mistake

One of the things I did not mention in that post was a particular chip betting situation during one of the on-site interviews. I do not want to give away their on-site questions on the web, but I can say enough of it to make a point here.

The situation was a game where I had positive expected value. That is, if I played it again and again with my strategy, then over the long run I would gain chips.

My interviewer added a new rule to the game which did not affect the expected payoff of the game given that I kept the same strategy. However, the new rule was psychologically intimidating enough that I changed my strategy, and after a couple of plays, I realized I was now losing chips on average, instead of gaining.

My old strategy would have kept gaining chips, but the new strategy I switched to was losing them. I realized this only after 3 rounds, and just before the interviewer started the 4th round, I interjected that my new strategy was a bad one and stated its new (negative) expected value.
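For a sense of the arithmetic involved: since the real interview game stays undisclosed, here is a made-up stand-in that shows the same effect, where a tempting rule change flips a positive expected value negative:

```python
from fractions import Fraction

# Hypothetical stand-in game (NOT the actual interview question):
# roll a fair six-sided die and receive (face - 3) chips.
faces = range(1, 7)
p = Fraction(1, 6)

# Original strategy: just play. EV = 3.5 - 3 = +1/2 chip per round.
ev_original = sum(p * (face - 3) for face in faces)

# Suppose a new rule tempts me into doubling the stakes for a flat 2-chip fee.
# The doubled edge (+1 chip) no longer covers the fee, so the EV flips negative.
ev_switched = sum(p * (2 * (face - 3) - 2) for face in faces)

print(ev_original, ev_switched)  # 1/2 -1
```

Three rounds of exact arithmetic like this is all it takes to see that the switched strategy bleeds about a chip per round, which is the calculation I belatedly did at the table.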

At this moment I felt I had made a fatal error that would be reflected in the hiring decision. But instead of giving me a stern look, my interviewer became really happy that I had corrected it! In fact, he said that almost everyone they interviewed had done the same thing, switching from a good strategy to a bad one when that new rule was added.

Acknowledging Mistakes

The first and most important part of correcting a mistake—and eventually benefiting from one—is to acknowledge the mistake. This most likely sounds trite, but acknowledging a mistake really is the most significant step of this process.

In matters involving numbers, it can usually be very easy to acknowledge a mistake. In my interview, all I had to do was to sense something fishy about the bet, and then recalculate the expected value to see that I had made a mistake. Since numbers don’t lie (and since I had chips on the table), I acknowledged the mistake as quickly as possible.

It can be much tougher, however, when the mistake is on some emotionally vested or less clear-cut issue. We’ve all had arguments with someone where we were totally sure we were correct, and only much later, we realized we were flat-out wrong.

And then sometimes we still maintain our original position even though we know we are completely wrong. This can lead to strange effects, but often, a person in such a state of mind is difficult to convince otherwise. Anyone who has tried arguing on the internet can give testament to this phenomenon.

Looking at the Evidence

Someone in such a state exhibits several cognitive biases:

• Refusing to look at opposing evidence.
• Cherry-picking only the evidence that supports their position.
• Explaining away opposing evidence by blaming something else.
• Etc.

Let’s say that during my interview, I was adamant that my new strategy was good. After I start losing chips for a while, I might explain away losing streaks as bad luck, while at the same time explaining winning streaks by superior choice of strategy. I might complain that the coin was unevenly weighted, that the die was rigged, or that the deck had been stacked.

These rationalizations are not entirely unreasonable on their own; the real problem would come when I was confronted with the fact that my strategy was bad. Suppose I knew I was losing chips (say I had lost 20% of them) but still believed my strategy was a winning one, and the interviewer then informed me that my strategy was losing chips. My first reaction, in that state of mind, would be to reject the information and insist that my losses were due to bad luck or unfair conditions. Of course, this behavior would be disastrous in an interview, and I would probably be rightfully rejected on the spot.

In the real scenario, I had some intuition about the probabilities involved, so I realized after three rounds that my strategy was flawed. But even if I had no intuition about the probabilities, after playing, say, ten rounds, I would have seen the evidence that I was losing chips and started questioning my strategy.
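The dynamic here is just the law of large numbers: over enough rounds, the running average payoff exposes a negative-EV strategy no matter how the luck falls. The interview game itself isn't reproduced here; the two bets below are made-up stand-ins with one positive and one negative expected value.

```python
import random

# Hypothetical stand-ins for the interview game (the real rules aren't stated above).
# "safe" pays 1.1 chips on heads, loses 1 on tails: EV = 0.5*1.1 - 0.5*1.0 = +0.05.
# "risky" pays 4 chips on rolling a six, loses 1 otherwise: EV = 4/6 - 5/6 = -1/6.

def play_round(strategy):
    if strategy == "safe":
        return 1.1 if random.random() < 0.5 else -1.0
    return 4.0 if random.random() < 1 / 6 else -1.0

def average_payoff(strategy, rounds=100_000):
    """Average payoff per round; converges to the true expected value."""
    return sum(play_round(strategy) for _ in range(rounds)) / rounds
```

After a handful of rounds the averages are still noisy, which is why three or ten rounds only give a hint; over many rounds the negative-EV strategy is unmistakably losing chips.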

Catching Mistakes and Learning From Them

Sometimes you are not afforded enough time to think something through completely. In that case you have to give your best answer, but the important part is to keep thinking about it even after you have stated it. Sometimes you are given additional time to reanalyze; other times your answer is final. That can be the worst feeling: catching a mistake only after making a final decision.

I used to play chess competitively, and while at the high levels winning often requires outsmarting your opponent, at the lower levels a win is typically achieved simply by making fewer mistakes than your opponent. If I were ever to get back into chess, my #1 area of improvement would be to reduce the number of blatant mistakes. I have turned many equal or favored positions into hopelessly lost positions by accidentally dropping a piece.

Chess can be psychologically punishing because you sometimes know you've made a mistake after making your move but before your opponent responds. At that point you can hope your opponent doesn't see the mistake, or you can think about how to avoid it in the future. In the latter part of my chess playing, I dwelt too long on the first option and not enough on the second, and as a result my rating hit a plateau.

Lies, Damned Lies, and Statistics

In 9th and 10th grade I went through a phase where I thought global warming was not a well-founded theory. I subscribed to the solar cycle explanation for the "recent" warming, and thought it was more significant than the greenhouse effect contribution. I do have to add one caveat for the record: even with that position on global warming, I still considered myself an environmentalist. I thought there were many issues with the environment, some far more urgent than global warming, and that global warming shouldn't have eaten up all the priority and public interest. As debates go, though, my opponents were always able to label me as a "denier" of sorts, even though I never actually denied it.

Anyway, I think the evidence since then has put the nail in the coffin. I knew the burden of proof was on the solar cycle model, so I waited to see whether the temperature would drop back down. It kept going up (in fact, even if it had just stayed constant, that would have contradicted the solar cycle model). Moreover, one of the leading advocates of the solar cycle model abandoned it a couple of years later. As a result, sometime during 12th grade, I came back to the scientific consensus view.

The Portals of Discovery

Realizing a mistake can be a rewarding experience. There is a quote by Donald Foster:

No one who cannot rejoice in the discovery of his own mistakes deserves to be called a scholar.

And a good one by James Joyce:

Mistakes are the portals of discovery.

(Well, not if you keep making the same mistakes.)

# Thoughts on Classes, Spring 2013

In a previous article, I posted my schedule and wrote about my decision to double major in mathematics and computer science. The computer science department seems to be quite backed up at the moment, so I have not received any official response yet.

I can see why the CS department is so backed up. In most of my experience at Cornell, I had class sizes of 10-30, with larger sizes (150-250) only in introductory courses such as Sociology 1101 or Astronomy 1102. It would be quite rare to have an advanced course with that many people in it.

But CS easily has 150-250 people in each class. In the first few days, even in large lecture halls, there were no seats left, and latecomers had to sit in the aisles. I think students here see CS as too lucrative a skill to pass up. Difficult or otherwise time-consuming homework assignments have since thinned the crowds somewhat, but the CS classes still have around 150-250 people each. My math classes, on the other hand, have 14 and 6 people respectively.

Math 4340 – Abstract Algebra

Professor: Shankar Sen

This is a fairly trivial class so far. We are covering basic group theory and it is quite a relief compared to some of the more intense math I did last semester (*cough* topology). The course is supposed to move on to rings and modules later; however, in linear algebra we actually covered much of the foundations of ring theory and modules.

Despite the easiness of the material so far, the homework grading has been quite harsh: I usually skip writing out every rigorous step when I think a part is obvious. In my opinion, learning the material matters more than writing down every detail of a proof.

Math 7370 – Algebraic Number Theory

Professor: Shankar Sen

There are no exams, no prelims, and no homework. However, it is a graduate level seminar-type class and it is pretty insane. I have put up my lecture notes on Scribd, and even if you know nothing about college math, if you click that link, you can probably see how much more difficult 7370 is than 4340.

It is a really good thing I had a basic introduction to ring theory and modules before taking this course. Knowing what PID (principal ideal domain) and UFD (unique factorization domain) mean, knowing the difference between prime and irreducible, etc., was extremely helpful.

This class is even more difficult than the graduate Complex Analysis course that I took last year. Before I took complex analysis, I actually knew quite a bit about complex variables, complex functions, and contour integrals. I had even studied the Riemann zeta function in high school. And on top of that, I was not the only undergraduate in that class—there were at least 3 others.

But for algebraic number theory, this is really new material, most of which I haven’t seen or even heard of, and moreover, I am the only undergrad in the class. However, I talk with the professor outside of class and I am confident that I can learn the material if I really try.

Math 4900 – Independent Research/Reading – Elliptic Curves

Since I felt that I was doing too much CS and not enough math, I decided to add on an independent reading class. The book is The Arithmetic of Elliptic Curves by Joseph Silverman.

I have seen elliptic curves in complex analysis in the form of the Weierstrass P-function and equating points in the complex plane by a lattice. To see the algebraic side of it will be interesting though, especially because I am interested in number theory for possible research.
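The algebraic side starts with the group law: two points on the curve determine a chord (or a tangent, when doubling), and the third intersection with the curve, reflected, is their sum. A minimal sketch over a finite field, using a toy curve and toy points rather than anything from Silverman's book:

```python
# Point addition on a short-Weierstrass curve y^2 = x^3 + a*x + b over F_p.
# None represents the point at infinity (the group identity).

def ec_add(P, Q, a, p):
    """Add points P and Q on the curve, working modulo the prime p."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # P + (-P) = point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p         # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)
```

For example, on y² = x³ + 2x + 3 over F₉₇ the point (3, 6) lies on the curve, and doubling it with this rule lands back on the curve, as the group law guarantees.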

In addition to this official reading, I am also reading and doing problems from Tom Apostol’s Introduction to Analytic Number Theory, so that I can get both the algebraic and analytic sides to it.

CS 4820 – Introduction to Algorithms

Professor: Dexter Kozen

This is a really fun theoretical and mathematically oriented class. After all, Kozen is practically a mathematician.

Given my mathematical background, especially the combinatorics class I took last semester, this algorithms course has been fairly trivial so far, but I expect it to become more sophisticated once we get past the introductory material. For instance, on our Piazza discussion board, one student asked how a proof by contradiction works; in topology alone, I probably used a hundred of them.

In addition, Kozen shares some very interesting stories during lecture. Just last Friday, while talking about dynamic programming, he described a project that used body scan data to determine how many dimensions are needed to store the size information of a human body. "Are women 2-dimensional? I don't think so," said Kozen. In fact, he recalled from the study that women's measurements required around five dimensions, and men's required fewer.

Also, when he was explaining the growth of the Ackermann function A(n), he noted that even A(4) was an extraordinarily large number, and in fact that it was “even higher than Hopcroft’s IQ.”
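For the curious, here is the standard two-argument Ackermann function; the single-argument A(n) in lecture is usually defined as ackermann(n, n). The values explode almost immediately, which is the whole joke: ackermann(3, 3) is a modest 61, but already ackermann(4, 2) has nearly 20,000 digits.

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)  # the recursion gets deep fast

@lru_cache(maxsize=None)
def ackermann(m, n):
    """Two-argument Ackermann function; grows faster than any primitive
    recursive function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

Only the smallest inputs are feasible to evaluate directly; anything at m = 4 and beyond exists on paper, not in memory.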

CS 4850 – Mathematical Foundations for the Information Age

Professor: John Hopcroft

From the title of this course, one might think it is really easy, but even as a math major, I find it nontrivial (that means hard, in math terms). In fact, I’d say at least 30-40% of the class has dropped since the first day. The fact that Hopcroft won a Turing award makes the class no easier.

It is essentially a mathematics and statistics course with applications. We proved the Central Limit Theorem on the first day and then looked at spheres in high dimensions, with the aim of generating random direction vectors in high dimensions. As it turns out, most of the volume of a high-dimensional sphere lies in a narrow annulus or shell near the surface, and when a given point is taken to be the north pole, most of the volume lies near the equator.
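The equator concentration is easy to check by simulation, using the standard trick of normalizing a Gaussian vector to sample uniformly from the sphere (a sketch, not course code):

```python
import math
import random

def random_unit_vector(d):
    """Uniform random point on the unit sphere in d dimensions:
    normalize a vector of independent standard Gaussians."""
    v = [random.gauss(0, 1) for _ in range(d)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def equator_fraction(d, samples=2000, band=0.1):
    """Fraction of random points with first coordinate within +/- band,
    i.e. near the 'equator' when the first axis points at the north pole."""
    hits = sum(abs(random_unit_vector(d)[0]) < band for _ in range(samples))
    return hits / samples
```

In 3 dimensions only about 10% of the sphere sits within 0.1 of the equator, but in 1000 dimensions nearly all of it does, since each coordinate has standard deviation roughly 1/√d.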

Currently we are studying properties of large random graphs, in particular properties that appear suddenly once the edge density of the graph passes a certain threshold. For instance, below a certain number of edges the components of the graph are all small, but above that number a giant component emerges. For one assignment, I showed how this giant-component phenomenon arises in the connections of the Reddit community.
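The threshold behavior is easy to see without any graph library: in the Erdős–Rényi model G(n, p) with p = c/n, the largest component has O(log n) vertices for c < 1 but spans a constant fraction of the graph for c > 1. A toy sketch using union-find (the parameters here are arbitrary, not from the assignment):

```python
import random
from collections import Counter

def largest_component(n, p, seed=0):
    """Size of the largest connected component of an Erdos-Renyi G(n, p)
    random graph, tracked with a union-find structure."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):  # find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:  # include edge (i, j) independently
                parent[find(i)] = find(j)

    return max(Counter(find(v) for v in range(n)).values())
```

With n = 1000, p = 0.5/n leaves only scattered small components, while p = 3/n produces a giant component covering most of the vertices.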

CS 3410 – Computer System Organization and Programming

Professor: Hakim Weatherspoon

In contrast to the high-level programming I have done in the past, this course is about low-level programming and the hardware-software boundary. The programming language for this course is C.

We are building a processor from the ground up, starting with basic logic gates. The first project was to design a 32-bit arithmetic logic unit (ALU) in Logisim, a circuit simulation program. For instance, one subcircuit we needed was a 32-bit adder with overflow detection.

The above picture is actually a screenshot of the overall ALU that I designed for the class. The subcircuits are not shown (this project is not due yet, so it would break academic integrity to show a more coherent solution).
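The overflow logic such an adder needs can be sketched in software: build a one-bit full adder out of gate operations, ripple the carry through 32 bits, and flag overflow when the carry into the sign bit differs from the carry out of it. This is a generic ripple-carry sketch, not my actual circuit.

```python
def full_adder(a, b, cin):
    """One-bit full adder: sum is XOR of the inputs, carry is their majority."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def add32(a, b):
    """Ripple-carry addition of two 32-bit values (given as unsigned ints).
    Signed overflow = carry into bit 31 XOR carry out of bit 31."""
    result, carry, carry_in_msb = 0, 0, 0
    for i in range(32):
        bit, new_carry = full_adder((a >> i) & 1, (b >> i) & 1, carry)
        result |= bit << i
        if i == 31:
            carry_in_msb = carry  # carry arriving at the sign bit
        carry = new_carry
    overflow = carry_in_msb ^ carry
    return result, overflow
```

For example, 0x7FFFFFFF + 1 wraps to the most negative value and sets the overflow flag, while 0xFFFFFFFF + 1 (that is, -1 + 1 in two's complement) wraps to 0 with no signed overflow.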