Computer Science and Math

At the end of last semester, I decided to double major in computer science and math, rather than just in math. This decision was based on several reasons:

  • Practicality. As much as I love theoretical math, most of it is totally irrelevant to the real world. CS is closely related but far more useful.
  • Opportunity. Cornell has a top-rated CS department, and it would be a shame for me not to take advantage of it. I am also virtually done with my math major, so the CS major does not cut into it.
  • Expanding my skill set. CS is a strong backup in case I don’t get anywhere with math.

The catch is, being a junior already, I need to rush the major in my remaining 3 semesters (including this one). This will require quite a bit of work, but due to my math major, I have much of the foundation done. I also have every liberal arts distribution requirement out of the way. In addition, during my sophomore year I took CS 2110, so that is another requirement done. My schedule for this semester is below.

2013 Spring Schedule

The most interesting class will be Math 7370, which is Algebraic Number Theory at the graduate level. I have some background in analytic number theory but not in algebraic number theory, so it will be interesting to see the differences. Also, the same professor is teaching 4340 and 7370, so I should have ample opportunity to ask any questions about algebra.

As for CS classes, I hear CS 3410 has a lot of work, but I am prepared. Also, given that I did decently in Combinatorics (Math 4410) last semester, CS 4820 should not be too hard. And given the knowledge from a math major, I doubt 4850 will be that difficult.

In addition, I have planned a more consistent posting schedule for this blog. There won’t necessarily be more posts, but they should be spaced more evenly. Also keep an eye on my math blog epicmath.org—I will continue to update it this year.

How to (Theoretically) Win a 2-Party Presidential Election with Just 21.8% of the Popular Vote

In an extreme case, consider the following electoral map:

Of course, this is nonsensical as DC is red and Texas is blue, but let’s assume this happened for the sake of argument. Despite the map being overwhelmingly red, the red states win the electoral vote by only the slightest margin of 270 to 268.

Let us assume that every single state is nearly evenly split, something like 50.01% to 49.99%. Then even though the red states win the electoral vote, the popular vote is essentially a dead heat. The key numbers are the population shares: the blue states contain 56.4% of the population, and the red states only 43.6%.

Now, keep the vote nearly even in the red states, but let the blue candidate win 100% of the vote in all the blue states. Then the red candidate takes roughly half of the red states’ 43.6%, which is only 21.8% of the popular vote, yet still wins the election, despite 78.2% of the electorate voting against him.

List of states in our hypothetical model:

Red – Wyoming, District of Columbia, Vermont, North Dakota, Alaska, Rhode Island, South Dakota, Delaware, New Hampshire, Montana, Maine, Hawaii, Nebraska, West Virginia, Idaho, New Mexico, Nevada, Utah, Kansas, Arkansas, Mississippi, Iowa, Connecticut, South Carolina, Minnesota, Alabama, Oklahoma, Kentucky, Oregon, Colorado, Washington, Louisiana, Wisconsin, Maryland, Tennessee, Arizona, Indiana, Massachusetts, Missouri, and North Carolina.

Blue – Virginia, Michigan, New Jersey, Pennsylvania, Georgia, Ohio, Illinois, Florida, Texas, New York, and California.

Note: After I wrote this post, I googled the 21.8% and found a few cases where people used intense computer computation with exponential-time algorithms to figure this out.

I was shocked when I discovered these methods, as my own method was extraordinarily simple, taking all of two minutes in Excel, with otherwise no number-crunching: just list out the states in ascending order of population per electoral vote, and then go down the list until you get to 270 or more. The list I got started with Wyoming, and went all the way down to Georgia. However, this added up to 271, so I searched for any way to shave off 1 electoral vote. As it turns out, there was a way: I replaced Georgia (16 electoral votes) by North Carolina (15 electoral votes), which has a smaller population.
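
For anyone who wants to replicate the spreadsheet experiment in code, here is a minimal Python sketch of the same greedy idea. The populations and electoral votes below are rough placeholder figures, not the actual census numbers, and the Georgia-for-North-Carolina swap described above is a manual refinement on top of the greedy pass, so treat the output as illustrative only.

```python
# Greedy sketch: sort states by population per electoral vote (ascending),
# then take states until the running total reaches 270 electoral votes.
# The figures below are rough placeholders for illustration only.
states = {
    # state: (population, electoral_votes)
    "Wyoming":              (570_000, 3),
    "District of Columbia": (600_000, 3),
    "Vermont":              (630_000, 3),
    "North Dakota":         (670_000, 3),
    # ... the remaining states would go here ...
    "Georgia":              (9_700_000, 16),
    "Texas":                (25_000_000, 38),
    "California":           (37_000_000, 55),
}

def cheapest_winning_coalition(states, needed=270):
    """Pick states in ascending order of population per electoral vote."""
    order = sorted(states.items(), key=lambda kv: kv[1][0] / kv[1][1])
    coalition, ev_total = [], 0
    for name, (pop, ev) in order:
        coalition.append(name)
        ev_total += ev
        if ev_total >= needed:
            break
    return coalition, ev_total

coalition, ev_total = cheapest_winning_coalition(states)
coalition_pop = sum(states[name][0] for name in coalition)
total_pop = sum(pop for pop, _ in states.values())
# Winning each coalition state 50.01% to 49.99% means taking roughly half
# of its population, and nothing anywhere else.
share = coalition_pop / (2 * total_pop)
print(ev_total, "electoral votes with about",
      round(100 * share, 1), "% of the popular vote")
```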

Bonus Round #1:

Now pretend there are more than 2 parties. Then it is possible to win 270 with an even smaller percentage of the popular vote. Let n be the number of parties. Then you can win with 43.6/n % of the popular vote. The 2-party example of 21.8% is just a special case of this. For example, with 3 parties the red candidate just needs to win a 33.34% vs 33.33% vs 33.33% plurality in each of the required states, so to win the presidency, he only needs to win 43.6/3 %, or 14.5%, of the popular vote.
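
In symbols: the red candidate needs just over a 1/n share of the vote in the red states, which together hold 43.6% of the population, and nothing anywhere else, so his minimum share of the national popular vote is

\frac{1}{n} \times 43.6\% = \frac{43.6}{n}\%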

Bonus Round #2:

Let’s go back to 2 parties. Someone on Facebook asked:

What if the electoral college reps were voted in based on district? The representatives are based on members of congress, right? So what if every state did what Maine and Nebraska does and allow their representatives to split? Two elected based on statewide votes, the rest by congressional district. Keeps the small states relevant to the campaign while making the electoral process more representative.

As it turns out, the answer doesn’t change if we let electors vote by district. Just let the red candidate win every single district in every single red state listed above by one vote. Then it’s still an electoral win with 21.8% of the popular vote.

The number does change, however, if we remove the two statewide electoral votes from each state and count the vote purely by district. Then a candidate needs only 23.2% of the popular vote (winning the 219 least-populous districts by a razor-thin margin and losing the other 217 outright).

The Perfect Prediction

Many have heard of Nate Silver’s prediction of the 2012 presidential election. For those of you who haven’t seen it yet, here was his prediction for last Tuesday, which may seem uncannily familiar:

It seems familiar because it bears a striking resemblance to the actual results:

In fact, that’s all 50 states correctly predicted.

Given two equally likely options for each state, the chance of predicting all 50 states correctly is one in 2^{50}, or about one in 1.126 quadrillion. Granted, we already knew which way states like Texas or Vermont would vote, so for the sake of simplicity let’s consider only the 9 “swing” states: Colorado, Florida, Iowa, Nevada, New Hampshire, North Carolina, Ohio, Virginia, and Wisconsin. That’s still a one in 2^9 = 512 chance of guessing all 9 states correctly by sheer luck.

How did he do it? The answer is high-caliber aggregate statistics. When you conduct a poll, you are going to have very high uncertainty if you poll only a few people. But if you poll a lot of people, your prediction gets more accurate. And one way to poll many people at once is to aggregate the data from many, many polls.
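
As a toy illustration of the effect (this is just the standard shrinking of sampling error with sample size, not a model of Silver’s actual method), here is a quick simulation: averaging 20 polls gives a noticeably tighter estimate than any single poll.

```python
import random
import statistics

# Toy model: the candidate's true support is 52%, and each poll samples n voters.
# Averaging many polls behaves like one much larger poll, so the sampling error
# of the aggregate shrinks roughly like 1/sqrt(total respondents).
TRUE_SUPPORT = 0.52

def poll(n):
    """One simulated poll of n voters: the observed support fraction."""
    return sum(random.random() < TRUE_SUPPORT for _ in range(n)) / n

def spread(num_polls, n=400, trials=200):
    """Empirical standard deviation of the average of num_polls polls."""
    estimates = [statistics.mean(poll(n) for _ in range(num_polls))
                 for _ in range(trials)]
    return statistics.pstdev(estimates)

print("spread of one poll of 400:  ", round(spread(1), 4))
print("spread of 20 polls averaged:", round(spread(20), 4))
```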

In 2008, Silver’s prediction was accurate in 49 of the 50 states, missing only Indiana.

New Math Blog

I started up a math blog yesterday called Epic Math: http://epicmath.org/

The reason for doing this is that some of the higher-level topics in math that I want to write about are just too advanced for a general audience. In this earlier post, I debated where this blog was headed. And I felt it was important to always keep the audience in mind.

My decision is largely due to how theoretical some of my classes are becoming. It would simply make no sense for me to write posts about theorems in topology or abstract algebra or even combinatorics, alongside posts about politics or religion or philosophy. It would be a rather awkward combination.

Thus, I am splitting off mathematical topics into the new blog. The old math posts already on this blog shall remain. Anyway, the new blog will cover more topics, most likely based on the classes I am taking.

Happy Halloween! (Seems like I haven’t posted on a major holiday in a while.)

How Do You Define Obvious?

Mathematical Obviousness

Mathematicians often say things like “It is obvious that…” or “It can be easily shown that…” when the desired result is relatively easy. But to a normal person, what they are saying is not obvious at all. Other fields often have similar situations. An economist might consider a particular problem trivial, yet even a bright student might not understand it at all.

Consider the following:

1 + 1 = 2

That was obvious, right? Then what about this:

\frac{1}{2} = \frac{3}{6}

You probably thought that was obvious as well. Even though the two numbers are written differently, they are the same number.

Okay, one more statement. The question is, as always: obvious or not obvious?

0.999... = 1

There are three types of people on this issue:

  1. Those who think it is true and find it obvious.
  2. Those who think it is true and find it not obvious.
  3. Those who think it is false.

My opinion, which places me in the first group, is no doubt influenced by the fact that I am studying math. For me, this is the same as saying 1 + 1 = 2 or 1/2 = 3/6. The two expressions on either side of the equals sign are written differently, but they are exactly the same number. Yet many people cannot accept this truth, and are bent on believing 0.999… is not equal to 1.
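
For readers in the second group, here is one standard way to see it (a sketch, not the only argument): treat 0.999… as a geometric series and sum it.

0.999\ldots = \sum_{k=1}^{\infty} \frac{9}{10^k} = \frac{9/10}{1 - 1/10} = 1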

True Story

Last year I took Math 6120, or graduate Complex Analysis. One day Professor John Hubbard was proving Jensen’s Theorem. I do not remember where in the proof it happened, but at some point, there was a small detail that Hubbard could not immediately prove.

As it turns out, the book he was using left out this particular part of the proof. When Hubbard prepared his notes, he assumed that any small detail the author left out would be “obvious,” and that he would be able to derive it quickly during the lecture.

So he wrote the statement on the board and asked, “Why is this obvious?” Nobody in the room had any idea.

It took a while to figure out why that little “obvious” statement was true. And even then, Hubbard had to re-explain, as there were grad students who still did not understand it. Then we found that our justification for why it was “obvious” did not work in general, but luckily did work for this specific problem. When it was all over, this “obvious” result took about 15 minutes of our time.

The Math You Weren’t Allowed to Do

You probably learned a bunch of things in school math about what you can and can’t do. When you were a first grader, perhaps you learned that you can’t subtract 5 from 2, but later on, you learned about negatives: 2 – 5 = -3.

You also might have been told that you can’t divide 2 by 6, but then you learn about fractions. And by now, you are no doubt an expert at splitting 2 pizzas among 6 people.

Even so, there is much more that you may not have known…

1. You Can’t Count to Infinity

Actually, you can. It can be done via ordinal numbers.

You start out counting by

1, 2, 3, 4, 5, 6, 7,… 700,… 30,000,000, etc.

When you played a “what’s the highest number?” game with someone, every time you said a number, they countered by saying your number plus one, that is, unless you said infinity. Because infinity plus one is still infinity, right?

Here is where ordinals come into play. The ordinal number \omega (ω, omega) is defined as the first number after ALL of the positive integers. No matter what normal number they might say, whether it’s ten billion or a googol, the ordinal number \omega is far, far larger. It is practically infinity.

But then you can add one to it, and it becomes an even bigger number. Add two, and it becomes even bigger.

\omega, \omega+1, \omega+2, \ldots, \omega\cdot 2, \ldots, \omega\cdot 3, \ldots, \omega\cdot n, \ldots, \omega^2, \ldots

What the heck is going on? If you count out another infinite run of numbers after omega, you get omega times two? Is this twice infinity? And then omega times three? And then omega squared?

It keeps on going. Eventually you will get \omega^\omega, and then \omega^{\omega^\omega}, and so on. And then you reach \Omega (big omega), which is larger than anything that can be written in terms of little omegas. And then you can make even bigger things than that, with no end.

So the next time someone claims infinity is the largest number, you can confidently reply, “infinity plus one.”
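
One subtlety worth noting: in ordinal arithmetic, the order of addition matters. Putting the extra 1 before all the positive integers changes nothing, but adding 1 after \omega really does give a new, larger ordinal:

1 + \omega = \omega, \quad \text{but} \quad \omega + 1 > \omega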

2. You Can’t Divide by Zero

Actually, under certain conditions, you can.

The field of complex analysis is largely built around taking contour integrals around poles. Another word for pole is singularity, and a singularity is, loosely speaking, what you get when you divide by zero.

Consider the function y = 1/x. When x is 1, y is 1, and when x is 5, y is 1/5. But what if x is 0? What happens? Well, 1/0 is undefined. However, if you look at a graph, you see that the function spikes up to infinity at x = 0.

What you do in complex analysis is integrate in a circle around that place where it spikes to infinity. The result in this case, if done properly, is 2\pi i. It’s quite bizarre.
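
If you would rather see the 2\pi i appear than take it on faith, here is a small numerical sketch (an approximation, not a proof): parametrize the unit circle and add up f(z) dz for f(z) = 1/z.

```python
import cmath

def contour_integral(f, n=10_000):
    """Approximate the integral of f around the unit circle.

    Parametrize z = e^{it} for t in [0, 2*pi), so dz = i*e^{it} dt,
    and add up f(z) * dz over n small steps.
    """
    total = 0j
    dt = 2 * cmath.pi / n
    for k in range(n):
        z = cmath.exp(1j * k * dt)
        total += f(z) * (1j * z * dt)   # f(z) * dz
    return total

print(contour_integral(lambda z: 1 / z))  # about 0 + 6.2831...j
print(2j * cmath.pi)                      # compare with 2*pi*i
```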

3. You Can Only Understand Smooth Things

Actually, there is much theory on crazy, “pathological” functions, some of which are discontinuous at every point!

The image above is kind of misleading, as it is a graph of the Cantor function, which is actually continuous everywhere (!), but nonetheless manages to rise despite having zero derivative almost everywhere.

There is another function with the following properties: it is 1 whenever x is rational and 0 whenever x is irrational. Yet this function is well understood and is even integrable in the Lebesgue sense. (The integral is 0.)

Then you have things that are truly crazy:

The boundary of that thing is nowhere smooth, and is one of the most amazing things that have ever been discovered. Yet it is generated by the extraordinarily simple function z^2 + c, which most people have seen and even studied in school.
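
For the curious, here is a minimal sketch of how such pictures get made: iterate z ↦ z² + c starting from z = 0 and check whether the orbit stays bounded. Real renderers color points by how fast they escape and are far more careful, but the core loop really is this short.

```python
MAX_ITER = 100

def escape_time(c, max_iter=MAX_ITER):
    """Iterate z -> z*z + c from z = 0.

    Return the step at which |z| exceeds 2 (the orbit escapes to infinity),
    or max_iter if it stayed bounded, i.e. c appears to be in the set.
    """
    z = 0j
    for step in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return step
    return max_iter

# A crude text rendering of the Mandelbrot set.
for row in range(21):
    im = 1.2 - 0.12 * row
    line = ""
    for col in range(64):
        re = -2.1 + 0.05 * col
        line += "#" if escape_time(complex(re, im)) == MAX_ITER else " "
    print(line)
```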

4. You Must “Do the Math” and Not Draw Pictures

Actually, math people use pictures all the time. The Mandelbrot set (the previous picture) was not well understood until computer images were generated. There is no single “correct” way of doing the math. Some fields rely heavily on pictures and visualizations.

How else would anyone have thought, for example, that the Mandelbrot set would be so complex? Without seeing that in pictures, how would we have realized the fundamental structure behind the self-similarity of nature?

Yeah, that’s a picture of broccoli. Not a mathematical function. Broccoli.

5. If It Doesn’t Make Sense, It’s Not True

Actually, many absurd things in math can be perfectly reasonable.

What’s the next number after 7?

8, you say. But why 8? What’s wrong with saying the next number after 7 is 0? In fact, I can define a “number” to only include 0, 1, 2, 3, 4, 5, 6, and 7. Basic operations such as addition and multiplication can be well defined. For example, addition is just counting forward that many numbers. So 6 + 3 = 1, because if you start at 6 and go forward 3, you loop back around and end up at 1.
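
This wrap-around arithmetic is just arithmetic modulo 8, and a couple of lines of code make it explicit (the modulus 8 is chosen only to match the example above):

```python
MODULUS = 8  # our only "numbers" are 0, 1, 2, 3, 4, 5, 6, 7

def add(a, b):
    """Count forward b steps from a, wrapping around at the modulus."""
    return (a + b) % MODULUS

def mul(a, b):
    """Repeated addition, also wrapping around."""
    return (a * b) % MODULUS

print(add(6, 3))  # 1, because 6 + 3 = 9 wraps around to 1
print(mul(5, 4))  # 4, because 5 * 4 = 20 wraps around to 4
```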

Even weirder is the Banach-Tarski Paradox, which states a solid sphere can be broken up into a finite number of pieces, and the pieces can be reassembled to form TWO spheres of the exact same size as the original!

I hope this was understandable for everyone. May the reader live for ω+1 years!

Why Math?

As a math major, I am often asked the question, “Why math?” In particular, why theoretical math, when it doesn’t seem to be related to anything?

I often have trouble coming up with a full and satisfying answer on the spot. Math is a subject whose content and organization can be confusing to outsiders; it spans fields that many people have not even heard of. When I say “topology” or “analytic number theory,” it often draws blank stares; in fact, topology is often mistaken for “topography,” the study of terrain.

Well, here is my more thought-out answer to why I study math.

Is It Relevant?

Take a good look at the following equation:

\displaystyle \sum_{n} \frac{1}{n^s} = \prod_{p} {\frac{1}{1 - \frac{1}{p^s}}}

Through the cryptic jumble of symbols, you might ask yourself, how is this useful?

An even more relevant question for most readers is, what does it even mean?

As it turns out, this particular equation has very little practical value. Yet it is one of the most fundamental equations in the field of analytic number theory, and it is a remarkable statement about the prime numbers.

It basically links the infinite set of natural numbers (1, 2, 3, 4, 5,…) with the infinite set of prime numbers (2, 3, 5, 7, 11,…). The proof is relatively simple, but I am not going to give it here. It is known as the Euler product formula.
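
You can at least get a numerical feel for the identity at s = 2: truncate the sum and the product and watch both close in on the same value, π²/6 (the famous value of ζ(2)). This is a rough sanity-check sketch, not the proof.

```python
import math

def zeta_partial(s, terms=100_000):
    """Truncated sum of 1/n^s for n = 1..terms."""
    return sum(1 / n**s for n in range(1, terms + 1))

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [n for n, is_prime in enumerate(sieve) if is_prime]

def euler_product_partial(s, limit=1_000):
    """Truncated product of 1/(1 - p^(-s)) over primes p <= limit."""
    product = 1.0
    for p in primes_up_to(limit):
        product *= 1 / (1 - p**(-s))
    return product

print(zeta_partial(2))           # truncated left-hand side
print(euler_product_partial(2))  # truncated right-hand side
print(math.pi**2 / 6)            # zeta(2); both sides approach this value
```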

There is virtually no “useful” information given by this highly abstract formula. It doesn’t help with daily finance. It doesn’t solve traffic congestion. It doesn’t even help in landing a rover on Mars. But what it does do is provide us with an insight into a fundamental truth of nature. In a way, this equation exceeds the known universe, since according to current theory the universe is finite. The equation, by contrast, deals with the infinite.

In fact, modern mathematics is full of statements and theorems that have currently no practical use. There are entire branches and fields of study that are, in essence, useless. Sometimes, useless things have applications in the far future. Complex numbers, for example, were invented centuries ago, but didn’t really find any use until modern electronics and physics were developed.

Maybe everything we know today in math will be applied somehow. But this cannot happen forever. The universe is finite, after all, and knowledge is infinite. Sooner or later, or perhaps even now, we will have found knowledge that serves no use in our universe. This leads to the next question.

Is Knowledge Worth Seeking?

Should we seek knowledge for the sake of knowledge?

Is a culture with more knowledge inherently richer than one without?

Historically, knowledge in the form of technology had the power to save oneself, one’s family, and even one’s country. Entire civilizations were wiped out due to the technological superiority of the invaders. Knowledge has for a long time acted as a defense tool.

So perhaps we should embrace new knowledge for the sake of defending against a future alien force. But what about afterwards?

Assuming humans survive long enough to establish a galactic presence, and have enough technology to be virtually indestructible as a species, so that survivability is no longer an issue, what will be the point of further knowledge? What will be the point of knowledge for the sake of knowledge?

That picture above is the Mandelbrot set, a fractal generated by the fairly simple quadratic function

z \mapsto z^2 + c

where c is a complex number.

There could easily be no purpose to this fractal, yet it certainly holds some value. It is aesthetically pleasing, and the ability to zoom in on the image forever raises some old philosophical questions. In this sense, it is almost like art, only the rules are completely different.

In essence, knowledge for the sake of knowledge is what math is all about. There is no intrinsic need for math to apply to the real world, nor does any topic in mathematics need an analogy in real life. Math is knowledge at the abstract level.

Recently someone asked me what classes I was taking, and when I mentioned topology, he asked if that was a map making course. Topology and topography sound quite similar, I suppose.

In any case, topology is a great example of what pure math is about. It is the underlying foundation behind geometry. Geometry is highly applicable in real life, because shapes, sizes, and angles of things all affect the way they work. But in topology, sizes and angles do not matter. A line is the same thing as a curve, a square is really the same thing as a triangle or a hexagon, and a sphere is really the same thing as a cube or amoeba.

And a donut is really the same thing as a coffee mug.

These fields of math are totally alien to the math taught at the pre-college level. Geometry, basic algebra, and calculus are about sizes of things and comparing objects to determine their shapes, lengths, volumes, etc.

But when you get to the higher fields, such as analysis, number theory, abstract algebra, and topology, everything completely changes. They feel like entirely different subjects than the math taught in middle school and high school.

Previously, you were told that dividing by zero is impossible and that it is pointless to think about infinity. But in complex analysis, you can actually “cancel out” zeros against infinities, provided you keep track of their orders, and you genuinely care about where functions blow up to infinity and how often they do so. And in set theory, you discover that there are actually different sizes of infinity.

The fact that zeros can cancel out infinities, or that there are different sizes of infinity, is much more interesting than, say, the quadratic formula, which is taught in every high school algebra course.

This graph shows a region of the gamma function, which generalizes the notion of the factorial (e.g., 5! = 5 * 4 * 3 * 2 * 1) to complex numbers.

The gamma function is also closely related to the equation at the very top of the page, with the natural numbers on one side and the prime numbers on the other. Those two expressions also define the Riemann zeta function.

You might be able to see some relation between the two images. It turns out that the trivial zeroes of the zeta function, which can be seen as the strange color mismatches on a line going from the center to the left, are the result of the poles of the gamma function, which are the vertical spikes in the other picture.
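
For the record, the precise link is Riemann’s functional equation, stated here without proof:

\pi^{-s/2}\,\Gamma\!\left(\frac{s}{2}\right)\zeta(s) = \pi^{-(1-s)/2}\,\Gamma\!\left(\frac{1-s}{2}\right)\zeta(1-s)

At s = -2, -4, -6, …, the right-hand side is finite, but \Gamma(s/2) on the left has a pole, so \zeta(s) is forced to vanish there; those are exactly the trivial zeros.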

Basically, that is why I study math. The point is not to memorize formulas or to calculate quickly. It is to discover fundamental truths out of ridiculous-sounding things, and to make sense out of them. In a way, this is what people do in other academic fields as well. Sometimes math goes over the top and seems completely useless. This is bound to happen. But some things, like art and mathematics, don’t need a practical purpose to exist. Such things are valuable in their own right.