The Legacy of 2012: The Fall of Superstition

This is similar to my Legacy of 2009 post. I didn’t write one for 2010 or 2011, but the theme was similar. In 2010, social media really exploded, and by 2011 it was all but set in stone. But as this was happening, another explosion was occurring: the smartphone. In early 2012, smartphones hit 50% saturation in the United States, and thus are statistically near peak sales (though the 2012 holiday season may provide one final spike).



The global smartphone market is rising rapidly as well. With more people than ever having the sum total of human knowledge within arm’s reach, this leads to the year’s real legacy:

2012: The Fall of Superstition

On December 21, 2012, the world was supposed to come to an end, or so thought 10% of the world’s population. But on that day, nothing happened. The universe continued as normal. Perhaps a massively failed doomsday can help awaken the world from superstition.


At 12%, the United States, the most scientifically and technologically sophisticated country in the world, was shockingly above average in 2012 doomsday belief. Yet this fact may not be too surprising, for among the developed countries, the US has long been the most religious by far.

But this may change in several more years. According to a 2012 Pew Research survey, non-religion is growing quickly in the United States, having gained roughly 5 percentage points in 5 years.

[Pew Research graph: the rise of the religiously unaffiliated]

More importantly, with the strong correlation between non-religion and younger age, the growth of non-religion is poised to accelerate in the upcoming years. In one of my previous posts, I predicted that secularism will be one of the next sociopolitical movements, following the previous Civil Rights and feminist movements, and also the current LGBT movement.

Along with a shock to superstition came several great advancements in science. The Curiosity rover landed with high precision in a daring and suicidal-looking sky crane maneuver.


Another significant achievement was the confirmation of the Higgs boson, whose existence in turn confirms the Standard Model, one of the most intricate scientific theories to date. Other great scientific accomplishments of 2012 include:

With all the remarkable advancements, 2012 was not without disappointments. In Italy, six scientists were sentenced to prison for failing to predict an earthquake. Wake up, Italy: it’s not the 1600s anymore.

In 2012 the expiring Kyoto Protocol was extended to 2020, but did global carbon dioxide emissions actually fall? Nope. They went up, with 2012 emissions standing 58% above 1990 levels, in huge part due to China’s industrial growth combined with its disregard for the environment.


Meanwhile, the United States is doing okay, with its 2012 CO2 production at its lowest in 20 years. However, to have a meaningful impact on climate change, it will take a true global green movement, which unfortunately is at least a decade away.


Despite being a year filled with progress, 2012 had its setbacks. Religious extremists violently demonstrated the fundamental tenets of their “religion of peace” in response to a satirical film, and also attempted to assassinate a 15-year-old girl who just wanted education for everyone.

Elsewhere in the Middle East, the Israel-Hamas conflict set the world on edge for eight days. And though it paled in comparison to many other issues in the world, the Sandy Hook shooting shook and saddened America, and should push gun control toward becoming a more prominent issue.

Overall though, 2012 was a good year. With extraordinary scientific advancements (above) and social advancements (though not yet met worldwide, as in this case or this one), as well as the reelection of President Obama, the year 2012 ends with a world that is smarter, more aware, and more progressive than it ever was, and more potent than ever before in dealing with religious extremism, war, and environmental destruction.

Talent Is Overrated

“Talent” is a word that is tossed around all too often, whether for top musicians or businessmen, or even just a person who creates popular YouTube videos. The idea of talent is in nearly every case taken for granted. As a young member of a very supportive family and community, I heard the saying myself many times. But is talent a correct, or even useful, explanation for high-level performance?

Talent Is Overrated

I recently read a very intriguing book by Geoff Colvin. It was really a lucky buy—I was actually reading through reviews of Josh Waitzkin’s The Art of Learning, when the ever-so-omniscient Amazon Recommendations pointed me to a bizarre and blatantly absurd statement: Talent is Overrated.

With a plethora of examples, data, accumulated research, and forceful writing, Colvin argues convincingly that the source of great performance in just about every field is best explained not by reference to the mysterious force known as talent, but by the sheer amount and direction of deliberate practice.

My Personal Experience

First, a line from Colvin (193):

Their parents made them practice, as parents have always done, though it’s interesting to note that in these cases, when push came to shove and parents had to make a direct threat, it frequently played off the student’s intrinsic motivators. So it wasn’t “If you don’t do your piano practice we’ll cancel your allowance,” but rather “we’ll sell the piano.”  Not “If you don’t go to swimming practice you’ll be grounded Saturday night,” but rather “we’ll take you off the team.” If the child truly didn’t care about the piano or swimming, the threats wouldn’t have worked.

I was one of those kids who was, regarding the piano, totally immune to such a threat. As I wrote earlier, I absolutely dreaded playing the piano, and would have loved to see the piano disappear and find a bunch of cash in its place. But what I lacked in interest in the piano I made up for in my interest in chess. From 2003 to 2010, I competed in more than 70 rated chess tournaments. But looking back at the distribution of tournaments, I found that the majority of them occurred between 2003 and 2006, with one resurgence in 2008 [data]. It would be accurate to say that my tournament frequency was very closely correlated with how much time I spent practicing the game outside of tournaments. As if to confirm Colvin’s thesis, here are my regular and quick rating graphs:


When the frequency of tournaments, and thus training, increased, my rating climbed. And when the frequency of tournaments and training decreased, my rating stagnated or declined. This seems to support the deliberate practice model argued in Colvin’s book: performance in a given time period seemed to be determined by the amount of training in that same period.

But what about compared to others? I am hardly an expert player, but my very first rating after my first tournament, 1372, was in the 96th-97th percentile of scholastic players at the time. By contrast, the current US chess champion Hikaru Nakamura, whose current USCF rating is a whopping 2834, started at a provisional rating of 684, an unimpressive statistic. However, he has played in 439 rated events over a period of 17 years, which is a hell of a lot more effort than I had ever thought about spending on the game. Thus even when you have an “advantage,” such as a starting rating of 1372 versus 684, thinking of it in terms of talent is useless. If you do not follow it up with the necessary amount of work, the advantage will assuredly disappear.

There is a third point that truly puts the nail in the coffin of the talent model. In a two-year span from 2006 to 2008, my rating stopped improving, stuck in the 1700s. Excuses aside, I simply didn’t practice the game much. But I think what happened to me is what Josh Waitzkin described, quoted in Colvin (197):

The most gifted kids in chess fall apart. They are told that they are winners, and when they inevitably run into a wall, they get stuck and think they must be losers.

I don’t think it takes a gifted kid to run into the wall and get stuck (the 1372 initial rating was actually in part due to luck, as my first few tournaments were counted out of order, and a tournament that I had done really well in happened to be the first one counted). For those two plateau years, I did feel the way Waitzkin forewarned. I thought the high initial rating meant something special, i.e., talent, and that the 1700 plateau meant I was doomed. Thinking in terms of talent mentally condemned me to not advance. Even though I was still fairly highly rated for my age group, I stopped practicing and reading as much, and as a result did not prepare myself adequately for tournament events, and my rating dropped.

How to Be a World-Class Performer

Colvin’s thesis works for far more than just chess. He applies it to the violin, piano, football, business, investment, management, art, teamwork, and just about anything else, all while citing tremendous amounts of evidence for his claims. For music, the obvious counterexample is Mozart, yet early in the book Colvin disposes of this myth, as well as that of Tiger Woods. Mozart, for instance, had many years of intense, expert training starting at an early age, and Tiger Woods swung his first club at age seven… months, also trained by his father.

Another result of years of deliberate practice is the ability of an expert to see complex patterns that would completely elude an average person. A professional tennis player can return a serve traveling at a speed so high that a normal human should not even have time to react; and in terms of raw reaction time, professionals are normal. But they don’t watch the ball: they watch their opponent’s body movements, and know approximately where the serve is going to land (or whether it will fault) before the racket even hits the ball. Similarly, a top stock trader can see signs that the average trader does not even consider relevant. A top manager sees the critical signs more readily than an average one. And a master chess player can memorize an entire chess position in seconds and reproduce it perfectly, while the average person can recall the locations of only five to seven pieces. Most notably, this comes not from better general memory, but from extensive training with certain positions and patterns, so that the master reads a position in words rather than in letters.

I would most certainly recommend this book to anyone. It breaks the shackle of “talent,” which, although a warm and comforting hope, is no more than that: a beloved superstition with little evidence, one that discourages so many from even attempting something because they believe they “don’t have the talent” or “divine spark” for it. But as has been found time and again, the backgrounds of top performers give little or no indication of any early talent; rather, what is common to all of them is an immense amount of training and dedicated practice. Perhaps this is the even more fascinating hope: that the world is within reach of everyone.

Are We in a Simulation? A Scientific Test

According to a recent article, scientists are planning a test to determine whether our universe is a computer simulation. This is pretty relevant to my blog as I have discussed this idea a number of times before [1] [2] [3] [4].

Soft Watch at the Moment of First Explosion

Of course, the must-read paper on this subject is philosopher Nick Bostrom’s article, “Are You Living in a Computer Simulation?” The implication, given a couple of premises, is that we are almost certainly living in a computer simulation. Not only that, but the argument posits that our simulators are themselves extremely likely to be in a simulation, and those simulators are likely too to be in a simulation, etc.

Indeed, how will scientists test for signs of a simulation?

“Currently, computer simulations are decades away from creating even a primitive working model of the universe. In fact, scientists are able to accurately model only a 100 trillionth of a metre, with work to create a model of a full human being still out of reach.”


Even so, there are limitations beyond technical ones that should be considered. If a test does not find any evidence that we are in a simulation, that does not rule out the possibility; in fact, a well-designed simulation would be very difficult, if not outright impossible, for its inhabitants to tell apart from “reality.”

Conversely, suppose a test did find “evidence” that we are in a simulation. How would we judge this evidence? How could we know which way the evidence is supposed to point? After all, even if we find “glitches,” they could turn out to be part of a larger set of natural laws.

As Richard Feynman once suggested, suppose we are observing a chess game but are not told what the rules are. After looking at various snapshots of a game, we can piece together some of the rules, and eventually we will learn that a Bishop must stay on the same color when it moves. But one snapshot later, we find that the only Bishop in the game is now on a different-colored square. There would be no way of knowing, without looking at many more games, that there is a rule by which a Pawn can promote to another piece, such as a Bishop, and that the old Bishop was captured. Without this knowledge, we might have thought that the Bishop changing color was a glitch.

Now back to the article.

“By testing the behaviour of cosmic rays on underlying ‘lattice’ frameworks governing rules of physics that could exist in future models of the universe, the researchers could find patterns that could point to a simulation.”

Many disciplines would have to come together here to prove something fundamentally “wrong” with our universe. It would be the junction point of computer science, physics, philosophy, mathematics, neuroscience, and astronomy.

The plan given in the article is a noble one, but I do not expect it to yield any important experimental data soon. Rather, it is the tip of an immense iceberg that will be explored not in years or decades, but over the millennia to come.

On the Naming of Terms in Several Disciplines


Recently I watched an entertaining talk by Neil deGrasse Tyson in which he poked fun at the confounding complexity of biological and chemical terms, in contrast to the elegant simplicity of terms in astrophysics. The segment starts at 14:34 of the talk and goes till about 17:00.

[15:01] What do you call spots on the Sun? [Pause] Sunspots!


Indeed, terms like sunspot, red giant, supergiant, nova, supernova, ring, moon, black hole, pulsar, dark matter, dwarf planet, spiral galaxy, singularity, solar flare—it is immediately obvious what these things describe. Even terms like neutron star or Trans-Neptunian object are clear if one is familiar with neutrons or Neptune. Let us see what terms sound like in other disciplines.


Biology and Chemistry

What do you call the most important molecule in your body that contains all your genetic information? Deoxyribonucleic acid. What do you call the energy molecule that your body runs on? Adenosine triphosphate. What do you call the most common liquid you drink (if you aren’t a college student)? Dihydrogen monoxide.


Things like these are what Tyson was getting at, where, without even going into the ideas or concepts, a student may be already confounded by the sheer terminology.

Granted, at the core all these names make sense and are very systematically constructed. For instance, “dihydrogen monoxide” describes exactly what the constituents of the molecule are: 2 hydrogen atoms and 1 oxygen atom. And “deoxyribonucleic acid” is really just a nucleic acid (a nucleotide chain) with deoxyribose as the sugar component. Even the term “deoxyribose” is well named, as it is the sugar obtained by removal (de-) of an oxygen atom (oxy) from a ribose sugar.

In this respect, I don’t think biochemical terms are really as confounding to a scientifically literate population as Dr. Tyson makes them out to be; however, I do see his point that they would confuse the hell out of someone who is not scientifically literate. Even then, these terms would not cause such a person to come away with a wrong understanding.

I claim that while biochemical terms are quite abstract, at least they are not misleading.


Economics

“This allocation of resources is Pareto efficient.”

This term might have a positive connotation, as efficiency is associated with good, and so the masses might support any policy having to do with it. However, it is possible for an allocation where the top 1% controls 99% of the resources to be Pareto efficient. Indeed, an allocation where one person controls 100% of the resources is Pareto efficient, as the term only concerns whether the allocation could be changed so that someone benefits without harming anyone else. Given the misleading connotation, it would be disastrous if this term were ever uttered by an economist (or worse yet, a politician) in public discourse.
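The definition is mechanical enough to check in a few lines of code. Here is a toy sketch (the two-person economy and its allocations below are made up purely for illustration) showing that a maximally unequal allocation can still be Pareto efficient:

```python
def dominates(a, b):
    """True if allocation a leaves no one worse off than b
    and makes at least one person strictly better off."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def is_pareto_efficient(allocation, feasible):
    """An allocation is Pareto efficient if no feasible
    alternative Pareto-dominates it."""
    return not any(dominates(other, allocation) for other in feasible)

# Hypothetical two-person economy: each tuple is (person 1's share, person 2's share).
feasible = [(100, 0), (99, 0), (50, 50), (50, 40), (0, 100)]

print(is_pareto_efficient((100, 0), feasible))  # True: one person owns everything
print(is_pareto_efficient((50, 40), feasible))  # False: (50, 50) dominates it
```

Note that (100, 0) passes the test precisely because helping person 2 would require taking something from person 1: efficiency here says nothing at all about fairness.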


It is especially misleading as economics generally has very simple, intuitive terms: supply, demand, goods, depression, inflation, market, labor force, bubble, money, wage, etc. These are all good terms. But sometimes, a term is just plain misleading: for instance, the fiscal cliff.

Medicine and Psychology

The terms disease and disorder are pretty misleading. A disease does not have to be infectious, and someone with a disorder could behave just as normally, whatever that means, as a “normal” person. Even sane and insane are notoriously difficult to tell apart.

[Scene from One Flew Over the Cuckoo’s Nest]

And what does it mean to cure someone?


That said, most psychology terms are pretty self-explanatory, albeit sometimes difficult to test accurately.


Art

This is a field, like astrophysics, in which the terms are extremely clear. The only term I find troubling is postmodern, which seems to imply something that it is not.


Physics

This is a very technical field, where speed and velocity mean different things, and if you are describing a scenario, you must use words like force, momentum, and energy very carefully. Technicality aside though, it is very obvious what the terms are about.


Linguistics

Given that linguistics is the study of language, and thus of naming itself, you might expect it to have very intuitive terms. Depending on the subfield, however, there are some very non-obvious ones. What is a morpheme?

Computer Science

Like math, it is very unintuitive at times. For instance, computer scientists have no idea what a tree is supposed to look like.
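For anyone wondering what these upside-down trees look like in practice, here is a minimal binary search tree sketch (the class and values are my own, purely illustrative). The “root” sits at the top and the “leaves” hang at the bottom, exactly backwards from botany:

```python
class Node:
    """One vertex of a binary search tree."""
    def __init__(self, value):
        self.value = value
        self.left = None   # subtree of smaller values
        self.right = None  # subtree of larger-or-equal values

def insert(root, value):
    """Insert a value, returning the (possibly new) root."""
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def in_order(root):
    """Walk left-root-right, yielding the values in sorted order."""
    if root is None:
        return []
    return in_order(root.left) + [root.value] + in_order(root.right)

root = None
for v in [5, 2, 8, 1, 3]:
    root = insert(root, v)
print(in_order(root))  # [1, 2, 3, 5, 8]
```

The payoff for the strange orientation is that an in-order walk of the tree recovers the values in sorted order.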



Mathematics

Math terms are both super-technical and very non-obvious, given that half the terms are named after a person. Even for the half that are English words, there are some issues. In topology, for instance, you might think that open is the opposite of closed, but in reality a set can be open, closed, neither open nor closed, or both open and closed (in which case it is called clopen). And what about injection, surjection, and bijection? What exactly is a “jection”?
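For what it’s worth, the “-jections” have crisp definitions once unpacked: injective means no two inputs share an output, surjective means every element of the codomain gets hit, and bijective means both. A small sketch over finite sets (the example functions here are my own) makes this concrete:

```python
def is_injective(f, domain):
    """No two distinct inputs map to the same output."""
    images = [f(x) for x in domain]
    return len(images) == len(set(images))

def is_surjective(f, domain, codomain):
    """Every element of the codomain is the image of some input."""
    return {f(x) for x in domain} == set(codomain)

def is_bijective(f, domain, codomain):
    """Injective and surjective at once: a perfect pairing."""
    return is_injective(f, domain) and is_surjective(f, domain, codomain)

square = lambda x: x * x
print(is_injective(square, [-2, -1, 0, 1, 2]))              # False: (-1)^2 == 1^2
print(is_surjective(square, [-2, -1, 0, 1, 2], [0, 1, 4]))  # True: 0, 1, 4 all hit
```

On finite sets of equal size, the three notions collapse: any injection is automatically a bijection, which is one reason the distinctions only start to bite for infinite sets.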

The term rational numbers for fractions makes sense as fractions are ratios, but who came up with real, imaginary, or complex? It becomes worse in abstract algebra, where you have things like groups, rings, and fields. At least the word object is what you think it means: just anything. And measure theory makes a lot of sense. A measure is pretty much what you think it means, and almost means almost what you think it means.

I think math is the only subject where two renowned experts can have a discussion, each not having a clue what the other is talking about. In this respect, I think mathematics beats biochemistry in confusion of terminology.