The Stuff of Thought

My blurb on Steven Pinker’s The Stuff of Thought (2007).

the-stuff-of-thought

For me, this book was a brilliant journey through linguistics and how language shapes the mind. It is the third book in a trilogy, but it is completely readable on its own.

The book starts strong with a captivating introduction that does a good job of convincing the reader why linguistics matters. The second chapter, “Down the Rabbit Hole,” however, reads very much like a textbook; it felt like I was in Intro to Linguistics again. I did find it interesting that many words can only be used with certain phrasings (for example, “fill the glass with water” is fine whereas “fill water into the glass” is not), but the treatment seemed too drawn out for a general audience. In addition, the vast majority of technical terms introduced in this chapter are never used again in the rest of the book, nor is there much immediate follow-up. After this, though, the book takes off, and I read from chapter 3 to the end in one sitting.

The most interesting chapter to me was number 4: “Cleaving the Air.” This chapter deals with how our perception of space and time is shaped by our language. Due to the subject matter, the chapter includes a (masterful) mix of linguistics, philosophy, and physics. Referring to grammatical time, as opposed to physical time (190):

Sometimes the past and future are subdivided into recent and  remote intervals, similar to the dichotomy between here and there or near or far. But no grammatical system reckons time from some fixed beginning point… or uses constant numerical units like seconds or minutes. This makes the location of events in time highly vague, as when Groucho told a hostess, “I’ve had a perfectly wonderful evening. But this wasn’t it.”

In Chapter 6, “What’s in a Name?”, Pinker analyzes the art of naming, including his own (279):

…I repeatedly found myself surrounded by Steves. In school I was always addressed by an initial as well as a name, since every class had two or three of us, and as I furthered my education the concentration of Steveness just kept increasing. My graduate school roommate was a Steve, as was my advisor and another of his students (resulting in a three-Steve paper), and when I started my own lab, I hired two Steves in a row to run it.

Chapter 7, “The Seven Words You Can’t Say on Television,” is both amusing and amazing. It is a short treatment of profanity in language, dealing with words that are considered obscene or taboo. Quite carefully, Pinker uses euphemisms even when discussing the use of euphemisms (345):

The dread of effluvia, of course, can also be modulated, as it must be in sex, medicine, nursing, and the care of animals and babies. As we shall see, this desensitization is sometimes helped along with the use of euphemisms that play down the repellence of the effluvia.

That paragraph is gold.

Finally, Pinker explores some game theory in Chapter 8: “Games People Play.” Namely, when does one say things directly versus indirectly, truthfully versus politely? When does one offer (or conceal) a bribe? When does one take a bribe? These and many more questions are explored. Not surprisingly, the results fit what economic intuition would imply, but it is interesting to see such games from the perspective of linguistics.

Overall, this is a brilliant book. It reminded me of Gödel, Escher, Bach, which also was multi-disciplinary and made extensive use of humor.

Survival of the Selfish Gene

After reading The God Delusion, I decided to study some of Richard Dawkins’ earlier works. For this post, I read The Selfish Gene (and among the books on my queue are The Blind Watchmaker and The Greatest Show on Earth).

the-selfish-gene

Published in 1976, The Selfish Gene explores the behavior of replicators, namely genes and memes. I was expecting lots of biological arguments, and while there are many, I was shocked to discover the main tool used in the book: game theory.

Of course, once you think about it, it makes perfect sense that game theory is extremely important when talking about genes and how they spread from one generation to the next. And by game theory, I do not mean board games or video games, but economic game theory, applied to biology in what is now known as evolutionary game theory. In fact, this book would be an excellent read for people interested in mathematics or economics, in addition to the obvious group of those interested in biology. Dawkins uses concepts like Nash equilibria, though the term is not explicitly stated (consider the date of the book), and the Prisoner’s Dilemma, to name just a couple of examples, to explain many biological behaviors found in various animals, including humans. This kind of game-theoretic analysis followed largely from the work of John Maynard Smith.
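To make the flavor of evolutionary game theory concrete, here is a tiny replicator-dynamics simulation of the classic hawk–dove game that Maynard Smith used to introduce evolutionarily stable strategies. This sketch is mine, not from the book; the payoff values and the baseline fitness are arbitrary choices for illustration.

```python
# Toy replicator-dynamics simulation of the hawk-dove game (illustrative only;
# the payoff values V, C, and the baseline fitness are arbitrary assumptions).
V, C = 2.0, 4.0        # value of the contested resource, cost of an escalated fight
BASE = 5.0             # baseline fitness, kept positive so frequencies stay valid

def payoffs(p):
    """Expected payoff to a hawk and to a dove when a fraction p of the population are hawks."""
    hawk = p * (V - C) / 2 + (1 - p) * V
    dove = (1 - p) * V / 2          # doves get nothing against hawks
    return hawk, dove

p = 0.01                            # start with 1% hawks
for generation in range(100):
    hawk, dove = payoffs(p)
    mean = p * hawk + (1 - p) * dove
    p = p * (BASE + hawk) / (BASE + mean)   # discrete replicator update

print(f"hawk frequency after 100 generations: {p:.3f} (ESS prediction: V/C = {V/C:.3f})")
```

With these numbers the population settles at the mixed equilibrium of half hawks and half doves, the evolutionarily stable strategy V/C, from any starting mix strictly between all-hawk and all-dove.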

In addition to having studied a bit of game theory, I have also studied dynamical systems, though from the perspective of pure math and not biology. Even so, the concepts in the book were very familiar. I do not think The Selfish Gene is controversial from an academic standpoint. The now 40-year-old ideas are still relevant today, and they are really not that difficult to understand, given a sufficient mathematical and scientific background.

Instead, the controversy around the book seems to come solely from the title itself, and perhaps from the stigma attached to writing anything about evolution, which seems to be more of an issue today than it was in 1976. Dawkins notes this years later in the preface to the second edition:

This is paradoxical, but not in the obvious way. It is not one of those books that was reviled as revolutionary when published, then steadily won converts until it ended up so orthodox that we now wonder what the fuss was about. Quite the contrary. From the outset the reviews were gratifyingly favourable and it was not seen, initially, as a controversial book. Its reputation for contentiousness took years to grow until, by now, it is widely regarded as a work of radical extremism.

I do find this amusing. The controversy seems to have less to do with the theory of evolution itself than with the unfortunate anti-intellectual sector of the US. (Of course, Dawkins is from the UK, but I am talking about American opinion of these kinds of books.)

In current society it seems like a fad to wear one’s ignorance on one’s sleeve, as if boastfully declaring, “My ignorance is just as good as your knowledge.” Of course I am not advocating that we go the opposite direction and be ashamed of not learning, but we should be able to come together and agree that ignorance is not a virtue, especially not in the most scientifically advanced country in the world. I am not really sure how the United States is supposed to recover from this, other than by becoming more reasonable over time. And that will take education, not ignorance.

The title of the book is misleading only if one does not understand what the word “selfish” describes. The “selfish gene” does not refer to a gene that causes selfishness in individuals (an ambiguous notion in itself); rather, “selfish” modifies “gene” directly: genes propagate themselves in a manner that appears selfish. The individual is merely a “survival machine” for the gene. There is a critical difference between the two notions.

A selfish gene is merely a gene that, for practical reasons, has a higher chance of being passed on. The idea does not contradict any current notion of evolution; in fact, at the time of publication, this gene-centered view became the new and improved account of evolution that is now the textbook standard. In any case, the message is that evolution works not by the survival of the fittest individuals, but by the survival of the fittest, or most selfish, genes.

There are situations (as demonstrated in the book) where intrinsically selfish genes produce behavior that appears altruistic from the outside. Mutual back-scratching benefits both individuals and, moreover, benefits the gene responsible for it, making that gene more likely to spread. So while the behavior of back-scratching seems altruistic, it may be nothing more than concealed selfishness. This idea can be extrapolated to many phenomena. Often people put on acts and fake displays of kindness only for the selfish benefit of “seeming” nice. Or they are so “humble” that they announce their humbleness everywhere they please and make you feel bad for not being as humble as they are. The list goes on. However, I will not comment too much on this, as it falls under cultural rather than strictly genetic behavior, although the two are related.

The controversy around this book also seems to stem from perceived personal offense. Included in The Selfish Gene is an interesting quote from Simpson regarding historical developments in explaining how the current species on Earth came to be:

Is there a meaning to life? What are we for? What is man? After posing the last of these questions, the eminent zoologist G. G. Simpson put it thus: ‘The point I want to make now is that all attempts to answer that question before 1859 are worthless and that we will be better off if we ignore them completely.’

While this statement is perfectly true when it comes to understanding biology, I can see how religious people might take offense. To declare that all mythological ideas in this area before Darwin’s The Origin of Species are worthless is a bold claim, even when it is correct.

Regarding the actual content of the book, I have already mentioned that Dawkins makes extensive use of game theory. Some of the more technical chapters contain many numbers, making the book difficult to read in real time unless the reader is versed in mental arithmetic. With some deliberate thought, though, any reader should be able to get through these chapters.

The Selfish Gene is a remarkable book, giving clear explanations of basic biology and evolutionary game theory for the layman. It is a shame that such educational material is viewed as controversial. I wish I could succinctly summarize the fascinating interplay of evolutionary game theory in a single post, but it would be better to leave it to you to pick up this book and think about it for yourself. If you do not like evolution, however, you have been warned.

For Science: Neil deGrasse Tyson’s “Death by Black Hole”

Death By Black Hole

Death by Black Hole is an epic read. What makes this stand out from the average science essay collection is Neil deGrasse Tyson’s unwavering expertise in combination with his remarkably down-to-Earth explanations of not only how things happen, but also of how we discovered how things happen.

For instance, everyone today knows that light travels at a finite, constant speed, and we actually encounter that fact, for example as latency on the Internet. But as far as our intuition goes, light moves infinitely fast, i.e., it is instantaneous. In fact, I still remember Bill Nye the Science Guy trying to outrun a beam of light on his show. After many tries, he never succeeded.

Tyson reveals that even Galileo, in 1638, thought that light was instantaneous, after his lantern experiment failed to yield a measurable delay. It was Ole Rømer who first saw, and correctly interpreted, evidence that light is not instantaneous. From “Speed Limits”:

Years of observations had shown that, for Io, the average duration of one orbit—an easily timed interval from the moon’s disappearance behind Jupiter, through its re-emergence, to the beginning of its next disappearance—was just about forty-two and a half hours. What Rømer discovered was that when Earth was closest to Jupiter, Io disappeared about eleven minutes earlier than expected, and when Earth was farthest from Jupiter, Io disappeared about eleven minutes later.

Rømer reasoned that Io’s orbital behavior was not likely to be influenced by the position of Earth relative to Jupiter, and so surely the speed of light was to blame for any unexpected variations. The twenty-two-minute range must correspond to the time needed for light to travel across the diameter of Earth’s orbit. From that assumption, Rømer derived a speed of light of about 130,000 miles a second. That’s within 30 percent of the correct answer—not bad for a first-ever estimate…. (p. 120)

That someone deduced the speed of light with 1600s technology is remarkable.
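As a sanity check of the numbers (my own back-of-envelope, not from the book), the same reasoning with the modern value of roughly 186 million miles for the diameter of Earth’s orbit lands comfortably within the stated 30 percent of the true speed; Rømer’s lower figure of 130,000 miles per second presumably reflects the cruder seventeenth-century estimate of the orbit’s size.

```python
# Back-of-envelope check of Romer's reasoning (my own sketch; the orbital
# diameter below is the modern value, not the figure available to Romer).
orbit_diameter_miles = 186e6     # ~2 AU, modern value
delay_seconds = 22 * 60          # Io's timing spread across Earth's orbit

speed = orbit_diameter_miles / delay_seconds
true_speed = 186_282             # miles per second

print(f"implied speed of light: {speed:,.0f} mi/s")
print(f"fraction of the true value: {speed / true_speed:.0%}")
```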

In addition, Tyson enlightens us with the exciting information we all want to know. Antimatter, for instance, annihilates on contact with normal matter, releasing tremendous amounts of energy. In Dan Brown’s Angels and Demons, a tiny vial of antimatter explodes with the violence of a nuclear bomb. But what if a star made of antimatter collided with our own Sun? How big would the blast be? According to Tyson in “Antimatter Matters,” the explosion would be frighteningly large:

If a single antistar annihilated with a single ordinary star, then the conversion of matter to gamma-ray energy would be swift and total. Two stars with masses similar to that of the Sun (each with about 10^57 particles) would be so luminous that the colliding system would temporarily outproduce all the energy of all the stars of a hundred million galaxies. (p. 106)
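A rough order-of-magnitude check of that claim (my own sketch, not Tyson’s calculation; the galaxy luminosity and the annihilation timescale below are ballpark assumptions of mine):

```python
# Order-of-magnitude check of the star/antistar annihilation claim.
# The galaxy luminosity and the timescale comparison are rough assumptions
# of mine, not figures from the book.
M_SUN = 2e30                    # kg
C = 3e8                         # m/s
GALAXY_LUMINOSITY = 1e37        # W, very roughly a large spiral galaxy

energy = 2 * M_SUN * C**2       # both stars converted entirely to radiation
target_power = 1e8 * GALAXY_LUMINOSITY   # a hundred million galaxies

print(f"total energy released: {energy:.1e} J")
print(f"outshines 1e8 galaxies if released faster than ~{energy / target_power:.0f} s")
```

The total energy works out to a few times 10^47 joules, so as long as the annihilation plays out over minutes rather than years, the system really would briefly outshine a hundred million galaxies.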

While this anthology comprises distinct essays divided into categories, it is still quite possible to read it like a normal book from start to finish if you are a science enthusiast.

However, given the sheer variety of topics, there are wide jumps and some overlap of subject material between essays that might alienate some readers. This was not too much of an issue for me, but I did find the lack of an overall thesis a bit strange, and it forced me to read the book in a different manner than most. For someone interested in a popular book on astrophysics that was written as a single book from the outset, I would highly recommend Michio Kaku’s Physics of the Impossible, which is more coherent and packs more punch than Death by Black Hole.

This is not to say that Death by Black Hole is without merit. It is one of the few books to explain not just the contents of scientific discoveries, but also the discovery process itself, which can oftentimes be more fascinating to learn about than the results. Neil deGrasse Tyson is one of the finest communicators of science in our time, and I always find his talks on YouTube fascinating. As an essay collection on science, Death by Black Hole is unmatched.

Why I Approve of Richard Dawkins’s “The God Delusion”

I have heard a variety of reports on this book, ranging from brilliant to demonic. As one who realizes the social and political importance of the secular movement in the years to come, I had to pick up The God Delusion by Richard Dawkins, to examine the book myself.

The-God-Delusion

This may be one of the most influential books for contemporary society. Contrary to my expectation, Dawkins’ overarching thesis is not a single argument, or even a set of arguments, against the existence of God (or gods). Though he does make many strongly supported biological arguments and includes many others that have been echoed over the centuries, the main point, I could tell, is not to provide other atheists with arguments against the existence of God. A plethora of such arguments can be found on the Internet, at your local library, in your classroom, or even in your own head.

The Special Treatment of Religion

The real point, which makes this book stand out from others on atheism and religion, is the argument that, whether religion is right or wrong, we as a society need to change our special treatment of religion.

There is an undeserved respect of religion in our culture. In daily life it is considered perfectly okay to argue about our favorite sports teams, our differences of taste in food and music, and even our political beliefs. But the moment religion is brought up, it suddenly becomes “rude” or “offensive” to disagree with a believer or to even slightly question his or her beliefs. This, of course, is prime hypocrisy, as many religions downright treat agnostics and atheists as subhuman or fools: “The fool hath said in his heart, ‘There is no God.'” (Psalm 14:1). Imagine the public outcry that would occur if, in some atheist meeting, the members called all religious believers “fools.” Yet when religious people call all atheists “fools,” it’s perfectly okay, because you have to respect their religious beliefs. I suppose when religious people call blacks or women inferior, you’re supposed to respect that too? Does the religiosity of a belief make it immune to criticism?

Dawkins argues that the discussion of religion, like any other topic, should not be taboo, and that when a religious person makes an absurd proclamation (all 3 examples in the last half-year), you have every right in the world to criticize it, and moreover you should be able to criticize it without ever having to worry about “offending” them or their religion or anyone else’s religion.

Christianity and Islam

While Dawkins primarily targets Christianity, since it is the dominant religion in Western culture, he also mentions the even more undeserved respect for Islam that arises simply because it is a minority religion in places like the US and the UK. In response to a Danish newspaper’s 2006 cartoons satirizing the Islamic prophet Muhammad, demonstrators burned Danish flags, trashed embassies and consulates, boycotted Danish products, physically threatened Westerners, burned Christian churches (with no Danish or European connections at all), and killed nine people at the Italian consulate in Benghazi. This series of events would be tragically repeated in 2012. From Dawkins, on the 2006 incident:

A bounty of $1 million was placed on the head of ‘the Danish cartoonist’ by a Pakistani imam – who was apparently unaware that there were twelve different Danish cartoonists, and almost certainly unaware that the three most offensive pictures had never appeared in Denmark at all (and, by the way, where was that million going to come from?). In Nigeria, Muslim protesters against the Danish cartoons burned down several Christian churches, and used machetes to attack and kill (black Nigerian) Christians in the streets. One Christian was put inside a rubber tyre, doused with petrol and set alight. Demonstrators were photographed in Britain bearing banners saying ‘Slay those who insult Islam’, ‘Butcher those who mock Islam’, ‘Europe you will pay: Demolition is on its way’ and, apparently without irony, ‘Behead those who say Islam is a violent religion’. Fortunately, our political leaders were on hand to remind us that Islam is a religion of peace and mercy. (p. 47-48)

Dawkins doesn’t explicitly say it, but I think the message is pretty clear: he sympathizes with the Christians in this larger religious conflict. Similar sentiments are echoed by Sam Harris, who has stated, quite explicitly, that of these two Abrahamic religions, Christianity is the lesser of the two evils.

Again, the political refusal to criticize the response of Islamic extremists demonstrates the undeserved respect of religion in our society. Politicians, always fearful of losing their constituency, feel too afraid to denounce such violence. As a result, we let it go on. Until we as a society allow ourselves to discuss religion openly, we will remain at the mercy of its extremists, who thrive on the inability of our leaders to take meaningful action.

Faith is Not a Virtue

Another form of undeserved respect we give to religion is accepting its dogma that faith is a virtue. Faith, by definition, is believing in something with insufficient evidence, and oftentimes in practice, it means believing in something without a shred of evidence. Dawkins argues that faith is in fact the opposite of virtuous:

…what is really pernicious is the practice of teaching children that faith itself is a virtue. Faith is an evil precisely because it requires no justification and brooks no argument. Teaching children that unquestioned faith is a virtue primes them—given certain other ingredients that are not too hard to come by—to grow up into potentially lethal weapons for future jihads or crusades. Immunized against fear by the promise of a martyr’s paradise, the authentic faith-head deserves a high place in the history of armaments, alongside the longbow, the warhorse, the tank and the cluster bomb. If children were taught to question and think through their beliefs, instead of being taught the superior virtue of faith without question, it is a good bet that there would be no suicide bombers. Suicide bombers do what they do because they really believe what they were taught in their religious schools: that duty to God exceeds all other priorities, and that martyrdom in his service will be rewarded in the gardens of Paradise. And they were taught that lesson not necessarily by extremist fanatics but by decent, gentle, mainstream religious instructors, who lined them up in their madrasas, sitting in rows, rhythmically nodding their innocent little heads up and down while they learned every word of the holy book like demented parrots. Faith can be very, very dangerous, and deliberately to implant it into the vulnerable mind of an innocent child is a grievous wrong. (p. 347-348)

This is an important point. What can be more dangerous than people who have the capacity to do great harm, who have been taught that doing so is justified, but who lack the capacity to question their own beliefs? What is more dangerous than someone who destroys the lives of others while believing without question that he is doing the right thing? Intriguingly, Dawkins also brings up the fact that many extremists were not raised by extremists, but by well-meaning parents, or perhaps even a well-meaning community, and that their individual determination simply went too far. This is an important point for “liberal” and “moderate” religious people to consider: it is the majority of otherwise non-fundamentalist believers who enable the extremists.

Group Selection

In addition to the social commentary, which to me is the most important point of this book, Dawkins uses his expertise as an evolutionary biologist to explain the origin and early persistence of religion in some of the middle chapters. The main thesis here is that evolution early on favored brains that would unquestioningly accept what their parents or elders said. For instance, the child who obeyed “Don’t punch a sleeping bear” probably had a higher chance of survival than the one who didn’t. Hence, the unquestioning acceptance of dogmatic belief, and the passing on of that belief, could actually be hardwired in our brains.

But, as Dawkins points out, it is not that simple. If an elder said “Don’t punch a sleeping bear, and every month we must sacrifice a goat,” a child cannot work out that one statement is sensible and the other absurd, and hence accepts both. Since the advice works (or at least seems to work), the child later passes the knowledge on to his or her own children, and the cycle repeats. The useless monthly goat sacrifice is a freeloader carried along to the next generation without helping it at all. This is not unlike how many useless DNA mutations spread through genetic drift.

Some religious ideas survive because they are compatible with other memes that are already numerous in the meme pool—as part of a memeplex. (p. 231)

After all, Richard Dawkins is the originator of the term “meme.”

Overall

Indeed, religion has been unjustly immune to criticism for far too long. Even by claiming that we should be allowed to openly discuss religion, Dawkins has been denounced as offensive to religious belief, when the unquestioning belief itself is what should offend a modern society. Many say that it is the extremists who are harmful and that most moderates don’t do any harm, and while it is true that moderates cause no damage directly, the religious moderates and even liberals comprise the enormous base of support that enables the extremists. When 46% of the United States, the most technologically and scientifically advanced country in the world, believes in creationism and 73% of our population is Christian, it is difficult to criticize the democratically elected Rep. Paul Broun’s statement that “All that stuff I was taught about evolution and embryology and the Big Bang Theory, all that is lies straight from the pit of Hell.” (He sits on the House Committee on Science, Space, and Technology.) Instead, many religious “moderates” and “liberals” do not denounce Broun’s ideology at all; they merely state that he is interpreting the Bible too literally, as if they know how to interpret the Bible better than he does. They play this interpretation game instead of dealing with the actual problem, the religion itself, because in the end they are on the same side as Rep. Broun. Until we address this root cause, we cannot move forward as a society.

The God Delusion, published in 2006, is likely to be the most important book of its decade. The timing is especially significant because the 2000s was the decade in which the Internet engulfed everything and people were brought closer together through social networks. With the increasing interconnection and intercultural friction that have arisen, it is more important than ever that we stand by reason and not by superstition, that we stand by tolerance and not by dogma, and that we stand by progress towards the future and not by ancient myths of the past.

Free Will

When I choose a book to read, am I really making a choice, or do the events that led up to my choosing a book already determine which book I am about to read? According to the book that I ended up reading, Free Will (2012) by neuroscientist Sam Harris, the answer is the latter.

Free Will

Sam Harris argues that free will is simply an illusion. Our decisions arise from background causes that our conscious mind often does not notice. For instance, he asks: if the presence of a brain tumor in a criminal affects our perception of his crime, then what about other neurological disorders? And even non-neurological ones?

If a man’s choice to shoot the president is determined by a certain pattern of neural activity, which is in turn the product of prior causes—perhaps an unfortunate coincidence of bad genes, an unhappy childhood, lost sleep, and cosmic-ray bombardment—what can it possibly mean to say that his will is “free”? (3)

In fact, the strength of this book is that its argument is based on well-researched neuroscience. Granted, Harris brings up the more speculative conjectures of philosophy, but only after discussing brain research at length.

The physiologist Benjamin Libet famously used EEG to show that activity in the brain’s motor cortex can be detected some 300 milliseconds before a person feels that he has decided to move…. More recently, direct recordings from the cortex showed that the activity of merely 256 neurons was sufficient to predict with 80 percent accuracy a person’s decision to move 700 milliseconds before he became aware of it. (8)

In fact, the science seems very well established, and it is public perception that needs to catch up. Before reading this book and subsequently researching what neuroscientists and philosophers think of free will and determinism, I expected there to be serious debate, with the sides roughly equal in size. But as it turns out, only 14.9% of philosophers surveyed did not lean toward one of compatibilism, libertarianism, or no free will. The majority of them actually know what is going on. Neuroscience is even more strongly against free will, as its experiments directly contradict it.

It reminds me of a post I wrote called On Giving Too Much Legitimacy to the Inferior Position, where I argued that on certain issues, even pointing out that there is “debate” over something can distract or even draw people away from the truth. This is a case in point: I had always thought I was in the minority when I argued for determinism instead of free will, but it turns out I was in the academic majority.

In addition, as an atheist and humanist, I must applaud Harris for the following passage:

Despite our attachment to the notion of free will, most of us know that disorders of the brain can trump the best intentions of the mind. This shift in understanding represents progress toward a deeper, more consistent, and more compassionate view of our common humanity—and we should note that this is progress away from religious metaphysics. Few concepts have offered greater scope for human cruelty than the idea of an immortal soul that stands independent of all material influences, ranging from genes to economic systems. Within a religious framework, a belief in free will supports the notion of sin—which seems to justify not only harsh punishment in this life but eternal punishment in the next. And yet, ironically, one of the fears attending our progress in science is that a more complete understanding of ourselves will dehumanize us. (55)

Indeed, the concept of free will is closely related to religion and its morally abhorrent idea of sin. Dispelling mythological concepts such as the soul or sin is a necessary step in the advancement of the human species. And at some point, free will too must go.

Talent Is Overrated

“Talent” is a word tossed around all too often, whether for top musicians or businessmen, or even just for a person who creates popular YouTube videos. The idea of talent is in nearly every case taken for granted. As a young member of a very supportive family and community, I heard the word applied to me many times. But is talent a correct, or even useful, explanation for high-level performance?

Talent Is Overrated

I recently read a very intriguing book by Geoff Colvin. It was really a lucky buy—I was actually reading through reviews of Josh Waitzkin’s The Art of Learning, when the ever-so-omniscient Amazon Recommendations pointed me to a bizarre and blatantly absurd statement: Talent is Overrated.

With a plethora of examples, data, accumulated research, and forceful writing, Colvin argues convincingly that great performance in just about every field is best explained not by reference to the mysterious force known as talent, but by the sheer amount and direction of deliberate practice.

My Personal Experience

First, a line from Colvin (193):

Their parents made them practice, as parents have always done, though it’s interesting to note that in these cases, when push came to shove and parents had to make a direct threat, it frequently played off the student’s intrinsic motivators. So it wasn’t “If you don’t do your piano practice we’ll cancel your allowance,” but rather “we’ll sell the piano.”  Not “If you don’t go to swimming practice you’ll be grounded Saturday night,” but rather “we’ll take you off the team.” If the child truly didn’t care about the piano or swimming, the threats wouldn’t have worked.

I was one of those kids who, regarding the piano, was totally immune to such a threat. As I wrote earlier, I absolutely dreaded playing the piano, and would have loved to see the piano disappear and find a bunch of cash in its place. But what I lacked in interest in the piano I made up for with my interest in chess. From 2003 to 2010, I competed in more than 70 rated chess tournaments. Looking back at the distribution of tournaments, I found that the majority of them occurred between 2003 and 2006, with one resurgence in 2008 [data]. It would be accurate to say that my tournament frequency was closely correlated with how much time I spent practicing the game outside of tournaments. As if to confirm Colvin’s thesis, here are my regular and quick rating graphs:

chess-rating-graphs

When the frequency of tournaments, and thus training, increased, my rating climbed. And when the frequency of tournaments and training decreased, my rating stagnated or declined. This seems to support the deliberate practice model argued in Colvin’s book. My performance in a given period seemed to be determined by the amount of training in that same period.

But what about compared to others? I am hardly an expert player, but my very first rating after my first tournament, 1372, was in the 96th-97th percentile of scholastic players at the time. By contrast, the current US chess champion Hikaru Nakamura, whose current USCF rating is a whopping 2834, started at a provisional rating of 684, an unimpressive statistic. However, he has played in 439 rated events over a period of 17 years, which is a hell of a lot more effort than I had ever thought about spending on the game. Thus even when you have an “advantage,” such as having a starting rating of 1372 versus 684, thinking of it in terms of talent is useless. If you do not follow it up with the necessary amount of work, the advantage will assuredly disappear.

There is a third point that truly puts the nail in the coffin of the talent model. In a two-year span from 2006 to 2008, my rating stopped improving, plateauing in the 1700s. Excuses aside, I simply didn’t practice the game much. But one thing I think happened is what Josh Waitzkin described, as quoted by Colvin (197):

The most gifted kids in chess fall apart. They are told that they are winners, and when they inevitably run into a wall, they get stuck and think they must be losers.

I don’t think it takes a gifted kid to run into the wall and get stuck (the 1372 initial rating was actually in part due to luck, as my first few tournaments were counted out of order, and a tournament that I had done really well in happened to be the first one counted). For those two plateau years, I did feel the way Waitzkin forewarned. I thought the high initial rating meant something special, i.e. talent, and that the 1700 plateau meant I was doomed. Thinking in terms of talent mentally condemned me to stagnation. Even though I was still fairly high rated in my age group, I stopped practicing and reading as much, and as a result did not prepare myself adequately for tournament events. This caused my rating to drop.

How to Be a World-Class Performer

Colvin’s thesis works for far more than just chess. He applies it to the violin, piano, football, business, investment, management, art, teamwork, and just about anything else, all while citing tremendous amounts of evidence for his claims. For music, the obvious counterexample is Mozart, yet early in the book Colvin disposes of this myth, as well as that of Tiger Woods. Mozart, for instance, had many years of intense, expert training starting at an early age, and Tiger Woods swung his first club at the age of seven… months, also trained by his father.

Another result of years of deliberate practice is an expert’s ability to see complex patterns that would completely elude an average person. A professional tennis player can return a serve traveling at a speed so high that a normal human should not even have time to react. Yet top players’ raw reaction times are normal; they don’t watch the ball, they watch their opponent’s body movements, and know approximately where the serve is going to land (or whether it will fault) before the racket even hits the ball. Similarly, a top stock trader can see signs that the average trader does not even consider relevant, and a top manager spots the critical signals sooner than an average one. A master chess player can memorize an entire chess position in seconds and reproduce it perfectly, while the average person can recall only five or seven pieces. Most notably, this comes not from better general memory, but from extensive training in familiar positions and patterns, so that the master reads a position in words rather than letters.

I would most certainly recommend this book to anyone. It breaks the shackle of “talent,” which, although a warm and comforting hope, is no more than that: a beloved superstition with little evidence, one that discourages many people from even attempting something because they believe they lack the “talent” or “divine spark” for it. But as repeatedly demonstrated, the backgrounds of top performers give little or no indication of talent early on; rather, what is common to all of them is an immense amount of training and deliberate practice. Perhaps this is the even more fascinating hope, that the world is within reach of everyone.

Do Androids Dream of Electric Sheep?

Do Androids Dream of Electric Sheep?

A great work of imagination, with some very intriguing questions. The story is set in a post-apocalyptic future: owning animals is a sign of social status, though many animals are fake, i.e., electric; a radioactive dust cloud envelops Earth, causing many to emigrate; and bounty hunters find and “retire” illegal androids. The novel focuses on a single day in the life of one bounty hunter, Rick Deckard.

The Will to Live

Yet, the dark fire waned; the life force oozed out of her, as he had so often witnessed before with other androids. The classic resignation. Mechanical, intellectual acceptance of that which a genuine organism—with two billion years of the pressure to live and evolve hagriding it—could never have reconciled itself to.

An android coldly accepts death. It is programmed. A human fights to live. It is evolved.

But does this alone mean an android is less alive than a human? Is the will to live a prerequisite for life? It seems not. Androids, we learn, are capable of committing suicide by holding their breath. But human beings, too, sacrifice themselves when the cause is sufficient. In Orson Scott Card’s Ender’s Shadow (a parallel novel to Ender’s Game), for example, we learn that the only reason Ender is able to defeat the Buggers is that the Bugger queen thought humans, as sentient beings, were incapable of self-sacrifice. His final attack was a mass sacrifice.

In Do Androids Dream of Electric Sheep?, however, the androids die for different reasons than humans do. Some androids, on learning they are to be retired, give that “mechanical, intellectual acceptance.” They don’t fight back or argue for the truth. It would be analogous to a criminal being sentenced to death, yet rarely does a criminal immediately accept death.

“Will you kill me in a way that won’t hurt? I mean, do it carefully. If I don’t fight; okay? I promise not to fight. Do you agree?”

“I can’t stand the way you androids give up.”

Artificial Intelligence, and the Turing Test

The Turing Test is an abstract, hypothetical test of artificial intelligence. If a computer can successfully pass itself off as a human, it passes the test. If not, it fails.

In Do Androids Dream of Electric Sheep?, two such tests exist. The more prominent is the Voigt-Kampff Empathy Test. The tester asks the subject various questions designed to draw emotional responses in order to determine whether the subject is an android. More specifically, the test measures response times in the eye: a human responds much faster to emotional stimuli than an android does. This is how Rick determines whether Rachael is an android.

The other is the Boneli Reflex-Arc Test, which is only mentioned, not used. According to another bounty hunter, Phil Resch, this test is “simpler” in that it does not require a tester to ask questions. It is fairly automatic and tests the inner biology of the subject. In a way, it is almost cheating, and it is not truly a Turing Test.

Intelligence

Right now, humans are still far more intelligent than computers. Douglas Hofstadter in Gödel, Escher, Bach points out that we humans are able to “jump out” of our thinking, thus starting a process of meta-thinking. For instance, we might be in the middle of calculating a 3-digit by 3-digit multiplication problem in our heads, and halfway through, we suddenly realize, why don’t we just search the answer on Google? A computer calculating this problem, however, would never (at least our current generation of computers) think of doing that; it would simply go through the calculation. But at least it can do it in a split second.

Perhaps a more relevant example is the game of chess. A human grandmaster can look at a position, pick out three or four moves that seem good, analyze a few moves deep into one line, and then, based on intuition, decide that the line is not worth analyzing any further and switch to a different line. The computer isn’t so smart. It has to go through EVERY possible move in the position, calculating EVERY possible reply to that move, and then EVERY possible reply to that too, and so on. The number of positions to calculate rises exponentially with each step, and eventually the computer is forced by its programming to stop. The computer, when analyzing an unpromising line, doesn’t say, “Oh, this looks bad, I won’t analyze it any more.” Instead, it does what it has been programmed to do. The human, by contrast, can jump out of the current thinking process (analyzing one line) into a higher level of thought (this line is bad, so I’ll look at a different one).

In the same chess example, humans can jump out even further. Suppose the game lasts very long; the human might need to go to the restroom at some point, and his subconscious, which machines don’t yet have, will tell him to do something other than stare at the chessboard. What if a fire starts? Our current machine won’t even notice; it’ll just continue analyzing the position while the human player is long gone. The human has jumped completely out of chess thought. The computer can’t.
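The brute-force search described above can be sketched in a few lines. This is a generic sketch of mine, not a real chess engine: legal_moves, apply, and evaluate stand in for game-specific logic, and the point is simply that nothing in the procedure lets it decide mid-search that a line is not worth pursuing.

```python
# Minimal sketch of the exhaustive game-tree search described above.
# Not a real chess engine: `legal_moves`, `apply`, and `evaluate` are
# placeholders for game-specific logic.
def search(position, depth, maximizing, legal_moves, apply, evaluate):
    """Plain minimax: examines EVERY move and EVERY reply down to `depth`,
    with no notion of abandoning an unpromising line early."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)
    results = [
        search(apply(position, m), depth - 1, not maximizing,
               legal_moves, apply, evaluate)
        for m in moves
    ]
    return max(results) if maximizing else min(results)

# With branching factor b and depth d, the search visits on the order of
# b**d positions -- the exponential blow-up that forces it to stop.
```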

In Do Androids Dream of Electric Sheep?, the androids are often distinguishable from humans by this trait: they cannot think at higher levels as humans can. Only once that I remember does an android demonstrate this human-like feat (correct me if I’m wrong):

“When I used the word ‘human,’” Roy Baty said to Pris, “I used the wrong word.”

Roy Baty realizes that by using the word “human,” he has betrayed the fact that he is an android, and he catches himself. But other than that, androids seem to be characterized by their straightforward, mechanical thinking.

What is Deckard?

Is Rick Deckard himself an android? We are never told, but I strongly suspect he is. At one point, he asks himself a question from the Voigt-Kampff test and tells the other bounty hunter, Phil Resch, to watch the degree of the emotional response but not his reaction time; and as we know, Rick earlier measured reaction time on Rachael to determine whether she was an android. Plus, Rick does not show much emotion in the book. The androids he retires seem more lively than he is.

Blade Runner

Blade Runner

Blade Runner (1982) is the film adaptation of Do Androids Dream of Electric Sheep?. It’s brilliant. 9/10.

It keeps the spirit of the book but changes much of the story, completely leaving out some themes. That was necessary, though, and the cinematography is excellent: the filmmakers have created a convincing new world. A screenplay isn’t supposed to be the same thing as its source material (I’m reading Syd Field’s Screenplay right now).

Blade Runner makes the question of whether Rick is a human or an android even more prominent. It does so via a unicorn that Rick dreams of; at the end, he finds an origami unicorn in front of his door. If he were an android with implanted memories, that would explain how somebody else knew about his unicorn dream.

I actually watched the film first. The book was Cornell University’s summer reading assignment, and this is, I believe, the first time I have ever read a science fiction book for school. Anyway, both the book and the film are outstanding.