An Atheist’s View on Morality

This is in response to my previous article, “Ethical Dilemmas and Human Morality.” In that article I posed several ethical dilemmas and asked you, the reader, what you would do in each case. At the end, I promised to explain my own moral principles as well. So, this post is my own view of ethics and morality, from an atheist.

What Is the End Goal?

First of all, what is the goal of morals? “To create a better society” is a satisfactory answer for many, but what then? If a nearly perfect society were to exist some time in the future, would morals still matter? My answer is yes.

I am optimistic about the future of humanity, and I hope there will be a time when humans can peacefully explore the stars, the galaxies, and the universe. When we reach that stage of civilization, we will be long past the petty conflicts that shape morals today.

Thus, a longer-term goal is needed. I propose the following primary objective:

  • To preserve life in the universe.

There is no purely logical reason to put this directive above all others. However, if we start with this assertion, that a universe with life is better than a universe with no life, then many moral questions can be answered in a systematic way.

A Moral Hierarchy

A systematic way to do this is to arrange morals into a hierarchy:

Levels
6. Preservation of Life
5. Preservation of Intelligent Species
4. Preservation of Diversity of Species
3. Preservation of Well-Being of the Species
2. Preservation of Self
1. The Following of Social Norms/Cultures/Religion/Laws
0. Natural Instinct and Personal Wants

To read this, take any action and start at the bottom, checking whether it fits the statement at each level. An action is morally justified if it fits a given level and, to the best of your knowledge, does not contradict any higher level. Conversely, an action is morally wrong if it fails to fit the highest level that you are knowledgeable of.
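
Since this reads like a decision procedure, it can be sketched in a few lines of code. The snippet below is a minimal illustration of the rule just described, not part of the original hierarchy: the level names come from the list above, while the fits and contradicts inputs are hypothetical judgments that the person evaluating the action would supply from their own knowledge.

```python
# Levels 0-6 of the moral hierarchy, bottom to top, as listed above.
LEVELS = [
    "Natural Instinct and Personal Wants",        # 0
    "Social Norms/Cultures/Religion/Laws",        # 1
    "Preservation of Self",                       # 2
    "Preservation of Well-Being of the Species",  # 3
    "Preservation of Diversity of Species",       # 4
    "Preservation of Intelligent Species",        # 5
    "Preservation of Life",                       # 6
]

def judge(fits, contradicts):
    """Judge an action from two lists of booleans, one entry per level.

    fits[i]: the action fits the statement at level i.
    contradicts[i]: to the best of the judge's knowledge, the action
    contradicts level i.
    """
    # Start at the bottom and find the highest level the action fits.
    highest_fit = max((i for i, fit in enumerate(fits) if fit), default=None)
    if highest_fit is None:
        return "morally wrong"
    # The action must not contradict any level above the one it fits.
    if any(contradicts[i] for i in range(highest_fit + 1, len(LEVELS))):
        return "morally wrong"
    return "morally justified"

# Example 2 from the Examples section: stealing $1,000 fits Level 0
# but contradicts Level 1 (the law), so it is morally wrong.
fits        = [True, False, False, False, False, False, False]
contradicts = [False, True, False, False, False, False, False]
print(judge(fits, contradicts))  # -> morally wrong
```

The hand-worked examples below trace exactly this logic.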

Examples

Perhaps this hierarchy is a bit confusing, so I will give a few examples.

Example 1: You see a dollar bill on the ground and nobody else is around. Is it right or wrong to take the dollar bill?

  • According to Level 0, you are allowed to take the dollar bill. You go up one level, to Level 1, and the action is still allowed by society. You don’t believe it will affect any of the higher levels. So, the decision to take the bill is morally justified.

Example 2: Someone has $1,000. Is it morally right or wrong to steal the money from this person?

  • The action fits Level 0, but it fails at Level 1, as it is against the law. You do not believe it will affect any higher level. Since Level 1 is the highest relevant level to your knowledge, the action is morally wrong.

Example 3: Thousands of nuclear weapons around the world are about to explode, and the only way to stop them is to extract a certain code from a captured terrorist. However, the terrorist will not speak. Is it morally justified to torture the terrorist?

  • Torture is against social norms and the law, so the action fails at Level 1. But, Level 3 and Level 4 are very relevant, as the large number of nuclear detonations would kill billions, collapse ecosystems, and cause catastrophic changes to the environment. It would not only threaten human civilization (Level 3), but also wipe out many, many species (Level 4). It could even wipe out humans (Level 5). Thus, to preserve Level 3, Level 4, and Level 5, the action is morally justified.

Example 4: An alien species is about to create a super-massive black hole that will devour millions of galaxies and eventually the whole universe. The only way to prevent this is to preemptively wipe out this alien species.

  • Killing the alien species is against the law, so the action fails at Level 1. Even worse, it would kill an entire species, an act of xenocide, so it fails at Level 4. However, it satisfies the highest objective, Level 6, as it prevents a case where all life in the universe could be destroyed. So, wiping out this alien species is morally justified.

Reasoning

The reasoning for each level is as follows:

  • Level 1 overrides Level 0: A society most likely has a better chance to function with rules than without them. This gives it a higher chance to advance.
  • Level 2 overrides Level 1: An individual should be allowed to preserve their own life regardless of what other people assert, as long as the individual believes the necessary actions do not contradict any of the higher levels. This is because an individual may discover a truth that contradicts the rest of society.
  • Level 3 overrides Level 2: It is justified for an individual to sacrifice one’s own life to improve the quality of living for the species. This increases the chance that the society will be able to preserve itself.
  • Level 4 overrides Level 3: It is justified to lower the quality of living of a species to preserve the diversity of life, i.e., the number of species. This way, if some catastrophe wipes out one species, there are a large number of species remaining to preserve life.
  • Level 5 overrides Level 4: An ecosystem has a better chance to survive if the most intelligent and advanced species is alive. For instance, if a massive asteroid is on a collision path with Earth, it will require Space Age technology (achieved only by humans) to preserve life on Earth, so humans are more important to Earth’s ecosystem than any other species.
  • Level 6 overrides Level 5: It is better for a technologically advanced species to sacrifice itself if doing so allows life to continue in a universe where it would otherwise be destroyed.

The Role of Knowledge

This hierarchy of morality is strange in that the determination of whether an action is morally justified depends partially on the knowledge of the individual.

For example, suppose someone was brainwashed in his youth by a society or religion, and is led by it to an action that contradicts one of the higher levels. On Earth, for instance, it is common for many of the popular religions to contradict Level 2: Preservation of Self, and Level 3: Preservation of Well-Being of the Species. Level 3 is particularly relevant in today’s age, when the understanding gained from stem cell research, particle accelerators, and evolutionary biology gives life on Earth a much higher chance to survive potential global or cosmic catastrophes.

When someone who is brainwashed by a religion commits an act that contradicts Level 2 or 3, then according to this moral system, the person is not to blame—the fault is with the religion, and with the society for allowing that particular religion to be so pervasive.

Who Exactly Is to Blame?

Imagine a massive asteroid that will crash into Earth in the year 2050.

At the rate of advancement of our current technology, with a few years of advance warning, we as a species would be able to send multiple rockets armed with nuclear weapons to knock the asteroid off course and save not only our lives, but the lives of all species on Earth, and all of Earth’s children. But say religion had been more prominent and had delayed the onset of the Renaissance and the Scientific Revolution by just 100 years. Then when the asteroid hit, we would have only what we know as 1950 technology, and likely all of humanity, and all life on Earth, would be destroyed. Surely this would not be the fault of any person, but the fault of religion.


The corollary to this question is, What if an asteroid had crashed into Earth in the year 1850? There would have been absolutely nothing we humans could have done at that time to stop it. If that were the case, then we could not blame anyone in that time period. Instead, we would blame the Dark Ages, for practically halting the advancement of technology for a thousand years.

Ethics in Religion

If we value life, and if we want life to prosper in the universe, then humanity as a whole needs to adopt a new form of ethics. Maybe not the one above, but it must embrace one that is based on the existence and diversity of life, not based on myths that were invented in an ancient past.

This is why, among religions, a tolerant religion such as Buddhism is better for the future of humanity than a heavily indoctrinated one such as Christianity or Islam. Religions of the latter category only claim to be “tolerant,” but in practice are often not. See Galileo, the Salem witch trials, or the recent anti-free-speech protests in the Middle East. These kinds of religions are fundamentally resistant to change, whereas truly tolerant religions are always open to it.

If science proves some belief of Buddhism wrong, then Buddhism will have to change.

-Dalai Lama

All the world’s major religions, with their emphasis on love, compassion, patience, tolerance, and forgiveness can and do promote inner values. But the reality of the world today is that grounding ethics in religion is no longer adequate. This is why I am increasingly convinced that the time has come to find a way of thinking about spirituality and ethics beyond religion altogether.

-Dalai Lama

Sure, the less tolerant religions may teach values they consider to be good, but for life to survive, sometimes the rules must adapt. Say a powerful alien species abducts you and gives you two options: (1) kill a fellow human, and the aliens will befriend the human race and help us advance, or (2) refuse to kill, and the aliens will destroy the entire Earth. You could blindly follow “Thou shalt not kill,” as in option (2), and let all the millions of species on Earth die, or you could reason that the survival of millions of species, including your own, is more valuable than any single member of the species, and instead advance life, as in option (1).

Some Concluding Remarks

To preserve life and to let it flourish through the stars, and eventually throughout the universe, we must use an ethics system that adapts to the given situation, not one that proclaims to be absolute and everlasting.

Some nations, particularly many of those in Europe, have already realized this. When the United States finally realizes this as well—and hopefully before it’s too late—the rest of humanity will follow, and then finally, the human species will be one of progress, discovery, and peace.

Utopia vs Dystopia: A Matter of Semantics?

After witnessing the dystopian societies of 1984, Brave New World, and The Hunger Games, I wondered to myself, what would a Utopia really be? What differentiates a Utopia from a Dystopia? Is there always a fine line?

If you have learned of a Utopia as a perfect society, you might naively think that a Dystopia would be the opposite, or a failed society.

Yet this could not be further from the truth. The societies of 1984, Brave New World, and The Hunger Games are stable, successful, self-sustaining worlds, yet they are considered Dystopias. None of the three societies is a failure. They merely contain different moral systems and social classes from those we are used to today. Yet they are considered repulsive, to be avoided at all costs.

1984

In 1984, the world is run by three superpowers locked in constant warfare. This way, since each individual power is always at war, each government can maintain permanent martial law and rule with an iron fist. Any dissent is dealt with ruthlessly, as seen in the plot. The system works. It is, I daresay, perfect.

Brave New World

In Brave New World, the government does not rule with an iron fist, but rather by providing so many distractions and recreations to the common people (analogous to TV or drugs in our world) that the average person is too amused to worry about any oppression by the government. There is a propagandized doctrine of happiness: there are no problems as long as everyone is happy. The work is done by genetically engineered stupid people (the Epsilons) who serve as slaves to the other castes. Indeed, the way it runs, this society can be thought of as perfect as well.

The Hunger Games

The only major difference in the presentation of the Dystopia in The Hunger Games is that it tells an overly dramatic story of a rebel going through an elaborate system (the games themselves) to rebel. It is also the only one of the three that presents any hope to the rebels. In 1984 and Brave New World, by contrast, the government wins in the end.

In this respect, the government in The Hunger Games is nowhere near as successful as those in 1984 and Brave New World. Despite running the games for 74 years, the government faces decadence and imperfection, which didn’t seem to affect the other two Dystopias. So in a way, the society in The Hunger Games is not a true Dystopia—it does not have lasting power, so it is not perfect. In 1984, the government could turn people against each other, and in Brave New World, everyone is happy, so no one has reason to rebel. In The Hunger Games, however, people are unhappy, and these unhappy people unite, posing a real threat to the government.

So the society in The Hunger Games is more akin to a short-lived Middle Eastern or South American state undergoing rapid regime changes, as a large amount of discontent exists. By contrast, the societies in 1984 and Brave New World are more like the former Soviet Union or the current United States: the people are either squashed in rebellion or too mesmerized to rebel.

Where does a Utopia fit in all of this? A Utopia is supposed to be perfect, but how are the societies of 1984 and Brave New World not perfect? Sure, in 1984, the main character is tortured, but you could argue that if he had just listened to the government and done what it asked, he would not have been hurt at all. Indeed, when he is brainwashed at the end, the society seems perfect to him.

And if you are a thinking human being in Brave New World, there is little reason you would want anything else from society. You are provided with all the joy you could possibly want. Sure, the lower-class Epsilons are treated unfairly, but they are made dumb biologically. They might not have consciousness as we have it. They are basically machines.

You could say that in a true Utopia, everyone would be treated fairly. But how can a society actually function if that were the case? There has to be someone, or a group of people, in charge. Even in Plato’s Republic, which contains the first proposal of a utopian society, there are social classes with clearly defined rulers.

And even with powerful and rational people at the top, this does not create a Utopia. In Watchmen, set in the Cold War, the titular superheroes try to save humanity, but the smartest and most rational of them finds, to most people’s shock, that the only way to save humanity from nuclear destruction is to initiate a massive attack on the whole world, in order to unify the United States and the USSR. While this character is considered the main antagonist because he killed millions of people, he is, viewed from a purely rational perspective, the hero of humanity. And from this perspective, he took steps toward creating a Utopia, not a Dystopia.

Since these moral issues are so subjective, the line between a Utopia and a Dystopia, and the definition of perfect, are subjective as well, as shown in all of the examples above. Is the distinction between a Utopia and a Dystopia, then, any more than a matter of semantics? What are your thoughts?

My Mirror Behavior

One of the coolest things about college is all the new and fascinating people you meet. This probably has a great deal to do with the diversity of American universities. The Cornell freshman class, for instance, has students from every state except Montana. Obviously a pretty important state, right?

It may disappoint you, however, that this article is not about any new or fascinating people whom I have met at Cornell, though I certainly know many of them. It is also not about fascinating people whom I have known for a long time in Austin. In fact, this post is about a person who is, for most readers of this blog, neither new nor fascinating.

It is about me.

I noticed the strangest thing yesterday about myself. Well, maybe not strange, but certainly something I hadn’t noticed before. It’s that in any conversation, I act as a mirror. That is, when I chat with someone, I essentially acquire the attributes of that person.

It starts with the topic of the conversation. Yesterday, after my history class, I had a long, profound discussion with someone who is quite a new and fascinating person: Elliot Casparian. It started when he talked about our class as not learning history but learning about history: that the course was heavy on how history was done, that it was almost like the philosophy of history. We talked for perhaps an hour, including lunch. We ended up covering the following topics (don’t ask me what the transitions were; I don’t remember):

  • History and meta-history
  • The surprisingly advanced state of modern-day technology, including medical, space, electronic, and acoustic
  • The Internet and IPv4 exhaustion
  • Existence and the meaning of life: does it matter?
  • The multiverse, many-worlds, and simulation theories
  • Theoretical physics vs philosophy
  • Knowing and certainty
  • Mathematical proofs and the incompleteness theorem

The underlying thread is that every one of these topics has some philosophical undertone that we brought up in conversation. So unconsciously, we took the initial philosophical topic and ran with it as far as we could. How this relates to my mirror aspect is that Elliot started the conversation, and I adjusted myself to philosophy right away.

Whether a conversation I have lasts ten seconds or an hour, and whether it is about philosophy, movies, books, computers, or whatever, I seem to always mirror the topic of discussion.

But that isn’t the interesting part.

What’s interesting is that the mirroring occurs not only in the content of what I say in a conversation, but in the form as well.

I noticed this at first in online chats. If the person I’m chatting with tends to write proper English, i.e., capitalizing the first letter of each sentence and ending sentences with periods, I tend to do the same (though there are exceptions).

if on the other hand the other person uses a more “normal” internet chat style, i find that i do the same

Also, if the other person uses CAPS a lot, I tend to use it as well, though it’s usually something like LOL, never an ANGRY MESSAGE.

It’s easy to notice online. But I found this occurs in real conversations just as often. For instance, I almost never curse. But if the people around me use profane language, I have a much higher chance of doing so too.

If someone is speaking very dryly and/or using elevated language, I usually do the same. If I am speaking to a person who is very argumentative, I tend to argue as well.

If people are being clever, I try to make witty retorts. If people are making puns, I go on a pun rampage. And of course, if people are being sarcastic, well, I’m already pretty sarcastic, so that makes it even worse (or better, as the case may be).

And there’s a lot more about speech that I can’t quite put into words—you know, all the subtle things that go on in a conversation. So I concluded that in conversation, I’m basically like a mirror.

But wait, there’s more?

I find that my mirror behavior doesn’t end with conversation. It extends to my daily life. And again, it’s a lot of those subtle things that I would normally never notice but noticed only recently.

If we’re seated, how I sit is largely determined by how other people are sitting. In conversations, my gestures are different depending on whom I’m making gestures to.

My general demeanor is different, as I have found, around different people.

Exceptions

When I started thinking about this mirror behavior, I was alarmed because I had thought of myself as a nonconformist, and, well, doing what other people do isn’t exactly nonconformism. I thought to myself, oh my gosh, am I a robot?

So I began to look for exceptions. I thought of one immediately. In conversation, if the other people are quiet, I tend to be talkative, and when they are talkative, I tend to be quiet. But in those cases, I am consciously making a decision. If they are talking a lot, I feel the need to listen, and if they aren’t, then I feel like saying something.

As I reflected further, I saw that exceptions in general were cases in which I was conscious of what I was doing. On the other hand, the cases where I had mirror behavior were the automatic ones.

Conclusions

Does this mean I’m still a nonconformist? Or a lesser one than I thought? Perhaps. But what interested me was whether there was a scientific basis for this. I came across “mirror behavior” in psychology, but that refers to a different phenomenon: roughly, how individuals behave when put in front of a mirror.

The closest psychology topic I could find that still contained the word “mirror” was the “mirror neuron,” which I had actually encountered before in a Scientific American article (specifically, the cover story of the Nov 2006 issue). I don’t have the magazine with me, but I remember that it linked a lack of mirror neurons to autism. If you mirror a lot, do you have more mirror neurons? And does that push you further from autism? I don’t know. Too much mirroring might have its problems too.

One last question: If I mirror other people, and this article is nominally about me, then who is it really about?

Life at Cornell

Because of the experiment I mentioned last post, I haven’t been posting much, so with this post I’d like to return to my normal posting schedule. Well, a “schedule” never really existed, so what I mean, then, is a more frequent schedule. Until my next experiment…

Question Mark
Still looking for ideas for my next experiment...

Anyway, on to life outside of WoW in the last 20 days. I’ve been doing okay in my classes overall. Here are my courses in order from easiest to most difficult:

  • CS 1610 (Computing in the Arts): We still have not had a prelim or received any grades yet. The content is pretty straightforward.
  • SOC 1101 (Intro to Sociology): I’m at an A- right now, but we just had the second prelim yesterday. I felt I didn’t do as well on it as on the first prelim, but that seems to be the general consensus, so with the curve, it may be similar.
  • HIST 2500 (Technology in Society): We don’t have prelims, but instead, essays. We have three such essays that each count for 25%, and the other 25% is participation. I received an A on the first essay, but admittedly, I pulled an all-nighter for it, and the grade was very hard-earned. In contrast, I do barely any work or studying for Sociology.
  • ENGL 1170 (Short Stories): This class has a lot of reading and a lot of writing. By the end of this semester I’ll probably have written more in this class than in all my other classes combined, then doubled. Plus, all the writing is in the form of literary analysis, which is not exactly my favorite style. I think I have a B in it right now, and I doubt I will be able to raise it by very much.
  • MATH 2230 (Theoretical Linear Algebra and Multivariable Calculus): This is by far my hardest class. The class median score on the first prelim was a 47, which I happened to get. It curved up to a B. Not bad, but it is so different from high school, where I was used to A+’s in math without doing any work. Plus, I used to be able to understand the concepts without doing the homework, and now, in college, I am starting to not understand the concepts even though I am doing the homework. My old theory: Math is easy. New theory: Math is tough.

I should probably mention some other aspects of Cornell as well. The weather has recently turned cold. For example, it is, at the time of this post, 40° F, and according to the Weather Channel, this will drop to 33° F later tonight.

Snowman
Just one degree lower...

I hear that in Austin, the daytime temperatures are still reaching the 80s. Lucky! 😛

Moving on… One thing I love about Cornell is the libraries. My favorites so far are Uris Library and the music library (in Lincoln Hall). Uris has an old-fashioned appearance, and for some reason, that makes my productivity increase dramatically (though the most important factor is likely the quietness). On the other hand, the PCL at the University of Texas looks new and modern, and for some reason, I never had much productivity in it.

The music library at Cornell is quite modern as well (and despite the name, it is actually quieter than, say, Olin Library). What makes it modern? Well, one day, I heard this mechanical sound and saw, with my own eyes, one of the bookshelves moving! It was like a scene from a Harry Potter movie…

Andrew Dickson White Library
The Andrew Dickson White Library within Uris Library. It's not the one with moving bookshelves, but still...

I’ve probably spent more time in libraries this semester at Cornell than during all of high school combined. I also find them very good for creative work.

Moving on again… Band! I will just have to say here once again that the BRMB (Big Red Marching Band) is amazing! It’s so much better than high school marching band. On October 8/9 (which was during the middle of my experiment), we traveled to Boston for the Cornell–Harvard game! Neither team was that great (I’m from Austin, so I am qualified to judge football competency), and we somehow managed to let Harvard catch two of their own punts. Seriously? (Harvard won 31–17.)

There are many things I would say about the trip, which was very interesting and eventful, but I am forbidden from saying anything about the bus ride. (What happens on Bus 5 stays on Bus 5.) I stayed, as did the majority of the trumpet section, with a couple (both in number and in marital relation) of Cornell band alumni on Friday night before the game. It was a fun night.

Wow, I’ve written nearly 800 words so far. It’s about time I get to the second, and what I originally intended as the main, subject of this post:

The Principles of Scientific Management

The what of what? Actually, most people in my audience have heard of this work before, as they have likely taken AP US History or a related history course at some point. When the course gets to economic progress in the early twentieth century, the textbook mentions Henry Ford and Frederick Winslow Taylor, the latter being the man after whom the concept of “Taylorism” is named.

A refresher: Taylorism, or scientific management, is an economic theory that focuses above all on efficiency. It is concerned with maximizing productivity. That’s about all that’s mentioned in APUSH. (Here are Wiki links for Frederick Taylor and scientific management if you are interested.)

Frederick Taylor
Frederick Taylor (1856-1915)

In our HIST 2500 class, “Technology in Society,” we just read the work of Taylor’s that founded this theory: a treatise called The Principles of Scientific Management (1911). Around that time, labor and employers were generally not on friendly terms with each other. Remember all those labor strikes and unions you had to memorize for APUSH? Yeah…

Taylor was an engineer who proposed a solution, scientific management, to deal with this social issue. His goal was to resolve the management–labor conflict with a system that would be beneficial to both employers and workers. Scientific management, he argued, would enable workers to be much more efficient, and thereby more productive. This would allow a smaller number of specialized workers to produce much more than a larger number of normal workers, which would in turn allow the employer to raise wages and still increase profit.

We are not talking about minor improvements here. Taylor didn’t argue that 10-20% increases in productivity would solve the labor issue. His analysis in the book shows that in many industries the daily productivity of one worker could be doubled, and in some cases tripled or more. This meant that not only were the employers gaining more revenue, but the workers were also earning higher wages. And, as Taylor implies, this increase in production would also lower the prices of manufactured goods, which helps the common people: they have more money and can buy cheaper goods. It’s a win-win-win situation.

So how exactly does this increase in productivity occur? The idea is to make every part of every task as efficient as possible. For shoveling, a group of scientists carefully analyzed which type of person was most suited to the work. They also figured out the optimum load on the shovel (21 pounds—any more or less in one scoop would reduce the overall efficiency), which type of shovel should be used for different materials, and even what material the bottom of the container being shoveled from should be made of. They figured out how many rest breaks the workers should have, how long each should last, and when they should be scheduled. And they analyzed each motion in shoveling to figure out which ones were necessary and which were useless, which movements were faster and which slower, and how to shovel so as to move the greatest amount of material in the least amount of time.

Stopwatch
The single most important tool in scientific management.
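
Just for fun, the shovel-load experiment can be reduced to a few lines of Python. This is only a sketch of the method’s logic (vary one factor, measure daily output, keep the best). The throughput figures here are invented for illustration; only the 21-pound optimum comes from Taylor’s account.

```python
# Hypothetical trials: (shovel load in pounds, tons moved per worker per day).
# The tonnage figures are made up for illustration; Taylor's studies reportedly
# settled on roughly 21 pounds per scoop as the optimum.
trials = [
    (5, 14),
    (10, 24),
    (15, 38),
    (21, 59),
    (28, 41),
    (38, 25),
]

# Scientific management in miniature: keep the condition with the best output.
best_load, best_tons = max(trials, key=lambda trial: trial[1])
print(f"Optimum load: {best_load} lb per scoop -> {best_tons} tons per day")
```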

My crazy idea is to apply the theory of scientific management to other things. Oh wait, that’s already been done. Often with unremarkable consequences.

What I really should do is have some degree of scientific management in my life, that is, a schedule. At college I am going pretty much without one. Then again, NOT playing WoW probably increases my productivity much more than anything I could apply from scientific management. Plus, the application of scientific management requires at least two people, so if I were to try it, someone would need to be my “manager.” Interesting, but no thanks.

On Procrastination, as Written by a Procrastinator

(A true Procrastinator may read this later.)

Procrastination is the art of freeing up one’s current schedule by delegating tasks to later times, which are often more convenient. As the motto of the Procrastinator’s Club of America states, “Don’t do today what you can put off till tomorrow.”

Procrastination is shunned by many, praised by few. People who procrastinate tend to be less productive than those who do things today rather than put them off till tomorrow. But I know plenty of people who identify as procrastinators yet are highly productive, intelligent, and capable. So, even if an inverse correlation between procrastination and productivity holds generally true, the exceptional cases show that there is not necessarily a cause-and-effect relationship between the two, especially not with procrastination as the cause.

Most students reading this post will probably recall some time they procrastinated extensively on a homework assignment. I know that a select few of you almost never procrastinate—kudos to you. However, for the rest of us, procrastination is a part of our homework life. The causes of procrastination vary. We might simply not like the assignment. We might have a more interesting assignment we are trying to accomplish. We might be distracted or amused by some form of entertainment. Whatever the case, we tend to put things off till deadlines rise into plain sight.

The only negative side effect of procrastination in this case is the possibility of a large accumulation of assignments before a major project.

But what about the positives? First, we live a more natural life. Those who do every homework assignment the day it is assigned respond quickly, but nevertheless respond to the actions and decisions of others rather than motivate themselves. We procrastinators like to mix things up; we might sometimes finish an assignment very early and with great effort if we find it interesting. Second, we are more efficient, and thus have more time. If we push an assignment to only a matter of days—or hours—before it is due, we often find ourselves working faster. True, the quality might suffer slightly, but that’s fine with us. We make up for it with occasional works of unusually high quality. Third, we are more carefree. We set our own goals and responsibilities rather than let someone else set them.

Perhaps a truer maxim for the procrastinator is, “Act when you want to.”

Concerning Football and Competitive Behavior

Last evening, the Texas–Alabama football game evoked impassioned feelings everywhere, especially in the city of Austin. I could see the excitement building all around. I would not consider myself a football fanatic, but I must admit that this game was intense. All year I watched maybe two college football games, and this was one of them. It showed just how people can become so competitive-minded and yet, at the same time, still show exemplary sportsmanship. I use this as the springboard for today’s topic—competitive behavior.

What sparked this inquiry was Jooyeon’s blog post (on Tumblr) yesterday, before the game started, asking why there was so much hype:

I don’t know about anyone else, but I’m like dying from school right now. I just can’t get myself to focus or motivate myself to do well. And it makes me really wonder how I survived last year when I had so many other things on my plate. I think I might crash right after I finish this post.

Anyways, so tonight is the National Championship game, and my Facebook newsfeed is gonna be flooded with statuses about the game. Man, that’s gonna be annoying. I really don’t give a crap about football. People are making such a big deal about this, like it’s gonna be the end of their lives if UT doesn’t win tonight. And it’s kind of ridiculous. There are good citizens on both sides but people are treating this like war. You know, if I happened to be living in Alabama right now, it’d be the exact same case except with Alabama. And people also hurt each other in football. They hurt each other big time. Why would you cheer and cry out of happiness when you have just witnessed someone physically hurting someone else? I know “it’s fun” and all, but I guess the spirit of football has never really soaked into me. I’m aware that I am sort of being a hypocrite right now, because I’ve cried about many things other people wouldn’t give a crap about. So I guess it’s all about perspective. I apologize for my lack of spirit, but for me it looks like tonight’s just going to be another normal school night.

I agree—I’m not a huge supporter of football either. I watched the game because it was something out of the ordinary for me. It also might have been the last major Texas football game I ever watch while in Austin. I enjoyed it. But afterwards, I thought about your post, and realized that what underlies the hype is not the little details, but the big picture.

If we view the game as a bunch of large men running into each other and one of them holding a football, we won’t get very far. But that’s what football is! So why is it so popular, so fanatical, so compelling? The answer, I found, concerns competitive behavior.

Football is full of it. In fact, the hype for just about any competitive game, from football to StarCraft to chess, though varying in degree from game to game, rests upon the nature of human behavior.

But this competitive behavior among humans was not originally for winning games amongst ourselves; rather, it was for survival amidst nature’s hostilities. Because of this relationship with the surrounding environment, competition among early humans was not competition for the sake of competition, but competition for the sake of life. So we weren’t consciously competitive—our intentions were just to live—but our actions gave the appearance of competition. In other words, in evolution, competition was an emergent property, not a phenomenon in itself.

Humans changed that. Sure, we initially fought for our survival. But in early agricultural societies that created sufficiencies and surpluses, we began competing for things besides food. Any survey of ancient civilization will tell you that. Times changed. By the Egyptian era, we had developed an appetite not only for tangible materials but also for knowledge. Fast forward again, and you have the Greeks, who greatly developed mathematics, history, philosophy, and politics.

Leap ahead, and we loom in the shadows of the Dark Ages. Competition among religions was extreme. The Islamic expansions and the Christian Crusades demonstrated the use of war not as an instrument of survival, but as an instrument to spread divine beliefs. These were competitions of ideologies.

Jump again, this time to the Renaissance. Machiavelli is the prime point of investigation here. “The ends justify the means.” That idea changed the world. It might not hold so well right now, as for a modern leader seeking power, almost every one of these “means” is closely followed and made public, but nonetheless, Machiavelli’s work was an acknowledgment that competition was the ultimate war.

Today we find boundless examples of competition. Games (as aforementioned) are competitions. Politics is virtually a competition. The business world is an enormous competition. School, in many places (ahem), is a competition. Football games seem mild in comparison. Sure, they attract hundreds of thousands of fans, but the impacts of their results are undoubtedly nowhere near as relevant as those in politics or business.

Football is, of course, more entertaining than other competitions. It is a symbol of the human experience, for there are many lessons to be learned from it—yesterday’s game especially. The maxim of the game: Don’t give up. After losing Colt McCoy, and subsequently being 18 points down after the first half, Texas and its fans had every reason in the world to make excuses, blame Garrett Gilbert, or both. The players must have suffered a huge loss of morale—they were up against the number-one-ranked team, and they had lost their star quarterback. What could they do? They could have given up, but instead, they fought as hard as they could and nearly brought the game back. They showed everyone that even if they lost, they lost in style.

That is the essence of the big-picture perspective. In detail, football consists of people running around on a giant field, but of course, there is much more to it than that. In it, the highly praised values of teamwork, dedication, and sportsmanship are always present. It gives a sense of identity. It generates a feeling of community. It creates awe. I don’t watch football very often, but I did learn something last night: The spirit of competition is greater than the competition itself.

2010s: The Decade of Solutions

I just wrote my first word-based post of 2010 a few moments ago. And now, some mysterious force compels me to make another one. Except this time, specifically on 2010 and the decade that it starts.

Earth

First, reflections on 2009. In a post I made near the end of last year titled Reflections on 2009, I saw how I had basically become, at least from my perspective, more creative. I realized things for what they were, and I was able to look at the big picture. But now I seek a deeper task: to reflect on the entire 2000s decade, and then preview the next.

I actually do remember December 31, 1999. I was eight years old (born December 28, 1991) and of course had a disjointed, childish view of the world. But I remember that day, talking with a friend named Bobby about 2000. We were watching Pokémon, I believe. We came to the conclusion that it was amazing to be able to live in two different millennia. Basically, all I remember from the general populace was pure joy and excitement. (An eight-year-old had no idea what Y2K was.) Even if the year system was arbitrary, it was still exhilarating, at least in our childish minds, to be born in one millennium and to live our lives in the next.

2000–2009 was a remarkable decade. Before it, I did know what a computer was, but I think I touched a computer twice, at most, before 2000. Yet I cannot even begin to estimate how many times I touched one in the ’00s. Probably a couple thousand times.

I’m no tech expert, but I think not many people would disagree if I said the ’00s were the decade of information technology. (See my post on The Legacy of 2009 for outside quotes on this.) Computers shrank and became exponentially faster. Blogging rose to the forefront. Web 2.0, in 2004, was the “official” start of the enhanced Internet that we see today. Facebook launched in 2004, and by the end of the decade it had 350 million users worldwide—a sizable chunk of the human population. YouTube rose to prominence this decade. Micro-blogging, e.g., Twitter, appeared. So much happened this decade on the web that it revolutionized the world. It created a truly global society, and it changed how we think.

For myself, I probably can’t say anything of much meaning. I mean, a lot happens between the ages of 8 and 18. Nonetheless, this decade was incredible.

But the next decade, the 2010s, will contain even greater human achievements, because at this point in time, the growth of digital technology will only continue to accelerate.

Take even the last decade for example. Web 2.0 and Facebook both came around in 2004, while YouTube, Twitter, and the Nintendo Wii arrived around 2006. And they have grown dramatically in the last few years. They are already, after just a few years, embedded in our daily vocabulary. Of course, Google has also been a key innovator throughout the decade.

The 2010s will see digital technologies increase in both scale and pace. This blog might be completely outdated in a few years, and if it is, then we will know that humanity is advancing—fast. I have no doubt that the ’10s will be even more record-breaking in technology.

So far, so good. But if we turn away from technology, we find some pressing issues that the world has not dealt with. (Yes, I just ended a sentence with a preposition, but I hope you care less about that than about the content of this paragraph.) Conflicts in the Middle East are not going to end anytime soon. The potential for global nuclear annihilation still exists. Poverty and hunger still rage throughout the world. Diseases still ravage poorer countries, and can ravage wealthier ones. Environmental consequences will sooner or later be felt—and when that happens, I’m afraid it will be too late.

I don’t pretend to have any foolproof solutions to these problems. But I will say, it would be a shame if we destroyed ourselves out of greed, arrogance, or war. Species millions of years in the future will be perplexed by our concurrent ingenuity and stupidity: we had the capacity to sequence the entire human genome, only to have that genome obliterated by our own futile quarrels.

These problems are by no means new. People have been warning about them for years—in some cases, decades. For most of our history, we let them slip by. In the ’00s, we made symbolic gestures toward solving them. But we are not really doing anything. On paper and on television, we support the green movement, yet we still endlessly consume trees and fossil fuels.

It would indeed be a huge shame if the wealth of technological achievements made in the last decade—or century—were destroyed by human apathy. But I have a message for everyone: if the human race is to act, the 2010s is the decade in which to do it. From the accomplishments of the previous decade, we now have an instantaneous, interactive global communications network. This is a tool that we never had before. And we must use it.

We must augment the advances in technology with applications to our real-world problems. Scientists and engineers will need to work extra hard. Politicians must be courageous enough to make the necessary changes. We will need not only to see the problems, but to understand them, and to understand what we can do about them. We have had many decades of problems. Let this be the decade of solutions.