How Do Honor Killings Still Happen in 2014?

Earlier this week, Pakistani woman Farzana Parveen was beaten to death by her own family, an act justified as an honor killing. Was it a rash response to some supposedly offensive event, like the 2006 Danish cartoon controversy or the 2012 embassy attacks over a film portrayal of Muhammad? (Not that offensiveness would justify murder, but many people at least partially blame the victims in those cases.) Nope. Much worse:

“I killed my daughter as she had insulted all of our family by marrying a man without our consent, and I have no regret over it,” Mujahid, the police investigator, quoted the father as saying.

Even worse, this is not an isolated incident. It is estimated that about 1,000 women die each year from honor killings in Pakistan alone. Globally, between 5,000 and 20,000 women suffer this fate every year. How does this kind of thing still happen? It’s not as though this is a difficult engineering problem like sustainable energy; it’s an outdated socio-cultural norm. We’re not in the Dark Ages anymore.

[Image: Flag of the United Nations]

The good news is that violence overall is gradually declining, and there is little standing in the way of that trend. However, we should always be wary of efforts to demodernize (the link is to a story about Sharia law being incorporated into the British legal system, reinforcing this kind of religious discrimination against women) and lose the progress in civil rights that humanity has fought for centuries to achieve.

Edit: Additional stats (5/30/2014). “Four-in-Ten Pakistanis say honor killing of women can be at least sometimes justified.” This isn’t a fringe view. This is a sizeable chunk of the population.

Observer Selection

Today was my graduation from Cornell, but since I’m not a fan of ceremony, the topic for today is completely different: a subset of selection bias known as observer selection.

Selection bias, in general, is picking particular data points out of a larger set in a way that distorts the conclusion. For example, using the government’s own NOAA (National Oceanic and Atmospheric Administration) website, I could point out that the average US temperature in 1934 was 54.10 degrees Fahrenheit, while in 2008 it was 52.29. Clearly, from these data points, the US must be cooling over time. The problem with the argument is, of course, that the two years 1934 and 2008 were chosen very carefully: 1934 was the hottest year in the earlier period, and 2008 was the coolest year in recent times. Comparing these two points is quite meaningless, as the overall trend is up.
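To make the cherry-picking concrete, here is a quick Python sketch using made-up numbers (a synthetic series with a small upward trend plus noise, not the actual NOAA data). Comparing a hand-picked hot early year with a hand-picked cool recent year suggests cooling, while a fit to the whole series recovers the warming trend:

```python
import numpy as np

# Synthetic yearly "temperatures" (NOT the real NOAA series): a slow warming
# trend of 0.01 degrees/year plus noise, for 1900 through 2013.
rng = np.random.default_rng(0)
years = np.arange(1900, 2014)
temps = 52.5 + 0.01 * (years - 1900) + rng.normal(0, 0.5, size=years.size)

# Cherry-picking: compare the warmest early year to the coolest recent year.
early_hot = years[:50][np.argmax(temps[:50])]
recent_cold = years[-20:][np.argmin(temps[-20:])]
print(f"{early_hot}: {temps[years == early_hot][0]:.2f}  vs  "
      f"{recent_cold}: {temps[years == recent_cold][0]:.2f}")   # looks like cooling

# Using the whole record: a least-squares fit recovers the upward trend.
slope, intercept = np.polyfit(years, temps, 1)
print(f"fitted trend over the full series: {slope:+.4f} degrees/year")  # positive
```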

[Image: US average temperature through 2013]

Observer selection is when the selection bias comes from the fact that someone must exist in a particular setting to do the observing. For instance, we only know of one universe, and there is life in our universe—us. Could it have been possible for our universe to have had no life?

The issue with trying to answer this question is that if our universe indeed had no life, then we wouldn’t exist to witness that.

“The anthropic principle: given that we are observing the universe, the universe must have properties that support intelligent life. It addresses the question “Why is our universe suitable for life?” by noting that if our universe were not suitable for life, then we wouldn’t be here making that observation. That is, the alternative question, “Why is our universe not suitable for life,” cannot physically be asked. We must observe a universe compatible with intelligent life.”

[Image: the multiverse]

The point is, there may be millions, billions, or even an infinite number of universes. But even if only one in a trillion were suitable for life, we would have to exist in one of those. So our universe is not “fine-tuned” for life; rather, our existence means we must be in a universe that supports us.
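To see the conditioning at work, here’s a toy simulation in Python (the fraction here is invented and far larger than one in a trillion, just so the sample doesn’t have to be astronomically big):

```python
import numpy as np

# Toy model: each "universe" independently supports life with small
# probability p (an invented number; the post's one-in-a-trillion would just
# need an impractically large sample, but the logic is identical).
rng = np.random.default_rng(1)
p = 1e-4
supports_life = rng.random(1_000_000) < p

print("unconditional fraction supporting life:", supports_life.mean())

# Observers can only exist in the life-supporting subset, so conditioning
# on "this universe is being observed" restricts us to the True entries.
observed = supports_life[supports_life]
print("fraction supporting life, given that it is observed:", observed.mean())  # always 1.0
```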

A list of observer effects:

  • The anthropic principle, as above. Our universe must be suitable for life.
  • A planet-oriented version of the anthropic principle: Earth has abundant natural resources, is in the habitable zone, has a strong magnetic field, etc.
  • A species-oriented version of the anthropic principle: Our species is very well adapted to survive. If we weren’t, then we wouldn’t be thinking about this.
  • There are no recent catastrophic asteroid impacts (the last one being 65 million years ago). If there were, then we again wouldn’t be observing that.
  • The same goes for all natural disasters: no catastrophic volcanic eruptions, no nearby supernovae or black holes, etc.
  • The same goes for apocalyptic man-made disasters. Had the Cold War led to a nuclear exchange that wiped out humanity, we would not be able to observe a headline that said, “Nuclear Weapons Make Humans Extinct.” Thus, we must observe non-catastrophic events in the past.
  • Individual life follows this as well. Say you had a life-threatening illness or accident in the past, but you’re alive now (which, of course, you are, given that you’re reading this). Given that you’re alive now, you must have survived it, so to the question “Are you alive?” you can only answer yes.

All of these are strong observer effects, in that they are absolute statements rather than probabilistic ones: “Our universe must have life,” not “Our universe probably has life.”

There are numerous other observer effects that are probabilistic but can still be very significant. For example, given that you are reading this, you are more likely to be in a highly literate country than in a less literate one. Moreover, that probability is higher than it would be if I knew nothing about you.
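To put rough numbers on that intuition, here’s a toy Bayes calculation in Python; every number below is made up purely for illustration:

```python
# Toy Bayes update (all numbers invented for illustration).
# H = "the reader lives in a highly literate country".
p_h = 0.6                  # prior: share of people living in highly literate countries
p_read_given_h = 0.10      # chance such a person ends up reading an English-language blog
p_read_given_not_h = 0.01  # the same chance for everyone else

p_read = p_read_given_h * p_h + p_read_given_not_h * (1 - p_h)
p_h_given_read = p_read_given_h * p_h / p_read

print(f"prior P(H)                         = {p_h:.2f}")
print(f"posterior P(H | reading this post) = {p_h_given_read:.2f}")  # about 0.94
```

The particular numbers don’t matter; the point is the direction of the update: conditioning on “you are reading this” pushes the probability toward the more literate countries.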

In this post, I mentioned the example of democracy in political science. In summary, political science has a lot more to say about democracy than about any other form of government. Is this because we are personally biased toward democracy? Not necessarily. In a less open system, fields like political science might be barred from doing such research (or academia might be valued less), and hence there would be no (or few) pro-totalitarian political scientists. So we end up appearing to favor democracy.

We also know that history is written by the victors. But a related historical example is the rise of strong states combined with the rise of liberalism and progressive thought in the Modern era. Namely, the states in which liberalism arose (England, France) tended to be strong states; a weak state adopting progressive measures would have been wiped out by a stronger one. Hence, history is also analyzed by the victors.

So what can you do about observer selection? All we can do is try to be aware of it and introduce corrections, so that we study the full set of possibilities rather than just the subset we occupy by being a particular observer. For instance, if we estimated the probability of a catastrophic natural disaster purely from our own historical record, we would underestimate it: given that we exist to look, no civilization-ending disaster can have occurred recently.
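Here’s a toy simulation in Python of that underestimate (all parameters invented): simulate many possible histories, each with a small per-century chance of an observer-ending catastrophe, and then look only at the histories an observer could actually be in. The rate visible in the surviving record is zero even though the true rate is not:

```python
import numpy as np

# Toy model (parameters invented): each of 50 centuries independently has
# probability p_true of a catastrophe that would leave no observers.
rng = np.random.default_rng(2)
p_true = 0.02
n_histories, n_centuries = 100_000, 50
events = rng.random((n_histories, n_centuries)) < p_true

survived = ~events.any(axis=1)        # histories in which an observer can exist
naive_rate = events[survived].mean()  # catastrophe rate visible in those records

print(f"true per-century rate:            {p_true}")
print(f"rate in an observer's record:     {naive_rate}")           # 0.0 by construction
print(f"histories that contain observers: {survived.mean():.2f}")  # roughly 0.36
```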

Thinking Like an Economist

[Image: the dismal science]

I recently read two things related to economics: some economics blogs (particularly Marginal Revolution), and a list of economics jokes.

For someone like me who doesn’t see everything in economic terms, the world of those who do is very bizarre. For instance, when we think about wealth inequality and how to reduce it, we inevitably come up with familiar ideas like raising tax rates on the rich, capping their income, regulating investments, and so on. But the first article I stumbled upon, “Two Surefire Solutions to Inequality,” offered two strange solutions: increasing the fertility rate among the rich, and decreasing the fertility rate among the rich.

The tl;dr arguments are as follows: Increasing the fertility rate among the rich means that large wealthy families will be forced to divide their wealth every generation, thus lowering individual wealth slowly over time (of course, assortative mating slows this down).

On the other hand, decreasing the fertility rate among the rich means that the rich class will slowly disappear over time.
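A minimal sketch of the first mechanism, with invented numbers: if a fortune grows by a factor of (1 + r) each generation but is split equally among k heirs, each heir’s share scales like ((1 + r)/k)^g after g generations, so more children means faster dilution.

```python
# Toy sketch of the dilution mechanism (all numbers invented).
def per_heir_wealth(initial: float, r: float, k: int, generations: int) -> float:
    """Wealth per heir if the fortune grows by (1 + r) and is split among k heirs each generation."""
    wealth = initial
    for _ in range(generations):
        wealth = wealth * (1 + r) / k
    return wealth

start = 100_000_000  # a hypothetical $100M family fortune
for kids in (1, 2, 4):
    share = per_heir_wealth(start, r=0.5, k=kids, generations=5)
    print(f"{kids} heir(s) per generation: ${share:,.0f} each after 5 generations")
```

With one heir the fortune compounds; with four heirs per generation each share falls to under 1% of the original within five generations (under these made-up growth and fertility numbers). Assortative mating, as noted above, would slow this down.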

This seems really strange. Neither solution obviously solves any problem, and both might make things worse in the short term. In addition, any government mandate along these lines would be hard to define and would be met with resentment in either case. In other words, these solutions are absurd.

But in another sense, they are not absurd at all. They both make perfect logical sense. Assumptions were made, but not many more than in any other economic model. So why are these solutions so strange? Is it just social norms holding us back? A fear of anything resembling eugenics? A desire not to mess with people’s rights?

For a change of pace, here are some funny economics jokes, from the link given at the beginning:

An economist is someone who has had a human being described to him, but has never actually seen one.

When doctors make mistakes, at least they kill their patients. When economists make mistakes, they merely ruin them.

One night a policeman saw a macroeconomist looking for something by a lightpole. He asked him if he had lost something there. The economist said, “I lost my keys over in the alley.” The policeman asked him why he was looking by the lightpole. The economist responded, “It’s a lot easier to look over here.”

My College Experience

[Image: Cornell]

Yesterday, I took my final final exam. Now, short of receiving a piece of paper, I am done with college and also with the formal education system (for at least the time being).

I’m not a sentimental person, but I am a reflective person, so I feel compelled to write about my experience.

Several other posts have already covered various aspects of college, and of Cornell specifically.

There are 35 blog posts in total (as of this writing) under the College category, including the ones just mentioned. But the most important post comes from before any of these, before I even stepped onto the Cornell campus, and it relates not to Cornell directly but to the University of Chicago: a post on Andrew Abbott’s “The Aims of Education” speech.

Abbott’s main argument is that education is not a means to an end, but the end in itself. He goes through why education is not best viewed as a way to improve financial status, a way to learn a specific skill, a way to improve general life skills, or a way to survive in a changing world. Instead, “The reason for getting an education here—or anywhere else—is that it is better to be educated than not to be. It is better in and of itself.”

I carried this philosophical point throughout my college experience. It is why I find it absurd to worry so much about one’s own GPA and everyone else’s: you’re here not to beat other people, but to be educated.

There is a lot of interest in the relation between academic study and the real-world job market. One hears jokes about English or psychology majors working in jobs that gain nothing from an English or psych degree. But my situation is actually similar. As a math major on a theoretical track (originally with academia in mind), I’ve encountered concepts that, at least currently, have no practical application. That’s a blessing and a curse. In the post I wrote about why I chose math, one of the points in its favor was precisely its abstraction. So, even though I will be working in a math-related area, it is almost certain that knowing that normal spaces are regular, or that the alternating group on 5 elements is simple, will be useless.

Of course, it does help to know calculus and to have a good understanding of probability. But at least over the summer, we rarely used concepts beyond my high-school understanding of probability or calculus. In other words, I could have majored in English and been just as qualified.*

*(Perhaps taking many math classes trains you in a certain type of thinking, but this is hard to pin down. I haven’t thought too much about this, so if anyone has other ideas, please share them.)

Another thing I haven’t really talked about in other posts is socializing. I’m an introvert (INTP), and I could easily spend all day reading thought-provoking books or watching good movies without the slightest urge to unnecessarily talk to another person. I used to ponder this, but after reading Susan Cain’s wonderful book Quiet, I’ve decided to not worry.

Academically, I’ve expanded my horizons a lot since coming to Cornell, though not from math courses. While academia in general can be thought of as an ivory tower of sorts, math (and/or philosophy) is the ivory tower of ivory towers, so it is sometimes refreshing to take a class in a different subject that is only one step removed from reality.

In addition, I managed to keep this blog alive through college, though there was a period in late freshman/early sophomore year when there were few posts. By junior year, I was back in a weekly posting routine. And a couple of months ago, I started doing two posts per week, which has been consistent so far.

Finally, I also subscribe to a quote allegedly by Mark Twain: “I have never let my schooling interfere with my education.” Even after college, I will always find opportunities to learn.

Overall, Cornell has been a great experience, and I would definitely recommend it, even if not for the reasons you were looking for. Enjoy, and keep learning!

The Hypercritical Condition?

[Image: liberalism-hegemony]

Michael Roth, president of Wesleyan University, recently wrote a piece in The New York Times titled “Young Minds in Critical Condition.”

It happens every semester. A student triumphantly points out that Jean-Jacques Rousseau is undermining himself when he claims “the man who reflects is a depraved animal,” or that Ralph Waldo Emerson’s call for self-reliance is in effect a call for reliance on Emerson himself. Trying not to sound too weary, I ask the student to imagine that the authors had already considered these issues.

Instead of trying to find mistakes in the texts, I suggest we take the point of view that our authors created these apparent “contradictions” in order to get readers like us to ponder more interesting questions. How do we think about inequality and learning, for example, or how can we stand on our own feet while being open to inspiration from the world around us? Yes, there’s a certain satisfaction in being critical of our authors, but isn’t it more interesting to put ourselves in a frame of mind to find inspiration in them?

Being a student in the sciences, I don’t experience this kind of humanities phenomenon directly. But this ultra-critical mindset pervades everyday life, at least at an elite university. Students engage in this “intellectual one-upmanship” all the time without even realizing it. Try using Thomas Jefferson in a pro-freedom argument, and you’ll get the response that TJ owned slaves, thereby invalidating whatever moral or legal progress he allegedly made; the takeaway, supposedly, is that the liberal notion of freedom was built on detestable foundations.

Also from Roth:

Liberal education in America has long been characterized by the intertwining of two traditions: of critical inquiry in pursuit of truth and exuberant performance in pursuit of excellence. In the last half-century, though, emphasis on inquiry has become dominant, and it has often been reduced to the ability to expose error and undermine belief. The inquirer has taken the guise of the sophisticated (often ironic) spectator, rather than the messy participant in continuing experiments or even the reverent beholder of great cultural achievements.

Even on my own blog posts, I sometimes run into critical comments which, instead of saying something substantive, completely miss the main point and nitpick some small detail that I had usually already considered and addressed elsewhere in the article. One is powerless to defend against such criticism, as preemptively adding ample caveats is no deterrent. It just changes the criticism from “The author does not consider X…” to “The author dismisses X…” followed by a pro-X argument, where X is a counterargument the author has already considered.

Not that critical comments are bad—they’re quite useful. Constructive criticism is a hundred times more helpful than praise. Perhaps the issue is a self-fulfilling prophecy of blogging: since people don’t expect complex arguments with caveats, they assume that everything you say is absolute, even when that is clearly false. And it is not just in academia or blogging. Go to the comments page of any remotely controversial news story (I really enjoy reading CNN comments), and you can effortlessly predict which arguments and counterarguments are used.

Hilariously, one of the comments perfectly demonstrates the point of the article.

From user “reaylward”:

“Critical” in this context means close or analytical, not disparaging or condemnatory. Thus, a critical reading of a text means a close or analytical reading of the text, not a disparaging or condemnatory reading. The “historical critical method” of interpreting the Christian Bible, for example, means a close or analytical reading of the text, not a disparaging or condemnatory reading. “Critical thinking” doesn’t mean “exposing error”, it means thinking analytically. I think they need a dictionary at Wesleyan. And I mean that in the critical sense.

And a response by “Austin Uhler”:

Your comment is an example of the type of thinking that the author is discouraging. While you are correct about the strict meaning of “critical” in this context, your uncharitable reading means you are missing the author’s point: it is becoming more common for students to take critical thinking down negative, dismissive and unproductive paths.

This is probably the best comment-response pair I have ever seen for a NYT article.

Is the hypercritical condition a legacy of postmodernism? Is it simply a byproduct of the Internet? Are we becoming more cynical? I don’t know.

Being hypercritical is certainly a better problem to have than being uncritical. Nonetheless, I appreciated Roth’s article for addressing the overly critical crowd.

Mechanisms vs Statistics

Last semester, our apartment had a debate over whether video games cause violence. It came down to arguing logical mechanisms, with neither side using any statistics. The argument basically turned into my word vs. your word, since there was no objective basis on which to judge anything.

If your answer were yes, you might propose the mechanism: “People who play violent video games are likely to imitate the characters they play, thus becoming more aggressive in real life.” This statement might be logically sound, but without any supporting evidence, it has little credence.

You could easily propose a counter-mechanism: “People who would otherwise commit violent crimes satisfy their urges in video games and not in real life, thus decreasing the crime rate.” Again, this seems plausible, but without any data, we simply don’t know whether this effect outweighs the other. We need real stats.

Naively looking at statistics does not help either. Depending on which stats you look at and how they are presented, the conclusions can go either way (graph 1 and graph 2):

[Graph 1: video-games-crimes]

[Graph 2: video-game-walsh]

In any subject, one important concern is matching theories with empirical data. In the hard sciences, one tests a theory by experiment, and it is often possible to confirm or refute claims with empirical data. But in the social sciences, experiments are sometimes impossible. To see what would happen if Germany had won World War II, we cannot simply recreate the circumstances of the war in a petri dish. So we must do the best we can with the limited data we have.

This lack of statistics affects many other issues, perhaps more important ones. For instance, in the public debate over gun control, there are clearly two competing mechanisms: “More guns = more shootings” and “More guns = more protection.” Each makes logical sense on its own, but the way to figure out which is more accurate is not purely logical argumentation (which will lead nowhere), but statistics, i.e., showing the real effects of implementing or not implementing gun control laws. This would be much more fruitful than mindlessly yelling mechanisms across the void.

Twitter

I’m still trying to figure out how to use Twitter. It seems great for keeping up with specific people or organizations that regularly post. However, I am not sure how it is supposed to be useful if you are not a well-known person yourself. Facebook just seems better for keeping up with people you already know, so posting on Twitter feels strange. /rant

You can find me on Twitter as @nargaque.