Here is a brilliant passage on identity from Julia Galef’s The Scout Mindset: Why Some People See Things Clearly and Others Don’t:
The problem with identity is that it wrecks your ability to think clearly. Identifying with a belief makes you feel like you have to be ready to defend it, which motivates you to focus your attention on collecting evidence in its favor. Identity makes you reflexively reject arguments that feel like attacks on you or the status of your group.
To counteract this, Galef says to have lightly held identities:
Holding an identity lightly means thinking of it in a matter-of-fact way, rather than as a central source of pride and meaning in your life. It’s a description, not a flag to be waved proudly…
Holding an identity lightly means treating that identity as contingent, saying to yourself, “I’m a liberal, for as long as it continues to seem to me that liberalism is just.” Or “I’m a feminist, but I would abandon the movement if for some reason I came to believe it was causing net harm.” It means maintaining a sense of your own beliefs and values, independent of the tribe’s beliefs and values, and acknowledging—at least in the privacy of your own head—the places where those two things diverge.
So a belief taken to be identity is hard to abandon, because to abandon the belief is to abandon one’s identity. If, instead, a belief is held lightly, it is easy to update the belief upon seeing contradictory evidence.
One example I saw was in the old New Atheism culture wars of the late 2000s and early 2010s, and I have to admit, I used this argument myself. It went something like, “Shouldn’t Muslims be more willing to leave their religion, after seeing repeated evidence that suicide bombers profess their faith in Islam right before killing themselves along with dozens of civilians?” Another axis was, “Shouldn’t liberal Muslims be more willing to leave their religion, given that they personally support women’s rights and LGBT rights, while Islamic countries around the world have the most sexism and anti-LGBT discrimination?” Needless to say, this was an unconvincing argument.
The reason this wasn’t convincing is that it was psychologically equivalent to an attack on the identity of Muslims. In the New Atheists’ minds, they were trying to help people move to a better belief system, saying, “Hey, I know you live your life following parts of wisdom in this ancient book, but look, here are some people who are really, really into following the same ancient book—the people who think about this book all the time—and they commit atrocities that follow very directly from their adherence to this book, so you should probably believe in this book a little less than you did before.” But since religion is often not just a belief but an identity, this line of argument reinforces belief instead of weakening it.
Religion aside, I’ve always been fascinated by how communism is an acceptable belief amongst educated people—I encountered this in both high school and college. Certainly communism isn’t a core part of many people’s identity, though maybe anti-capitalism is? When I mentioned the famines that killed tens of millions as a result of communist economic policies in the Soviet Union and China, the response was not to update towards more skepticism of communism, but instead something like, “Well, they weren’t doing communism correctly; if we did it correctly, it wouldn’t kill tens of millions of people.” That last argument gets a lot of sympathy from well-educated people, yet none of them would sympathize with “Fascism is inherently good because it builds national strength; it’s just that the Germans did it incorrectly by tacking antisemitism and eugenics onto it; if we did it correctly…”
What identities would I self-describe as? If you had asked me 10 years ago, “liberal” would probably have been at the top of my list. But in 2015, I became quite disenchanted with the left, starting with the now-standard response to the 2015 Charlie Hebdo shooting: “I think the shooting is wrong, but something about how France is a racist, colonialist empire and therefore somewhat deserved it.” (Which isn’t far from the more recent, “I think antisemitism is wrong, but….”) I thought I had a lot in common with liberals until the liberal discussion focused mostly on what the West did wrong to deserve a terror attack. This was basically the quote in Galef’s book: “I’m a liberal, for as long as….” (There are mental gymnastics available here, like “Well, I’m still a liberal in the definitional sense; it’s the other people claiming to be liberals who are abandoning the core tenets of liberalism,” but I’ll leave those out.)
At this point I don’t carry around much identity, because in my experience, the people most focused on their identities seem to be the least amenable to reason. I guess if I had to pick one, I would consider myself an epistemic rationalist, in the sense that I think using logic and probability is the best way to form accurate beliefs about the world.
Relatedly, Galef mentions in her book the Ideological Turing Test, an idea from Bryan Caplan: to truly understand an opposing viewpoint, you should be able to convince someone that you actually hold that opposing belief. This is hard to do—the book contains an example of someone claiming to do it but failing horribly. From Galef:
Measured against that standard [the Ideological Turing Test], most attempts fall obviously short. For a case in point, consider one liberal blogger’s attempt at modeling the conservative worldview. She begins, “If I can say anything at this bleak hour, with the world splitting at its seams, it’s this: conservatives, I understand you. It may not be something you expect to hear from a liberal, but I do. I understand you.” An earnest start, but her attempt to empathize with conservatives quickly devolves into caricature. Here’s her impression of the conservative take on various subjects:
On capitalism: “Those at the top should have as much as possible. That’s the natural order . . . It’s not a secret; just don’t be lazy. Why is everyone so poor and lazy?”
On feminists: “Those women make noise, make demands, take up space . . . Who do they think they are?”
On abortion: “What a travesty . . . women making these kinds of radical decisions for themselves.”
On queer and transgender people: “They shouldn’t exist. They’re mistakes. They must be. But wait, no. God doesn’t make mistakes . . . Oh dear. You don’t know what’s happening anymore, and you don’t like it. It makes you feel dizzy. Out of control.”
It’s hardly necessary to run this text by an audience of conservatives to be able to predict that it would flunk their ideological Turing test. Her “conservative” take on capitalism sounds like a cartoon villain. The language about women taking up space and making decisions for themselves is how liberals frame these issues, not conservatives. And her impression of a conservative suddenly realizing his views on transgender and queer people are internally inconsistent (“They’re mistakes . . . But wait, no. God doesn’t make mistakes.”) just looks like a potshot she couldn’t resist taking.
The ideological Turing test is typically seen as a test of your knowledge: How thoroughly do you understand the other side’s beliefs? But it also serves as an emotional test: Do you hold your identity lightly enough to be able to avoid caricaturing your ideological opponents? Even being willing to attempt the ideological Turing test at all is significant. People who hold their identity strongly often recoil at the idea of trying to “understand” a viewpoint they find disgustingly wrong or harmful. It feels like you’re giving aid and comfort to the enemy. But if you want to have a shot at actually changing people’s point of view rather than merely being disgusted at how wrong they are, understanding those views is a must.
I’ve encountered this many times: people, myself included, have ridiculous notions of what the other side of some ideological divide actually thinks. I’m not sure what the solution is. One thing I’ve tried recently and would recommend is to go on Reddit and subscribe to a subreddit—but importantly, not of an identity group you belong to, but of one you don’t belong to. On the home page, you can probably find at least one post that you don’t totally disagree with. And if you really internalize that post, you’d be on your way to passing an Ideological Turing Test. (It doesn’t have to be on Reddit. Follow someone you disagree with on Twitter, etc.)
That said, it is still incredibly hard to empathize with opposing views—as Galef said in the quote just above, “People who hold their identity strongly often recoil at the idea of trying to ‘understand’ a viewpoint they find disgustingly wrong or harmful.” It seems like people should, on the margin, hold their ideas less strongly.
Originally forbidden from public discussion for all of 2020, the lab leak hypothesis for the origin of SARS-CoV-2 (SARS2) has recently gained intellectual support, and President Biden just publicly announced an investigation into the hypothesis. Despite no direct evidence appearing for either side, it suddenly seems like it’s closer to 50/50 now, after a year of public discourse implying something like 1% lab leak/99% natural origin. The question we’re all wondering about: how likely is it that this was a lab leak?
I am not a virologist. My day job is quant trading, and I regularly think about probabilities of one-off events, updating them in real time. So my methods might be alien, but with that, let’s guess the probability of lab leak vs natural origin. I attempted the more reliable methods first, but given the lack of evidence on either side, most of this is guesswork.
I consider lab leak to include accidental escape of either engineered or nonengineered virus.
My current belief is 77% lab leak, 23% natural. None of the text below is a proof; it’s just things to think about when forming your own opinion.
The first thing that comes to mind is: does there exist a prediction market? Prediction markets can be quite accurate when the outcomes are well defined and people can bet a lot of money on them—any inaccuracy can be “arbitraged” away, since people who are good at guessing have a financial incentive to trade. For example, in the 2020 US presidential election, after briefly swinging towards Trump on the Miami-Dade results, prediction markets were all heavily favoring Biden by the early morning after election day, at a time when Trump held a significant lead in most of the remaining states. While states like Pennsylvania and Georgia took multiple days for the count to swing Democrat, the betting markets already knew that would happen less than 12 hours after polls closed. The point is, the market had already corrected for the mail-in ballot effect, because if it hadn’t, someone who correctly thought about the effect could make a lot of money trading on it, thus pushing the market towards a more accurate truth.
However, as far as I can tell, there is no liquid market for the lab leak hypothesis. Even if there were, the definition of such a contract would matter a lot—maybe in the case that a small group of people in China really did know there was a lab leak, the probability of conceding this could be very low, especially as they would have had ample time in the last 17 months to destroy evidence. So even a contract that says “The WHO thinks SARS2 was a lab leak by 2023” wouldn’t necessarily indicate the true probability of a lab leak, because you might expect there to be bias in which result would be more willingly revealed to the public—a bias which people trading the contract would certainly be aware of.
Even without an official prediction market, some people make bets openly on the internet. So one thing I could do is draw up a contract, put out a market, and solicit people to trade with me. But I think the issue is that the contract is too hard to define here. As opposed to a fixed event—e.g., “Will the Democratic Party win the 2024 presidential election,” where there is widespread agreement about who wins a presidential election—a contract like “Was COVID-19 a lab leak” is very subject to opinion and ambiguity. You could try to define the contract more precisely, e.g.:
“The WHO thinks SARS2 was a lab leak by 2022”, or
“I will ask Matt Yglesias on Dec 31, 2022 whether he thinks it was more likely (no tie) a lab leak or natural origin; what will he say?”, or
“Will the Chinese Communist Party announce a statement agreeing it was a lab leak by 2022”
All 3 of these should have different values. The Chinese government admitting a lab leak is probably the least likely, so these contracts would trade at different odds, even though the origin of SARS2 already happened a year and a half ago, and nothing about how we frame the question now will change what actually happened.
Between framing ambiguity, the taxes on prop betting, and counterparty risk, I would have to start with a very wide market, which wouldn’t be very helpful in answering the original question. So I put betting markets aside for the moment.
The next thing is to find people who are really good at predicting things and see what they think. The only reputable superforecaster I’ve found who has publicly given a straight probability is Nate Silver, who on 5/23/2021 assigned a 60% chance to lab leak and 40% to natural origin.
Note that in the tweet, Silver previously had 40% lab leak, and updated to 60% based on the recent WSJ article documenting that three Wuhan Institute of Virology researchers became sick enough to seek hospital care in November 2019.
I am probably missing many other public guesses. I will consider them if pointed out to me.
Poll the Virologists
For a number of reasons, I think polling virologists is futile for uncovering the origins of SARS2.
This is a highly politicized topic, so there are many selection biases to correct for, which would seem very difficult and subjective to do:
People with which political leanings are more likely to become virologists?
What kinds of people are the ones who would speak out publicly, especially if it could cost them their job, i.e., being canceled for their political leanings?
Is there a direct occupational conflict of interest? I.e., a virologist guesses that if they spoke favoring lab leak, more public scrutiny and distrust would occur for labs, possibly defunding them, and causing said virologist to lose their job. So they decide to stay quiet.
Are virologists able to accurately quantify beliefs in terms of probabilities?
In particular, thinking about the origins of SARS2 without hard evidence requires knowing about priors and Bayesian updating—I’m not sure virologists as a whole know how to weight priors and evidence properly, which superforecasters are better equipped to do.
I haven’t really seen anyone throw out numbers, just noncommittal words like “likely” or “extremely unlikely”.
Self-organized groups that make a claim tend to consist of people with similar beliefs, and could be a vocal minority.
Political correctness aspects of the debate—e.g. things like “In terms of policy, it doesn’t matter which origin hypothesis is correct, so we should publicly support zoonotic hypothesis because that would lead to less anti-Asian-American racism.”
Many virologists have already been discredited: they came out publicly in early 2020 to argue strongly that the evidence was overwhelmingly in favor of natural origin, yet now the mainstream belief is that the two hypotheses are close enough in probability to be very unclear.
There are definitely many cases—probably the vast majority of situations—where we should trust experts. If you wanted to find out whether ingesting 1mg of cyanide is fatal, toxicologists would probably have very good published answers. That question seems very easy to settle in a repeated experiment and doesn’t carry much political or selection bias, whereas the situation with SARS2 is filled with both. With that, we turn to more handwavy methods.
Debiasing and the Chinese Government
It would take many books to spell out the details of all the human cognitive biases that could hamper our thinking about the origin of SARS2. I want to point out the main, super-important selection bias.
The main bias is that the Chinese government, which effectively controls the investigation into the origins of SARS2, has a huge incentive to cover up a hypothetical lab leak, and has precedent for major cover-ups in the past, though none related to biolabs that we know of. This might be obvious to some of you, but really, this is a huge effect. Especially if you’ve only lived in Western countries, you might not realize to what extent China censors facts in a 1984-esque way.
Think about the worst thing an American presidency did in 4 years, and compare it to this (emphasis mine):
The Great Leap Forward (Second Five Year Plan) of the People’s Republic of China (PRC) was an economic and social campaign led by the Chinese Communist Party (CCP) from 1958 to 1962. Chairman Mao Zedong launched the campaign to reconstruct the country from an agrarian economy into a communist society through the formation of people’s communes. Mao decreed increased efforts to multiply grain yields and bring industry to the countryside. Local officials were fearful of Anti-Rightist Campaigns and competed to fulfill or over-fulfill quotas based on Mao’s exaggerated claims, collecting “surpluses” that in fact did not exist and leaving farmers to starve. Higher officials did not dare to report the economic disaster caused by these policies, and national officials, blaming bad weather for the decline in food output, took little or no action. The Great Leap resulted in tens of millions of deaths, with estimates ranging between 15 and 55 million deaths, making the Great Chinese Famine the largest famine in human history.
This was a horrific disaster for which no one took responsibility. For 20 years afterwards, the Chinese Communist Party’s official terminology for this period was the “Three Years of Natural Disasters.”
This is all to say: when China says something about SARS2 that makes itself look better, i.e., discrediting the lab leak hypothesis, it doesn’t update my belief much at all. So which pieces of evidence should change our belief?
Because the evidence surrounding the question is so scarce and opaque, we can’t use courtroom-style presumptions like “we should assume 99.9% natural origin unless we find proof beyond reasonable doubt of lab leak,” or “we should assume 99.9% lab leak until we find proof of natural origin,” and then spend time debating which side has the burden of proof. There is just not enough evidence out there for either side. If we were trying to predict whether the Democrats or Republicans win the 2024 presidential election, it would be very weird to say, “There is currently a sitting Democratic president, so you need to convince me beyond reasonable doubt that a Republican will win before I believe Republicans have any chance of winning.” Instead, we would use polls, historical data, and mathematical models to make educated guesses. The approach for guessing the origin of SARS2 should look much more like the latter.
We need to use Bayes’ rule—form a prior, and then update on the scarce evidence we have. The prior is some baseline probability for a novel virus being natural or manmade. But guessing the prior for SARS2 is exceedingly hard.
For many questions, a prior is easily formed by checking the base rate of events. For example, suppose you heard that a friend passed away last weekend; the last thing you know is that they drove out for a weekend camping trip in the wilderness, and you later found out there was a thunderstorm in the region where they camped. What would you think is more likely: that they died in a car accident, or that they died from being struck by lightning? You might think that wilderness + thunderstorm = fairly likely to be lightning. You’d be very wrong. You need to compare the base rates: about 38,000 Americans die every year in auto-related accidents, while 49 die from being struck by lightning. So even knowing they might have been in an area with a thunderstorm, it’s probably still 99% likely that they died in a car accident on the way to the camp (conditioned on them having died from either a car accident or lightning). The prior is roughly that a car accident death is 775x more likely than lightning (38000/49 ≈ 775). Even if you add the evidence that your friend is a very, very careful driver who is 10x less likely to get in car accidents than the average person, the updated odds are that a car accident is 77x more likely than lightning, as opposed to 775x, so it is still overwhelmingly likely to have been a car accident.
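For concreteness, here is that arithmetic in odds form as a minimal Python sketch (the base rates are the ones above; the 10x careful-driver factor is the hypothetical evidence):

```python
# Odds-form Bayes update for the car-accident-vs-lightning example.
# The 38,000 and 49 base rates are from above; the 10x "careful driver"
# factor is the hypothetical piece of evidence.

prior_odds = 38_000 / 49          # ~775:1 in favor of car accident over lightning
likelihood_ratio = 1 / 10         # careful driver: 10x less likely to crash
posterior_odds = prior_odds * likelihood_ratio   # ~77.5:1

p_car = posterior_odds / (1 + posterior_odds)
print(f"odds ~{posterior_odds:.0f}:1, P(car accident) ~ {p_car:.1%}")  # ~78:1, ~98.7%
```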
We want to figure out where a novel pathogen that has never spread en masse in human populations before came from. Unfortunately, we can’t really construct a base rate from looking at historical data, because as far as we know, there are zero known examples of lab leaks of novel pathogens!
| | Known pathogens | Novel pathogens |
| --- | --- | --- |
| Natural origin | Viruses causing the common cold; Yersinia pestis (plague); E. coli; Salmonella | 1918 Spanish flu; 1976 Ebola virus; 2003 SARS1; 2012 MERS |
| Lab leak | 1974 anthrax leak in the Soviet Union; 1978 smallpox leak in the UK; 2003–2004 SARS1 leaks in Taiwan, Singapore, and China; 2019 brucellosis leak in China | None publicly known |
Why are there no known examples? It could be that it has never happened before. But it could also be from selection bias: (1) labs housing pathogens are relatively new in human history, and especially new is our ability to engineer new pathogens, called “gain of function”, and (2) anyone in charge of such a lab during a leak has a strong incentive to deny it—they have a lot to lose from having their lab shut down, and the cost of cover-up is pretty low: a new random disease came out, and you clean up the lab, no one would be any the wiser as to where it came from.
Strictly speaking, our main question concerns only the right side of the table. But thinking about the left side is necessary to establish a base rate. We know that labs around the world store pathogens ranging from very deadly (e.g., ebola) to relatively harmless (e.g., the common cold). Some thoughts:
If you (you being an American living in the continental US in all these examples) get the common cold, what was the chance it came from natural spread vs a lab leak?
99.9% chance it was from natural spread, given that such a large percentage of the population catches the common cold. Though arguably, there might be some variant of the common cold that escaped and has a higher infectivity than a typical common cold, and maybe most of the cold cases now are from a lab escape. Maybe.
If you get ebola, what is the chance of natural origin vs lab leak?
Maybe 75% lab leak? It is much closer to 50-50 than the previous example: ebola doesn’t often occur in the US, but there are occasional outbreaks elsewhere, and since the US does humanitarian aid and has lots of outgoing tourism, the chance it spread naturally isn’t zero. At the same time, we don’t know how many labs hold ebola samples that it could have escaped from. It’s a tough question because we need to compare two probabilities that are both very small. I honestly don’t know. And my guess is the answer for SARS2 is similar to this question.
Given this one is pretty close, it also depends a lot on other factors. For example, do you live right next to a biosafety-level-4 lab? If you did, then I’d guess it’s more like 95% lab leak. Things have escaped US labs, and I think it would be noncontroversial to claim that Chinese labs are less careful and less safe.
If you get smallpox, an eradicated disease, what is the chance of natural origin vs lab leak?
99% chance it was a lab leak. Zero cases have been reported to have naturally occurred in the US in decades, though leaks around the world do happen sometimes.
So which of these would a novel coronavirus be similar to? Well, none of them, since these are all known viruses that scientists stored in a lab. As SARS2 is a novel virus, we are guessing both the chance that a lab in Wuhan, such as the Wuhan Institute of Virology (WIV), could leak a pathogen, and the chance that the lab either brought in novel bat coronaviruses for study or engineered a more infectious virus. I mostly believe one of the latter is likely to have happened, based on the discussion in Nicholas Wade’s recent article on the origins of Covid.
Based on my intuitions on probabilities of lab leaks for the 3 cases above (common cold, ebola, smallpox) and understanding of Wuhan lab involvement in storing and engineering coronaviruses, I assign a prior of 67% lab leak vs 33% natural origin.
Now we need to update the prior on the few pieces of evidence we have:
“Three researchers from China’s Wuhan Institute of Virology became sick enough in November 2019 that they sought hospital care”, from the WSJ.
This is an obvious update towards lab leak, but by how much? Possibly by a lot, but I don’t know the base rate of how many people typically get sick from the WIV in a random November. If the base rate is 0 or 1, then we should probably update a lot, maybe by 22%? If the base rate is 2 or more, then we should update by almost zero. I’ll guess 50-50 between these two situations, such that we should update towards lab leak by 11%.
Math: I’d guess that in the case where the base rate of researchers getting sick is 0 or 1, the ratio P(3 WIV researchers sick | lab leak) to P(3 WIV researchers sick | natural origin) is 4 to 1. Then use Bayes’ rule to get P(lab leak | 3 WIV researchers sick) = 0.89. [Since my prior was 67% to 33%, we compute 0.67 * 4 vs 0.33 * 1, or 2.68 vs 0.33. Then 2.68/(2.68+0.33)=0.89.] Going from 67% to 89% is a +22% update. Since I only believe this story halfway (50% that the base rate of number of researchers going to the hospital in November is 2 or higher), I apply only half the update, or +11%.
Note that if my prior were far less confident of lab leak, say 30% lab leak/70% natural origin, the revelation that 3 researchers fell ill should still update my belief by a lot! Using Bayes rule results in 63% lab leak, an update of +33%, and believing only half the update means +16.5%, which is still a big update—an even bigger update than in my actual prior. If I thought 30% lab leak/70% natural origin before, I should now think 46.5% lab leak/53.5% natural origin.
Note the 4-to-1 ratio of P(3 WIV researchers sick | lab leak) to P(3 WIV researchers sick | natural origin) is a bit arbitrary. I could see an argument for this being lower, like 2:1. I could also easily see this number being higher, maybe 10:1! In the latter case, the Bayesian update is really large—the output of Bayes’ rule on the 30%/70% prior is now 81%(!), for an update of +51%. Believing only half of that, we get that the 30%/70% now becomes 56%/44%.
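To make the arithmetic in the last three paragraphs easy to check, here is a minimal sketch. The 4:1 and 10:1 likelihood ratios, and the 50% credence that the evidence matters at all, are my guesses from above, not established numbers:

```python
# Odds-form Bayes update, blended with the prior by how much we trust the evidence.
# All inputs are guesses, not established numbers.

def update(prior, lr, credence=0.5):
    """Apply likelihood ratio lr to the prior, then blend posterior with prior."""
    odds = prior / (1 - prior) * lr
    posterior = odds / (1 + odds)
    return credence * posterior + (1 - credence) * prior

print(f"{update(0.67, 4):.2f}")   # 0.78: the 67% prior plus the ~+11% half-update
print(f"{update(0.30, 4):.2f}")   # 0.47: the 30% prior plus the ~+16.5% half-update
print(f"{update(0.30, 10):.2f}")  # 0.56: the 30% prior with the aggressive 10:1 ratio
```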
In Jan 2020, China began draconian lockdowns of major cities (“draconian” just means it was far more strict than anything we did in the West).
I’m very uncertain about this claim, but I think it’s a small update towards believing lab leak. This is because P(super lockdown | Chinese government knows it was a lab leak where “gain of function” was involved) > P(super lockdown | Chinese government is not sure what happened). A tiny +1% update towards lab leak?
China repeatedly claims its internal investigations suggest no evidence of lab leak.
From what I argued before, this updates my belief by almost 0.
WHO investigation says lab leak is extremely unlikely.
Tiny update towards natural origin, maybe 1%? Though again, since China effectively controls investigation into the lab, and this investigation took place months after the fact, I don’t put much weight on it.
Chinese vaccines are not as effective as Pfizer/Moderna ones.
Tiny update (1%) towards natural origin. If Chinese virologists had engineered a novel virus, maybe they would have had intricate knowledge of it and known how to inoculate against it? Though it also seems very plausible that the US simply has far superior R&D on vaccine development, especially in mRNA technology. This suggests Chinese scientists are not on the cutting edge of understanding viruses compared to their US counterparts, though you don’t need to be on the cutting edge to accidentally release a virus.
Variants are now responsible for most cases.
I don’t know enough about viruses to make a claim as to which direction this should go, but I’m guessing the net effect is small enough to not matter.
In total, these are a +10% adjustment (+11% from WSJ article on researchers becoming sick, +1% from lockdowns, -1% from WHO investigation, -1% from vaccines). So from the evidence mentioned, my belief in lab leak went from 67% to 77%.
I ended up with 77% lab leak, 23% natural origin. It’s almost certain that my number is higher than most people’s estimates. The main things that account for this discrepancy are:
My prior is based on thought experiments on existing pathogens (ebola, smallpox).
I more heavily discount what China, WHO, and virologists say.
I update more strongly on the evidence that 3 researchers fell ill in Nov 2019.
77% is just my current belief, and it will update as new evidence comes in.
I’m interested in seeing what other people’s probabilities are.
Imagine two tribes of hunter-gatherers, 50 people each. Tribe One believes that killing is always wrong, while Tribe Two thinks killing is okay–so long as it’s a member of another tribe. During a harsh winter with low food levels, the two tribes venture outside their usual zones and run into each other. Tribe Two kills half of Tribe One and takes some of their food.
Now Tribe One has only 25 people, while Tribe Two still has 50. So the percentage of total population that believes killing is justified went from 50% to 67% (50 out of 75 is 67%).
Okay, maybe that’s kind of misleading. The belief increasing from 50% to 67% wasn’t the result of 17% of people being convinced it was right; it’s because the people who didn’t believe it were selected out of the population. But assuming all else is equal, both tribes will eventually grow until the total population reaches 100 again, and the end effect will be as if 17 people had converted.
What is going on is that being willing to kill members of other tribes is an evolutionarily beneficial idea.
In our example, we didn’t need to start with two tribes. There could have been 1000 tribes–50% pacifist, 50% violent. What happens when they repeatedly interact with each other in the long run? Most of the population becomes violent.
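Here is a toy simulation of that scenario, with mechanics invented purely for illustration: tribes meet in random pairs each round; when a violent tribe meets a pacifist one, the pacifist tribe loses half its members; then all tribes regrow proportionally back to the original total population.

```python
import random

def violent_share(n_tribes=1000, size=50, rounds=100, seed=0):
    """Fraction of the population in violent tribes after repeated random encounters."""
    rng = random.Random(seed)
    tribes = [[i % 2 == 0, float(size)] for i in range(n_tribes)]  # [is_violent, population]
    total = float(n_tribes * size)
    for _ in range(rounds):
        rng.shuffle(tribes)
        for a, b in zip(tribes[0::2], tribes[1::2]):
            if a[0] and not b[0]:       # violent meets pacifist: pacifist loses half
                b[1] /= 2
            elif b[0] and not a[0]:
                a[1] /= 2
        current = sum(t[1] for t in tribes)
        for t in tribes:                # everyone regrows proportionally
            t[1] *= total / current
    return sum(t[1] for t in tribes if t[0]) / total

print(f"violent share of population after 100 rounds: {violent_share():.0%}")
```

Under these made-up mechanics, the violent share climbs toward 100%. No one was persuaded of anything; the pacifists were simply selected away.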
Biological organisms aren’t the only things that evolve via natural selection. Ideas do too.
Propagation of Ideas by Natural Selection
We’d like to think our beliefs are correct. Near 100% of people used to believe the Sun went around the Earth. Now we mostly think the opposite. “Earth orbits the Sun” is a factually correct idea that seemed to spread due to the merit of its accuracy.
Being correct is one way for an idea to gain traction. Having traits that favor natural selection is another. “We should care about our own tribe more than others” does not seem like a factually correct belief, or at least not an obviously correct one. It is popular because it was evolutionarily advantageous–when there were collisions between believers and nonbelievers, the believers were inherently more likely to gain from the collision.
Evolutionarily Advantaged Ideas
Here are four ways to increase the % of population that has a particular belief X:
Decrease the population of people who don’t believe X
Increase the population of people who believe X
Convince people who don’t believe X to believe X
Deter new people from believing alternatives to X
Ideas that inherently do one or more of these will be favored in selection. An idea is inherently advantaged if acting on that idea causes the % of people with that idea to increase. Heliocentrism does not inherently spread, whereas tribalism does–via killing off those who are not tribal. More examples:
Any belief that creates advantages in war
An emphasis on science & technology. Between two otherwise-equal countries, the technology-loving one has an advantage.
Nationalism and strong national identities. This should work in similar ways to tribalism.
Policies like having a standing army or draft.
Racism in the old-fashioned way–straight-up “people of X color are subhuman/shouldn’t exist”. This is essentially the same example as tribalism.
Family centrism. This is more of a biological trait than a psychological one, but I’ll mention it here. Suppose 50% of people would sacrifice the lives of two strangers to save their child, and the other 50% would sacrifice their child to save the lives of two strangers. Assuming there is some genetic component to this belief, you’d expect the population to converge to 100% of the population being willing to sacrifice two strangers to save their own child, because that gene would be selected.
“Have lots of children” is an obvious one. If 50% of the population believed everyone should have lots of children, and 50% believed no one should have children, what % of the population will have each belief in 100 years?
Mainstream economics. Given that you’re reading this, you are likely living in a wealthier-than-average country, and wealthy countries tend to have strong growth policies.
Countries which prioritize growth over sustainability gain a military advantage, in addition to directly increasing the % of population that supports growth.
“My country shouldn’t worry about climate change”–a country that worries a lot about climate change has to sacrifice growth, putting it at a disadvantage compared to other countries; over time it could lose its share of world population, and economic trouble might let in ideas from rich countries that don’t care about climate change.
Anti-euthanasia. We take this for granted, but “you should live your life, even if you are suffering” is an evolutionarily advantaged belief. Say there is a disease so permanently crippling and painful that 90% of those who get it really, really beg to be euthanized (and somehow succeed in convincing their doctors), while the other 10% still experience the pain but really, really believe in suffering through it. If you then conduct a poll (“Is this disease so bad you’d want to die? Let’s ask some patients and find out”), you’d find that a large percentage want to keep on living, because the ones who most wanted to die are no longer around to answer.
Religion bundles several advantaged traits. The punishment for apostasy can range from social stigma to death, deterring people from believing competing ideas. There is also the threat of eternal suffering for nonbelief.
The first three commandments are about deterring people from thinking about competing ideas.
Religions tend to have some form of evangelism.
“Be fruitful and multiply” is growth-oriented.
Simple, easy-to-explain ideas. It is easy to spread simple ideas, difficult to spread complex ones.
Ideas that human brains are particularly good at remembering. E.g., a catchy slogan or song.
In general, I think we should be marginally more skeptical of all of these ideas. They are popular, not necessarily because they are right, but because they have beneficial selection traits. An idea could still be right, just not because of the reasoning “a bunch of other people believe this idea, so it must have a high likelihood of being correct.”
Evolutionarily Disadvantaged Ideas
The converse is that we should be more accepting of evolutionarily disadvantaged ideas, or evolutionary dead-ends. A very basic list is just the opposite of the previous:
Ideas that don’t lead to strong militaries, e.g. not focusing so much on science and technology
Treating all humans equally. This sounds obvious and easy, but it is really not! Who would value a stranger’s child as equal to their own child?
Sustainability-oriented ideas, or even population/economic-shrinking ideas, as opposed to permanent growth.
Antinatalism. Already, more people, especially in the West, are choosing to be childfree.
Environmentalism. Note the most radical forms like the Voluntary Human Extinction Movement.
More strongly, suicide. Suicide is the most extreme evolutionary dead-end. Yet a lot of people die by suicide every year. Maybe the idea that life sucks/isn’t worth living is more valid than people give it credit for, and a lot of people needlessly suffer their entire lives. It is hard to have a good two-sided discussion here because the people who most agreed with the idea are dead. Of course, raising the status of this idea is a social danger, because it would cause more people to die of suicide.
Anti-religion. Note this mostly applies to the Abrahamic religions. Buddhism is kind of a weird one because it is somewhat antinatalist, so we would have expected it to be selected out of the population.
To correct for selection, we should marginally lower the acceptance of advantaged ideas and raise the acceptance of disadvantaged ideas. And when considering which ideas are the most popular, we need to make sure we’re not falling to selection effects.
A future post will contain a counterargument to all this–why we shouldn’t care about idea selection and just use whatever ideas are easy to propagate.
The title of the TIME article bluntly states, “Misogyny Didn’t Turn Elliot Rodger Into a Killer,” and the first sentence reads, “Yes, Elliot Rodger was a misogynist — but blaming a cultural hatred for women for his actions loses sight of the real reason why isolated, mentally ill young men turn to mass murder.”
Besides this acknowledgement, the articles all present evidence that furthers their own theories while not considering evidence that might support other theories. It’s very difficult to dig up an article that discusses with nuance, for instance, how much of the event was caused by misogyny and how much by mental illness, or how the two factors work in tandem. (Or whether there is a third factor: this article (Salon) discusses the role of race in Rodger’s motives.)
In case you’ve already made up your mind on which side of the misogyny vs mental illness debate you fall, here is a simpler, non-politically-charged example. Suppose we want a theory to predict where there is snow and where there isn’t. The first theory I’ll propose is the latitude theory: higher latitudes are colder and should thus have more snow (assuming we’re in the Northern hemisphere). If this theory were completely true, then everywhere north of some latitude line there would be snow, and everywhere south of it, no snow. Clearly this isn’t true.
Here is another theory: water proximity theory. Snow needs water to freeze, so snow will form near bodies of water. If this theory were completely true, then we should only find snow near water. Clearly this isn’t true either.
As one can see, neither theory is true as an absolute statement. The correct way to think of these theories is as probabilistic theories. That is, the more north you go, the higher the chance you will encounter snow. The same goes for being near bodies of water, to a lesser extent. Even then, snow cover cannot be explained as a combination of these two factors alone: mountainous regions have more snowfall as well.
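To make “probabilistic theory” concrete, here is a toy model in that spirit, where no single factor determines snow and each factor just shifts the probability. The weights are invented for illustration, not fitted to any data:

```python
import math

def p_snow(latitude_deg, km_to_water, elevation_m):
    """Toy logistic model: P(snow) rises with latitude and elevation, falls with distance from water."""
    score = 0.15 * (latitude_deg - 45) - 0.002 * km_to_water + 0.003 * elevation_m
    return 1 / (1 + math.exp(-score))

print(f"{p_snow(60, 5, 100):.0%}")     # far north, near water: very likely snow
print(f"{p_snow(35, 5, 100):.0%}")     # farther south, near water: much less likely
print(f"{p_snow(35, 500, 2500):.0%}")  # southern mountains: likely again
```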
The debates in our current-day media are akin to one side saying that latitude determines everything and the other side that proximity to water determines everything. Neither side is willing to look rationally at the cold facts around them.
History is another subject where it is clearer that everything has multiple causes. In just under two months from today, it will have been 100 years since the beginning of World War I. One might argue that the cause of WWI was the assassination of an archduke, but this simplistic explanation misses all the political tensions and alliances of the time. Similarly, one could argue that it was purely due to the political landscape and that war would have broken out regardless of the assassination. Both causes were necessary to an extent. If Franz Ferdinand had been assassinated in a less tense time, war might have been averted. Similarly, if no assassination had occurred, the great powers might not have had a proper excuse to actually go to war.
So why can’t we use scientific or historical reasoning on sociological issues?
Religion is a great example of this single-cause mentality. The honor killing of Pakistani woman Farzana Parveen last week was unanimously condemned in the US, much like the Elliot Rodger shooting. However, whenever someone posited a cause that could have contributed to the honor killing, the other side would knock it down, saying it couldn’t be the right cause, and give counterexamples. For instance, if you go to the comment section of any major news story about this event, you’ll invariably find that someone criticizes Islam for condoning honor killings and promoting misogyny, and then someone else responds by pointing out that honor killings sometimes happen in other cultures (e.g., Hindu) as well.
Both sides make decent points, but such conversations are useless because each is saying true things while ignoring what the other is saying. Just as “more north = more snow” is not always true, it is also not always false. So sure, Islam might not be the only reason honor killings occur so often in Pakistan, but it’s a pretty strong factor. Just because a cause is not the only cause does not mean it is not a cause at all.
With religion in general, people very often make absurdly simplistic statements themselves and assume other people’s views of religion are absurdly simplistic (perhaps by projection). This might also be reflected in the general media and American culture as a whole. We love simple answers to complex problems. I’m not advocating that we personally conduct full academic research for every problem we face, but we are clearly too far on the simplistic side. The problem is that we’re thinking too little, not too much.
The Elliot Rodger shooting, just like any other event, has a variety of causes: both misogyny and the mishandling of mental illness are to blame. Snow cover depends on several conditions. World War I had a complex background, as do honor killings and suicide bombings.
Solutions to oversimplification of causes?
Prefer depth of news, not breadth. Instead of gaining a superficial understanding of many stories, try to understand one story really well. Read 10 different articles on Elliot Rodger and look at the issue from all sides.
Look at the statistics yourself. Numbers don’t oversimplify themselves.
Acquire more information. Have an opinion on Russia’s involvement with Ukraine? See if your opinion changes if you read up on past involvements.
Read the comments section of the article. While 90% of it may be trash, someone might point out something worthwhile.
Today was my graduation from Cornell, but since I’m not a fan of ceremony, the topic for today is completely different: a subset of selection bias known as observer selection.
Selection bias in general is selecting particular data points out of a larger set to distort the data. For example, using the government’s own NOAA website (National Oceanic and Atmospheric Administration), I could point out that the average temperature in 1934 was 54.10 degrees Fahrenheit, while in 2008 it was 52.29. Clearly from these data points, the US must be cooling over time. The problem with the argument is, of course, that the two years 1934 and 2008 were chosen very carefully: 1934 was the hottest year in the earlier time period, and 2008 was the coolest year in recent times. Comparing these two points is quite meaningless, as the overall trend is up.
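A quick sketch with synthetic data shows the trick: generate temperatures with a small upward trend plus noise, and you can always find a hot early year and a cool late year to “prove” cooling, even though the fitted trend is up. The data below is made up, not NOAA’s:

```python
import random

rng = random.Random(1)
years = list(range(1930, 2011))
temps = [52 + 0.01 * (y - 1930) + rng.gauss(0, 0.5) for y in years]  # synthetic data

# Cherry-pick: the hottest of the first ten years vs the coolest of the last ten.
hot_early, cool_late = max(temps[:10]), min(temps[-10:])
print(f"cherry-picked 'cooling': {hot_early:.2f}F -> {cool_late:.2f}F")

# The least-squares slope over all years tells the real story.
n = len(years)
mx, my = sum(years) / n, sum(temps) / n
slope = sum((x - mx) * (t - my) for x, t in zip(years, temps)) / sum((x - mx) ** 2 for x in years)
print(f"fitted trend: {slope * 100:+.2f}F per century")
```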
Observer selection is when the selection bias comes from the fact that someone must exist in a particular setting to do the observation. For instance, we only know of one universe, and there is life in our universe—us. Could it have been possible that our universe had no life?
The issue with trying to answer this question is that if our universe indeed had no life, then we wouldn’t exist to witness that.
The anthropic principle: given that we are observing the universe, the universe must have properties that support intelligent life. It addresses the question “Why is our universe suitable for life?” by noting that if our universe were not suitable for life, then we wouldn’t be here making that observation. That is, the alternative question, “Why is our universe not suitable for life?”, cannot physically be asked. We must observe a universe compatible with intelligent life.
The point is, there may be millions, billions, or even an infinite number of universes. But even if only one in a trillion were suitable for life, we must exist in one of those. So our universe is not “fine tuned” for life, but rather, our existence means we must be in a universe that supports us.
A list of observer effects:
The anthropic principle, as above. Our universe must be suitable for life.
A planet-oriented version of the anthropic principle: Earth has abundant natural resources, is in the habitable zone, has a strong magnetic field, etc.
A species-oriented version of the anthropic principle: Our species is very well adapted to survive. If we weren’t, then we wouldn’t be thinking about this.
There are no recent catastrophic asteroid impacts (the last one being 65 million years ago). If there were, then we again wouldn’t be observing that.
The same goes for all natural disasters: no catastrophic volcano eruptions, no nearby supernovae or black holes, etc.
The same goes for apocalyptic man-made disasters. Had the Cold War led to a nuclear exchange that wiped out humanity, we would not be able to observe a headline that said, “Nuclear Weapons Make Humans Extinct.” Thus, we must observe non-catastrophic events in the past.
Individual life follows this as well. Say you had a life-threatening illness or accident in the past, but you’re alive now (of course, given that you’re reading this). Given that you’re alive now, you must have survived it, so to the question, “Are you alive?,” you can only answer yes.
All of these are strong observer effects, in that they are absolute statements and not probabilistic ones, i.e. “Our universe must have life,” and not “Our universe probably has life.”
There are numerous other observer effects that are probabilistic but can still be very significant. For example, given that you are reading this, you are more likely to be in a highly literate country than in a less literate one. Moreover, that probability is higher than it would be if I knew nothing about you.
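As a minimal sketch of that probabilistic claim, with made-up numbers for every input:

```python
# All three inputs are invented for illustration.
p_high = 0.5              # prior: P(reader lives in a highly literate country)
p_read_given_high = 0.10  # P(reading a blog like this | highly literate country)
p_read_given_low = 0.01   # P(reading a blog like this | less literate country)

posterior = (p_high * p_read_given_high) / (
    p_high * p_read_given_high + (1 - p_high) * p_read_given_low
)
print(f"P(highly literate country | you are reading this) = {posterior:.0%}")  # ~91%
```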
In a previous post, I mentioned the example of democracy in political science. In summary, political science has a lot more to say about democracy than about any other form of government. Is this because we are personally biased towards democracy? Not necessarily. In a less open system, fields like political science might be forbidden from research (or academia might be given less importance), and hence there are no (or few) pro-totalitarian political scientists. Hence, we end up seeming to favor democracy.
We also know that history is written by the victors. A related historical example is the rise of strong states alongside the rise of liberalism and progressive thought in the modern era. Namely, the states in which liberalism arose (England, France) tended to be strong states; a weak state adopting progressive measures would have been wiped out by a stronger one. Hence, history is also analyzed by the victors.
So what can you do about observer selection? All we can do is try to be aware of it and introduce corrections, so that we study the full set of possibilities rather than the subset we inhabit as particular observers. For instance, if we just used the historical record of natural disasters, we would underestimate the actual probability of a catastrophic disaster, since the very fact that we are here implies none has occurred for a while.
For someone like myself who doesn’t see everything in economic terms, the world of those who do is very bizarre. For instance, when we think about wealth inequality and how to reduce it, we inevitably come up with familiar concepts like increasing tax rates for the rich, capping their income, regulating investments, and so on. But the first article I stumbled upon, “Two Surefire Solutions to Inequality,” provided two strange solutions: increasing the fertility rate among the rich, and decreasing the fertility rate among the rich.
The tl;dr arguments are as follows: Increasing the fertility rate among the rich means that large wealthy families will be forced to divide their wealth every generation, thus lowering individual wealth slowly over time (of course, assortative mating slows this down).
On the other hand, decreasing the fertility rate among the rich means that the rich class will slowly disappear over time.
This seems really strange. Neither solution obviously solves any problem, and both might make things worse in the short term. In addition, any government mandate on fertility would be hard to define and would be met with resentment from both sides in either situation. In other words, these solutions are absurd.
But in another sense, they are not absurd at all. They both make perfect logical sense. Assumptions were made, but not much more so than any other economic model. So why are these solutions so strange? Is it just social norms holding us back? A fear of anything resembling eugenics? A desire to not mess with peoples’ rights?
For a change of pace, here are some funny economics jokes, from the link given at the beginning:
An economist is someone who has had a human being described to him, but has never actually seen one.
When doctors make mistakes, at least they kill their patients. When economists make mistakes, they merely ruin them.
One night a policeman saw a macroeconomist looking for something by a lightpole. He asked him if he had lost something there. The economist said, “I lost my keys over in the alley.” The policeman asked him why he was looking by the lightpole. The economist responded, “It’s a lot easier to look over here.”
Last semester, our apartment had a debate over whether video games cause violence. It came down to arguing logical mechanisms, but without any use of statistics by either side. The argument basically turned into my word vs your word, since there was no objective basis on which to judge anything.
If your answer were yes, you might propose the mechanism: “People who play violent video games are likely to imitate the characters they play, thus becoming more aggressive in real life.” This statement might be logically sound, but without any supporting evidence, it has little credence.
You could easily propose a counter-mechanism: “People who would otherwise commit violent crimes satisfy their urges in video games and not in real life, thus decreasing the crime rate.” Again, this seems plausible, but without any data, we simply don’t know whether this effect outweighs the other. We need real stats.
Naively looking at statistics does not help either. Depending on which stats you look at and how they are presented, the conclusions can go either way (see graph 1 and graph 2).
In any subject, one important concern is matching theories with empirical data. In the hard sciences, one tests the theory by experiment, and it is often possible to verify or deny claims with empirical data. But in the social sciences, experiments are sometimes impossible. To see what would happen if Germany had won World War II, we cannot simply recreate the circumstances of the war in a petri dish. So we must do the best we can with the limited data we have.
This lack of statistics affects many other issues, perhaps more important ones. For instance, in the public debate over gun control, there are clearly two competing mechanisms: “More guns = more shootings” and “More guns = more protection.” Each makes logical sense on its own, but the way to figure out the more accurate one is not by purely logical argumentation (which will lead nowhere), but by use of statistics, i.e. show the real effects of implementing or not implementing gun control laws. This would be much more fruitful than mindlessly yelling mechanisms across the void.