Lightly Held Identities and Ideological Turing Tests

Here is a brilliant passage on identity from Julia Galef’s The Scout Mindset: Why Some People See Things Clearly and Others Don’t:

The problem with identity is that it wrecks your ability to think clearly. Identifying with a belief makes you feel like you have to be ready to defend it, which motivates you to focus your attention on collecting evidence in its favor. Identity makes you reflexively reject arguments that feel like attacks on you or the status of your group.

To counteract this, Galef says to have lightly held identities:

Holding an identity lightly means thinking of it in a matter-of-fact way, rather than as a central source of pride and meaning in your life. It’s a description, not a flag to be waved proudly…

Holding an identity lightly means treating that identity as contingent, saying to yourself, “I’m a liberal, for as long as it continues to seem to me that liberalism is just.” Or “I’m a feminist, but I would abandon the movement if for some reason I came to believe it was causing net harm.” It means maintaining a sense of your own beliefs and values, independent of the tribe’s beliefs and values, and acknowledging—at least in the privacy of your own head—the places where those two things diverge.

So a belief that has become part of one’s identity is hard to abandon, because to abandon the belief is to abandon one’s identity. If, instead, the belief is held lightly, it is easy to update upon seeing contradictory evidence.

One example comes from the New Atheism culture wars of the late 2000s and early 2010s, and I have to admit, I used this argument myself. It went something like, “Shouldn’t Muslims be more willing to leave their religion, after seeing repeated evidence that suicide bombers profess their faith in Islam right before killing themselves along with dozens of civilians?” Another angle was, “Shouldn’t liberal Muslims be more willing to leave their religion, given that they personally support women’s rights and LGBT rights, while Islamic countries around the world have the most sexism and anti-LGBT discrimination?” Needless to say, these arguments were unconvincing.

The reason this wasn’t convincing is that it was psychologically equivalent to an attack on the identity of Muslims. In the New Atheists’ minds, they were trying to help people move to a better belief system, saying, “Hey, I know you live your life following parts of the wisdom in this ancient book, but look, here are some people who are really, really into following the same ancient book—the people who think about this book all the time—and they commit atrocities as a direct result of following it, so you should probably believe in this book a little less than you did before.” But since religion is often not just a belief but an identity, this line of argument reinforces belief instead of weakening it.

Religion aside, I’ve always been fascinated by how communism remains an acceptable belief amongst educated people—I encountered this in both high school and college. Certainly communism isn’t a core part of many people’s identity, though maybe anti-capitalism is? When I mentioned the famines that killed tens of millions as a result of communist economic policies in the Soviet Union and China, the response was not to become more skeptical of communism but instead something like, “Well, they weren’t doing communism correctly; if we did it correctly, it wouldn’t kill tens of millions of people.” That last argument garners a lot of sympathy from well-educated people, yet none of them would sympathize with “Fascism is inherently good because it builds national strength; the Germans just did it incorrectly by tacking antisemitism and eugenics onto it; if we did it correctly…”

What identities would I self-describe as? If you asked me 10 years ago, “liberal” would probably have been at the top of my list. But in 2015, I became quite disenchanted with the left, starting with the now-standard response to the 2015 Charlie Hebdo shooting: “I think the shooting is wrong, but something about how France is a racist, colonialist empire and therefore somewhat deserved it.” (Which isn’t far from the more recent, “I think antisemitism is wrong, but….”) I thought I had a lot in common with liberals until the liberal discussion focused mostly on what the West did wrong to deserve a terror attack. This was basically the quote in Galef’s book, “I’m a liberal, for as long as….” (One could do some mental gymnastics here by arguing, “Well, I’m still a liberal in the definitional sense; it’s the other people claiming to be liberals who are abandoning the core tenets of liberalism,” but I’ll leave that out.)

At this point I don’t carry around much identity, because in my experience, the people most focused on their identities seem to be the least amenable to reason. I guess if I had to pick one, I would consider myself an epistemic rationalist, in the sense that I think using logic and probability is the best way to form accurate beliefs about the world.

Relatedly, Galef mentions in her book the Ideological Turing Test, an idea from Bryan Caplan that in order to truly understand an opposing viewpoint, you should be able to convince someone you actually hold that opposing belief. This is hard to do—the book contains an example of someone claiming to do this but failing horribly. From Galef:

Measured against that standard [the Ideological Turing Test], most attempts fall obviously short. For a case in point, consider one liberal blogger’s attempt at modeling the conservative worldview. She begins, “If I can say anything at this bleak hour, with the world splitting at its seams, it’s this: conservatives, I understand you. It may not be something you expect to hear from a liberal, but I do. I understand you.” An earnest start, but her attempt to empathize with conservatives quickly devolves into caricature. Here’s her impression of the conservative take on various subjects:

On capitalism: “Those at the top should have as much as possible. That’s the natural order . . . It’s not a secret; just don’t be lazy. Why is everyone so poor and lazy?”

On feminists: “Those women make noise, make demands, take up space . . . Who do they think they are?”

On abortion: “What a travesty . . . women making these kinds of radical decisions for themselves.”

On queer and transgender people: “They shouldn’t exist. They’re mistakes. They must be. But wait, no. God doesn’t make mistakes . . . Oh dear. You don’t know what’s happening anymore, and you don’t like it. It makes you feel dizzy. Out of control.”

It’s hardly necessary to run this text by an audience of conservatives to be able to predict that it would flunk their ideological Turing test. Her “conservative” take on capitalism sounds like a cartoon villain. The language about women taking up space and making decisions for themselves is how liberals frame these issues, not conservatives. And her impression of a conservative suddenly realizing his views on transgender and queer people are internally inconsistent (“They’re mistakes . . . But wait, no. God doesn’t make mistakes.”) just looks like a potshot she couldn’t resist taking.

[…]

The ideological Turing test is typically seen as a test of your knowledge: How thoroughly do you understand the other side’s beliefs? But it also serves as an emotional test: Do you hold your identity lightly enough to be able to avoid caricaturing your ideological opponents? Even being willing to attempt the ideological Turing test at all is significant. People who hold their identity strongly often recoil at the idea of trying to “understand” a viewpoint they find disgustingly wrong or harmful. It feels like you’re giving aid and comfort to the enemy. But if you want to have a shot at actually changing people’s point of view rather than merely being disgusted at how wrong they are, understanding those views is a must.

I’ve encountered this so many times: people, myself included, hold a ridiculous notion of what the other side of some ideological divide actually thinks. I’m not sure what the solution is. One thing I’ve tried recently and would recommend is to go on Reddit and subscribe to a subreddit—importantly, not one for an identity group you belong to, but for one you don’t. On its front page, you can probably find at least one post that you don’t totally disagree with. And if you really internalize that post, you’d be on your way to passing an Ideological Turing Test. (It doesn’t have to be Reddit. Follow someone you disagree with on Twitter, etc.)

That said, it is still incredibly hard to empathize with opposing views—as Galef said in the quote just above, “People who hold their identity strongly often recoil at the idea of trying to ‘understand’ a viewpoint they find disgustingly wrong or harmful.” It seems like people should, on the margin, hold their ideas less strongly.

What is the Probability of the Lab Leak Hypothesis?

Originally forbidden from public discussion for all of 2020, the lab leak hypothesis for the origin of SARS-CoV-2 (SARS2) has recently gained intellectual support, and President Biden just publicly announced that the US intelligence community has been investigating the hypothesis for months. Despite no direct evidence emerging for either side, opinion suddenly seems closer to 50/50 now, after a year of public discourse implying something like 1% lab leak/99% natural origin. The question we’re all wondering about: how likely is it that this was a lab leak?

I am not a virologist. My day job is quant trading, and I regularly think about probabilities of one-off events and update them in real time. So my methods might look alien, but with that, let’s estimate the probability of lab leak vs natural origin. I tried the more reliable approaches first, but given the lack of evidence for either side, most of this is guesswork.

I consider lab leak to include the accidental escape of either an engineered or a non-engineered virus.

My current belief is 77% lab leak, 23% natural. Nothing below is really a proof; it’s just a collection of things to think about when forming your own opinion.

Betting Markets

The first thing that comes to mind is: does there exist a prediction market? Prediction markets can be accurate when the outcomes are well defined and people can bet a lot of money on them—any inaccuracy can be “arbitraged” away, since people who are good at guessing have a financial incentive to trade. For example, in the 2020 US presidential election, after briefly swinging towards Trump on the Miami-Dade results, prediction markets were all heavily favoring Biden by the early morning after election day, at a time when Trump appeared to hold a significant lead in most of the remaining states. While states like Pennsylvania and Georgia took multiple days for the count to swing Democratic, the betting markets already knew that would happen less than 12 hours after polls closed. The point is, the market had already corrected for the mail-in ballot effect, because if it hadn’t, someone who correctly understood the effect could have made a lot of money trading it, thus pushing the market towards a more accurate price.

However, as far as I can tell, there is no liquid market for the lab leak hypothesis. Even if there were, the definition of such a contract would matter a lot—maybe in the case that a small group of people in China really did know there was a lab leak, the probability of conceding this could be very low, especially as they would have had ample time in the last 17 months to destroy evidence. So even a contract that says “The WHO thinks SARS2 was a lab leak by 2023” wouldn’t necessarily indicate the true probability of a lab leak, because you might expect there to be bias in which result would be more willingly revealed to the public—a bias which people trading the contract would certainly be aware of.

Even without an official prediction market, some people make bets openly on the internet. So one thing I could do is draw up a contract, put out a market, and solicit people to trade with me. But the contract is too hard to define here. As opposed to a fixed event—e.g., “Will the Democratic Party win the 2024 presidential election?”, where there is widespread agreement about what it means to win a presidential election—a contract like “Was COVID-19 a lab leak?” is very subject to opinion and ambiguity. You could try to define the contract more precisely, e.g.:

  • “The WHO thinks SARS2 was a lab leak by 2022”, or
  • “I will ask Matt Yglesias on Dec 31, 2022 whether he thinks it was more likely (no tie) a lab leak or natural origin; what will he say?”, or
  • “Will the Chinese Communist Party announce a statement agreeing it was a lab leak by 2022”

All 3 of these should have different values. The Chinese government admitting a lab leak is probably the least likely, so they would trade at different betting odds, even though the origin of SARS2 already happened a year and a half ago, and nothing we do in framing the question now will change what actually happened.

Between framing ambiguity, the taxes involved in prop betting, and worrying about counterparty risk, I would have to start with a very wide market, which wouldn’t be very helpful in answering the original question. So I put betting markets aside for the moment.

Ask a Superforecaster

The next thing is to find people who are really good at predicting things, and see what they think. The only reputable superforecaster I’ve found who has publicly given a straight probability is Nate Silver, who on 5/23/2021 assigned a 60% chance to lab leak and 40% to natural origin.

Note that in the tweet, Silver says he previously had lab leak at 40%, and updated to 60% based on the recent WSJ article documenting that three Wuhan Institute of Virology researchers became sick enough to seek hospital care in November 2019.

I am probably missing many other public guesses. I will consider them if pointed out to me.

Poll the Virologists

For a number of reasons, I think polling virologists is futile for uncovering the origins of SARS2.

  • This is a highly politicized topic, so there are many selection biases to correct for, which would seem very difficult and subjective to do:
    • People with which political leanings are more likely to become virologists?
    • What kinds of people would speak out publicly, especially if it could cost them their job, i.e., being canceled for their political leanings?
    • Is there a direct occupational conflict of interest? E.g., a virologist reasons that if they spoke in favor of lab leak, labs would face more public scrutiny and distrust, possibly losing funding and costing the virologist their job. So they decide to stay quiet.
  • Are virologists able to accurately quantify beliefs in terms of probabilities?
    • In particular, thinking about the origins of SARS2 without hard evidence requires working with priors and Bayesian updating—I’m not sure virologists as a whole are well practiced at weighting priors and evidence properly, which superforecasters are.
  • I haven’t really seen anyone throw out numbers, just noncommittal words like “likely” or “extremely unlikely”.
  • Self-organized groups that make a claim tend to be composed of people with similar beliefs, and could be a vocal minority.
  • Political correctness aspects of the debate—e.g. things like “In terms of policy, it doesn’t matter which origin hypothesis is correct, so we should publicly support the zoonotic hypothesis because that would lead to less anti-Asian-American racism.”
  • Many virologists have already been discredited: they came out publicly in early 2020 to argue strongly that the evidence was overwhelmingly in favor of natural origin, yet now the mainstream belief is that the two hypotheses are close enough in probability to be very unclear.

There are definitely many cases—probably the vast majority of situations—where we should trust experts. If you wanted to know whether ingesting 1mg of cyanide is fatal, toxicologists would have very good published answers. That question can be settled by repeated experiment and carries little political or selection bias, whereas the situation with SARS2 is filled with both. With that, we turn to more handwavy methods.

Debiasing and the Chinese Government

It would take many books to spell out the details of all the human cognitive biases that could hamper our thinking about the origin of SARS2. Here I want to point out the main, super-important selection bias.

The main bias is that the Chinese government, which effectively controls the investigation into the origins of SARS2, has a huge incentive to cover up a hypothetical lab leak, and has a track record of major cover-ups in the past, though none related to biolabs that we know of. This might be obvious to some of you, but really, this is a huge effect. Especially if you’ve only lived in Western countries, you might not realize to what extent China censors facts in a 1984-esque way.

Quick history lesson—The United States has done many things in the past that no American alive today should be proud of. They might not be the things most emphasized—Native American genocide, slavery, chemical weapons, shootings of student protestors—but as we live in a democracy with freedom of the press, you can easily find online articles (wiki pages are linked), books, documentaries, and movies about them. It’s very easy to argue that the bad things done by the modern Chinese government were much worse: the “Great Leap Forward,” which killed between 15 million and 55 million people, or the Tiananmen Square Massacre, where hundreds to thousands of student protestors were killed by the government. In China, events like these are censored—you can’t even search for them.

Think about the worst thing an American presidency did in 4 years, and compare it to this (emphasis mine):

The Great Leap Forward (Second Five Year Plan) of the People’s Republic of China (PRC) was an economic and social campaign led by the Chinese Communist Party (CCP) from 1958 to 1962. Chairman Mao Zedong launched the campaign to reconstruct the country from an agrarian economy into a communist society through the formation of people’s communes. Mao decreed increased efforts to multiply grain yields and bring industry to the countryside. Local officials were fearful of Anti-Rightist Campaigns and competed to fulfill or over-fulfill quotas based on Mao’s exaggerated claims, collecting “surpluses” that in fact did not exist and leaving farmers to starve. Higher officials did not dare to report the economic disaster caused by these policies, and national officials, blaming bad weather for the decline in food output, took little or no action. The Great Leap resulted in tens of millions of deaths, with estimates ranging between 15 and 55 million deaths, making the Great Chinese Famine the largest famine in human history.

This was a horrific disaster for which no one took responsibility. For 20 years afterwards, the Chinese Communist Party’s official terminology for this period was the “Three Years of Natural Disasters.”

Sound familiar?

This is all to say, when China says something about SARS2 that makes itself look better, i.e., discrediting the lab leak hypothesis, it barely updates my belief at all. So which pieces of evidence should change our belief?

Bayesian Reasoning

Because the evidence surrounding the question is so scarce and opaque, we can’t use courtroom-style presumptions like “we should assume 99.9% natural origin unless we find proof of lab leak beyond reasonable doubt,” or “we should assume 99.9% lab leak until we find proof of natural origin,” and then spend time debating which side has the burden of proof. There is just not enough evidence out there for either side. If we were trying to predict whether the Democrats or Republicans win the 2024 presidential election, it would be very weird to say, “There is currently a sitting Democratic president, so you need to convince me beyond reasonable doubt that a Republican will win before I believe Republicans have any chance of winning.” Instead, we use polls, historical data, and mathematical models to make educated guesses. The approach for guessing SARS2’s origin should look much more like the latter.

We need to use Bayes rule—form a prior, and then update based on scarce evidence. The prior is some baseline probability for a novel virus being natural or manmade. But guessing the prior for SARS2 is exceedingly hard.

For many questions, a prior is easily formed by checking the base rate of events. For example, suppose you heard that a friend passed away last weekend, and the last thing you know is that they drove off for a weekend camping trip in the wilderness, and you later found out there was a thunderstorm in the region where they camped. What would you think is more likely: that they died in a car accident, or that they died from being struck by lightning? You might think that wilderness + thunderstorm = fairly likely to be lightning. You’d be very wrong. In actuality, you need to compare the base rates: 38,000 Americans die every year in auto-related accidents, while 49 die from being struck by lightning. So even knowing they might have been in an area with a thunderstorm, it’s probably still 99% likely that they died in a car accident on the way to the camp (that is, conditional on their having died from either a car accident or lightning). The prior is roughly that a car accident death is 775x more likely than lightning (38,000/49 ≈ 775). Even if you add the evidence that your friend is a very, very careful driver who is 10x less likely to get in a car accident than the average person, the updated odds are that a car accident is 77x more likely than lightning, as opposed to 775x, so it is still overwhelmingly likely to have been a car accident.
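As a sanity check on that arithmetic, here is the same odds-form update in a few lines of Python (a minimal sketch; the 38,000 and 49 figures are the ones quoted above):

```python
# Odds-form Bayes: posterior odds = prior odds * likelihood ratio.
deaths_car = 38_000     # annual US auto-related deaths (figure from the text)
deaths_lightning = 49   # annual US lightning-strike deaths (figure from the text)

prior_odds = deaths_car / deaths_lightning  # ~775:1 in favor of car accident
likelihood_ratio = 1 / 10                   # careful driver: 10x less likely to crash

posterior_odds = prior_odds * likelihood_ratio  # ~77.5:1
p_car = posterior_odds / (posterior_odds + 1)   # ~0.987
print(f"posterior odds {posterior_odds:.1f}:1, P(car accident) = {p_car:.3f}")
```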

We want to figure out where a novel pathogen that has never spread en masse in human populations before came from. Unfortunately, we can’t really construct a base rate from looking at historical data, because as far as we know, there are zero known examples of lab leaks of novel pathogens!

|  | Known pathogen | Novel pathogen |
| --- | --- | --- |
| Natural origin | Viruses causing the common cold; Yersinia pestis (plague); E. coli; Salmonella | 1918 Spanish flu; 1976 Ebola virus; 2003 SARS1; 2012 MERS |
| Lab leak | 1974 anthrax leak in the Soviet Union; 1978 smallpox leak in the UK; 2003-2004 SARS1 leaks in Taiwan, Singapore, and China; 2019 brucellosis leak in China | None publicly known |

Why are there no known examples? It could be that it has never happened before. But it could also be selection bias: (1) labs housing pathogens are relatively new in human history, and newer still is our ability to engineer novel pathogens via “gain of function” research, and (2) anyone in charge of such a lab during a leak has a strong incentive to deny it: they have a lot to lose if their lab is shut down, and the cost of a cover-up is pretty low. A new random disease appears, you quietly clean up the lab, and no one is any the wiser as to where it came from.

Strictly speaking, our main question concerns only the right side of the table. But thinking about the left side is necessary to establish a base rate. We know that labs around the world store pathogens ranging from very deadly (e.g., ebola) to relatively harmless (e.g., the common cold). Some thoughts:

  • If you (you being an American living in the continental US in all these examples) get the common cold, what was the chance it came from natural spread vs a lab leak?
    • 99.9% chance it was from natural spread, given that such a large percentage of the population catches the common cold. Though arguably, there might be some variant of the common cold that escaped and has a higher infectivity than a typical common cold, and maybe most of the cold cases now are from a lab escape. Maybe.
  • If you get ebola, what is the chance of natural origin vs lab leak?
    • Maybe 75% lab leak? It is much closer to 50-50 than the previous example: ebola doesn’t often occur naturally in the US, but there are occasional outbreaks elsewhere, and since the US does humanitarian aid and has lots of outgoing tourism, the chance it spread naturally isn’t zero. At the same time, we don’t know how many labs hold ebola samples that it could have escaped from. It’s a tough question because we are comparing two probabilities that are both very small. I honestly don’t know. And my guess is the answer for SARS2 is similar to this question.
    • Given this one is pretty close, it also depends a lot on other factors. For example, do you live right next to a biosafety-level-4 lab? If so, then I’d guess it’s more like 95% lab leak. Things have escaped US labs before, and I think it would be uncontroversial to claim that Chinese labs have lower safety standards.
  • If you get smallpox, an eradicated disease, what is the chance of natural origin vs lab leak?
    • 99% chance it was a lab leak. Zero cases have been reported to have naturally occurred in the US in decades, though leaks around the world do happen sometimes.

So which of these would a novel coronavirus be similar to? Well, none of them, since these are all known viruses that scientists store in labs. As SARS2 is a novel virus, we are guessing both the chance that a lab in Wuhan such as the Wuhan Institute of Virology could leak a pathogen, and the chance that the lab either brought in novel bat coronaviruses for study or engineered a more infectious virus. I mostly believe one of the latter was likely to have happened, based on the discussion in Nicholas Wade’s recent article on the origins of Covid.

Based on my intuitions on probabilities of lab leaks for the 3 cases above (common cold, ebola, smallpox) and understanding of Wuhan lab involvement in storing and engineering coronaviruses, I assign a prior of 67% lab leak vs 33% natural origin.

Now we need to update the prior on the few pieces of evidence we have:

  • “Three researchers from China’s Wuhan Institute of Virology became sick enough in November 2019 that they sought hospital care”, from the WSJ.
    • This is an obvious update towards lab leak, but by how much? Possibly by a lot, but I don’t know the base rate of how many people typically get sick from the WIV in a random November. If the base rate is 0 or 1, then we should probably update a lot, maybe by 22%? If the base rate is 2 or more, then we should update by almost zero. I’ll guess 50-50 between these two situations, such that we should update towards lab leak by 11%.
      • Math: I’d guess that in the case where the base rate of researchers getting sick is 0 or 1, the ratio P(3 WIV researchers sick | lab leak) to P(3 WIV researchers sick | natural origin) is 4 to 1. Then use Bayes’ rule to get P(lab leak | 3 WIV researchers sick) = 0.89. [Since my prior was 67% to 33%, we compute 0.67 * 4 vs 0.33 * 1, or 2.68 vs 0.33. Then 2.68/(2.68+0.33) = 0.89.] Going from 67% to 89% is a +22% update. Since I only believe this story halfway (50% that the base rate of researchers going to the hospital in November is 2 or higher), I apply only half the update, or +11%. (This arithmetic is reproduced in the code sketch after the evidence tally below.)
      • Note that if my prior were far less confident of lab leak, say 30% lab leak/70% natural origin, the revelation that 3 researchers fell ill should still update my belief by a lot! Using Bayes rule results in 63% lab leak, an update of +33%, and believing only half the update means +16.5%, which is still a big update—an even bigger update than in my actual prior. If I thought 30% lab leak/70% natural origin before, I should now think 46.5% lab leak/53.5% natural origin.
      • Note the 4-to-1 ratio of P(3 WIV researchers sick | lab leak) to P(3 WIV researchers sick | natural origin) is a bit arbitrary. I could see an argument for this being lower, like 2:1. I could also easily see this number being higher, maybe 10:1! In the latter case, the Bayesian update is really large—the output of Bayes’ rule on the 30%/70% prior is now 81%(!), for an update of +51%. Believing only half of that, we get that the 30%/70% now becomes 56%/44%.
  • In Jan 2020, China began draconian lockdowns of major cities (“draconian” just means it was far more strict than anything we did in the West).
    • I’m very uncertain about this claim, but I think it’s a small update towards believing lab leak. This is because P(super lockdown | Chinese government knows it was a lab leak where “gain of function” was involved) > P(super lockdown | Chinese government is not sure what happened). A tiny +1% update towards lab leak?
  • China repeatedly claims its internal investigations suggest no evidence of lab leak.
    • From what I argued before, this updates my belief by almost 0.
  • WHO investigation says lab leak is extremely unlikely.
    • Tiny update towards natural origin, maybe 1%? Though again, since China effectively controls investigation into the lab, and this investigation took place months after the fact, I don’t put much weight on it.
  • Chinese vaccines are not as effective as Pfizer/Moderna ones.
    • Tiny update (1%) towards natural origin. If Chinese virologists were engineering a novel virus, maybe they would have intimate knowledge of it and know how to inoculate against it? Though it also seems very plausible that the US just has far superior R&D on vaccine development, especially in mRNA technology. This suggests Chinese scientists are not on the cutting edge of understanding viruses compared to their US counterparts, though you don’t need to be on the cutting edge to accidentally release a virus.
  • Variants are now responsible for most cases.
    • I don’t know enough about viruses to make a claim as to which direction this should go, but I’m guessing the net effect is small enough to not matter.

In total, these sum to a +10% adjustment (+11% from the WSJ article on researchers becoming sick, +1% from lockdowns, -1% from the WHO investigation, -1% from vaccines). So from the evidence mentioned, my belief in lab leak went from 67% to 77%.
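For concreteness, here is the odds-form computation behind that tally as a short Python sketch (the 4:1 likelihood ratio and the 50% weighting are my own guesses from the bullets above, not established numbers):

```python
# Odds-form Bayesian update for the "3 sick WIV researchers" evidence.
def posterior(prior_leak, likelihood_ratio):
    """P(lab leak | evidence), given the prior P(lab leak) and
    the ratio P(evidence | lab leak) / P(evidence | natural origin)."""
    odds = (prior_leak / (1 - prior_leak)) * likelihood_ratio
    return odds / (1 + odds)

prior = 0.67                  # my prior: 67% lab leak
full = posterior(prior, 4.0)  # 0.89 if the report is taken at face value
half_weighted = prior + (full - prior) / 2  # +11%, giving the story 50% weight
print(f"full update: {full:.2f}, half-weighted: {half_weighted:.2f}")  # 0.89, 0.78
```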

Meta Thoughts

I ended up with 77% lab leak, 23% natural origin. It’s almost certain at this point that my number is higher than most people’s estimates. The main things that account for this discrepancy:

  • My prior is based on thought experiments on existing pathogens (ebola, smallpox).
  • I more heavily discount what China, WHO, and virologists say.
  • I update more strongly on the evidence that 3 researchers fell ill in Nov 2019.

77% is just my current belief, and it will update as new evidence appears.

I’m interested in seeing what other people’s probabilities are.

Beware Ideas that Are Beneficially Selected

The Murderous Tribe

Imagine two tribes of hunter-gatherers, 50 people each. Tribe One believes that killing is always wrong, while Tribe Two thinks killing is okay–so long as it’s a member of another tribe. During a harsh winter with low food levels, the two tribes venture outside their usual zones and run into each other. Tribe Two kills half of Tribe One and takes some of their food.

Now Tribe One has only 25 people, while Tribe Two still has 50. So the percentage of total population that believes killing is justified went from 50% to 67% (50 out of 75 is 67%).

Okay, well maybe that’s kind of misleading. The increase from 50% to 67% wasn’t the result of 17% of the population being convinced it was right; it happened because the people who didn’t believe it were selected out of the population. But assuming all else is equal, both tribes will eventually grow until the total population reaches 100 once again, and the end effect will be as if 17 people converted.

What is going on is that being willing to kill members of other tribes is an evolutionarily beneficial idea.

In our example, we didn’t need to start with two tribes. There could have been 1000 tribes: 50% pacifist, 50% violent. What happens when they repeatedly interact with each other in the long run? Most of the population becomes violent.
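A toy simulation makes the dynamic concrete (my own sketch; the encounter rate and the regrowth rule are arbitrary modeling assumptions, not part of the thought experiment):

```python
# Toy model: aggregate pacifist (P) and violent (V) populations.
# Each generation, some pacifist tribes are attacked and lose half their
# members; then both groups regrow proportionally to a fixed carrying
# capacity. Violent-vs-violent fights are ignored for simplicity.
P, V = 25_000.0, 25_000.0   # 500 pacifist and 500 violent tribes of 50 each
CAPACITY = P + V
ENCOUNTER_RATE = 0.1        # fraction of pacifists attacked per generation

for generation in range(100):
    P -= 0.5 * ENCOUNTER_RATE * P   # attacked tribes lose half their people
    scale = CAPACITY / (P + V)      # proportional regrowth back to capacity
    P, V = P * scale, V * scale

print(f"violent share after 100 generations: {V / (P + V):.0%}")  # ~99%
```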

Biological organisms aren’t the only things that evolve via natural selection. Ideas do too.

Propagation of Ideas by Natural Selection

We’d like to think our beliefs are correct. Near 100% of people used to believe the Sun went around the Earth. Now we mostly think the opposite. “Earth orbits the Sun” is a factually correct idea that seemed to spread due to the merit of its accuracy.

Being correct is one way that an idea could gain traction. Having traits that help it become naturally selected is another. “We should care about our own tribe more than others” is not a factually correct belief, or at least not an obviously correct one. It is popular because it was evolutionarily advantageous: when there were collisions between believers and nonbelievers, those who believed it were inherently more likely to gain from the collision.

Evolutionarily Advantaged Ideas

Here are four ways to increase the % of population that has a particular belief X:

  • Decrease the population of people who don’t believe X
  • Increase the population of people who believe X
  • Convince people who don’t believe X to believe X
  • Deter new people from believing alternatives to X

Ideas that inherently do one or more of these will be favored by selection. An idea is inherently advantaged if acting on it causes the % of people holding it to increase. Heliocentrism does not inherently spread, whereas tribalism does–via killing off those who are not tribal. More examples:

  • Any belief that creates advantages in war
    • An emphasis on science & technology. Between two countries all-else-equal, the technology-loving country has an advantage.
    • Nationalism and strong national identities. This should work in similar ways to tribalism.
    • Policies like having a standing army or draft.
  • Racism in the old-fashioned way–straight-up “people of X color are subhuman/shouldn’t exist”. This is essentially the same example as tribalism.
  • Family centrism. This is more of a biological trait than a psychological one, but I’ll mention it here. Suppose 50% of people would sacrifice the lives of two strangers to save their child, and the other 50% would sacrifice their child to save the lives of two strangers. Assuming there is some genetic component to this belief, you’d expect the population to converge to 100% willing to sacrifice two strangers to save their own child, because that gene would be selected for.
  • Growth-oriented ideas
    • “Have lots of children” is an obvious one. If 50% of the population believed everyone should have lots of children, and 50% believed no one should have children, what % of the population will have each belief in 100 years?
    • Mainstream economics. Given that you’re reading this, you are likely living in an above-average wealthy country, and wealthy countries tend to have strong growth policies.
    • Countries which prioritize growth over sustainability gain a military advantage, in addition to directly increasing the % of population that supports growth.
    • “My country shouldn’t worry about climate change.” A country that worries a lot about climate change needs to sacrifice growth, putting it at a disadvantage compared to other countries; over time it could lose its share of world population, and its economic troubles might let ideas seep in from rich countries that don’t care about climate change.
  • Anti-euthanasia. We take this for granted, but “You should live your life, even if you are suffering” is an evolutionarily advantaged belief. Let’s say there is a disease so permanently crippling and painful that 90% of those who get it really, really beg to be euthanized (and somehow succeed in convincing their doctors), while the other 10% still experience pain but really, really believe in suffering through it. Now if you conduct a poll on “Is this disease so bad you’d want to die? Let’s ask some patients and find out”, you’d find that a large percentage want to carry on living: out of 100 patients, the 90 who wanted to die are gone, so you end up polling only the 10 who chose to live.
  • (Abrahamic) Religion
    • The punishment for apostasy can range from social stigma to death, deterring people from believing competing ideas. There is also the threat of eternal suffering for nonbelief.
    • The first three commandments are about deterring people from thinking about competing ideas.
    • Religions tend to have some form of evangelism.
    • “Be fruitful and multiply” is growth-oriented.
  • Simple, easy-to-explain ideas. It is easy to spread simple ideas, difficult to spread complex ones.
  • Ideas that human brains are particularly good at remembering. E.g., a catchy slogan or song.

In general, I think we should be marginally more skeptical of all of these ideas. They are popular, not necessarily because they are right, but because they have beneficial selection traits. An idea could still be right, just not because “a bunch of other people believe this idea, so it must have a high likelihood of being correct.”

Evolutionarily Disadvantaged Ideas

The converse is that we should be more accepting of evolutionarily disadvantaged ideas, or evolutionary dead-ends. A very basic list is just the opposite of the previous:

  • Ideas that don’t lead to strong militaries, e.g. not focusing so much on science and technology
  • Treating all humans equally. This sounds obvious and easy, but it is really not! Who would value a stranger’s child as equal to their own child?
  • Sustainability-oriented ideas, or even population/economic-shrinking ideas, as opposed to permanent growth.
    • Antinatalism. Already, more people, especially in the West, are choosing to be childfree.
    • Environmentalism. Note the most radical forms like the Voluntary Human Extinction Movement.
  • Euthanasia
    • More strongly, suicide. Suicide is the most extreme evolutionary dead-end, yet a lot of people die by suicide every year. Maybe the idea that life sucks/isn’t worth living is more valid than people give it credit for, and a lot of people needlessly suffer their entire lives. It is hard to have a good two-sided discussion here because the people who most agree with the idea are dead. Of course, raising the status of this idea is a social danger, because it would cause more people to die of suicide.
  • Anti-religion. Note this mostly applies to the Abrahamic religions. Buddhism is kind of a weird one because it is somewhat antinatalist, so we would have expected it to be selected out of the population.
  • Complex, hard-to-understand, hard-to-remember ideas.

Final Thoughts

To correct for selection, we should marginally lower our acceptance of advantaged ideas and raise our acceptance of disadvantaged ideas. And when considering why certain ideas are the most popular, we need to make sure we’re not falling prey to selection effects.

A future post will contain a counterargument to all this–why we shouldn’t care about idea selection and should just use whatever ideas are easy to propagate.

Wildly Different Knowledge Levels

This is a topic I’ve wanted to write about for seven years, but I was too lazy to find a good concrete example until recently.

The gist:

  1. Imagine three people of varying degrees of knowledge in a particular area: A is a random person with zero knowledge, B is someone with real amateur knowledge, and C is a professional. You pose a yes/no question. There are weird situations where A and C agree on the same answer, but B disagrees. Someone (B) who is definitely smarter than a layperson in an area might, with good reason, disbelieve what experts in the field consider the objectively right answer.
  2. In the above case on social media, person A might have no good argument or just make the default argument. B thinks they are pretty smart and posts the standard reply to A. Then C makes a much more nuanced argument for why B’s argument is wrong and A is actually right. However, B doesn’t really understand (or doesn’t read) C’s argument, assumes C is just another dumb A, and repeats their flawed rebuttal of A.
  3. Also, is C actually right??? What if there is someone smarter, D, who agrees with B and figures out the nuanced response to C’s nuanced answer? Most people arguing on the internet are just As and Bs, and their arguments don’t even make sense relative to the real debate between C and D.

The most provable example of (1) is in chess. If you could see 3 moves ahead rather than 2, you would play a better move almost all of the time. But here is a weird exception. In the position below, should White capture the d5 pawn with the knight?

[Diagram: the position after 1. d4 d5 2. c4 e6 3. Nc3 Nf6 4. Bg5 Nbd7 5. cxd5 exd5, White to move]

What players of increasing skill level might think:

  • A: “Who is a knight? What is a knight? Why is a knight?” [No]
  • B: “So knights move in an L-shape, so I can take the pawn, and taking pawns is good…” [Yes]
  • C: “If I take the pawn, then Black’s knight will take back and I lose a knight for a pawn, which is bad.” [No]
  • D: “Black’s knight is pinned, so if I take the pawn and Black’s knight takes back, then my bishop will capture the queen, which is really good. So that is a free pawn for me as the Black knight cannot take back.” [Yes]

In fact, winning a queen is *so good* in chess that, in almost all cases that end with one side losing a queen, it is a waste of time and mental energy to calculate any further. And yet…

  • E: “The knight pin is only a relative pin. Yes, if I take the pawn, then Black will lose the queen after recapturing my knight… but wait! After losing the queen, Black can play a bishop check (Bb4+) and win White’s queen. After all the trades, Black is up a minor piece.” [No]

(For the chess enthusiast, the line is 1. d4 d5 2. c4 e6 3. Nc3 Nf6 4. Bg5 Nbd7 5. cxd5 exd5 {diagram} 6. Nxd5 Nxd5 7. Bxd8 Bb4+ 8. Qd2 Bxd2+ 9. Kxd2 Kxd8 and Black is up a knight for a pawn.)

The weird thing is that player C is clearly better than B, but ends up with the correct answer only because of luck. Based on their thought process, C didn’t understand what was truly going on in the position, but rather, just happened to calculate a convenient number of moves ahead and stop. Weirder, D, who calculated more steps ahead than C, would play an objectively bad move here that C would have avoided! In some sense, D is just unlucky that they stopped calculating at the wrong level!

In addition, it happens that the “number of moves to look ahead” gap between D and E is quite large. In fact, the number of moves (in “ply”, or half-moves, in chess terms) each player looks ahead is:

  • A: N/A
  • B: 1
  • C: 2
  • D: 3
  • E: 6
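To see the evaluation flip concretely, here is a quick sketch that replays the line with the python-chess library (the library and the bare piece-count evaluation are my additions, not from the post) and prints the material balance at each ply:

```python
# Replay the trap line and print White's material edge after each ply.
# The sign flips exactly where B, C, D, and E stop calculating.
# Requires: pip install chess
import chess

VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
          chess.ROOK: 5, chess.QUEEN: 9}

def material(board: chess.Board) -> int:
    """White material minus Black material, in pawn units."""
    return sum(value * (len(board.pieces(piece, chess.WHITE))
                        - len(board.pieces(piece, chess.BLACK)))
               for piece, value in VALUES.items())

board = chess.Board()
for san in "d4 d5 c4 e6 Nc3 Nf6 Bg5 Nbd7 cxd5 exd5".split():
    board.push_san(san)  # reach the diagrammed position

for ply, san in enumerate(["Nxd5", "Nxd5", "Bxd8", "Bb4+",
                           "Qd2", "Bxd2+", "Kxd2", "Kxd8"], start=1):
    board.push_san(san)
    print(f"ply {ply} ({san}): material = {material(board):+d}")
# B stops at ply 1 (+1), C at ply 2 (-2), D at ply 3 (+7);
# from ply 6 on Black is better, ending at -2 (E): a knight for a pawn.
```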

We’ll come back to this later, but if this were an analogy for how society views something as knowledge increases over time, we could be at a plateau for a long time between D and E, thinking that we have the answer figured out, but in fact have the wrong answer.

Let’s replace the chess question with a more real-life one, say “Is the climate warming?”

  • A: “I read online that it’s true so it’s true.” [Yes]
  • B: “You can’t just believe what you read on the internet. Plus it was really cold yesterday.” [No]
  • C: “One data point doesn’t define a trend. If you look at long-term graphs of temperature published by X, they go up over time.” [Yes]
  • D: “What is source X, is it reputable? Also, what about a long-term temperature graph going back hundreds of years–weren’t there unusually warm periods in the past as well?” [No]
  • E: “Yes but not as drastic as the current warming period. And source X is the vast majority of scientists…” [Yes]
  • F: (If you’ve been on the internet before, you can imagine how this continues…)

If you see a Twitter post where someone declares in a few words that global warming is false, you often have no idea whether they are person B (who might not be that smart) or person N (who is very smart but has maybe stopped at the “wrong” level).

If you see two strangers debating on the internet without any context, it can be non-obvious how far down this argument chain they are and how much they’ve thought about it. This is compounded by most internet posts and comments being so brief that you can’t really see any nuance.

Similarly, this is how popular debates can make one side look bad even when it is supported by all the facts. In the chess example, if C and D held a public debate, D would win, yet C’s answer is the objectively correct one. And on a larger network like Facebook or Twitter, you have people from all over the knowledge spectrum–though probably concentrated in the As and Bs–so any “debate” on such a medium is pointless. Consider a Twitter chess “debate” where E actually gives the correct answer but doesn’t have room to post the full variation (or the energy to do it for the 1000th time), and then the various Ds of the world point out why E is wrong, thinking that E is just another A or C.

To pull the chess analogy even further, the knowledge gap between D and E makes this even harder. If E taught D to think not 3 moves ahead but 4 or 5, D would still arrive at the same wrong answer as before; they would need to think 6 moves ahead to realize E is right. And maybe once someone gets to think-5-moves-ahead, they decide that’s sufficient for everything and stop calculating further.

A converse situation arises if you are a person at knowledge level E, and you run into someone who seems to disagree with you. You might be so used to teaching people to go from D to E that you assume they are arguing from the level of D. However, most people who are in D’s camp might be at knowledge level B. In chess, explaining the D-to-E step to someone at B might not make any sense. It could even make things worse, as from B’s perspective: “Someone is saying nonsense and also disagrees with me, therefore I should update my belief to be even stronger.”

Look for the context. Know what level you’re at.

If you disagree with someone, know that they might be thinking much further ahead, and you might not even know what the real debate is.

2017, Lists, and State of the Internet

  1. 2017 has been a busy year for this blog. I plan to eventually continue the topics.
  2. I’ve updated my Movies and Video Games ratings lists. Dunkirk and Star Wars: The Last Jedi were tied for the best movie of the year. I might make a TV shows list someday.
  3. Rick Webb on the current state of the Internet vs early utopian visions:

Being generous to the prophets Brand and Kelly et al, it’s entirely reasonable to argue that this version of a global village is not what they proposed or envisioned. Minorities are still denied equal voices on the internet — harassed off of it, or still unable to even get online. Massive amounts of data is still hidden behind firewalls or not online at all. Projects to bring more information online (such as Google Books) have foundered due to institutional obstruction or a change of priorities in those undertaking them. Governments still have secrets. Organizations such as Wikileaks that showed early promise in this regard have been re-cast as political tools through some mix of their own hubris and the adversarial efforts of the governments they seek to expose.

It’s quite easy to see the differences between the internet world we live in and the utopia we were promised. And a fair measure of that is because we didn’t actually make it to the utopia. The solution, then, the argument goes, is to keep at it. To keep taking our medicine even as the patient gets more sick, on the faith that we will one day reach that future state of total-information-freedom and equality of voices.

Tabula Rasa, Extinction, and Electricity

Chess

AlphaZero was one of the bigger headlines recently. Google’s new chess AI taught itself for 4 hours starting from a blank slate—no opening book or endgame tablebases—and crushed Stockfish, previously the world’s strongest engine. See chess website articles here and here, a lichess.org collection of the games here, and the original research paper here via arXiv. This obviously has lots of real-world implications.

The most interesting thing is the way it won games. Ever since the early days of chess programming, we thought that chess computers could master basic tactics but never deep positional play. Even in the pivotal 1997 Kasparov vs Deep Blue match, the human world champion famously claimed that Deep Blue must have been getting help from human grandmasters, as it was playing non-computer-like moves.

Watching two chess AIs play each other is typically a boring affair. But AlphaZero plays in a very human-romantic style, at least in the games that were released (and there’s definitely some selection bias there). AlphaZero often gave up lots of material for tempo, and it worked. One of the most talked-about positions is the following, where AlphaZero (White) abandons the knight on h6 and plays Re1. It went on to win the game.

[Diagram: AlphaZero (White) vs. Stockfish, game 10: White leaves the knight on h6 and plays Re1]

There are lots of caveats about how “real” this result is. Namely, the example games had Stockfish running on suboptimal settings. But still, it raises my opinion of the complexity of chess. As computers have gotten better, the way they play chess has become more and more boring. But maybe the curve is not monotonic, and we might get a stage where the game becomes more interesting again. Though I fear that eventually it will degenerate into optimal play from move one.

Political Correctness

People have been talking about the Sam Altman blog post.

Earlier this year, I noticed something in China that really surprised me.  I realized I felt more comfortable discussing controversial ideas in Beijing than in San Francisco.  I didn’t feel completely comfortable—this was China, after all—just more comfortable than at home.

That showed me just how bad things have become, and how much things have changed since I first got started here in 2005.

It seems easier to accidentally speak heresies in San Francisco every year.  Debating a controversial idea, even if you 95% agree with the consensus side, seems ill-advised.

And:

More recently, I’ve seen credible people working on ideas like pharmaceuticals for intelligence augmentation, genetic engineering, and radical life extension leave San Francisco because they found the reaction to their work to be so toxic.  “If people live a lot longer it will be disastrous for the environment, so people working on this must be really unethical” was a memorable quote I heard this year.

I don’t have any experience with the San Francisco discussion climate, but this seems weird. The fact that someone felt the need to write this post at all says something about the culture.

I’m probably way more in favor of discussing politically incorrect ideas than most, mainly since I think the world vastly overvalues traditional ideas. The irony is that there is so much you can’t say in China. Tyler Cowen points out, “…your pent-up urges are not forbidden topics any more.  Just do be careful with your mentions of Uncle Xi, Taiwan, Tibet, Uighur terrorists, and disappearing generals.”

So Altman’s general point about politically incorrect ideas is probably correct, and I don’t have any problem with discussing unpopular ideas. But I just don’t see moving from San Francisco to China as a reasonable solution. There are certain topics we might be overly sensitive to, but the overall level of idea tolerance still seems heavily tilted in favor of the US.

Human Extinction

Obligatory shout-out to 80,000 Hours’ extinction risk article, which discusses various sources of extinction and estimates their chances of occurring.

What’s probably more concerning is the risks we haven’t thought of yet. If you had asked people in 1900 what the greatest risks to civilisation were, they probably wouldn’t have suggested nuclear weapons, genetic engineering or artificial intelligence, since none of these were yet invented. It’s possible we’re in the same situation looking forward to the next century. Future “unknown unknowns” might pose a greater risk than the risks we know today.

Each time we discover a new technology, it’s a little like betting against a single number on a roulette wheel. Most of the time we win, and the technology is overall good. But each time there’s also a small chance the technology gives us more destructive power than we can handle, and we lose everything.

And:

An informal poll in 2008 at a conference on catastrophic risks found attendees thought it pretty likely we’ll face a catastrophe that kills over a billion people, and estimated a 19% chance of extinction before 2100.

As a trader, my first thought is to create betting markets on such events and have a bunch of people trade them, but this leads to weird selection effects, and the payout is too long-term. So looking at some polls and mentally adjusting is probably right.

[xkcd comic on betting whether the sun has exploded]

In addition, their ordering of what to prioritize is interesting:

  1. AI safety
  2. Global priorities research
  3. Building the effective altruism community
  4. Pandemic prevention
  5. Improving institutional decision-making
  6. Nuclear security

Twitter Posts

I should maybe have a recurring Twitter section. Anyway, here is a tweet by Julia Galef, and I’ve also wondered about this topic a lot.

The thought experiment I want to run: throw together a racially diverse set of kids in a bubble, expose them to roughly no knowledge of real-world history or any hints of outside racism, and otherwise act like everything is normal. In this bubble world, would they start becoming racist against each other? I would guess no.

I think an underrated explanation of why people do something is that everyone else around them does it, or that their parents or teachers did it early in their lives. Social and cultural norms are really strong incentives and disincentives.

Cryptocurrencies and Electricity

There are definitely people worrying about the massive amount of the world’s electricity consumed by bitcoin mining. Newsweek extrapolates that bitcoin mining will consume all of the world’s electricity output by 2020. It’s currently at 0.15% according to some website. That is not small, given how quickly it has been growing. Wired worries it will become the paperclip machine:

That’s bad. It means Bitcoin emits the equivalent of 17.7 million tons of carbon dioxide every year, a big middle finger to Earth’s climate and anyone who enjoys things like coastlines, forests, and not dying of mosquito-borne diseases. Refracted through a different metaphor, the Bitcoin P2P network is essentially a distributed superintelligence utterly dedicated to generating bitcoins, so of course it wants to convert all the energy (and therefore matter) in the universe into bitcoin. That is literally its job. And if it has to recruit greedy nerds by paying them phantom value, well, OK. Unleash the hypnocurrency!

I also stumbled upon a more optimistic viewpoint, claiming that bitcoin mining will trigger increased development and adoption of clean energy:

But electricity costs matter even more to a Bitcoin miner than typical heavy industry. Electricity costs can be 30-70% of their total costs of operation. Also, Bitcoin miners don’t need to worry about the geography of their customers or materials shipping routes. Bitcoins are digital, they have only two inputs (electricity and hardware) and network latency is trivial as compared with a truck full of steel. This particular miner moved an entire GPU farm across the U.S. because of cheap hydroelectric power in the Pacific Northwest and, in his words, “it’s worth it!” That’s also why we see miners in Iceland. Aside from beautiful vistas you can find abundant geothermal and hydraulic power in the land of volcanoes and waterfalls.

If Bitcoin mining really does begin to consume vast quantities of the global electricity supply it will, it follows, spur massive growth in efficient electricity production—i.e. in the green energy revolution. Moore’s Law was partially a story about incredible advances in materials science, but it was also a story about incredible demand for computing that drove those advances and made semiconductor research and development profitable. If you want to see a Moore’s-Law-like revolution in energy, then you should be rooting for, and not against, Bitcoin. The fact is that the Bitcoin protocol, right now, is providing a $200,000 bounty every 10 minutes (the bitcoin mining reward) to the person who can find the cheapest energy on the planet. Got cheap green power? Bitcoin could make building more of it well worth your time.
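The “$200,000 bounty every 10 minutes” figure is easy to sanity-check (a sketch; the 12.5 BTC block reward is the actual 2017 value, while the price is my rough assumption for when this was written):

```python
# Back-of-the-envelope check on the mining bounty, circa December 2017.
block_reward_btc = 12.5         # BTC per block after the 2016 halving
btc_price_usd = 16_000          # rough late-2017 price; it swung wildly
blocks_per_day = 24 * 60 // 10  # one block roughly every 10 minutes -> 144

per_block = block_reward_btc * btc_price_usd  # ~$200,000 per block
per_year = per_block * blocks_per_day * 365   # ~$10.5 billion per year
print(f"${per_block:,.0f} per block, ~${per_year / 1e9:.1f}B per year to miners")
```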

It’s very unclear in bitcoin’s case how good the upside is for the world, but it doesn’t seem anywhere close to being an extinction risk.

Also recommended: Tyler Cowen’s post on cryptocurrencies and social value.

Progress

I previously wrote that we take modern improvements to life for granted and sometimes erroneously yearn for the hunter-gatherer life. Well, here is a Quillette article on precisely that romanticization. Some examples:

In his later work, Lee would acknowledge that, “Historically, the Ju/’hoansi have had a high infant mortality rate…” In a study on the life histories of the !Kung, Nancy Howell found that the number of infants who died before the age of 1 was roughly 20 percent. (As high as this number is, it compares favorably with estimates from some other hunter-gatherer societies, such as among the Casiguran Agta of the Philippines, where the rate is 34 percent.) Life expectancy for the !Kung is 36 years of age. Again, while this number is only about half the average life expectancy found among contemporary nation states, this number still compares favorably with several other hunter-gatherer populations, such as the Hiwi (27 years) and the Agta (21 years). Life expectancy across pygmy hunter-gatherer societies is even lower, ranging from about 16-24 years, although this may have as much to do with pygmy physiology as with the hunter-gatherer lifestyle.

And:

11 of these 15 societies have homicide rates higher than that of the most violent modern nation, and 14 out of the 15 have homicide rates higher than that of the United States in 2016. The one exception, the Batek of Malaysia, have a long history of being violently attacked and enslaved by neighboring groups, and developed a survival tactic of running away and studiously avoiding conflict. Yet even they recount tales of wars in the past, where their shamans would shoot enemies with blowpipes. Interestingly, Ivan Tacey & Diana Riboli have noted that “…the Batek frequently recount their nostalgic memories of British doctors, administrators and army personnel visiting their communities in helicopters to deliver medicines and other supplies,” which conflicts with the idea that hunter-gatherer societies would have no want or need of anything nation states have to offer. From 1920-1955 the !Kung had a homicide rate of 42/100,000 (about 8 times that of the US rate in 2016), however Kelly mentions that, “murders ceased after 1955 due to the presence of an outside police force.”

And:

So, what explains the popularity of this notion of an “original affluent society”? Why do people in societies with substantially greater life expectancy, reduced infant mortality, greater equality in reproductive success, and reduced rates of violence, romanticize a way of life filled with hardships they have never experienced? In wealthy, industrialized populations oriented around consumerism and occupational status, the idea that there are people out there living free of greed, in natural equality and harmony, provides an attractive alternative way of life.

I also definitely live in a bubble, as I don’t know anyone openly in favor of hunter-gatherer society.

This also reminds me of Joseph Stiglitz’s book, The Price of Inequality. Most of the book is very methodical or at least numbers-driven. Then comes this absurd passage on the Bhutanese (p. 155 of the Norton edition):

Bhutan, the remote Himalayan state to the northeast of India, for instance, is protecting its forests as part of a broader commitment to the environment. Each family is allowed to cut down a fixed number of trees for its own use. In this sparsely populated country, I asked, how could one enforce such an edict? The answer was simple and straightforward: in our jargon, social capital. The Bhutanese have internalized what is “right” when it comes to the environment. It would be wrong to cheat, and so they don’t.

I’ve been waiting for years to quote this paragraph, and here it is. There is, in general, a weird sacred reverence for non-Western cultures. Is this related to the Altman political correctness theme? Can I just pick a well-off small community in America and say “it would be wrong to cheat, and so they don’t”? Anyway, it’s really easy to declare that some society works pretty well while taking all the modern improvements for granted.

Internet Context, Natalism, and the Me Too Movement

 

[Image: xkcd’s “Duty Calls” (“someone is wrong on the internet”), via xkcd]

Random Posts on Facebook

I previously wrote that there is a meaninglessness in most things on the Internet, particularly due to the lack of context:

A lot of “arguments” I see these days are made in short Facebook posts, tweets, or viral stock images with a sentence of text on them. This is actually fine in certain cases, precisely because there is context spanning much more than a sentence. If Nate Silver tweets one line about an election, I can say “Hmm that’s interesting.” However, if the same tweet were made by a random person, I would immediately start thinking instead, “What are the credentials of this person? On what evidence is this claim based? Does this person have a political agenda? Do I expect certain biases to exist?” This isn’t to say that Nate Silver is a perfect being, but when I see a tweet from him, I really have much more to consider than just one sentence.

I generally consider most issues in the world to be very complicated; if they were simple, they would have been solved and we wouldn’t be talking about them. And threads on Facebook are fairly non-intellectual in this sense. You just can’t get into any complex substance. Ironically, I prefer reading Twitter—despite the 280 character limit, prominent posts on Twitter often come from public people whose motives and core beliefs are easy to contextualize. And thus, a single tweet can convey more content than an entire Facebook thread. (Or blog post.)

Generally Facebook debates aren’t worth getting into for this reason. Someone presents 1% of the argument for their side, and there’s so much missing context that you will basically have no idea what your real disagreement is about. And there’s also Poe’s Law, which says any sufficiently advanced satire is indistinguishable from serious argument, and which always leads to needless disagreement.

Anyway, here is an opinion piece in the NYT making a related point about reading comprehension:

Many of these poor readers can sound out words from print, so in that sense, they can read. Yet they are functionally illiterate — they comprehend very little of what they can sound out. So what does comprehension require? Broad vocabulary, obviously. Equally important, but more subtle, is the role played by factual knowledge.

All prose has factual gaps that must be filled by the reader. Consider “I promised not to play with it, but Mom still wouldn’t let me bring my Rubik’s Cube to the library.” The author has omitted three facts vital to comprehension: you must be quiet in a library; Rubik’s Cubes make noise; kids don’t resist tempting toys very well. If you don’t know these facts, you might understand the literal meaning of the sentence, but you’ll miss why Mom forbade the toy in the library.

The article goes on to point out that a broad knowledge base is incredibly helpful for reading comprehension. Background knowledge can allow people with generally weaker reading skills to outperform when the text in question is on a familiar topic.

I had a lot of trouble understanding certain books when I was younger. One that particularly comes to mind is Great Expectations by Charles Dickens. In retrospect, I had a weird childhood and probably had a lot of trouble figuring out how any of the character interactions in that book made sense. On the other hand, I read Ender’s Game by Orson Scott Card at a much younger age and it made a lot of sense; the childhood dynamic there is very different.

Another striking passage from the article:

First, it points to decreasing the time spent on literacy instruction in early grades. Third-graders spend 56 percent of their time on literacy activities but 6 percent each on science and social studies. This disproportionate emphasis on literacy backfires in later grades, when children’s lack of subject matter knowledge impedes comprehension. Another positive step would be to use high-information texts in early elementary grades. Historically, they have been light in content.

I strongly agree with this, considering that broad-based knowledge of science and history in the general population seems really, really lacking. Moreover, I wonder if the time spent on “literacy activities” actually has a negative effect on popular discourse: students get so used to reading and answering questions about things they have no knowledge of that it becomes socially acceptable to confidently and publicly make assertions on subjects one lacks knowledge in (e.g., climate, vaccines, and economics).

Basic Income

Here is an article (via Medium) advocating a popular idea these days: universal basic income. While the arguments on the economics side are not new, I found this moral plea convincing:

There are many other questions, and most all have likely answers for those willing to spend the necessary time to study the available evidence, but for me personally, these questions are translated in my brain at this point to sound more like, “What are the potential downsides of abolishing slavery? Will cotton get more expensive? Will former slaves just kind of sit around reading and dancing all day? Will the tired, the poor, and the huddled masses yearning to breathe free decide to walk in greater numbers through our lamp-lit golden door?” This is what I hear as someone who already has a basic income, so it’s not to say such questions aren’t valid, it’s that the very fact we’re asking them is itself something to question.

I think capitalism is generally underrated (e.g., there’s a pretty obvious solution to house prices in the Bay Area), but the questions above highlight some of its problems.

This type of reasoning applies to many other areas. Solving climate change might cost the world some percent of GDP, but it’s also literally saving the world we live on.

To Be or Not To Be?

I’ve roughly never encountered natalism as a serious point of debate before, but in the past week I stumbled onto two articles: one against (New Yorker) and one for (Medium).

On the anti-natalist side:

David Benatar may be the world’s most pessimistic philosopher. An “anti-natalist,” he believes that life is so bad, so painful, that human beings should stop having children for reasons of compassion. “While good people go to great lengths to spare their children from suffering, few of them seem to notice that the one (and only) guaranteed way to prevent all the suffering of their children is not to bring those children into existence in the first place,” he writes, in a 2006 book called “Better Never to Have Been: The Harm of Coming Into Existence.” In Benatar’s view, reproducing is intrinsically cruel and irresponsible—not just because a horrible fate can befall anyone, but because life itself is “permeated by badness.” In part for this reason, he thinks that the world would be a better place if sentient life disappeared altogether.

Here’s another good excerpt for the anti-natalist:

Like everyone else, Benatar finds his views disturbing; he has, therefore, ambivalent feelings about sharing them. He wouldn’t walk into a church, stride to the pulpit, and declare that God doesn’t exist. Similarly, he doesn’t relish the idea of becoming an ambassador for anti-natalism. Life, he says, is already unpleasant enough. He reassures himself that, because his books are philosophical and academic, they will be read only by those who seek them out. He hears from readers who are grateful to find their own secret thoughts expressed. One man with several children read “Better Never to Have Been,” then told Benatar that he believed having them had been a terrible mistake; people suffering from terrible mental and physical afflictions write to say they wish that they had never existed. He also hears from people who share his views and are disabled by them. “I’m just filled with sadness for people like that,” he said, in a soft voice. “They have an accurate view of reality, and they’re paying the price for it.” I asked Benatar whether he ever found his own thoughts overwhelming. He smiled uncomfortably—another personal question—and said, “Writing helps.”

Meanwhile, the pro-natalist article doesn’t really offer arguments in favor of natalism; it repeatedly points out that the US fertility rate is dropping fast, and it assumes readers are pro-natalist and will be as alarmed as the author is.

I am worried about fertility in 2017. I am very concerned about fertility in 2018. I am scared of what fertility numbers will be in 2019, especially if a recession hits somewhere in that period. Our fertility decline is on par with serious, durable fertility declines in other big, developed countries, and may be extremely difficult to reverse. I have no happy ending to this blog post.

I personally agree with parts of the anti-natalist view, and would identify as somewhere in the middle, though closer to the anti-natalist side.

Conditional on reading this post, you’ve probably had a good life, at least relative to the lives that have been lived. But there are people now, and people throughout history, with far worse fates. Millions were marched into concentration camps to be brutally tortured and murdered. Billions lived at the subsistence level, repeating their lives day and night while dealing with injury and disease. As a species we endured unfathomable pain in disease and in war, in confinement and in archaic laws. Hobbes wrote that life outside society was “nasty, brutish, and short.” But for many people, even within society, was it any better?

I generally consider myself a positive utilitarian, though I think it might be good to have a small but well-off society for a while to figure out how to make technological and social progress, and then resume normal population growth so we don’t lock in huge populations with terrible moral practices. In addition, I would venture that most people have children because of the combination of social norms and biological drive, not because the parents thought, “Oh, you know what would be positive utility for the world? A smaller version of us!” I’m very unconcerned with any contemporary decline in fertility, as it might very well be net positive for the world.

Me Too

Rebecca Traister (via The Cut) on the Me Too movement:

This is not feminism as we’ve known it in its contemporary rebirth — packaged into think pieces or nonprofits or Eve Ensler plays or Beyoncé VMA performances. That stuff has its place and is necessary in its own way. This is different. This is ’70s-style, organic, mass, radical rage, exploding in unpredictable directions. It is loud, thanks to the human megaphone that is social media and the “whisper networks” that are now less about speaking sotto voce than about frantically typed texts and all-caps group chats.

Really powerful white men are losing jobs — that never happens. Women (and some men) are breaking their silence and telling painful and intimate stories to reporters, who in turn are putting them on the front pages of major newspapers.

It’s wild and not entirely fun. Because the stories are awful, yes. And because the conditions that created this perfect storm of female rage — the suffocating ubiquity of harassment and abuse; the election of a multiply accused predator who now controls the courts and the agencies that are supposed to protect us from criminal and discriminatory acts — are so grim.

[…]

This is part of what makes me, and them, angry: this replication of hierarchies — hierarchies of harm and privilege — even now. “It’s a ‘seeing the matrix’ moment,” says one woman whom I didn’t know personally before last week, some of whose deepest secrets and sharpest fears and most animating furies I’m now privy to. “It’s an absolutely bizarre thing to go through, and it’s fucking exhausting and horrible, and I hate it. And I’m glad. I’m so glad we’re doing it. And I’m in hell.”

I can’t relate to this directly, but as someone who has gone through hard times in life, I hope more people can end up “seeing the matrix.” A lot of anecdotes take the form of “one time this happened, and at the time it was weird, but only now that people are talking about it do I realize how bad it was, and I’m angry.” I can relate to that, but that’s a story for another time.

Neopets

Apparently many people (especially young women, as the article points out) learned to code by playing Neopets (via Rolling Stone). This is carefully selected evidence for my crazy hypothesis that video games are very good for society. Disclosure: I, too, learned some HTML in the early 2000s by setting up a Neopets shop.