Internet Context, Natalism, and the Me Too Movement

 

xkcd_wrong_on_the_internet
via xkcd

Random Posts on Facebook

I previously wrote that there is a meaninglessness in most things on the Internet, particularly due to the lack of context:

A lot of “arguments” I see these days are made in short Facebook posts, tweets, or viral stock images with a sentence of text on them. This is actually fine in certain cases, precisely because there is context spanning much more than a sentence. If Nate Silver tweets one line about an election, I can say “Hmm that’s interesting.” However, if the same tweet were made by a random person, I would immediately start thinking instead, “What are the credentials of this person? On what evidence is this claim based? Does this person have a political agenda? Do I expect certain biases to exist?” This isn’t to say that Nate Silver is a perfect being, but when I see a tweet from him, I really have much more to consider than just one sentence.

I generally consider most issues in the world to be very complicated; if they were simple, they would have been solved and we wouldn’t be talking about them. And threads on Facebook are fairly non-intellectual in this sense. You just can’t get into any complex substance. Ironically, I prefer reading Twitter—despite the 280 character limit, prominent posts on Twitter often come from public people whose motives and core beliefs are easy to contextualize. And thus, a single tweet can convey more content than an entire Facebook thread. (Or blog post.)

Generally Facebook debates aren’t worth getting into for this reason. Someone presents 1% of the argument for their side, and there’s so much missing context that you will basically have no idea what your real disagreement is about. And there’s also Poe’s Law, which says any sufficiently advanced satire is indistinguishable from serious argument, and which always leads to needless disagreement.

Anyway, here is an opinion piece in the NYT making a similar point about reading:

Many of these poor readers can sound out words from print, so in that sense, they can read. Yet they are functionally illiterate — they comprehend very little of what they can sound out. So what does comprehension require? Broad vocabulary, obviously. Equally important, but more subtle, is the role played by factual knowledge.

All prose has factual gaps that must be filled by the reader. Consider “I promised not to play with it, but Mom still wouldn’t let me bring my Rubik’s Cube to the library.” The author has omitted three facts vital to comprehension: you must be quiet in a library; Rubik’s Cubes make noise; kids don’t resist tempting toys very well. If you don’t know these facts, you might understand the literal meaning of the sentence, but you’ll miss why Mom forbade the toy in the library.

The article goes on to point out that having a broad knowledge base is incredibly helpful in reading comprehension. That knowledge can allow people with generally weaker reading comprehension skills to outperform stronger readers when the text in question is on a familiar topic.

I had a lot of trouble understanding certain books when I was younger. One that particularly comes to mind is Great Expectations by Charles Dickens. In retrospect, I had a weird childhood and probably had a lot of trouble figuring out how any of the character interactions in that book made sense. On the other hand, I read Ender’s Game by Orson Scott Card at a much younger age and it made a lot of sense, and the childhood dynamic there is much different.

Another striking passage from the article:

First, it points to decreasing the time spent on literacy instruction in early grades. Third-graders spend 56 percent of their time on literacy activities but 6 percent each on science and social studies. This disproportionate emphasis on literacy backfires in later grades, when children’s lack of subject matter knowledge impedes comprehension. Another positive step would be to use high-information texts in early elementary grades. Historically, they have been light in content.

I strongly agree with this, considering that broad-based knowledge of science and history in the general population seems really, really lacking. Moreover, I wonder if the time spent on “literacy activities” actually has a negative effect on popular discourse: students get so used to reading and answering questions about things they have no knowledge of that it becomes socially acceptable to confidently and publicly make assertions about topics one knows little about—e.g., climate, vaccines, and economics.

Basic Income

Here is an article (via Medium) advocating a popular idea these days: universal basic income. While the arguments on the economics side are not new, I found this moral plea convincing:

There are many other questions, and most all have likely answers for those willing to spend the necessary time to study the available evidence, but for me personally, these questions are translated in my brain at this point to sound more like, “What are the potential downsides of abolishing slavery? Will cotton get more expensive? Will former slaves just kind of sit around reading and dancing all day? Will the tired, the poor, and the huddled masses yearning to breathe free decide to walk in greater numbers through our lamp-lit golden door?” This is what I hear as someone who already has a basic income, so it’s not to say such questions aren’t valid, it’s that the very fact we’re asking them is itself something to question.

I think capitalism is generally underrated (e.g. there’s a pretty obvious solution to house prices in the Bay Area), but the questions above highlight some of the problems.

This type of reasoning applies to many other areas. Solving climate change might cost the world some percent of GDP, but it’s also literally saving the world we live on.

To Be or Not To Be?

I’ve basically never encountered natalism as a serious point of debate before, but I stumbled onto two articles in the past week, one against (New Yorker) and one for (Medium).

On the anti-natalist side:

David Benatar may be the world’s most pessimistic philosopher. An “anti-natalist,” he believes that life is so bad, so painful, that human beings should stop having children for reasons of compassion. “While good people go to great lengths to spare their children from suffering, few of them seem to notice that the one (and only) guaranteed way to prevent all the suffering of their children is not to bring those children into existence in the first place,” he writes, in a 2006 book called “Better Never to Have Been: The Harm of Coming Into Existence.” In Benatar’s view, reproducing is intrinsically cruel and irresponsible—not just because a horrible fate can befall anyone, but because life itself is “permeated by badness.” In part for this reason, he thinks that the world would be a better place if sentient life disappeared altogether.

Here’s another good excerpt for the anti-natalist:

Like everyone else, Benatar finds his views disturbing; he has, therefore, ambivalent feelings about sharing them. He wouldn’t walk into a church, stride to the pulpit, and declare that God doesn’t exist. Similarly, he doesn’t relish the idea of becoming an ambassador for anti-natalism. Life, he says, is already unpleasant enough. He reassures himself that, because his books are philosophical and academic, they will be read only by those who seek them out. He hears from readers who are grateful to find their own secret thoughts expressed. One man with several children read “Better Never to Have Been,” then told Benatar that he believed having them had been a terrible mistake; people suffering from terrible mental and physical afflictions write to say they wish that they had never existed. He also hears from people who share his views and are disabled by them. “I’m just filled with sadness for people like that,” he said, in a soft voice. “They have an accurate view of reality, and they’re paying the price for it.” I asked Benatar whether he ever found his own thoughts overwhelming. He smiled uncomfortably—another personal question—and said, “Writing helps.”

Meanwhile, the pro-natalist article doesn’t really offer any arguments in favor of natalism; it repeatedly points out that the US fertility rate is dropping fast and assumes that readers are already pro-natalist and will be as alarmed as the author is.

I am worried about fertility in 2017. I am very concerned about fertility in 2018. I am scared of what fertility numbers will be in 2019, especially if a recession hits somewhere in that period. Our fertility decline is on par with serious, durable fertility declines in other big, developed countries, and may be extremely difficult to reverse. I have no happy ending to this blog post.

I personally agree with parts of the anti-natalist view, and would identify as somewhere down the middle but closer to the anti-natalist side.

Conditional on reading this post, you’ve probably had a good life, a relatively good one among the lives that have been. But there are people now and people historically with far worse fates. Millions of people were marched into concentration camps to be brutally tortured and murdered. Billions throughout history lived at the subsistence level, repeating their lives day and night, all the while dealing with injury and disease. As a species we endured unfathomable pain in disease and in war, in confinement and in archaic laws. Hobbes wrote that life outside society was “nasty, brutish, and short.” But for many people, even within society, was it any better?

I generally consider myself a positive utilitarian, though I think it might be good to have a small but well-off society for a while to figure out how to make progress technologically and socially, and then resume normal population growth so we don’t lock in huge populations with terrible moral practices. In addition, I would venture that the reason most people have children is the combination of social norm and biological drive, and not because the parents thought, “Oh you know what would be positive utility for the world? If there was a smaller version of us!” I’m not very concerned about any contemporary decline in fertility, as it might very well have positive value for the world.

Me Too

Rebecca Traister (via The Cut) on the Me Too movement:

This is not feminism as we’ve known it in its contemporary rebirth — packaged into think pieces or nonprofits or Eve Ensler plays or Beyoncé VMA performances. That stuff has its place and is necessary in its own way. This is different. This is ’70s-style, organic, mass, radical rage, exploding in unpredictable directions. It is loud, thanks to the human megaphone that is social media and the “whisper networks” that are now less about speaking sotto voce than about frantically typed texts and all-caps group chats.

Really powerful white men are losing jobs — that never happens. Women (and some men) are breaking their silence and telling painful and intimate stories to reporters, who in turn are putting them on the front pages of major newspapers.

It’s wild and not entirely fun. Because the stories are awful, yes. And because the conditions that created this perfect storm of female rage — the suffocating ubiquity of harassment and abuse; the election of a multiply accused predator who now controls the courts and the agencies that are supposed to protect us from criminal and discriminatory acts — are so grim.

[…]

This is part of what makes me, and them, angry: this replication of hierarchies — hierarchies of harm and privilege — even now. “It’s a ‘seeing the matrix’ moment,” says one woman whom I didn’t know personally before last week, some of whose deepest secrets and sharpest fears and most animating furies I’m now privy to. “It’s an absolutely bizarre thing to go through, and it’s fucking exhausting and horrible, and I hate it. And I’m glad. I’m so glad we’re doing it. And I’m in hell.”

I can’t relate to this directly, but as someone who has gone through hard times in life, I hope there can be more people “seeing the matrix.” A lot of anecdotes are in the form of “one time this happened, and at the time it was weird, but only now are people talking about this and I realize how bad it was and I’m angry.” I can relate to that, but that’s for another time.

Neopets

Apparently many people (especially young women, as the article points out) learned to code by playing Neopets (via Rolling Stone). This is carefully selected evidence for my crazy hypothesis that video games are very good for society. Disclosure: I, too, learned some HTML in the early 2000s by setting up a Neopets shop.

Evil, Progress, and the Sun

Good vs Evil

I suspect no rationalist takes Hollywood movies seriously, but certain norms are worth talking about, and these norms definitely influence us subconsciously. The one on the chopping block today is good vs evil. There is so often an obviously good side and an obviously evil side. Fortunately, movies are more subtle today—compare the original Star Wars (1977), a painfully old-fashioned good vs evil story, with Rogue One (2016), where it’s not clear that some of the protagonists are actually good. But you still know exactly who to root for.

Having thought recently about political polarization, internet bubbles, and the attitudes of certain people on both sides of the spectrum, I think a lot of it comes down to people on each side firmly believing that they are on the good side and the other side is straight-up evil. I don’t mean someone thinking “the other side is well-meaning but doesn’t understand”; I mean “the other side is evil.” I mostly see the liberal side of this, and when people literally advocate violence against conservatives, that’s a problem. They don’t see the other side as people to have discourse with, but as an evil menace.

Worse yet, they can’t offer any explanation for the other side. After last year’s election, I saw some people genuinely express that they didn’t know a single Trump supporter and that they couldn’t possibly imagine anyone voting for Trump, as if there were a dark, mysterious force out there. I voted for Clinton too, but I can very well explain why many people would vote for Trump.

There is a famous quote, “Never attribute to malice that which is adequately explained by stupidity,” which I think is an excellent guiding principle to understanding the other side.

Meanwhile, here is a recent New Yorker article on the root of cruelty by Paul Bloom (h/t Julia Galef), who argues that the long-held view that dehumanization is the cause of cruelty is wrong.

At some European soccer games, fans make monkey noises at African players and throw bananas at them. Describing Africans as monkeys is a common racist trope, and might seem like yet another example of dehumanization. But plainly these fans don’t really think the players are monkeys; the whole point of their behavior is to disorient and humiliate. To believe that such taunts are effective is to assume that their targets would be ashamed to be thought of that way—which implies that, at some level, you think of them as people after all.

And

If the worst acts of cruelty aren’t propelled by dehumanization, not all dehumanization is accompanied by cruelty. Manne points out that there’s nothing wrong with a surgeon viewing her patients as mere bodies when they’re on the operating table; in fact, it’s important for doctors not to have certain natural reactions—anger, moral disgust, sexual desire—when examining patients.

In fact, it is sometimes not the “evil” people, but the masses that dehumanize:

Early psychological research on dehumanization looked at what made the Nazis different from the rest of us. But psychologists now talk about the ubiquity of dehumanization. Nick Haslam, at the University of Melbourne, and Steve Loughnan, at the University of Edinburgh, provide a list of examples, including some painfully mundane ones: “Outraged members of the public call sex offenders animals. Psychopaths treat victims merely as means to their vicious ends. The poor are mocked as libidinous dolts. Passersby look through homeless people as if they were transparent obstacles. Dementia sufferers are represented in the media as shuffling zombies.”

Progress

I feel like I write many posts on progress, but here is one of the more harrowing articles in the past week (via Gizmodo), featuring polio and people who still rely on iron lungs:

Martha Lillard spends half of every day with her body encapsulated in a half-century old machine that forces her to breathe. Only her head sticks out of the end of the antique iron lung. On the other side, a motorized lever pulls the leather bellows, creating negative pressure that induces her lungs to suck in air.

In 2013, the Post-Polio Health International (PHI) organizations estimated that there were six to eight iron lung users in the United States. Now, PHI executive director Brian Tiburzi says he doesn’t know anyone alive still using the negative-pressure ventilators. This fall, I met three polio survivors who depend on iron lungs. They are among the last few, possibly the last three.

But what was it like before we had polio vaccines?

Children under the age of five are especially susceptible. In the 1940s and 1950s, hospitals across the country were filled with rows of iron lungs that kept victims alive. Lillard recalls being in rooms packed with metal tubes—especially when there were storms and all the men, women, adults, and children would be moved to the same room so nurses could manually operate the iron lungs if the power went out. “The period of time that it took the nurse to get out of the chair, it seemed like forever because you weren’t breathing,” Lillard said. “You just laid there and you could feel your heart beating and it was just terrifying. The only noise that you can make when you can’t breathe is clicking your tongue. And that whole dark room just sounded like a big room full of chickens just cluck-cluck-clucking. All the nurses were saying, ‘Just a second, you’ll be breathing in just a second.’”

This is yet another reminder of the immense amount of progress that society has made even in the recent past. I’ve written previously, “…many problems of the past we now don’t ever think about—the diseases that have been conquered, a scientific understanding of the world, advances in healthcare, access to modern technology, democratic society, much lower chance to be murdered, not taking months to communicate with someone on a different continent, instantaneously looking up information from the sum total of human knowledge from a device in your pocket, and so forth.” The fact that a problem is gone is precisely what makes returning to it seem attractive. This is why anti-vaxxers are not scared of disease, why young people are not afraid of communism, why people want to go back to being hunter-gatherers, why people who have never seen a war are excited to go to war.

It’s hard to argue with certain groups that society has made progress, despite all the plain evidence lying around us. This is certainly one reason—we never stop to think about all the problems that have already been solved.

Heliocentrism

Copernican_theory.png

I never thought I’d be writing about how Earth orbits the Sun, but here is one of the most thought-provoking articles, on precisely that. The argument is that the popular story of “People thought everything orbited Earth, and then Copernicus figured out it was the other way around, and by the way, people were very resistant to change, so nobody believed him for a while until Newton appeared,” is wrong. Namely, if we threw a modern-day rationalist back in time, our rationalist might, even with the evidence available then, have sided with what everyone else generally thought and not with the Copernican theory.

I generally agree with the argument, though there is a lot of oversimplification that is at times misleading. In particular, the way the author cites the Coriolis effect in response to the “tower argument” is extremely misleading. In addition, I do agree that the author’s account is “less wrong” than the one-paragraph popular account (so it certainly fits with the website), but the author fails to apply the “less wrong” mentality to the astronomical models in question. The basic argument is that the (old) Ptolemaic model was wrong, and the Copernican model was also wrong. But the point is that the Copernican model was less wrong (the author admits “Ptolemy’s system had required huge epicycles, and Copernicus was able to substantially reduce their size”), and to claim that they are equally wrong is really, really wrong!

Still, the author poses two very good questions:

  1. If you lived in the time of the Copernican revolution, would you have accepted heliocentrism?
  2. How should you develop intellectually, in order to become the kind of person who would have accepted heliocentrism during the Copernican revolution?

To (1), I would suspect no with very high probability, because few people at the time would have known enough mathematics and astronomy to even understand the debate in question. But if the question is not about a “random person” and specifies that I know enough math to understand, then it’s still not clear. Even if I thought the data were in favor of heliocentrism, would I dare defy religious authority in writing?

As for (2), you would need to both know enough advanced math to understand what you’re even debating in the first place (how many people do you personally know who could do this even today?), and have a strong disregard for authority. Other than that, I agree with the author that it would have been very hard to be someone who actually supported heliocentrism at the time.

Even today the question is not obvious. If you kept all other knowledge of science and just forgot astronomy, which theory would you believe (based on first principles and not data)? It might be impossible to say, though there could be some version of an argument where you know that most of our energy today is generated from fossil fuels, which ultimately got their energy from plant photosynthesis, which came from the Sun; combined with other things like solar power, you could infer that the Sun must provide vastly more energy than the Earth, and therefore the Sun must be much more massive. And then apply center of gravity to that. But it’s still not obvious.
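For what it’s worth, here is a minimal sketch of that center-of-gravity step, using modern measured values that the hypothetical astronomy-free reasoner would not actually have: once you grant that the Sun is far more massive, the shared center of mass lands deep inside the Sun, which is the precise sense in which “Earth orbits the Sun.”

```python
# Back-of-envelope: where is the Sun-Earth center of mass?
# Modern measured values; the astronomy-free reasoner above wouldn't have these.
M_SUN = 1.989e30      # kg
M_EARTH = 5.972e24    # kg
DISTANCE = 1.496e11   # m, mean Sun-Earth distance
R_SUN = 6.957e8       # m, solar radius

# Distance of the two-body barycenter from the Sun's center.
barycenter_from_sun = DISTANCE * M_EARTH / (M_SUN + M_EARTH)

print(f"Barycenter sits {barycenter_from_sun / 1e3:.0f} km from the Sun's center")
print(f"That is {barycenter_from_sun / R_SUN:.2%} of the solar radius")
# ~450 km, or ~0.06% of the solar radius: once you believe the Sun is far
# more massive, "the Earth goes around the Sun" is the only reasonable reading.
```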

In addition, even assuming everything in the article to be true, the narrative of science winning over dogma is still the same. The winning just didn’t happen until later (Kepler & Newton, rather than Copernicus).

Microtransactions, Speed Learning, and Absent-Mindedness

Reddit Threads

Last week was a historic one for Reddit. A reply by Electronic Arts about unlocking heroes in Star Wars Battlefront 2 became the most downvoted comment in Reddit history, currently standing at around -675k. It was in response to a customer’s complaint about microtransactions (paying for individual things within the game). Here is the original post:

ea_reddit_battlefront

And the infamous response:

ea_reddit_downvotes

I am generally on the side of EA, mainly because (1) money fuels game development, and (2) the Internet and especially sites like Reddit are prone to witch hunts. That said, I think the timing and bluntness of this particular response are mistakes. A vocal portion of the gaming community hates microtransactions, and defending the concept on a public platform like Reddit seems like an ill-conceived move.

As someone who plays a lot of videogames, I thought this drama really illustrated the debate over microtransactions. I used to be on the fence about them, but now I’m clearly in favor of them.

  1. Microtransactions reduce the cost elsewhere. Games like Dota, League of Legends, and Heroes of the Storm (and, just last week, Starcraft II) are free to play, and you aren’t at an in-game disadvantage if you play for free (though in some of them you don’t get the option to play every hero at a given time). Without microtransactions, these games would cost money up front, and would be less popular and probably make less money overall.
  2. Companies make profits. That’s what they do. If a company spends a lot of money on a game and releases everything for free, you might have a good one-time bargain, but the company will go out of business and the long-term equilibrium state will be worse as there will be fewer competitors. On the flipside, if a company charges too much and nobody buys it, then it will also go out of business, and therefore the invisible hand pushes prices towards reasonable levels.

On the more social side, we all know that we live in political bubbles on social media, and a lot of that is simply how the platforms work. Facebook perpetuates bubbles by connecting real-life groups of people together, and real-life networks generally lean toward one side. Tumblr makes it very hard to argue a dissenting opinion. In addition, Facebook and Tumblr have only upvotes and no downvotes. Reddit is better in comparison, but even so, the combined effects of selection bias in who visits a subreddit and who cares enough to vote lead to populist inquisitions like this one.

Speed Learning

The Wall Street Journal has an interesting article on “speed learner” Max Deutsch who, despite being a chess novice, challenged world champion Magnus Carlsen to a game after giving himself one month to learn chess. Carlsen accepted.

I’ve always thought of chess as an interesting game to think about in the context of learning, both human learning and machine learning. Like many other activities, it takes a lot of practice and mistakes to intuitively spot recurring patterns and motifs. Humans are very limited by computation speed, so a lot of the human aspect of the game is having good intuition, whereas computers can roughly calculate everything. I’ve also thought chess is something where a human cannot just read the rules, think about it logically for a long time, and then be really good at it. A thought experiment I always imagined is the following:

Take a young, smart person who doesn’t know how to play chess, and lock them in a room for 20 years. Give them access to all chess rules, chess books, articles, and so forth, but with the caveat that they are not allowed to play a single practice game. Provide them with sustenance and somehow incentivize them so that their only goal is to become the best person in the world at chess. At the end of the 20 years, sit them down across the board from Carlsen and go.

I would expect this person to get crushed in the game. Even if you change Carlsen to a lower ranking grandmaster, or even down to an IM, I think the no-experience person would be overwhelmingly likely to lose. The game requires so much practice and the brain version of muscle memory—reading about it theoretically only gets you so far.

In the article, Max Deutsch had only one month, though he could play games against people or computers. As expected, he got totally crushed. The WSJ article makes it sound like an interesting game, as if Deutsch had the advantage for a few moves, but that is nonsense. Carlsen played an odd opening to get Deutsch out of the opening book (so by construction Deutsch had an “advantage”), and once Deutsch was out of theory, he made multiple blunders that immediately sealed the game. It was clearly over on move 14. The blunders were the kind that most tournament players would have caught just from experience.

Chess is a game where, from a human vs human perspective, experience and intuition vastly outweigh theory. For now.

Absent-Mindedness

Here’s an article that argues the “absent-minded professor” persona is really a form of social dominance behavior. (h/t Marginal Revolution.)

All of this has persuaded me that absent-mindedness should be viewed in much the same way that Talcott Parsons viewed illness. At its root, it is a form of social deviance. Basically, everyone would love to be absent-minded, because it allows you to skip out on all sorts of social obligations. (Again, I have colleagues who miss meetings all the time, or show up hours late saying “I could have sworn we agreed to meet at 5pm…” No one ever shows up early because they forgot what time the meeting was at.) More generally, remembering things involves a certain amount of effort, it’s obviously much easier just to be lazy and forget things. The major reason that we don’t all act this way is that most people get sanctioned for it by others. Absent-mindedness, after all, is just another form of stupidity, and when ordinary people do things like forget where they parked their cars, they get punished for it. People say things to them like, “what are you, stupid?” It’s in order to avoid being seen as stupid or incompetent by others that they feel motivated to make the effort to do things like remember where they parked their car.

Becoming a university professor, however, is a pretty good way of exempting oneself from suspicion of outright or base stupidity. When university professors do stupid things, people don’t say to them “oh my god, you’re so stupid,” or “stop being such an idiot,” instead they start making excuses, like “there he goes with his head in the clouds again,” or “he must have more important things on his mind.” In other words, they give you a free pass. Not only can you get away with being stupid, you wind up with social license to become even more stupid.

I feel like this has some truth to it, but I would also guess there’s another good explanation that accounts for a lot of it—people generally want to fit in, and if there are other absent-minded professors around, new professors will tend to be more absent-minded. Similarly, we all know famous stories of great scientists and academics who made great discoveries while being absent-minded (the Archimedes “Eureka” story being the archetypal example, even if apocryphal). Emulating the great minds of the past seems like something we encourage.

In addition, there are plenty of absent-minded people who don’t hold much power if any, and the article’s explanation obviously doesn’t apply to them.

But I will agree that absent-mindedness in certain areas of academia is probably partly influenced by this. I would take it further and say that a lot of minor personal quirks, not just absent-mindedness, are partly social dominance behaviors. Overall, the article is an interesting read, though I mostly disagree.

Nature Abhors a Vacuum: Population and Technology

There is an old saying in physics that “nature abhors a vacuum”: the stuff around a vacuum rushes in to fill it. If a random cubic meter of air suddenly disappeared above Manhattan, the surrounding air would immediately fill the gap.

A variant of this saying appears in political history: power abhors a vacuum. When you take some equilibrium, remove a powerful state or leader from the world, and leave no one in charge, you create a power vacuum in which the survivors rush (often violently) to seize control. (This is why I think that, despite the inordinate amount of US taxpayer money going to the military, getting rid of it overnight would make the world even worse.)

But there is one more sense of the vacuum that I’m worried about, and it’s that of population growth. It is the Malthusian worry, but I would frame it in more abstract terms that also allow for extreme technological innovation. The idea is that for most of human history, the population grew at a slow rate, limited by disease and lack of technology. But as technology exploded in the last 300 years, so did population. Every time we innovated—every time the population became capable of expanding—it did so.

The worry argument goes like this:

  1. The human population will always expand to its current carrying capacity based on current technology and environmental conditions. (Nature abhors a vacuum.)
  2. Technological growth will likely push the apparent carrying capacity higher than it is now. (However, this growth may have already slowed significantly; will we have another Green Revolution?)
  3. There is some overall limit on the human population as determined by Earth’s finite resources.
  4. Eventually (if it has not already happened), (1) and (2) will raise the population to well above the limit in (3), and a global crisis may follow.
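To make the dynamic in (1)-(4) concrete, here is a toy simulation in Python. Every number in it is made up purely for illustration, so it shows the shape of the worry rather than a forecast: the population grows toward the current carrying capacity, technology bumps the capacity up every so often, and once the bumps hit a hard ceiling, the population flattens against it (a real Malthusian overshoot would be uglier, but the toy version at least shows the filling-in).

```python
# A toy simulation of the argument above. Every number is invented purely
# for illustration; nothing here is calibrated to real demographic data.

def simulate(years=500, pop=1.0, capacity=2.0, hard_limit=20.0,
             growth_rate=0.02, tech_multiplier=1.5, boost_every=50):
    """Logistic growth toward a carrying capacity that technology
    periodically raises (claim 2), but never past a hard limit (claim 3)."""
    history = []
    for year in range(years):
        if year > 0 and year % boost_every == 0:
            capacity = min(capacity * tech_multiplier, hard_limit)
        # Claim 1: the population expands to fill whatever room exists.
        pop += growth_rate * pop * (1 - pop / capacity)
        history.append((year, pop, capacity))
    return history

for year, pop, cap in simulate()[::100]:
    print(f"year {year:3d}: population {pop:6.2f}, capacity {cap:6.2f}")
```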

Disclaimer: I’m a long-term optimist and this post represents more of a worry than a prediction.

The Green Revolution

In 1970, Norman Borlaug won the Nobel Peace Prize for breakthroughs in agricultural production. He is often credited with saving a billion lives.

But the cynical side asks: Did the Green Revolution actually save a billion people, or did it merely postpone and inflate a much bigger crisis? Did the human population simply increase to the new carrying capacity, leaving us back where we started? In fact, some people would argue we are now worse off than where we started, due to environmental impact and loss of biodiversity.

Borlaug himself was very aware of this. In his Nobel speech:

The green revolution has won a temporary success in man’s war against hunger and deprivation; it has given man a breathing space. If fully implemented, the revolution can provide sufficient food for sustenance during the next three decades. But the frightening power of human reproduction must also be curbed; otherwise the success of the green revolution will be ephemeral only. Most people still fail to comprehend the magnitude and menace of the “Population Monster”…Since man is potentially a rational being, however, I am confident that within the next two decades he will recognize the self-destructive course he steers along the road of irresponsible population growth… (Nobel Lecture 1970)

Keynesian Leisure Society

I think this is also a reason we don’t live in the Keynesian dream of the “leisure society.” Keynes’s argument went roughly: since productivity would increase many times over between 1930 and now, by now we should be able to work about two days a week to produce the same output and spend most of our time in leisure. Yet obviously, we still work roughly the same hours as before.
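As a rough illustration of the arithmetic (the growth rate and the 1930 workweek below are assumed round numbers, not measurements): compounding a couple of percent per year for nine decades multiplies output per hour several times over, so matching 1930 output would take only a small fraction of a 1930 workweek.

```python
# Illustrative arithmetic only: assume output per hour grows ~2% per year
# (a round number, not a measured figure) from 1930 to 2017.
annual_growth = 0.02
years = 2017 - 1930
productivity_multiple = (1 + annual_growth) ** years

hours_1930 = 48  # assumed full workweek circa 1930
hours_needed_now = hours_1930 / productivity_multiple

print(f"Assumed productivity multiple over {years} years: {productivity_multiple:.1f}x")
print(f"Hours per week needed to match 1930 output: {hours_needed_now:.1f}")
# Roughly a 5-6x multiple, i.e. about one 8-hour day per week to match
# 1930 output -- even less than the "two days a week" above.
```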

There are some common answers to this including how people increase their wants as soon as they get what they want (which is sort of related to this topic), and relatedly that people care more about relative wealth than absolute. Another explanation is rising inequality (which will be mentioned later). But a third explanation is just population growth. The technology made the carrying capacity a lot higher, and the population filled in the vacuum. And the work required to sustain similar relative conditions might still be 5 days a week.

Why We Shouldn’t Be Worried (Objections)

I’ll go through a couple of objections.

  • Earth doesn’t have a limited carrying capacity because technology will keep improving.

It’s true that technology will continue to improve for the foreseeable future. But eventually there must be a limit. Every time people have said this before, that the population couldn’t possibly double again, they were wrong, so I’ll make no claim about where that limit is. But you can do some basic math on the consumption requirements of a human at a modern standard of living and multiply by some population size. I don’t know if the limit is 10 billion or 100 billion. But it’s there.
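Here is one purely illustrative version of that basic math, with loudly assumed numbers (and using only energy, even though food, water, and land probably bind sooner); the point is just that the ceiling is finite, not where it sits:

```python
# Purely illustrative: primary power demanded by N people at a rich-country
# standard of living, compared with all sunlight intercepted by Earth.
# Both inputs are rough assumptions, and energy is only one constraint
# (food, water, and land likely bind sooner).
WATTS_PER_PERSON = 10_000   # ~US-level primary energy use per capita, assumed
SOLAR_INPUT_W = 1.7e17      # total solar power intercepted by Earth

for population in (1e10, 1e11, 1e12, 1e13):
    share = population * WATTS_PER_PERSON / SOLAR_INPUT_W
    print(f"{population:.0e} people -> {share:.2%} of all incoming sunlight")
# 10 billion people: ~0.06%; 10 trillion: ~59%. Where the ceiling sits
# depends entirely on the assumptions, but a ceiling clearly exists.
```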

This is also a dangerous objection, in that the potential problem becomes much worse as the population grows. If the Green Revolution had not happened, we could already be in a crisis, but it would be a crisis with 3 billion people. Now imagine the same crisis with 20 billion.

  • The world eventually reaches the demographic transition everywhere.

I’d be very happy if this happened. The demographic transition is the phenomenon that has occurred in most Western countries and some other industrialized countries like Japan, where the fertility rate (children born per woman) has started to approach the replacement rate of 2.1 and in some cases drop below it. In the long run, every country reaching 2.1 would mean a constant population.

There are still three concerns. One is that there is strong religious pressure in some places not to use contraception. If you imagine a world where nobody changes religions (not too unimaginable) and where certain religions have a higher fertility rate, then in the long run those religions will dominate the world population and we will have this problem again. Though it’s not clear which effect is bigger: the demographic transition or “Be fruitful and multiply.”

A second concern is that high inequality could be partly keeping the population in check. That is, the US population is not growing as fast as it could because many people are in poverty and don’t have enough resources; if we solved inequality, the population might increase rapidly again.

The third concern is that the demographic transition might not be the complete story. We don’t know the future, and maybe beyond some point the demographic transition reverses and the population rises again.

The negative association of fertility with economic and social development has therefore become one of the most solidly established and generally accepted empirical regularities in the social sciences. As a result of this close connection between development and fertility decline, more than half of the global population now lives in regions with below-replacement fertility (less than 2.1 children per woman). In many highly developed countries, the trend towards low fertility has also been deemed irreversible. Rapid population ageing, and in some cases the prospect of significant population decline, have therefore become a central socioeconomic concern and policy challenge. Here we show, using new cross-sectional and longitudinal analyses of the total fertility rate and the human development index (HDI), a fundamental change in the well-established negative relationship between fertility and development as the global population entered the twenty-first century. Although development continues to promote fertility decline at low and medium HDI levels, our analyses show that at advanced HDI levels, further development can reverse the declining trend in fertility. (Myrskylä 2009)

Future Population

UN_world_population

Finally, current projections of world population have the growth rate slowing down (United Nations Graphs). But I worry that this goes against basic intuitions about human nature and the vacuum. I worry the population will keep increasing until it can’t.

The Good Old Days of the Internet (Circa 2010)

I made a claim recently at a dinner that the Internet was clearly good for the world up to 2010, and bad after that. And I’m wondering if this actually has any truth to it.

google-website-old

The main argument is that in the (relatively) early days of the Internet, you were generally an individual interacting with random strangers. This was good because you were exposed to new ideas, beliefs, and cultures you never even knew existed. And because people are naturally curious and empathic, you got to understand one another and could see where they were coming from. Then around 2010, social media achieved dominance, and we now live in polarized bubbles where you are almost never exposed to new ideas, beliefs, or cultures. And when you are, it is to make fun of them with your group of like-minded individuals. To reach out across the aisle would draw ostracism from your group. I would guess the primary cause of this recent polarization and tribalism is how people interact on social media.

The Internet used to be a technology that, in addition to providing services like email and information search, also fostered a global understanding of ideas. Now it is a tradeoff between the services, which are clearly positive, and the societal effects, which at this point are probably negative.

Today I stumbled upon a chatroom that really reminded me of the old days. There were random people talking about philosophy and heatedly debating questions of existence. This is what the Internet was and could be. But really, we just wished to feel secure in our bubbles of belief. And that wish was granted.

Relative vs Absolute Wealth

There’s a stat I often hear in economics, but people rarely see the takeaway. The saying is that even though the annual income threshold to be in the top 1% in the US is around $400,000, for global income it is only $32,000 (Investopedia for US and world). Therefore the US middle class ought to feel like they are doing very well compared to the rest of the world.

While this is partly true, it ignores things like purchasing power and different costs of living in different countries. And people often get bogged down in the details of these objections rather than acknowledging the giant wealth gap that exists between countries, and how well off Americans are even when they claim they are not.

To demonstrate the point, I think there are two better comparisons to make. The first is to compare across time, within the same country. The second is to compare within simulated or virtual environments where people all start off on the same footing—video games. People usually care more about relative wealth than absolute wealth. And maybe we should be thinking more about the absolute.

Historical Economic Growth

I’ve previously written about human progress over time, and it’s still the case that it is underestimated. People generally think of growth as additive, but in reality it is exponential. Life isn’t just somewhat better than it was 300 years ago. It is orders of magnitude better:

gdp_per_capita_slide
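A small illustration of additive versus exponential thinking (the growth rate here is an assumed round number for illustration, not a figure taken from the chart): a modest-sounding annual improvement compounds into something enormous over 300 years.

```python
# Additive vs exponential intuition, using an assumed 1.5% annual growth
# rate in income per person (a round illustrative number, not a measurement).
rate = 0.015
years = 300

additive_guess = 1 + rate * years   # what "linear" extrapolation suggests
compounded = (1 + rate) ** years    # what compounding actually does

print(f"Additive intuition: {additive_guess:.1f}x better after {years} years")
print(f"Compounded growth:  {compounded:.0f}x better after {years} years")
# ~5.5x vs ~87x: the compounded answer is more than an order of magnitude
# larger than the linear guess, which is the sense in which growth
# gets underestimated.
```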

And yet, people often claim that things are worse than before (e.g., Make America Great Again). We’ve made this exponential curve of progress, and many problems of the past we now don’t ever think about—the diseases that have been conquered, a scientific understanding of the world, advances in healthcare, access to modern technology, democratic society, much lower chance to be murdered, not taking months to communicate with someone on a different continent, instantaneously looking up information from the sum total of human knowledge from a device in your pocket, and so forth.

We only think about the problems that face us now, never thinking about the problems that have already been solved and the things that didn’t exist before. And when we compare ourselves to people of the past, we take all of the above for granted and point out the most absurd of differences—like claiming that in a hunter-gatherer society you obtained some food and then had leisure time for much of the day, and therefore we should go back to being hunter-gatherers.

Social Comparison

Among the most useful ideas for understanding human interactions is that of keeping up with the Joneses. People strive to keep up in material wealth with their neighbors and friends. This is why one common response to the global 1% statistic is “Why don’t I feel rich?” Because people are not comparing themselves to the average human; they are comparing themselves to their other global-1% neighbors.

A study by researchers at the University of Warwick and Cardiff University has found that money only makes people happier if it improves their social rank. The researchers found that simply being highly paid wasn’t enough — to be happy, people must perceive themselves as being more highly paid than their friends and work colleagues.

The researchers were seeking to explain why people in rich nations have not become any happier on average over the last 40 years even though economic growth has led to substantial increases in average incomes.

Lead researcher on the paper Chris Boyce from the University of Warwick’s Department of Psychology said: “Our study found that the ranked position of an individual’s income best predicted general life satisfaction, while the actual amount of income and the average income of others appear to have no significant effect. Earning a million pounds a year appears to be not enough to make you happy if you know your friends all earn 2 million a year.” (ScienceDaily)

This effect has become bigger in recent years, as it has been exacerbated by social media. People are now much more likely to see the day-to-day lives of people better off than they are—not just the super rich, but the person you thought you were clearly superior to in high school who is now doing much better than you, a fact you are reminded of constantly on Facebook. And people show off their best on social media, so if everyone compares their average life with what they see on social media, everyone could be unhappy.

One of the most memorable essays from The Occupy Handbook is one precisely on relative vs absolute wealth, specifically on the idea of “last-place aversion”, and I’m not sure it actually makes the reader more or less supportive of the Occupy movement. The authors write:

We also documented last-place aversion outside the laboratory by surveying a sample of Americans about their attitudes toward an increase in the minimum wage. The minimum wage obviously affects low-income workers disproportionately, and thus it is reasonable to expect that most low-wage workers would support an increase. Indeed, we generally do observe this pattern, with one major, and telling, exception: those making just above the minimum wage, $7.25 per hour, are far more likely to oppose an increase than those making $7.25 or below or those making more than $9.00. That is, people making $7.50 per hour would rather forgo a small raise than take the chance that an increase in the minimum wage will cause them to earn the “last-place” wage themselves—and to be tied with workers previously below them. (“Where Is the Demand for Redistribution?” from The Occupy Handbook)

So it’s more important to make more money than other people, rather than to just make more money in absolute terms.

Video Games

Now let’s talk about Diablo 3. This game, as originally released in 2012, was an item-grinding game, where you kill monsters to get powerful items, so that you can kill even stronger monsters to collect even more powerful items, and so forth. The following is basically all anecdote, so be warned (I’m not aware of any literature on this).

This game was done very well, but it was highly controversial at launch. Its critic score on Metacritic was 88 out of 100, but the user score was only about 4 out of 10 (averaged from some good ratings and many 0’s).

diablo3_metacritic

I claim the controversy came from economics. The central core of the game is described above, where random items drop for you to collect, and you collect better and better items as you play more. This was all fine, except Diablo 3 did one thing that other games did not: have a massively available public online exchange for trading items.

Without trading, the game felt very good. To abstract it a bit for the sake of argument, suppose there are 10 difficulty stages in the game, 1-10, with 1 being the easiest and 10 the hardest. Most people breezed through the easier difficulties and got to somewhere around 4-7 on their own without much trouble. But the game ramped up in difficulty significantly once you got to 8, 9, and 10, and most people started struggling as soon as they got to 8. I claim this was still fun, because the point of the game was to play, get better items over time so you could defeat 8 and move to 9, and then eventually beat 9 and 10. However…

The most hardcore gamers, including many famous streamers on YouTube and Twitch, got to 8, 9, and 10 very quickly (arguably partly from skill at video games, and also from just spending lots of time on the game or spending money on items), and one person even beat 10 in a “hardcore” mode where any death is permanent and your character doesn’t respawn.

(For people who know the game, 8 refers to Act 1 Inferno, 9 = Act 2 Inferno, 10 = Acts 3&4 Inferno, and the one person who beat Act 4 Inferno before nerfs was Kripparrian.)

There were millions of players at launch, and they weren’t satisfied struggling in difficulty 8 while streamers were doing well in 9 and 10. And from the social aspect of the game, they weren’t satisfied being stuck on 8 while their real-life friends were on 9 and 10.

Remember the item exchange from earlier? Now all the people stuck on 8 could amass in-game currency and buy items obtained by the people on 9 and 10, and thus move themselves to 9 and 10. They basically bypassed the game itself, skipping the gameplay process of fighting monsters to get better items. And when a critical mass of people did this, the keeping up with the Joneses effect really kicked in. Now all of your friends are trying to take down the boss for 9 and 10, but you’re still stuck on 8 and can’t play with them. So you also go out and buy some items to keep up. The difficulty level you played on was your social status. (Why are you stuck on 8? Are you bad at the game?)

The whole game was a microcosm of economics. There were many different “builds,” or strategies, that characters could use, but the game soon degenerated into 3-4 common hyper-efficient builds that were borderline exploitative. It should have been the case that most builds seemed okay and a few were really good. But since everyone else was using the best possible builds, using any other build basically meant crippling yourself and your party. Instead of trying out cool, unique builds on level 8, everyone went to the degenerate builds on levels 9 and 10, which is why the game effectively had no variety. It was a case where buying a really good item made you more powerful in absolute terms, but since you then just moved to a higher difficulty level, you didn’t become relatively more powerful, and you had less flexibility in strategy. This made the game less fun despite your character being objectively more powerful.

In addition, the more people bought items from the exchange, the lower the chance they would ever find a relatively better item naturally. If you start with a 50th-percentile item, in expectation you’ll find an upgrade within the next two drops. But if you go to the exchange and buy a 99.9th-percentile item right off the bat (as most people did), then in expectation it will take 1,000 item drops for you to find a better item naturally. (I think I was one of the people who figured this out at the time, and I wrote a popular post on the official Diablo 3 forums. Since then, the game has actually implemented both suggestions: removing the exchange and trading in general, and soft-resetting items every few months, so people aren’t stuck in a situation where they have 99.999th-percentile items and can’t feasibly get better ones.)
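The arithmetic behind that claim is just a geometric expectation; here is a short sketch (assuming each drop is an independent draw from a fixed quality distribution, which is a simplification of the actual loot system): if your current item sits at percentile p, each new drop beats it with probability 1 − p, so the expected number of drops until an upgrade is 1/(1 − p).

```python
# Expected number of drops until an item beats your current one, assuming
# each drop is an independent draw from a fixed quality distribution
# (a simplification of real loot tables).
def expected_drops_until_upgrade(percentile):
    """percentile: your current item's quality as a fraction in [0, 1)."""
    return 1.0 / (1.0 - percentile)

for p in (0.50, 0.999, 0.99999):
    drops = expected_drops_until_upgrade(p)
    print(f"item at the {p * 100:g}th percentile -> ~{drops:,.0f} drops until an upgrade")
# 50th percentile: 2 drops. 99.9th: 1,000. 99.999th: 100,000 -- which is
# why buying near-best items off the exchange effectively ended the
# item-hunting loop for most players.
```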

Anyway, this experience in 2012 is why I think about the topic a lot. It’s nice for people to move up and improve their standard of living, but the improvement should be on a personal level, not about keeping up with coworkers and Facebook friends.

Nonorganic Growth of Nations

India’s rapid economic growth — and its long-standing poverty — are also reflected in the census. More than half of all Indian households now have cellphones, but fewer than half have toilets. [NPR 2012]

More recently:

Saudi Arabia announced on Tuesday that it would allow women to drive, ending a longstanding policy that has become a global symbol of the oppression of women in the ultraconservative kingdom.  [NYT 2017]

I would guess most people reading this learned history with a focus on Europe and the United States. One worry is that the West, as the technological leader of the world for most of modern history, has a story that does not apply to developing countries today. Instead of progressing “organically” like the West, many countries today have built their laws, cultures, and institutions in different orders.

The United States began with a document outlining the principle of self-governance. It had, at the time, a colonial and economic relationship with Great Britain. It was 14 years after the publication of The Social Contract, which was itself part of a larger body of Enlightenment work. And it took several weeks for news and communication to travel across the Atlantic. These were among a larger number of factors that all aligned to make the Declaration of Independence happen.

Now imagine you skip to the Information Age and ask a particular faction of some war-torn country, whose primary export by overwhelming majority is oil, to draft some documents like the above. I worry that we look for the good things in the past and try to duplicate them in other places today, but fail to consider the circumstances that made them good.

What happens when social media appears before secularization? Airplanes before representative government? Internet before free speech? Our conventional lessons from history don’t apply.

To be clear, I am very much in favor of teaching Western history, as it is crucial for understanding the big themes of the modern era. It contains world-changing arcs like the Scientific Revolution and the Industrial Revolution. But we need to consider what happens when countries make progress out of order compared to what we have learned.