Ethical Dilemmas and Human Morality, part 2

For the full explanation, see Ethical Dilemmas and Human Morality, part 1, written almost exactly a year ago.

Moral Consistency

We had a debate recently on consistency in moral dilemmas. In particular, we went over two variants of the Trolley Problem: the fat man and the transplant. One side argued that you must give the same answer in both variants, while the other argued that it is rational to give opposite answers in the two cases. I argued for the latter.

Here are Wikipedia’s formulations of the two variants:

Fat man:

As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by dropping a heavy weight in front of it. As it happens, there is a very fat man next to you – your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?

Transplant:

A brilliant transplant surgeon has five patients, each in need of a different organ, each of whom will die without that organ. Unfortunately, there are no organs available to perform any of these five transplant operations. A healthy young traveler, just passing through the city the doctor works in, comes in for a routine checkup. In the course of doing the checkup, the doctor discovers that his organs are compatible with all five of his dying patients. Suppose further that if the young man were to disappear, no one would suspect the doctor.

In the original trolley problem, most people would sacrifice one person to save five. However, in the fat man variation, not as many people are willing to take the action. And in transplant, very few people agree that harvesting the healthy traveler’s organs is the correct move.

This is quite inconsistent. Why would you be willing to sacrifice one person to save five in some cases, but not in others? Shouldn’t the results be the same?

I argued that it is morally defensible to give different answers to this question, in particular to say yes in the original or fat man case while saying no in the transplant case.

From a utilitarian perspective, these scenarios are not the same, mainly because different people contribute different value to society. In the standard trolley example, there is no reason to suspect that the one person on one track is any different from the five people on the other track. Since we are given no other information about who these people are (of course, the situation changes if we have more information), the best bet is to save the five. The same reasoning applies to the fat man scenario.

In the transplant case, however, we are given additional information: someone who is about to die from the failure of a vital organ is probably contributing less to society than a healthy traveler undergoing a routine checkup. This effect may not be strong enough to warrant letting the five patients die, but it clearly makes the transplant scenario different from the trolley or the fat man.

Now, if the transplant case were replaced with sacrificing one life to save a million, then the problem is entirely changed as well. Similarly, in the trolley problem, if we said the five people were all serial killers and the one person on the other track was a normal hard-working person, that changes the situation.

Since the answers flip so easily when we change the details, there doesn’t seem to be a fundamental one-life-versus-five-lives struggle at hand, but rather a combination of other factors. We can answer the question based on what information we have about the people involved, and since these situations imply different types of people, we are not morally obliged to give the same answer for all variants of the problem.

The Moral Landscape

“She: What makes you think that science will ever be able to say that forcing women to wear burqas is wrong?
Me: Because I think that right and wrong are a matter of increasing or decreasing well-being—and it is obvious that forcing half the population to live in cloth bags, and beating or killing them if they refuse, is not a good strategy for maximizing human wellbeing.
She: But that’s only your opinion.
Me: Okay … Let’s make it even simpler. What if we found a culture that ritually blinded every third child by literally plucking out his or her eyes at birth, would you then agree that we had found a culture that was needlessly diminishing human well-being?
She: It would depend on why they were doing it.
Me [slowly returning my eyebrows from the back of my head]: Let’s say they were doing it on the basis of religious superstition. In their scripture, God says, ‘Every third must walk in darkness.’
She: Then you could never say that they were wrong.”

-Sam Harris, The Moral Landscape

This is a passage from Sam Harris’s The Moral Landscape (2011). The book is controversial and very thought-provoking, both philosophically and practically, especially in its challenge to the liberal notions of the West. It has certainly changed my views of morality.

Namely, Harris argues that moral relativism has gone too far in our current world, and that it has allowed morally inferior practices (such as the burqa) to persist without serious criticism. He notes that several of these practices are especially difficult to criticize, because criticizing them would be considered offensive to religion. Moreover, because so many people associate morals with religion, it is difficult to seriously argue about what is right or wrong, again out of fear of being labeled offensive or intolerant. As a result, many moral issues are left unresolved because debating them is considered wrong.

Can One Culture Be Inferior?

Consider two societies that have the same moral code in all ways except that, as in the example earlier, one requires removing the eyes of every third-born while the other does not. Can we say that the former has an inferior culture? Maybe, maybe not. But according to Harris, this question has an answer, even though most of the world would say it does not. In our world, the tendency is to say that all cultures are equal, that they deserve the same respect, or something along those lines. We would be viewed as supremely intolerant if we said otherwise.

And yet, there are issues with this: Can we really view a culture that plucks out the eyes of third-borns out of tradition as an equal culture? What about a culture that condones slavery, or one that requires the burqa, or one that isn’t taken aback by suicide bombing? In the back of my mind, at least, I think such cultures can be viewed as wrong in those areas, but of course, it is an entirely different thing to say it publicly. (See what I did there?)

In the section “Moral Blindness in the Name of ‘Tolerance’”:

There are very practical concerns that follow from the glib idea that anyone is free to value anything—the most consequential being that it is precisely what allows highly educated, secular, and otherwise well-intentioned people to pause thoughtfully, and often interminably, before condemning practices like compulsory veiling, genital excision, bride burning, forced marriage, and the other cheerful products of alternative “morality” found elsewhere in the world. Fanciers of Hume’s is/ought distinction never seem to realize what the stakes are, and they do not see how abject failures of compassion are enabled by this intellectual “tolerance” of moral difference. While much of the debate on these issues must be had in academic terms, this is not merely an academic debate. There are girls getting their faces burned off with acid at this moment for daring to learn to read, or for not consenting to marry men they have never met, or even for the “crime” of getting raped. The amazing thing is that some Western intellectuals won’t even blink when asked to defend these practices on philosophical grounds. I once spoke at an academic conference on themes similar to those discussed here. Near the end of my lecture, I made what I thought would be a quite incontestable assertion: We already have good reason to believe that certain cultures are less suited to maximizing well-being than others. I cited the ruthless misogyny and religious bamboozlement of the Taliban as an example of a worldview that seems less than perfectly conducive to human flourishing.

As it turns out, to denigrate the Taliban at a scientific meeting is to court controversy. At the conclusion of my talk, I fell into debate with another invited speaker, who seemed, at first glance, to be very well positioned to reason effectively about the implications of science for our understanding of morality. In fact, this person has since been appointed to the President’s Commission for the Study of Bioethical Issues…. Here is a snippet of our conversation, more or less verbatim:

She: What makes you think that science will ever be able to say that forcing women to wear burqas is wrong?
…”

An Atheist’s View on Morality

This is in response to my previous article, “Ethical Dilemmas and Human Morality.” In that article I listed several questions in several situations and asked you, the reader, what you would do in each case. At the end, I promised to explain my own moral principles as well. So, this post is my own view of ethics and morality, from an atheist’s perspective.

What Is the End Goal?

First of all, what is the goal of morals? To create a better society is a satisfactory answer for many, but what then? If a nearly perfect society were to exist some time in the future, would morals still matter? My answer is yes.

I am optimistic about the future of humanity, and I hope there will be a time when humans can peacefully explore the stars, the galaxies, and the universe. When we are at that stage of civilization, we will be long past the petty conflicts that determine morals today.

Thus, a longer-term goal is needed. I propose the following primary objective:

  • To preserve life in the universe.

There is no purely logical reason to put this directive above all others. However, if we start with this assertion, that a universe with life is better than a universe with no life, then many moral questions can be answered in a systematic way.

A Moral Hierarchy

One systematic way to do this is to put morals into a hierarchy:

Levels
6. Preservation of Life
5. Preservation of Intelligent Species
4. Preservation of Diversity of Species
3. Preservation of Well-Being of the Species
2. Preservation of Self
1. The Following of Social Norms/Cultures/Religion/Laws
0. Natural Instinct and Personal Wants

The way to read this: for any action, start at the bottom and see if the action fits the statement at that level. An action is morally justified if it fits a given level and, to the best of your knowledge, does not contradict any higher level. Conversely, an action is morally wrong if it fails to fit the highest level that is relevant to your knowledge. (The examples below, and the code sketch after them, make this concrete.)

Examples

Perhaps this hierarchy is a bit confusing, so I will give a few examples.

Example 1: You see a dollar bill on the ground and nobody else is around. Is it right or wrong to take the dollar bill?

  • According to Level 0, taking the dollar bill is allowed. You go up one level, to Level 1, and the action is still allowed by society. You don’t believe it will affect any of the higher levels. So, the decision to take the bill is morally justified.

Example 2: Someone has $1,000. Is it morally right or wrong to steal the money from this person?

  • The action fits Level 0, but it fails at Level 1, as it is against the law. You do not believe it will affect any higher level. Since Level 1 is the highest relevant level to your knowledge, the action is morally wrong.

Example 3: Thousands of nuclear weapons around the world are about to explode, and the only way to stop them is to extract a certain code from a captured terrorist. However, the terrorist will not speak. Is it morally justified to torture the terrorist?

  • Torture is against social norms and the law, so the action fails at Level 1. But the higher levels are very relevant: the large number of nuclear detonations would kill billions, collapse ecosystems, and cause catastrophic changes to the environment. It would not only threaten human civilization (Level 3), but also wipe out many, many species (Level 4), and it could even wipe out humans entirely (Level 5). Thus, to preserve Levels 3, 4, and 5, the action is morally justified.

Example 4: An alien species is about to create a super-massive black hole that will devour millions of galaxies and eventually the whole universe. The only way to prevent this is to preemptively wipe out this alien species.

  • Killing the alien species is against the law, so the action fails at Level 1. Even worse, it would kill an entire species, an act of xenocide, so it fails at Level 4. However, it satisfies the highest objective, Level 6, as it prevents a case where all life in the universe could be destroyed. So, wiping out this alien species is morally justified.
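
This decision procedure is mechanical enough to sketch in code. Below is a minimal Python sketch (hypothetical code of my own, not part of the hierarchy itself) in which an action is described by two sets: the levels it fits and the levels it contradicts, each limited to what the actor actually knows. Examples 2 and 3 above serve as test cases.

# A minimal sketch of the hierarchy's decision procedure (hypothetical,
# for illustration only). Levels run from 0 (lowest) to 6 (highest).
LEVELS = [
    "Natural Instinct and Personal Wants",        # Level 0
    "Social Norms/Cultures/Religion/Laws",        # Level 1
    "Preservation of Self",                       # Level 2
    "Preservation of Well-Being of the Species",  # Level 3
    "Preservation of Diversity of Species",       # Level 4
    "Preservation of Intelligent Species",        # Level 5
    "Preservation of Life",                       # Level 6
]

def is_justified(satisfies, contradicts):
    """Judge an action: `satisfies` and `contradicts` are the sets of
    levels the action fits or works against, to the best of your knowledge."""
    if not satisfies:
        return False
    # The verdict comes from the highest level the action is known to touch:
    # justified if that level is satisfied, wrong if it is contradicted.
    return max(satisfies | contradicts) in satisfies

# Example 2: stealing $1,000 fits Level 0 but is against the law (Level 1).
assert not is_justified({0}, {1})        # morally wrong

# Example 3: torture fails Level 1 but preserves Levels 3, 4, and 5.
assert is_justified({0, 3, 4, 5}, {1})   # morally justified

The one property worth noting is that only the highest relevant level matters, which is exactly why the same act of breaking the law is wrong in Example 2 but justified in Example 3.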

Reasoning

The reasoning for each level is as follows:

  • Level 1 overrides Level 0: A society most likely has a better chance to function with rules than without them. This gives it a higher chance to advance.
  • Level 2 overrides Level 1: Individuals should be allowed to preserve their own lives regardless of what other people assert, as long as they believe the necessary actions do not contradict any of the higher levels. This is because an individual may discover truth that contradicts the rest of the society.
  • Level 3 overrides Level 2: It is justified for an individual to sacrifice one’s own life to improve the quality of living for the species. This increases the chance that the society will be able to preserve itself.
  • Level 4 overrides Level 3: It is justified to lower the quality of living of a species to preserve the diversity of life, i.e., the number of species. This way, if some catastrophe wipes out one species, there are a large number of species remaining to preserve life.
  • Level 5 overrides Level 4: An ecosystem has a better chance to survive if the most intelligent and advanced species is alive. For instance, if a massive asteroid is on a collision path with Earth, it will require Space Age technology (achieved only by humans) to preserve life on Earth, so humans are more important to Earth’s ecosystem than any other species.
  • Level 6 overrides Level 5: It is better for a technologically advanced species to sacrifice itself if doing so allows life to continue in a universe where it would otherwise be destroyed.

The Role of Knowledge

This hierarchy of morality is strange in that the determination of whether an action is morally justified depends partially on the knowledge of the individual.

For example, suppose that someone was brainwashed when young by a society or religion, and that it leads them to an action that contradicts one of the higher levels. On Earth, for instance, many of the popular religions commonly contradict Level 2 (Preservation of Self) and Level 3 (Preservation of Well-Being of the Species). Level 3 is particularly relevant in today’s age, when the understanding gained from stem cell research, particle accelerators, and evolution gives life on Earth a much higher chance of surviving potential global or cosmic catastrophes.

When someone who is brainwashed by a religion commits an act that contradicts Level 2 or 3, then according to this moral system, the person is not to blame—the fault is with the religion, and with the society for allowing that particular religion to be so pervasive.

Who Exactly Is to Blame?

Imagine a massive asteroid that will crash into Earth in the year 2050.

At the rate of advancement of our current technology, with a few years of advance warning, we as a species would be able to send multiple rockets armed with nuclear weapons to knock the asteroid off course, saving not only our own lives, but the lives of all species on Earth, and all of Earth’s children. But say religion had been more prominent and had delayed the onset of the Renaissance and the Scientific Revolution by just 100 years. Then when the asteroid hits, we would have only what we know as 1950s technology, and likely all of humanity, and all life on Earth, would be destroyed. Surely this would not be the fault of any person, but the fault of religion.


The corollary to this question is: what if an asteroid had crashed into Earth in the year 1850? There would have been absolutely nothing we humans could have done at that time to stop it. In that case, we could not blame anyone in that time period. Instead, we would blame the Dark Ages for practically halting the advancement of technology for a thousand years.

Ethics in Religion

If we value life, and if we want life to prosper in the universe, then humanity as a whole needs to adopt a new form of ethics. Maybe not the one above, but it must be one based on the existence and diversity of life, not on myths invented in the ancient past.

This is why, among religions, a tolerant religion such as Buddhism is better for the future of humanity than a heavily indoctrinated one such as Christianity or Islam. Religions of the latter category only claim to be “tolerant,” but in practice are often not; see Galileo, the Salem witch trials, or the recent anti-free-speech protests in the Middle East. These kinds of religions are fundamentally resistant to change, whereas truly tolerant religions are always open to it.

If science proves some belief of Buddhism wrong, then Buddhism will have to change.

-Dalai Lama

All the world’s major religions, with their emphasis on love, compassion, patience, tolerance, and forgiveness can and do promote inner values. But the reality of the world today is that grounding ethics in religion is no longer adequate. This is why I am increasingly convinced that the time has come to find a way of thinking about spirituality and ethics beyond religion altogether.

-Dalai Lama

Sure, the less tolerant religions may teach values they consider to be good, but for life to survive, sometimes the rules must adapt. Say a powerful alien species abducts you and gives you two options: (1) kill a fellow human, and the aliens will befriend the human race and help us advance, or (2) refuse to kill, and the aliens will destroy the entire Earth. You could blindly follow “Thou shalt not kill” as in option (2) and let all the millions of species on Earth die, or you could reason that the survival of millions of species, including your own, is more valuable than any single individual, and instead advance life as in option (1).

Some Concluding Remarks

To preserve life and to let it flourish through the stars, and eventually throughout the universe, we must use an ethics system that adapts to the given situation, not one that proclaims itself absolute and everlasting.

Some nations, particularly many of those in Europe, have already realized this. When the United States finally realizes this as well—and hopefully before it’s too late—the rest of humanity will follow, and then finally, the human species will be one of progress, discovery, and peace.

Ethical Dilemmas and Human Morality

Introduction

This article is the result of numerous debates I’ve had concerning ethics and morality. The debates were very friendly in nature, as we tried to pick each other’s brains. Sometimes we agreed on certain situations, other times we completely disagreed. It was very interesting to see the way different people view the world.

Some key information for the rest of the article:

  • The debates were largely conducted by creating hypothetical situations (thought experiments) and asking each other what we would do in such examples.
  • Often when one said X for a situation, we would try to adjust one variable to change the situation slightly, in order to see what exactly in the situation was important. This is sort of like the scientific method applied to ethics.
  • Most of the group was not highly religious. This post is written by an atheist.
  • The group consisted of Cornell students.
  • There was no name-calling or mocking in the debates. Disagreements were handled with civility.

Situation 1: The iPhone Return Dilemma

This is based on a real-life example. I could simplify the situation considerably, but because it really happened, I am including all the important details.

Suppose your income was largely based on buying and reselling iPhones for higher prices. That is, you could buy an iPhone for $400 and resell it on eBay for $800 to someone in a country where the iPhone is not sold.

Now, someone in South Africa buys your iPhone. There is a 14-day return policy; however, this 14-day count starts the moment the transaction is made. The iPhone takes 22 days to ship from the USA to South Africa. The buyer is aware of this. But when the iPhone arrives, the buyer finds that the iPhone does not work, and takes it to a local repair shop. The repair shop opens up the iPhone but cannot make it work, because the iPhone simply doesn’t work in South Africa. Five days after he receives the phone, the buyer emails you, demanding a refund.

Since it is now 27 days since the transaction, or 13 days past the return deadline, you do not respond to the email. The buyer opens up an official return investigation on eBay. A week later, eBay rules in your favor, stating that you are not obligated to provide a refund.

Now, without informing you, the buyer had shipped the iPhone back to you in the middle of the eBay investigation. Three weeks later, the iPhone arrives on your doorstep, a complete surprise to you. Five days later, the buyer emails you. He knows that eBay ruled in your favor, so instead of asking for the $800 back, he asks for the iPhone back. Since he voluntarily shipped the iPhone back to you, it is legally yours.

Questions

1. You are not legally obligated to return $800 or the iPhone. However, are you morally obligated to do so?

2. Would you return the iPhone?

3. The shipping fee from the USA to South Africa is $100. If you do feel obligated to return the iPhone, should you pay the $100 shipping fee yourself, or should you ask the buyer in South Africa to cover it?

4. If you choose not to return the $800 or the iPhone, then who is to blame for the buyer’s loss? Is it your fault or his own fault?

5. Instead of an $800 iPhone, what if the item cost $5 or $100,000? How would this affect your responses?

Situation 2: The Million Dollar Button

From this point on, all situations are strictly hypothetical.

You are in a room with a button. If you press the button, a random person in the world dies, but you gain one million dollars. Nobody else in the world knows about this room or the button, and nobody would know that you pressed the button.

This situation completely shocked many of us when it was first asked in the debate. Most people’s gut instinct was to say, “Of course not!” But is that actually what people would do? People might say “No” to maintain their reputation in a public setting, but deep down, would their answer be “Yes”?

Questions

1. Would you press the button?

2. Suppose you know 5 people who are homeless and jobless. If you press the button, you could give them $200,000 each. Would you press the button?

3. Instead of one random person in the world dying, one random convicted criminal in the world dies. Would you press the button?

4. Instead of one million dollars, you gain one billion dollars. You could donate massive amounts to charities and fund scientific research to cure diseases. Would you press the button?

5. Instead of a random person dying, a random person goes into a coma for a month. Would you press the button?

6. If the answer to any of the questions was yes, then how many times would you press it?

Situation 3: The Doomsday Asteroid

In the future, there is a nuclear-powered manned spaceship in the outer solar system. Scientists detect an asteroid heading toward Earth with a 100% probability of impact. The asteroid is large enough to annihilate all of human civilization, kill billions, and set humanity back to the Stone Age. The only way to deflect the asteroid before it gets too close is for the crew of the manned spaceship to fly the craft into the asteroid on a suicide run, destroying it with the ship’s nuclear power. There is one person on the ship.

Questions

1. If you were the captain of the ship, would you be morally obligated to send the ship into the asteroid?

2. Suppose instead you are the director of NASA, back on Earth. The captain on board the ship refuses to impact the asteroid, even though it is the only hope to maintain current human civilization. You can issue an order to the ship itself that places the ship on computer autopilot, so that the captain cannot control the ship. Should you autopilot the ship into the asteroid?

3. Suppose that, at the time of the decision, there is only a 10% probability of the asteroid hitting Earth. However, we will not know for certain whether it will hit until it gets close enough to be unstoppable. Should the captain preemptively crash the ship into the asteroid, before it gets that close?

4. Suppose that the asteroid is large enough to annihilate not only human civilization, but also all life on Earth. Does this change the answer to any of the other questions?

5. Suppose that instead of there being 1 person on board, there are 1,000 people aboard that spaceship. Does this change any of your answers?

Situation 4: Alien Attackers

You are the president of the United States. An advanced alien race is about to attack the Earth, but before they do, they snatch you on board and give you two options. Option 1 is for you to kill half of humanity, and the aliens will leave Earth alone. Option 2 is to let the aliens destroy all of humanity. There is no hope of beating the alien technology.

Questions

1. Which option would you take?

2. Suppose Option 1 were, instead of you killing half of humanity, you let the aliens kill half of humanity. Does this change the answer?

3. Suppose Option 1 were, instead of half of humanity, 99% of humanity. Does this change the answer?

4. Suppose Option 1 were, instead of half of humanity, 1% of humanity. Does this change the answer?

5. Suppose human technology is actually far more advanced than it is now, and the best human military analysts claim there is a 5% chance of repelling the alien attack if humanity takes Option 2. Which option do you take?

Situation 5: The Million Dollar Button, Version 2

The following is the same as Situation 2: The Million Dollar Button. However, the variations are different.

“You are in a room with a button. If you press the button, a random person in the world dies, but you gain one million dollars. Nobody else in the world knows about this room or the button, and nobody would know that you pressed the button.”

Questions

1. Instead of a random human in the world dying, a random dog in the world dies. Would you press the button?

2. Instead of a random human, it is a random cat. Would you press the button?

3. Instead of a random human, it is a random fly. Would you press the button?

4. You are not the one pressing the button. Someone else is about to press it, but you have a special button that delivers an electric shock to the other person, stopping them from hitting their button. Knowing that the other person is just about to press their button, should you push your special shock button?

5. When you press the button, a random person in the world dies, but you gain one million dollars AND a random person in the world who has cancer is suddenly cured of cancer. Would you press the button?

6. If you said no in the previous case, what if 100 people were suddenly cured of cancer?

Results

Some people had a view that it is always wrong to take away someone’s freedom. Such people of course said “No” in the button example, but surprisingly, they also said “No” in the spaceship example with the NASA director. They said that if the crew refused to crash into the asteroid, it is wrong for someone on Earth to force them into doing so, even if it is the only way to save humanity.

Some, especially religious people, were okay with pressing the button because they viewed humans as “inherently evil,” so they had no problem terminating a random human’s life. I personally found this view to be quite scary!

1. The iPhone Return

Everyone agreed that you are not morally obligated to return the $800. However, there was disagreement over whether to return the iPhone to the buyer. Those claiming there is no moral obligation argued that the buyer should have been more careful with his money, while those claiming there is a moral obligation argued from intention: the buyer did not intend to simply give back the iPhone for no refund.

2. The Million Dollar Button

Depending on the situation, most people found some case where it was justifiable to press the button. As said above, some religious people justified it by saying humans are “inherently evil.” Those of a utilitarian view justified it by saying the million dollars could go towards good purposes and advance the human race better than an average person could.

3. The Doomsday Asteroid

Most people agreed that in almost all cases, the ship should crash into the asteroid. To my surprise, some said that even if the asteroid were guaranteed to wipe out all life on Earth, NASA should not force the ship via autopilot to crash into the asteroid if the captain refused. I view stopping the asteroid as a human imperative: not only would we be ending our own species, but also millions of others on Earth. The survival of millions of species is far more important than the decision of one individual of one species.

4. Alien Attackers

Some people preferred letting the aliens kill all of humanity rather than killing half of humanity themselves. For them, the decision rested on who was doing the killing. When the question was rephrased as letting the aliens kill 50% of humanity versus letting the aliens kill 100% of humanity, the answer was unanimously to let the aliens kill 50%. But when we ourselves had to kill the 50%, some people would rather let the aliens kill 100%.

5. The Million Dollar Button, Version 2

Most people were more likely to press the button in the animal cases than in the human case. However, some would rather have a human die than a dog. These were the same people who, in Situation 2, claimed it was justifiable to let a human die because humans are inherently evil; they claimed that dogs are not inherently evil, and so would not press the button in the case of a dog.

My Own Perspective

Overall I think the survival of the species is more important than the life of any one particular individual of the species. I may have hinted at this a few times in the article. But I will write a post specifically on my own views of morality later on.

What would you do in these situations?