The physics of morality
Einstein said "God does not play dice" as a critique of quantum physics. But now, most physicists fully accept quantum physics, and free will along with it, as gospel. If you take these ideas seriously enough, you can end up believing that consciousness arises from quantum effects, since these effects aren't deterministic. Roger Penrose argues "that consciousness is the result of quantum gravity effects in microtubules" (source).
If the universe is deterministic, doesn't this absolve us of responsibility for our actions? Without free will, how do you hold people morally accountable? And if our actions are instead determined by unpredictable randomness at the quantum level, we again have no reason to hold anyone morally accountable for what they do.
Then again, if the universe will burn up in heat death, does anything we do matter?
There’s no morality without knowledge? There’s no knowledge without morality?
Not everyone believes you can derive statements about right and wrong from statements of fact about the universe. This is the is-ought problem, and the thinkers behind it are giants like Hume. It's clear that science does have something to contribute to our understanding of what's right and wrong, but it's hard to say how the two should interact. Why would it be bad or wrong to look at the facts and use them to decide what we should do? Isn't this the most rational way to be?
Then again, without a moral society where people don't lie to each other, don't steal, and so on, how do we get physics in the first place? How do we even get to the point where we can contemplate the implications of quantum physics on morality?
Yes, but doesn't knowledge still come first? Is it possible to be moral if you're a brain in a vat, with no input from the outside world? And even if you did have input, wouldn't it still mean nothing if you didn't also have "write access" to the outside world? Suppose you had write access to the outside world, but no read access. Is it possible to do evil then?
The first sin in Genesis is committed when someone eats the wrong fruit. Nobody stole from anyone; there was no murder or rape. As far as sin goes, the trespass of eating a bad fruit is about as tame as it gets. Genesis seems to claim that morality comes into play once certain prerequisites are met. I'm mystified as to why God didn't punish Eve right away. Why did he wait until the fruit was eaten by Adam? The implication seems to be that an act has moral weight when it affects someone else. Then again, both Adam and Eve consented to eating the fruit, and nobody else was affected. And let's not forget they're punished for eating a piece of fruit. The implication is that our interaction with the outside world carries moral weight, even if the thing we're interacting with is a fruit. And the Bible claims such crimes are punishable by death. But what if God had instead told them not to kick a rock? It seems God is only concerned with living things.
Engineering as a moral act
Noah was saved by building a boat. "…the Lord saw that the wickedness of man was great in the earth, and that every intent of the thoughts of his heart was only evil continually" (Genesis 6:5). The implication is that engineering work can have moral merit. This time, instead of someone being condemned for eating from the wrong tree, we have someone rewarded for cutting trees down to make a boat.
It's nearly impossible to disconnect engineering and physics from morality. Every creation has moral implications. Nuclear weapons can kill us all. Encryption allows ordinary people to send private messages over long distances. Pigments allow women to improve their appearance. Guns allow the weak to tyrannize the strong.
Technology obscures and creates distance between cause and effect
Many technologies put distance between cause and effect. Drones allow you to remotely bomb people on the other side of the world. Twitter allows me to control which pixels are on and off on someone's phone, even if they're on the other side of the planet. And these pixels contain messages that can be permanently embedded in your mind, possibly causing you to make decisions based on that information. My goal is to embed axioms so deeply into your psyche that I can control, and thereby predict, what you'll do next. Free speech is dangerous, and it's amazing that it's regulated as lightly as it is.
Technological distance protects us. If my missiles can reach you, and your missiles can't reach me, then I'm safer than you are. If your country restricts free speech, and mine doesn't, then I can cook up memes so potent that people in your country incinerate on contact with them. Suppose I invent something seductive like Marxism. You can end up following a set of ideas that are quite tempting but may wreck your country. Or maybe your country would have been wrecked anyway, and Marxism was preferable to the alternatives; it's hard to know.
Technology can obscure our moral sense. If I have technology and you don't, I can use my technology to take your things. I may be willfully blind to the immorality of my actions because I don't suffer their consequences. What's the difference between conquering and stealing?
Waging war when you're at a technological disadvantage: the morality of suicide bombings
Aren't suicide bombings inherently more ethical than drone bombings? The suicide bomber puts his life on the line. He has skin in the game. He's not hiding from the consequences of his actions. There's no distance between cause and effect for him. He can't lie to himself about the violence he causes others to suffer. Terrorism is how the weak fight the strong, since they don't have access to more powerful tools like nuclear bombs, satellites, chip manufacturing, and so on. And yet, terrorism is also awful. If you're going to defend terrorism, you can find yourself defending school shooters on principle.
It's intuitively clear to us that morality doesn't just hinge on minimizing the distance between cause and consequence, or on having skin in the game. It also matters that we are circumspect about our use of force.
Is it immoral to work for a weapons manufacturer? If so, what exactly is and isn’t a weapon? Is nuclear research inherently weapons research? Isn’t all tech arguably weapons tech?
Privacy and morality
What about non-fatal force? What about the right of the state to punish people? What about jails? Isn’t solitary confinement inherently immoral since it hides the evil crimes of the state against the individual from public view? If the government is going to condemn someone, shouldn’t this be done in plain view for all to see? But if we go down this road, what’s stopping us from getting back to public hangings?
The USSR and the Nazis had secret police. Prisons and gas chambers hide evil. A work of engineering can be evil, even if you can technically describe the result as “affordable housing”.
Governments gain from being able to surveil their citizens. Evil likes to hide where it can't be seen. This doesn't mean that everything done in public is good, and everything done in private is bad. If you get social credit for being moral, then your act can't be purely altruistic. Altruism requires privacy. On some level, encryption creates the space for both moral and immoral behavior. Then again, it's hard to argue that real altruism exists at all: we feel good when we do something for others.
Conclusion
Let's conclude this whirlwind tour of the moral landscape and how it interacts with the physical. Morality depends on knowledge. Knowledge depends on morality. Morality requires both read and write access to the world outside our conscious experience. Technology increases the distance and scale of our write access to the universe. All technology has moral implications, but some technologies (like weapons) have a narrower moral scope than others. If our technology can reach others, and their technology can't reach us, then we're physically protected. But this physical protection doesn't protect us morally; we are still not immune to moral attack. Technology can democratize the use of force, but it can also increase the power of the state over individuals. Altruism requires privacy, but it's hard to say that true altruism exists.
The primitives at play are knowledge, action, and the moral choices we make about what we choose to learn and what we choose to do. We're all ultimately trying to succeed in the world. Different choices "make sense" for people in different situations. Sometimes, the choices that "make sense" for one person end up being terrible for everyone else. Ultimately, the arc of history bends toward justice. Increased government surveillance leaves fewer places for bad people to hide; the democratization of force lets antisocial people hurt others; and so on. Technology buffers us against the consequences of our actions and makes social freedoms less catastrophic. In the end, people's true selves are economically brought to the surface.