What I believe, part 1: Utilitarianism
Introducing a new series of posts about my ethical beliefs
Happy new year, and welcome back to Sunyshore!
This is the first in a new series of posts about the foundations of my ethical and political worldview. Currently, I support effective altruism, which uses reason and evidence to benefit humans and other sentient beings as much as possible. At the level of public policy, I identify foremost as a social liberal—I support liberal democracy and largely free markets, together with government intervention to reduce inequalities and provide public goods.
I expect my beliefs to change over time—they fluctuate from day to day depending on what I learn and experience—and this post is just a snapshot of my beliefs in the present moment. It may not reflect my beliefs a year from now.
I intend to cover a lot of topics in this series, ranging from economic systems to technological progress. In this first post, I will discuss utilitarianism and the foundations of my worldview.
So, let’s get to it!
Moral agents and moral patients
First, let me explain what I mean by “moral agent” and “moral patient,” since I will be using these terms throughout this post and future posts on ethics. These terms are seldom used outside of moral philosophy and are often conflated into the single concept of “moral personhood.”
Moral patients are beings whose welfare (pleasure minus suffering) is morally relevant. To me, moral patienthood requires both sentience (the ability to have feelings) and qualia (conscious experience).
Moral agents are beings whose actions are morally relevant. Moral agency requires the ability to reason about one’s actions, so that one can be held morally responsible for them.
Humans are moral patients because they can experience emotions such as pain and pleasure, and moral agents because they can reason about and take moral responsibility for their actions. Autonomous robots are moral agents because they reason about the effects of their actions on the real world, but they are not moral patients because they lack sentience and conscious experience. By contrast, some non-human animals, such as chickens and cattle, are moral patients because they experience pleasure and pain, but they are not moral agents because they cannot meaningfully be held responsible for their actions; humans have no way to communicate moral reasons to them.
The veil of ignorance
In this section, I present an argument for why I believe total utilitarianism—which aims to maximize the total well-being of all moral patients—is closest to the correct ethical theory.
The original position is a well-known thought experiment in ethics, in which members of a society are given a chance to decide how that society should work, like a role-playing video game in which players decide on the game mechanics before they start playing. The players deliberate behind a veil of ignorance: they know nothing in advance about who they will be, including their social status, race, ethnicity, gender, or where and when they will be born. Because players negotiate from a position of ignorance about their specific stations in the resulting society, they must deliberate impartially, as if any of them could end up as the richest person or the poorest person; a light- or dark-skinned person; an able-bodied or disabled person; a person born with male, female, or intersex reproductive traits.
The most famous version of the veil of ignorance was developed by philosopher John Rawls in his book, A Theory of Justice (1971). However, Rawls borrowed this concept from previous thinkers, including philosopher Immanuel Kant and economist John Harsanyi. Harsanyi argued that people deliberating behind the veil of ignorance would design their society to maximize their expected utility, which he identified with average utility.
But wait. Does expected utility really mean average utility? Average utility refers to the average welfare of moral patients, whereas total utility also depends on the number of patients that exist. Depending on the society chosen by our players in the original position, different numbers of people will be instantiated. For example, if humanity goes extinct by 2100, then anyone slated to be born after 2100 will not exist. If everyone prefers to exist, then those people will prefer a world in which humanity survives past 2100. In general, each person will want to maximize the total utility of everyone instantiated: their expected utility is the probability that they will exist multiplied by the average utility of the people who do exist, which is proportional to total utility.
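To make the arithmetic explicit, here is a minimal formalization of this Harsanyi-style argument (my own sketch; in particular, assigning nonexistence a welfare of zero is itself a substantive assumption). Suppose the veil hides $N$ possible people, of whom $n$ will actually exist with welfare levels $u_1, \dots, u_n$, and each possible person is equally likely to occupy any role:

$$\mathbb{E}[u] = \underbrace{\frac{n}{N}}_{\Pr(\text{exist})} \cdot \underbrace{\frac{1}{n} \sum_{i=1}^{n} u_i}_{\text{average utility}} + \frac{N-n}{N} \cdot 0 = \frac{1}{N} \sum_{i=1}^{n} u_i.$$

Since $N$ is fixed before any society is chosen, maximizing expected utility is equivalent to maximizing total utility, and it diverges from maximizing average utility exactly when different societies instantiate different numbers of people.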
Similarly, we can show that the players will want to maximize the utility of all moral patients, not just human beings. Even though the players are capable of reasoning (and thus moral agency) while in the original position, they could be instantiated as humans or non-human animals, with or without moral agency. All players have a stake in the decision-making process whether or not they end up as moral agents.
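As a toy illustration of this calculation (entirely my own, with made-up welfare numbers and a hypothetical pool size), here is how a player behind the veil might score two candidate societies, counting both humans and non-human animals as roles they could be born into:

```python
# Toy veil-of-ignorance calculator. All welfare numbers are hypothetical
# illustrations, not empirical estimates.

N_POSSIBLE = 1000  # fixed pool of possible moral patients behind the veil

# Each candidate society instantiates some of the possible patients,
# each with a welfare level (pleasure minus suffering).
societies = {
    # fewer patients, each very well off
    "small and happy": [9.0] * 100,
    # more patients (say, humans plus farmed animals), moderately well off
    "large and decent": [6.0] * 400 + [3.0] * 200,
}

def expected_utility(welfares, n_possible):
    """Expected welfare of a random possible patient:
    Pr(existing) * average welfare, with nonexistence counted as 0."""
    return sum(welfares) / n_possible  # equals total utility / n_possible

for name, welfares in societies.items():
    total = sum(welfares)
    average = total / len(welfares)
    print(f"{name}: total={total:.0f}, average={average:.2f}, "
          f"expected={expected_utility(welfares, N_POSSIBLE):.3f}")

# "large and decent" wins on total and expected utility even though
# "small and happy" wins on average utility: behind the veil, the
# expected-utility ranking matches the total-utility ranking.
```

Note the design choice doing the work here: nonexistent patients contribute zero welfare, so ranking societies by expected utility tracks total utility rather than average utility.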
Utilitarianism in the real world
Based on my (non-expert) knowledge of the social sciences, especially economics and political science, I think that a society with maximum total utility would have the following characteristics:
It would avoid unnecessary suffering and violence. Thus, it would provide for everyone’s safety while avoiding excessive or discriminatory punishments, and it would be free from war and armed conflict.
It would tolerate various ways of living in terms of religion, political belief, culture, sexuality, and so on; and it would be free from prejudice and discrimination based on morally irrelevant features like race and gender.
It would have a globalized, free-market economy (to promote economic efficiency and growth) with an effective welfare state (to limit inequality). Both markets and government intervention would work together to eliminate poverty and create wealth for all.
It would protect non-human animals and the natural and built environments, since everyone benefits from clean air, clean water, and a good climate.
It would protect humanity from existential risks, such as biological and nuclear weapons, so that humanity can survive and flourish for thousands, if not millions, of years.
It would have mechanisms to make progress and address new challenges. Thus, it would have inclusive, democratic institutions, as well as freedom of speech and assembly, so that people can openly propose and debate ideas for improvement.
In short, such a society would embrace economic, social, and political liberalism. It would be an open society in which everyone can fully participate, free from discrimination and violence. But the real world is full of suffering and injustice, even as it has improved so much in the last 200 years. How can we build a better world?
Countless intellectuals and social movements have dedicated themselves to improving the world. One such movement is liberalism, a diverse political movement that aims to achieve such goals as securing civil and political rights and creating shared prosperity through the reform of political and economic institutions. Liberalism came of age in the early 19th century, and it has come to dominate modern politics. More recently, the effective altruism movement has been applying careful reasoning and evidence to figure out how to help others as effectively as possible. Its successes include GiveWell and Open Philanthropy, and its evidence-driven approach has much in common with major funders such as the Gates Foundation.
I plan to write more posts about how we can improve the world, drawing from both the liberal and effective altruist traditions—be sure to subscribe so you’ll receive them. Also, if you like this post, please share it with your friends.
In the meantime, you can learn more about utilitarianism at Utilitarianism.net, a website co-written by William MacAskill, a philosophy professor at Oxford and one of the founders of effective altruism, and Darius Meissner, an Oxford student and fellow member of the EA community.
Take care!
> However, Rawls borrowed this concept from previous thinkers, including philosopher Immanuel Kant and economist John Harsanyi.
I think the credit should maybe go more to Hume than Kant since, imo, the original position is a reformulation of the ideal observer theory.
> In general, each person will want to maximize the total utility
Pursuing expected total utility leads to problems such as 'the repugnant conclusion' and 'the wagering calamity objection': https://bobjacobs.substack.com/p/the-wagering-calamity-objection-to?utm_source=profile&utm_medium=reader2
Comments:
1. one well-known EA person wrote a paper which i interpreted as saying that if we want to reduce or end wild animal suffering, the best thing to do would not be to protect their habitats (unless they benefit humans) but rather to just wipe them out: put them out of their misery as uncivilized brutes. they have no hope of ever getting tenure at Oxford.
total utility (and/or average, depending on how you calculate it, e.g. additivity vs. multiplicativity/nonlinearity) would increase by, say, turning all national parks into dumps, mining and drilling areas, and shopping malls loaded with fun things to buy.
2. a lot of economists, i think, have been pretty slack when dealing with 'utility functions'.
e.g. some still seem to view it as a zero-sum game: 1 billionaire = 1 billion people with $1 each.
3. i don't think sentience has ever been proven. in quantum theory, Schrödinger did discuss whether electrons had minds. some ask if numbers have feelings. in my theory some special ones do.
qualia, on the other hand, i think have been isolated as well as synthesized in some labs and are sold in stores.
4. EA people do seem to permit free speech, but also seem to endorse the view that they don't have to listen to speech they don't want to hear.
to biologists, altruism doesn't really exist except in people's minds.
while violence reduction seems optimal and common sense, humans love war movies, etc., and these provide some well-paying jobs.