Ethics and Philosophy: A Defense of Egoism
In my previous post about ethics, I explored the normative question through the works of Christine Korsgaard. I concluded that ethics are not normative, or at least that ethics have no collective normativity, and I rejected Korsgaard's theory that we may ethically obligate others.
This post will attempt to provide an overview of how I do see ethics, and will primarily discuss the works of Dan Fincke.
I. Ethics are About Self-Interest
In my previous post, I rejected the strongest arguments that I could find in favor of normativity, and specifically in favor of the idea that we have ethical duties to others. What remains, then, are duties to ourselves. One of Korsgaard's arguments that I do find convincing is the idea that anyone who takes an action must have a motivation. Everyone wants something. Everyone has desires. If we had no desires, we would never do anything, and so the goal of any action is to satisfy one or more of those desires. It is therefore universal among all creatures that take actions: we have desires we wish to satisfy.
Because we have no moral obligations toward others, but we all wish to satisfy our own desires, ethics becomes a discussion regarding how to best meet our own desires. This way of thinking about ethics is commonly known as egoism.
Within an egoist framework, what is "good" is whatever best satisfies our own subjective desires. Our desires are complicated and dependent on various factors, but for purposes of this discussion, the term "well-being" will be used to refer to the satisfaction of our subjective desires, whatever they may be.
Dan Fincke agrees that our well-being is the appropriate starting point for an ethical system:
Ultimately, I think that justifying my interest in a good is going to require, on the most fundamental level, reference to my own egoistic good. My own thriving is the most fundamental, intrinsic, and unavoidably objective good I have.
Once we recognize that we have no moral duties to anyone or anything outside of our own subjective self-interest, the rest of the conversation is about how best to satisfy our desires.
II. Empowering Ourselves by Empowering Others
Dan Fincke sees the ultimate good as empowerment. In his ethics, what is "good" is whatever increases our power:
"What best advances our functioning, best advances our being, and is thereby our objectively greatest interest. This can theoretically be determined according to facts about the nature of our characteristic functioning and facts about what effectively constitutes or advances that functioning the most."
My disagreements with this approach are discussed below. However, if "power" is replaced with "well-being" (as defined above), I mostly agree with everything else Fincke has to say on the subject. In particular, I completely agree with Fincke that increasing our own well-being relies in part on increasing the well-being of others:
"On the purely egoistic level, the development of our own powerful functioning depends to an incalculable extent on others’ flourishing. To maximally realize our potential, we need the conditions of stability and prosperity which others’ thriving creates and sustains for us and we need the cultivation of our powers by those already powerful who can advance us far beyond where we would ever have been in isolation and make it so that our own efforts can attain to even greater extents than would otherwise have been possible."
I've previously written about how I support feminism out of self-interest. This principle holds true in many areas. In most circumstances, my well-being will best be served by increasing everyone's well-being. I have no desire to see others suffer, and tend to take substantial joy in seeing others doing well. My more material goals are likewise accomplished by increasing everyone's well-being. I make money by empowering my employer to profit from my labor. I purchase things from people and organizations who would rather have the money paid than the product purchased. Many of the ways my life could be improved require society itself to improve, which benefits everyone. Most of the political changes I support would benefit the majority of people affected. Life is not a zero-sum game. In almost all ways I can think of, improvements in my well-being involve improvements in the lives of others.
Further, my strongest attachments include a desire on my part for others to have their desires met. Part of attachment, for me, is a sort of convergence between my own well-being and the well-being of another person. At baseline, I have a weak attachment like this with every other person in the world. Imagining or being exposed to unhappy people makes me unhappy. Imagining or being exposed to happy people makes me happy. Therefore, I have a strong incentive to empower others to be happy for purely egoist reasons.
III. Newcomb's Problem, or Honesty Is the Best Policy
Dan Fincke discusses how much of morality is the process of sacrificing short-term gains for larger or more long-term gains:
What I think is ultimately happening in morality is that it is overriding our misperception of our interests and our tendencies to subjectively desire in short term and micro level ways, in order to fulfill our ultimate interests on the macro and long term level, considering our good from a third person standard of what maximizes our total power.
With this in mind, there is a strong argument that egoism is best served by, in most circumstances, conforming to virtue ethics. This argument can be illustrated by Newcomb's Problem:
In Newcomb's problem, a superintelligence called Omega shows you two boxes, A and B, and offers you the choice of taking only box A, or both boxes A and B. Omega has put $1,000 in box B. If Omega thinks you will take box A only, he has put $1,000,000 in it. Otherwise he has left it empty. Omega has played this game many times, and has never been wrong in his predictions about whether someone will take both boxes or not.
This is referred to as a "problem" or a "paradox" because, once the boxes have been filled, nothing we do can affect what is in them. So long as Omega thought we would take only one box, we are free to take both boxes and reap the profits. However, if Omega correctly predicted our behavior, taking both boxes means there would be only $1,000 waiting for us. Winning the game, then, is not about the choice made at the moment of decision; to win, one must be the type of person who would take only one box. Newcomb's problem does not depend on what you do. It depends on who you are.
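The payoff structure can be sketched numerically. Below is a minimal model of the game, where the predictor's accuracy is a hypothetical parameter p, and we compare the expected take for a committed one-boxer against a committed two-boxer:

```python
# Expected payoffs in Newcomb's problem, using the amounts from the post:
# box B always holds $1,000; box A holds $1,000,000 only if the predictor
# expected a one-boxer. The accuracy "p" is a hypothetical parameter.

def expected_payoff(one_boxer: bool, p: float) -> float:
    if one_boxer:
        # With probability p, the predictor foresaw one-boxing and filled box A.
        return p * 1_000_000
    else:
        # A two-boxer always gets box B, plus box A only if the predictor erred.
        return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.9, 0.99):
    print(p, expected_payoff(True, p), expected_payoff(False, p))
```

Even a merely 90%-accurate predictor gives the committed one-boxer an expected $900,000 against the two-boxer's $101,000, and the gap widens as prediction improves; being the kind of person who one-boxes is what pays.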
The relevance is that life is made up of many situations which resemble Newcomb's problem:
Most real decisions that humans face are Newcomblike whenever other humans are involved. People are automatically reading unconscious or unintentional signals and using these to build models of how you make choices, and they're using those models to make their choices.
[...]
I know at least two people who are unreliable and untrustworthy, and who blame the fact that they can't hold down jobs (and that nobody cuts them any slack) on bad luck rather than on their own demeanors. Both consistently believe that they are taking the best available action whenever they act unreliable and untrustworthy. Both brush off the idea of "becoming a sucker". Neither of them is capable of acting unreliable while signaling reliability. Both of them would benefit from actually becoming trustworthy.
[...]
You can't reliably signal trustworthiness without actually being trustworthy. You can't reliably be charismatic without actually caring about people. You can't easily signal confidence without becoming confident. Someone who cannot represent these arguments may find that many of the benefits of trustworthiness, charisma, and confidence are unavailable to them.
Because life resembles Newcomb's problem, people have strong incentives to behave in ways that are seen as virtuous, as those behaviors are generally rewarded, and "bad" behaviors punished. If society is doing its job, there is no need to appeal to a higher morality to encourage people to behave in prosocial ways. Rational actors will recognize that it is in their best interest to do so.
All these tools can be fooled, of course. First impressions are often wrong. Con-men often seem trustworthy, and honest shy people can seem unworthy of trust. However, all of this social data is at least correlated with the truth, and that's all we need.
It doesn't matter that Omega isn't real. Overall, the best way to gain the social benefits of appearing virtuous is to be virtuous. In my estimation, the gains of doing so outweigh any short-term gains that one can obtain by taking advantage of others. Malicious behavior, in most circumstances, is ultimately self-defeating.
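The claim that honest virtue out-earns profitable cheating can be sketched with a toy simulation. All the numbers below are hypothetical: observers read a noisy signal correlated with an agent's true disposition and only offer deals to those who seem trustworthy, while cheating pays more per deal than honesty does.

```python
import random

# Toy model (all parameters hypothetical): each round, an observer reads a
# noisy signal of an agent's true reliability and offers a deal only when the
# agent seems trustworthy. Honest agents earn a modest gain per deal; cheaters
# grab a larger one-off payoff, but their true disposition leaks through the
# signal, so far fewer deals ever come their way.

def lifetime_payoff(honest: bool, rounds: int = 10_000,
                    signal_accuracy: float = 0.8, seed: int = 0) -> float:
    rng = random.Random(seed)
    total = 0.0
    for _ in range(rounds):
        # The signal matches the agent's true disposition with probability
        # signal_accuracy; otherwise the observer misreads them.
        seems_trustworthy = (rng.random() < signal_accuracy) == honest
        if seems_trustworthy:
            total += 2.0 if honest else 5.0  # cheating pays more per deal
    return total

print(lifetime_payoff(True), lifetime_payoff(False))
```

With an 80%-accurate signal, the honest agent closes roughly 80% of deals at the smaller payoff while the cheater closes only the ~20% where the signal misfires, so honesty wins in the long run; the result holds as long as the signal is even modestly correlated with the truth, which is the essay's point.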
From that standpoint, the main goal of society is to make sure that it is in everyone's best interest to behave in prosocial ways. Society must reward virtue and punish vice. To build a world that is beneficial for all, society must keep incentives properly aligned.
IV. Well-Being, Not Power, Is the Goal
Because the goodness of an action is determined by our own subjective self-interest, goodness is dependent upon our own motivation. This idea is my primary area of disagreement with Dan Fincke. Fincke advocates for empowerment as the ultimate good:
But pleasures and pains or consciously formed preference attitudes, etc. are not themselves “conferrers” of goodness on things. Goodness is intrinsic and our pleasures, pains, attitudes, reasoned judgments, can either effectively align with our objective goods and contribute to maximizing our attainment of them or fail to do so.
I disagree. Fincke's empowerment ethics rely on the idea that functioning is a good in itself. In the same way that a "good hammer" is effective at pounding nails, Fincke feels that a "good person" is effective at expressing their humanity. Human powers consist of "rational powers, emotional powers, social powers, technological powers, artistic powers, physical powers, and sexual powers" with associated sub-powers. Fincke's argument is completely internally consistent, but I don't find it convincing because I don't think humans have a purpose.
A good hammer is effective at pounding nails because people designed hammers for that purpose. It's reliant on the idea that a person using the hammer desires to pound a nail, and its goodness is derivative of that desire. If nobody wanted to pound nails, it would not be good for a hammer to be effective at that task.
Similarly, human powers are only good because people want to exercise them. If people do not desire to exercise their powers, then doing so has no intrinsic goodness. All the goodness in an action is derivative of the desires of those affected. This goes back to my original argument - that we all have motivations, and that the only reason we act is to satisfy those motivations. It's not that satisfying our motivations is intrinsically good. It's that, no matter what we may tell ourselves or others, satisfying our motivations is the only thing that causes us to take actions. Satisfying our own subjective, egoistic desires is our goal, no matter how we choose to conceptualize it. So, for each individual, what is "good" is what satisfies our desires.
Ethical dilemmas, then, are places where a single person's subjective desires conflict. I may want a fancy car, but I also may want a healthy bank account. To resolve the conflict, I need to decide which I want more. Similarly, I don't want to take the trash out, but I also want my wife to be happy. A resolution of that conflict requires me to estimate the effect that my actions will have on both me and my wife, and decide which I want more.
People have these kinds of dilemmas all of the time, and we are notoriously bad at acting in our own self-interest. While it's up to each individual to decide for themselves what is in their own self-interest, I'm partial to the idea that the degree to which something satisfies our desires is a fact about the universe, and could be measured, given enough information. AI researchers refer to a concept called coherent extrapolated volition:
In calculating CEV, an AI would predict what an idealized version of us would want, "if we knew more, thought faster, were more the people we wished we were, had grown up farther together".
Obviously, this sort of thing is impossible to measure, given our current level of technology and understanding of the brain, but I support the idea that our subjective desires are not always what we think they are, and that a lot of our thinking about ethics should be thinking about what we actually want.
V. Implications
My vision of egoism is functionally very similar to R.M. Hare's two-level utilitarianism, which starts from utilitarian ethics, but concludes that, in most situations, it's best to operate according to a series of heuristics, and that actually trying to estimate the full effect of our actions should be reserved for special circumstances (or for the process of selecting heuristics).
My egoism works in a similar way. As a general rule, one is encouraged to adopt the heuristics that benefit all of humanity, as those are likely the ones that benefit the individual as well. One is encouraged to be a virtuous person, as society generally rewards the virtuous and punishes those seen as wicked.
However, there are important distinctions, the most important of which is the understanding that there is no such thing as moral superiority. When one understands that the most moral thing is to act in our own self-interest, and that everyone is attempting to act in their own self-interest (even if they are doing a bad job), it is unreasonable to feel morally superior to another person. It is likewise unreasonable to feel morally inferior. Such concepts become incoherent.
From this standpoint, it is easy to see that nobody "deserves" any more or less happiness than anyone else. This has important implications for the justice system, which tends to include an element of retribution, or the idea that it is important to punish bad acts based on how intrinsically bad they are. From an egoist perspective, the only purpose of rewarding or punishing behavior is to affect future behavior, and all such rewards or punishments are measured by their effectiveness at doing so. This attitude would quickly lead to the wholesale reform of our prison system and the end of most forms of incarceration (as it is ineffective at preventing recidivism). It would also lead to a lot less moral condemnation and righteous anger, as moral disagreements would instead be seen as simple differences in preference and not high-minded judgments of a person's value. The strongest statement a person could make about someone else's morality is "I want something different," or "I don't think that will actually help you."
Accepting that ethics are all about our own egoistic desires would also make it easier to analyze moral dilemmas. Classic moral dilemmas (such as The Trolley Problem) are much easier from an egoistic perspective - we just have to figure out which option makes us feel worse and choose the other. The same goes for questions about animal welfare. Animals have moral value to the same extent that other people have moral value, which is the extent to which we desire their well-being. If enough people desire animal welfare to a sufficient extent, society will reward protecting animal welfare and punish actions that harm animal welfare. Most advocates already understand this, and divide their advocacy between caring for animals directly and attempting to convince others to care more about animal welfare. Even utilitarians who love debating ethical questions understand that concessions must be made for egoistic reasons.
Life is a series of moral dilemmas. Every day, we make decisions that a different person, with different ideas of right and wrong, would make differently. Ethics aren’t just about political questions – e.g. war, civil rights, socialism, taxes – though it’s about those too. Ethics tell us what time to wake up, which jobs to apply for, what to eat, where to shop, and whether to give $1 to the homeless man on the street.
Part of being able to make those kinds of decisions is a firm understanding of right and wrong. Most of the time, we rely on heuristics to make those decisions, but as in two-level utilitarianism, the process of choosing the best heuristics requires us to know what the ultimate goal of our ethics is. Once we understand that the goal is to satisfy our egoistic desires (and understand how our well-being is intrinsically linked to the well-being of others), we can more effectively make decisions.
Egoism also makes it much easier to forgive people for their bad behavior. When someone mistreats me, I understand that they are only doing what they think is right, and I understand that, no matter how bad their behavior, they deserve just as much happiness as me. This doesn't mean that they continue to have access to me, but it does mean that I rarely wish to see people suffer (though it happens on occasion - I am human).
Ultimately, I favor egoism because it is true. As my friend Kaveh Mousavi recently wrote,
We need people whose main concern is not activist effectiveness. We need intellectuals whose primary concern is speaking the truth. We need people who push the boundaries of our thinking, who dare think the impossible, we need moral watchdogs saying things they know will be unpopular, we need people who are willing to be polarizing and controversial, we need people who are harsh and blunt. Without them human history would be impoverished, and they have achieved much in other areas of life if not in activism.
Likewise, even if the implications of egoism were terrible for the world and would result in disaster if widely adopted, I would still believe in it because I think it is true. I just probably wouldn't write blog posts about it.