Moral Arguments Are Self-Delusional
Often those who expound moral sentiments justify them by articulating a moral argument, i.e., a chain of reasoning in which conclusions are inferred from premises and whose ultimate conclusion is a moral judgment. I consciously and emphatically avoid moral arguments. In this meditation I articulate my reasons by examining the social intuitionist model of moral psychology.
The Origins of Moral Sentiments
Different humans often have different moral sentiments. Indeed, sometimes different individuals arrive at diametrically opposed ones. How do we arrive at a specific moral sentiment? There are a variety of levels at which this question could be examined.
We might investigate what society we live in, what our place is in that society, what our peer group was like growing up, how we were raised by our parents or guardians, what genes we inherited, how natural selection shaped the evolution of those genes, what physical environment – both in utero and after birth – we developed in, etc.
Furthermore, since psychological phenomena are emergent properties of neurological ones, we could answer this question by understanding how an arrangement of synapses between neurons in a nervous system can lead to a moral sentiment, how levels of circulating hormones affect those neurons, etc. However, this merely transfers the question from how a given moral sentiment came to be to how a particular arrangement of synapses between neurons came to be.
Suffice it to say that how we come to our moral sentiments is an interesting question, one that can be answered by empirically investigating the real, physical world in a variety of different ways. What is almost never a literally truthful answer is “because I made up this argument.”
The Danger of Self-Delusion
As long as an individual claims only that an argument is why someone should have a moral sentiment, not why the individual came to have it, what is the harm in such an indulgence? If those making such arguments can keep this distinction clear in their heads, then, strictly speaking, there is nothing fallacious about it.
However, what is the significance of such an argument for those of us on the receiving end of it? If the argument is not really the reason the arguer has the moral sentiment, why should the argument persuade us, when it has not even actually persuaded the individual making the argument?
Moreover, the greater danger for one making a moral argument is that one begins to believe the argument is why one has the moral sentiment, rather than why one thinks others should have it. This is a subtle difference that is easy to forget once one has memorized one’s premises and conclusions and stands ready to recite one’s argument whenever prompted with the simple question, “why?”
Moral arguments are words created to ornament a moral sentiment that one was going to have anyway. What are the benefits of these verbal ornaments? They can signal to those around us that we have the moral sentiment that precipitated them. Yet we can accomplish this by simply sharing the moral sentiment directly, without the ornamentation. What are the costs of these verbal ornaments? At best they are superfluous distractions from investigating the real origins of our moral sentiments empirically. At worst they are gateways to self-delusion.
The self-delusion of moral arguments continues beyond this. Oftentimes those who have ornamented some of their moral sentiments with arguments look down upon those of us who have disabused ourselves of such self-delusions. Such individuals might say their own morality is “rational” or “logical” whilst ours is “just emotional.” However, they are in an inferior position to discern truth from falsity, because their moral arguments are a substitute for a scientific understanding of themselves.
Moral Philosophers or Moral Psychologists?
“The emotional dog and its rational tail” was milder in interpreting the implications of the social intuitionist model. The author writes that “moral reasoning is rarely the direct cause of moral judgment. That is a descriptive claim, about how moral judgments are actually made. It is not a normative or prescriptive claim, about how moral judgments ought to be made.”
Additionally, the author acknowledges that “people are capable of engaging in private moral reasoning, and many people can point to times in their lives when they changed their minds on a moral issue just from mulling the matter over by themselves. Although some of these cases may be illusions, other cases may be real, particularly among philosophers,” and the author “recognizes that a person could, in principle, simply reason her way to a judgment that contradicts her initial intuition. The literature on everyday reasoning suggests that such an ability may be common only among philosophers, who have been extensively trained and socialized to follow reasoning even to very disturbing conclusions.”
So shall we all aspire to become moral philosophers? Shall we aspire to override how we actually feel about issues of the greatest importance to ourselves because of the words in thousand-page treatises about categorical imperatives or utilitarianism?
Or shall we return to philosophy’s roots and aspire to fulfill the ancient maxim “know thyself” with the full power of modern scientific inquiry?
This latter approach is the course that these meditations take. Instead of aspiring to become moral philosophers, we aspire to become moral psychologists, i.e., to understand what is actually going on in our minds when we engage in the process we call “morality,” rather than engaging in the vanity of trying to override this process and so often deluding ourselves that we have succeeded.
(In fairness to Jonathan Haidt, author of “The emotional dog and its rational tail,” the tentative way implications are discussed in the paper is prudent scholarship. In the paper “the social intuitionist model is presented . . . only as a plausible alternative approach to moral psychology, not as an established fact.” It is considered good form in scientific inquiry to be modest in one’s claims. It is also prudent, when proposing a new model, not to anger a whole branch of academia.)
Moral Reasoning versus Self-Delusional Arguments
Oftentimes the social intuitionist model is contrasted with other models by how these models regard moral “reasoning.” Unfortunately “reasoning” is a vague term. The implications of the social intuitionist model are critical of one kind of reasoning: the ex post facto reasoning that occurs in the wake of a moral judgment.
This should not be taken to imply that all moral reasoning is fallacious, however. Once we are aware that we have a moral sentiment, we of course use our reasoning powers to apply it in our lives. Just because we are dealing with morality does not mean we lose our powers of foresight or our ability to understand consequences.
For example, suppose someone had the moral sentiment of sympathy for non-human animals, such as those typically used as livestock. Such an individual, in choosing what foods to consume, would likely take into consideration the fact that any dietary meat comes from the killing of livestock. Feeling sympathy for the livestock killed in order to produce such meat, this individual would likely consider vegetarian or vegan dietary habits. Indeed, if the individual did not at least consider such diets, we might question whether the individual really did have the moral sentiment of sympathy for animals used as livestock.
On the other hand, if such an individual put forth the argument that this sympathy for livestock derives from a theory of rights, utilitarianism, justice, categorical imperatives, virtue, etc., then this would be an example of a self-delusional argument.
This highlights that the social intuitionist model of moral psychology explains the origins of our moral judgments. It does not imply that any kind of reasoning to do with morality is self-delusional, only that our rationalizations about why we have moral judgments are self-delusional.
The penultimate section of “The emotional dog and its rational tail” is a sketch of ways in which reasoning can be used to improve moral judgment, even if the social intuitionist model is true. The paper proposes considering many different points of view that provoke conflicting intuitions, using our reasoning powers to resolve these conflicts, and creating social environments that encourage this process. This seems pleasant inasmuch as it involves individuals becoming cognizant of each other’s sentiments, rather than groups of people shouting demagoguery at one another.
Morality as Mythology
If morality is indeed an instinct that is customized and externalized by our social context, it functions in such a way as to make our moral sentiments more like the moral sentiments of those with whom we associate. It therefore causes individuals to cluster into groups with similar moral sentiments. Because the differences between these groups are moral differences – which are some of the most passionate and contentious differences there are – this is apt to lead to especially strong affinities for those in the same group and especially strong hostilities toward those not in the same group.
This is exactly tribalism, a phenomenon familiar to those who have encountered our meditation on human nature. Whereas in more primitive societies tribalism was expressed as membership in literal tribes, we discern membership in more abstract tribes not by living in close physical proximity to one another and recognizing each other personally, but by recognizing a common mythology. This is why, in a previous meditation, we estimated that morality is “the greatest mythology of them all.” Morality functions exactly how mythology was defined: it lacks a literal truth, and it is a mechanism we have for separating ourselves into “us” and “them.”
Previously we have seen how failure to protect one’s beliefs about reality from one’s moral sentiments leads to fallacy. Now we have seen how moral arguments are self-delusional. Next we shall see how moral constructs are fictional. Finally, since it would be remiss of me to point out so many fallacies of morality without putting forth a technique for avoiding them, we shall conclude with such a technique.
Adler, J. E., & Rips, L. J. (Eds.). (2008). Reasoning: Studies of Human Inference and Its Foundations. Cambridge University Press.
Bargh, J. A., & Chartrand, T. L. (1999). The unbearable automaticity of being. American Psychologist, 54(7), 462–479. https://doi.org/10.1037/0003-066X.54.7.462
Batson, C. D., O’Quin, K., Fultz, J., Vanderplas, M., & Isen, A. M. (1983). Influence of self-reported distress and empathy on egoistic versus altruistic motivation to help. Journal of Personality and Social Psychology, 45(3), 706–718. https://doi.org/10.1037/0022-3514.45.3.706
Blasi, A. (1980). Bridging moral cognition and moral action: A critical review of the literature. Psychological Bulletin, 88(1), 1–45. https://doi.org/10.1037/0033-2909.88.1.1
Cleckley, H. (1955). The Mask of Sanity: An Attempt to Clarify Some Issues about the So-Called Psychopathic Personality. Retrieved from https://books.google.com/books?id=ksw4DwAAQBAJ
Davis, J. L., & Rusbult, C. E. (2001). Attitude alignment in close relationships. Journal of Personality and Social Psychology, 81(1), 65–84. https://doi.org/10.1037/0022-3514.81.1.65
Gazzaniga, M. S., Bogen, J. E., & Sperry, R. W. (1962). Some functional effects of sectioning the cerebral commissures in man. Proceedings of the National Academy of Sciences, 48(10), 1765–1769. https://doi.org/10.1073/pnas.48.10.1765
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834. https://doi.org/10.1037/0033-295X.108.4.814
Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason. Unpublished manuscript.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498. https://doi.org/10.1037/0033-2909.108.3.480
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109. https://doi.org/10.1037/0022-3514.37.11.2098