Writings about contraception, critical thinking, and other things

Moral Arguments Are Self-Delusional

December 22, 2017
3,141 words (~15 minutes)
Tags: morality mythology psychology

Often those who expound moral sentiments justify them by articulating a moral argument, i.e., a chain of reasoning in which conclusions are inferred from premises and whose ultimate conclusion is a moral judgment. I consciously and emphatically avoid moral arguments. In this meditation I articulate my reasons by examining the social intuitionist model of moral psychology.

The Origins of Moral Sentiments

Oftentimes different humans have different moral sentiments. Indeed sometimes different individuals arrive at diametrically opposed moral sentiments. How do we arrive at a specific moral sentiment? There are a variety of levels at which this could be examined.

We might investigate what society we live in, what our place is in that society, what our peer group was like growing up, how we were raised by our parents or guardians, what genes we inherit, how natural selection shaped the evolution of those genes, what the physical environment – both in utero and after we were born – in which we developed was like, etc.

Furthermore, since psychological phenomena are emergent properties of neurological ones, we could answer this question by understanding how an arrangement of synapses between neurons in a nervous system can lead to a moral sentiment, how levels of circulating hormones affect those neurons, etc. However, this just transfers the question from one of how a given moral sentiment came to be to one of how a particular arrangement of synapses between neurons came to be.

Suffice it to say that how we come to our moral sentiments is an interesting question, one that can be answered by empirically investigating the real, physical world in a variety of different ways. What is almost never a literally truthful answer is “because I made up this argument.”

The Social Intuitionist Model

Some may recognize this as an allusion to what has come to be called the “social intuitionist” model in the psychology of morality, which has risen to prominence in the field.

In the previous meditation we reflected on how beliefs about reality differ from moral sentiments, how these two kinds of mental activity relate to each other, and the danger of allowing the latter to influence the former. That reflection proceeded through the analysis of language and through a priori reasoning.

This meditation, however, is based on empirical observation of how people actually come to a moral sentiment and, in particular, one model of that process. We therefore turn from philosophy to psychology. However, a full review of the literature on the subject is beyond the scope of this meditation. Instead a seminal paper popularizing the social intuitionist model is summarized.

The Model

The model was popularized in the paper “The emotional dog and its rational tail: A social intuitionist approach to moral judgment.” (Haidt, 2001)

In the model, individual humans inherit an intrinsic instinct to make moral judgments in general. This instinct is customized and externalized by social influence as an individual grows, leading the individual to specific moral judgments. The mental process that causes an individual to develop these specific moral judgments is not accessible to the individual’s conscious observation.

Eventually the individual encounters some stimulus that causes an affective response such as disgust or empathy. This causes the individual to become consciously aware of a specific moral judgment. This is what makes the model “intuitionist.” The result of the process leading to a moral judgment is something an individual is consciously aware of, but the process itself is not.

The individual – now aware of the moral judgment – engages in reasoning after the fact to rationalize having the moral judgment. This ex post facto process, unlike the one actually leading to the moral sentiment, is something of which the individual is consciously aware. However, this ex post facto reasoning rarely serves to contradict and change the moral judgment that precipitated it.

The individual shares this ex post facto reasoning with others. The reasoning rarely changes the moral judgments of others directly. However, the ex post facto reasoning does serve as a marker that the individual has the moral judgment. If a second individual is unconsciously developing moral judgments and views this first individual as an “us” and not a “them,” then this second individual is influenced by the mere fact the first individual has this specific moral judgment, regardless of the content of the reasoning. Thus the cycle repeats itself. This is what makes the model a “social” intuitionist model.

The paper’s titular analogy is that viewing moral reasoning as the cause of a moral judgment is akin to viewing a dog’s tail as wagging the dog, rather than the other way around. Moral judgments cause moral reasoning; the dog wags the tail. Furthermore, while tail wagging is a consequence and not a cause of the dog’s behavior, it serves a social function, communicating something about the dog to others.

(The full social intuitionist model, for the sake of completeness, does include ways in which reasoning can actually change one’s own moral sentiments and in which reasoning can directly change the moral sentiments of others. However, according to the model, these are recondite and rare. Thus for the sake of brevity these elements are elided here.)

Differences with Rationalist Models

One popular misconception of the difference between the social intuitionist model and rationalist models of moral psychology is that the social intuitionist model posits that morality is based on emotions whereas rationalist models posit that morality is based on reasoning. This is an oversimplification and is not accurate. In both paradigms moral judgments are cognitive, not affective. In both paradigms affective reactions are prompts that lead to moral judgments. The fundamental difference is that rationalist models posit that affective reactions are inputs to a conscious process that results in a moral judgment, whereas the social intuitionist model posits that affective reactions lead to conscious awareness of a moral judgment that was already reached as a result of an unconscious process.

Evidence for the Model

The power of the social intuitionist model, like all scientific models, is in parsimonious explanation of empirical observations. “The emotional dog and its rational tail” groups into four main categories observations that the social intuitionist model explains parsimoniously and that other models have difficulty with.

Automatic Judgments

Humans are observed to make judgments without conscious deliberation. (Bargh & Chartrand, 1999) Furthermore there is evidence that one’s social judgments tend to converge with those of the people one has an affinity for. (Davis & Rusbult, 2001) In extreme cases individuals are left saying they don’t know why, but they think something is wrong – an effect Jonathan Haidt, the author of “The emotional dog and its rational tail,” became well known for observing experimentally and dubbed “moral dumbfounding.” (Haidt, Bjorklund, & Murphy, 2000) The social intuitionist model explains these observations by recognizing intuition informed by social context as the main source of our moral judgments.

Motivated Reasoning

The innocent-sounding phrase “motivated reasoning” is jargon in psychological circles for engaging in reasoning so biased that the conclusion has effectively been decided already. This is reasoning in which evidence is picked to support the preordained conclusion and dismissed if it does not. Humans are observed to do this generally. (Kunda, 1990) Furthermore humans are observed to do this even more when their core values are challenged, such as in moral questions. (Lord, Ross, & Lepper, 1979) The social intuitionist model interprets these observations as an explanation of why it is so rare for reasoning to change one’s own intuitive moral judgments.


Invented Reasons

Humans have been observed to invent motives for their behaviors that were not, in fact, their motives. This was observed most dramatically in experiments involving patients whose brain hemispheres were disconnected as part of an obsolete treatment for seizures. One hemisphere of their brains, when asked, would instantly make up a reason for behavior initiated by the other hemisphere. (Gazzaniga, Bogen, & Sperry, 1962) The social intuitionist model interprets these observations as an explanation of why it is so rare for reasoning to change someone else’s moral judgments. Moral debates perpetually target reasons cited for moral judgments that are not the real reasons the individuals in question have those moral judgments.

Moral Behavior Correlates with Emotion, not Reasoning

There is a body of work finding that moral behavior is correlated with various emotions. This research has most notably investigated empathy, but links have also been found between moral behavior and disgust, sadness, guilt, and shame. (Batson, O’Quin, Fultz, Vanderplas, & Isen, 1983) An extreme example of this correlation is psychopaths, who are capable of reasoning about morality, but who often commit actions quite contrary to that reasoning. They do not necessarily have cognitive impairment, but lack an ability to integrate emotions such as empathy. (Cleckley, 1955) On the other hand evidence interpreted as a correlation between reasoning and moral behavior can be better explained by confounders. (Blasi, 1980) This is consistent with the social intuitionist model, in which affective responses lead to intuitions of moral judgments without reasoning, but is inconsistent with models in which moral judgments necessarily involve reasoning.

The preceding is just a summary. Curious readers are invited to consult “The emotional dog and its rational tail” and its bibliography. It is freely available on the Internet as the last paper in a book-length anthology. (Adler & Rips, 2008) This meditation next turns to implications of the social intuitionist model.

The Danger of Self-Delusion

As long as an individual isn’t making the claim that an argument is why the individual developed the moral sentiment, but only that it is why someone should have the moral sentiment, what is the harm in such an indulgence? If those making such arguments can keep this clear in their heads, strictly speaking, there is nothing fallacious about this.

However, what is the significance of such an argument for those of us on the receiving end of it? If the argument is not really the reason the arguer has the moral sentiment, why should the argument persuade us, when it has not even actually persuaded the individual making the argument?

Moreover the greater danger here for one making a moral argument is that one begins to believe the argument is why one has the moral sentiment instead of why one thinks others should have the moral sentiment. This is a subtle difference that is easy to forget when one has memorized one’s premises and conclusions and is ready to recite one’s argument whenever prompted with the simple question, “why?”

Moral arguments are a bunch of words created to ornament a moral sentiment that one was going to have anyway. What are the benefits of these verbal ornaments? They can signal to those around us that we have the moral sentiment that precipitated them. Yet we can accomplish this by simply sharing that we have a moral sentiment directly, without the ornamentation. What are the costs of these verbal ornaments? At best they are superfluous distractions from investigating the real origins of our moral sentiments empirically. At worst they are gateways to self-delusion.

The self-delusion of moral arguments continues beyond this. Oftentimes those who have ornamented some of their moral sentiments with arguments look down upon those of us who have not disabused ourselves of such self-delusions. Such individuals might say their own morality is “rational” or “logical” whilst ours is “just emotional.” However, they are in an inferior position to discern truth from falsity, because their moral arguments are a substitute for a scientific understanding of themselves.

Moral Philosophers or Moral Psychologists?

“The emotional dog and its rational tail” was milder in interpreting the implications of the social intuitionist model. The author writes that “moral reasoning is rarely the direct cause of moral judgment. That is a descriptive claim, about how moral judgments are actually made. It is not a normative or prescriptive claim, about how moral judgments ought to be made.”

Additionally the author acknowledges “people are capable of engaging in private moral reasoning, and many people can point to times in their lives when they changed their minds on a moral issue just from mulling the matter over by themselves. Although some of these cases may be illusions, other cases may be real, particularly among philosophers,” and the author “recognizes that a person could, in principle, simply reason her way to a judgment that contradicts her initial intuition. The literature on everyday reasoning suggests that such an ability may be common only among philosophers, who have been extensively trained and socialized to follow reasoning even to very disturbing conclusions.”

So shall we all aspire to become moral philosophers? Shall we aspire to override how we actually feel about issues of the greatest importance to ourselves because of the words in thousand-page treatises about categorical imperatives or utilitarianism?

Or shall we return to philosophy’s roots and aspire to fulfill the ancient maxim “know thyself” with the full power of modern scientific inquiry?

This latter approach is the course that these meditations take. Instead of aspiring to become moral philosophers, we aspire to become moral psychologists, i.e., to understand what is actually going on in our minds when we engage in the process we call “morality,” rather than engaging in the vanity of trying to override this process and so often deluding ourselves that we have succeeded.

(In fairness to Jonathan Haidt, author of “The emotional dog and its rational tail,” the tentative way implications are discussed in the paper is prudent scholarship. In the paper “the social intuitionist model is presented . . . only as a plausible alternative approach to moral psychology, not as an established fact.” It is considered good form in scientific inquiry to be modest in one’s claims. It is also prudent, when proposing a new model, not to anger a whole branch of academia.)

Moral Reasoning versus Self-Delusional Arguments

Oftentimes the social intuitionist model is contrasted with other models by how these models regard moral “reasoning.” Unfortunately “reasoning” is a vague term. The implications of the social intuitionist model are critical of one kind of reasoning: the ex post facto reasoning that occurs in the wake of a moral judgment.

This is not to be taken to imply that all moral reasoning is fallacious, however. Once we are self-aware that we have a moral sentiment, we of course use our reasoning powers to apply it in our lives. Just because we are dealing with morality does not mean we lose our powers of foresight or our ability to understand consequences.

For example, suppose someone had the moral sentiment of sympathy to non-human animals, such as those typically used as livestock. Such an individual in choosing what foods to consume would likely take into consideration the fact that any dietary meat comes from the killing of livestock. Feeling sympathy for the livestock killed in order to produce such meat, this individual would likely consider vegetarian or vegan dietary habits. Indeed if the individual did not at least consider such diets, we might question whether the individual really did have the moral sentiment of sympathy to animals used for livestock.

On the other hand, if such an individual put forth the argument that the individual’s sympathy to livestock comes from an argument based on a theory of rights, utilitarianism, justice, categorical imperatives, virtue, etc., then this would be an example of a self-delusional argument.

This highlights that the social intuitionist model of moral psychology explains the origins of our moral judgments. It does not imply that any kind of reasoning to do with morality is self-delusional, only that our rationalizations about why we have moral judgments are self-delusional.

The penultimate section of “The emotional dog and its rational tail” is a sketch of ways in which reasoning can be used to improve moral judgment, even if the social intuitionist model is true. It proposes considering many different points of view that provoke conflicting intuitions, using our reasoning powers to resolve these conflicts, and creating social environments that encourage this process. This seems pleasant inasmuch as it involves individuals becoming cognizant of each other’s sentiments, rather than groups of people shouting demagoguery at one another.

Morality as Mythology

If morality is indeed an instinct that is customized and externalized by our social context, it functions in such a way to make our moral sentiments more like the moral sentiments of those with whom we associate. It therefore causes individuals to cluster into groups with similar moral sentiments. Because the differences between these groups are moral differences – which are some of the most passionate and contentious differences there are – this is apt to lead to especially strong affinities for those in the same group and especially strong hostilities to others not in the same group.

This is exactly tribalism, a phenomenon familiar to those who have encountered our meditation on human nature. Whereas in more primitive societies tribalism was expressed as membership in literal tribes, today we discern membership in more abstract tribes not by living in close physical proximity to one another and recognizing each other personally, but by recognizing a common mythology. This is why, in a previous meditation, we estimated that morality is “the greatest mythology of them all.” Morality functions exactly as mythology was defined: it lacks a literal truth, and it is a mechanism we have for separating ourselves into “us” and “them.”

Other Fallacies

Previously we have seen how failure to protect one’s beliefs about reality from one’s moral sentiments leads to fallacy. Now we have seen how moral arguments are self-delusional. Next we shall see how moral constructs are fictional. Finally it would be remiss of me to point out so many fallacies of morality without putting forth a technique for avoiding them.

Inline Citations

Adler, J. E., & Rips, L. J. (Eds.). (2008). Reasoning: Studies of Human Inference and Its Foundations. Cambridge University Press.

Bargh, J. A., & Chartrand, T. L. (1999). The unbearable automaticity of being. American Psychologist, 54(7), 462–479. https://doi.org/10.1037/0003-066X.54.7.462

Batson, C. D., O’Quin, K., Fultz, J., Vanderplas, M., & Isen, A. M. (1983). Influence of self-reported distress and empathy on egoistic versus altruistic motivation to help. Journal of Personality and Social Psychology, 45(3), 706–718. https://doi.org/10.1037/0022-3514.45.3.706

Blasi, A. (1980). Bridging moral cognition and moral action: A critical review of the literature. Psychological Bulletin, 88(1), 1–45. https://doi.org/10.1037/0033-2909.88.1.1

Cleckley, H. (1955). The Mask of Sanity: An Attempt to Clarify Some Issues about the So-Called Psychopathic Personality. Retrieved from https://books.google.com/books?id=ksw4DwAAQBAJ

Davis, J. L., & Rusbult, C. E. (2001). Attitude alignment in close relationships. Journal of Personality and Social Psychology, 81(1), 65–84. https://doi.org/10.1037/0022-3514.81.1.65

Gazzaniga, M. S., Bogen, J. E., & Sperry, R. W. (1962). Some functional effects of sectioning the cerebral commissures in man. Proceedings of the National Academy of Sciences, 48(10), 1765–1769. https://doi.org/10.1073/pnas.48.10.1765

Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834. https://doi.org/10.1037/0033-295X.108.4.814

Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason. Unpublished Manuscript.

Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498. https://doi.org/10.1037/0033-2909.108.3.480

Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109. https://doi.org/10.1037/0022-3514.37.11.2098