Theory on framework issues

Sunday, January 29, 2012

14.2. What's morality for?—Integrity versus conformity

This series’ topic has been the biological function of moral principles, in the sense in which circulating blood is the biological function of the heart. (See Ruth G. Millikan (1984) Language, Thought, and Other Biological Categories.) I claim moral principles function to create habits that minimize decision fatigue by making automatic the otherwise ego-depleting subordination of short-term to long-term interests. Automatization comes at the cost of choices that, from a self-interested perspective, are less than best—when you omit considering the usefulness of the moral habit itself. Although automatization is costly, it’s worth its price, as shown by the harsh consequences psychopaths face due to their inability to adopt moral principles and form habits of integrity.

I introduced the habit theory of morality to explain—without supposing that moral judgments are objectively true—why we do what we ought, but most moralists with naturalistic worldviews see moral principles differently. Their view—as will be seen—doesn’t successfully explain why we ought to act in accord with any moral principles, and it loses on general merits as an explanation. According to my habit theory, the only incentives for conforming to moral principles are avoiding effortful decision-making, for your present benefit, and (much more importantly) strengthening or at least not weakening the habits constituting moral character traits, for your future benefit. The moral sentiment guilt is anxiety about your moral integrity, so the incentive to avoid guilt is strong only to the degree that the prudential need to maintain integrity is threatened.

The dominant conception, on the other hand, is that ultimately we conform to moral principles to avert guilt, conceived as an automatic reaction to our moral transgressions. Moral principles, on the dominant conception, are installed during the process of childhood socialization as an internal policeman serving the greater society. Freud’s theory of the super-ego is often taken as a prototype of this conception, although the super-ego is a mostly unconscious structure responsible for neurotic guilt, whereas the principles of explicit morality reside in Freud’s less-discussed ego-ideal. The role of moral affects as prime movers of explicit morality can be seen more clearly in neobehaviorist theories about learning moral values. John Dollard and Neal E. Miller explained moral values as classically conditioned responses, which the culture can arrange because they’re formed by the mere temporal contiguity of stimuli. (Dollard & Miller (1963) Personality and Psychotherapy: An Analysis in Terms of Learning, Thinking, and Culture.) Philosopher John S. Wilkins incidentally expresses this conception of conscience, “If you ever contemplated a murder, you would dread the horrible memory of your victim’s last moments or lifeless corpse.”
A theory of moral principles as society’s beachhead within the individual might explain how moral principles influence behavior, but it doesn’t explain why we should conform to them, since they impede their bearers. The theory unwittingly implies you should try to escape the grip of any moral principles, which offer only guilt pangs. It doesn’t counsel cultivating your moral character. The dominant theory also defies evolutionary considerations: today’s moralistic tenets purport to benefit humanity at large rather than kin. The society’s-beachhead view prevails as the leading conception of morality much as the evolutionary selection of entire species for complex adaptive traits persists in the popular mind.

Next in series: "Unraveling the mystery of morality: The unity of comprehension and belief explains moralism and faith"


  1. You should create a brief post that outlines your positions on ethics, epistemology, metaphysics, etc. Perhaps an FAQ on the side that just gives a brief wrap up of your philosophical views with links to longer posts.

  2. I was linked to this series by your comments on Hanson's Overcoming Bias. I thought I'd repost the comment I made over there, in case you didn't see it.

I’ve read your take on morality and noticed it is founded on a major error in definition. You seem to assume the word “morality” refers to some sort of magic mind-control argument that is so intrinsically persuasive that it has the power to bridge the is-ought gap. That is not the common meaning of the word “morality”; it is some bizarre alternate definition some mixed-up philosophers use. Saying that “Ultimate moral judgments are always false” because magic mind-control arguments don’t exist is like saying that the statement “stainless steel is silver in color” is always false because stainless steel doesn’t contain the element Ag in its alloy. Saying that making ultimate moral judgements is an error like “green grows” is like saying “stainless steel is silver” is an error because it’s using the word “silver,” which refers to an element, to refer to a color of all things.

“Good” is simply a shorthand word for something along the lines of “increases utility for yourself and others.” A good person is someone who increases utility for themselves and others. Morality is a system that the systematizing parts of our brains created so we could do good more efficiently. Trying G. E. Moore’s trick of asking “This increases utility, but is it good?” is like asking “This is water, but is it H2O?”

    The next giant mistake you make is saying:

    the only incentives for conforming to moral principles are avoiding effortful decision-making, for your present benefit, and (much more importantly) strengthening or at least not weakening the habits constituting moral character traits, for your future benefit

This is patently false. There is another incentive: your moral conscience. The conscience is an innate desire humans have within them to be good (that is, to be a person who increases utility for themselves and others). You seem to be assuming that self-interested desires are innate, but non-self-interested desires are caused by environment. This is patently false; there is plenty of evidence that some rare people are born without consciences and have no innate desire to be good no matter what society tells them (they are called “sociopaths”).

    Of course, since the desire to be moral is a desire like any other, it can be overridden by other desires. That explains why people aren’t moral all the time, even though everyone who isn’t a sociopath wants to be.

This leads us to the source of your (and Moore’s) error of definition. When someone is asking what they morally “ought to do,” what they are actually asking is “what should I do if I want to be a good person?” Because the desire to be a good person is inherent in all humans who are not sociopaths, the “I want to be a good person” part is implicit; it is assumed the listener would be intelligent enough to figure that part out for themselves.

It’s like if I tell an unhealthy friend “you ought to see your doctor.” What I’m really saying is “if you want to be healthy you should see your doctor.” Since the desire to be healthy is nearly universal in humans, the “if you want to be healthy” part can go unsaid.

    To wrap it up:
    1. Ethical naturalism is correct.
    2. Good is a synonym for something like “increases utility.”
    3. The existence of the human conscience makes the is-ought gap irrelevant to moral discourse, because we already innately want to do good (unless we’re a sociopath).

  3. Evan,

The idea that "good" is shorthand for utility maximization—in other words, that utilitarianism is true as a matter of _definition_—is a complete nonstarter. Consulting the dictionary might suffice to settle the matter, but one can also consult common knowledge and the scientific research I discuss in 14.1. Many moral judgments have nothing to do with utility. Most people consider it immoral to use the national flag as a cleaning rag, even when no one else knows about it. Moral codes include not only utility (harm-avoidance, welfare) but also fairness, loyalty, purity, and respect. If you consider these, the bite of Moore's argument will be more obvious; "why is it moral to show respect or preserve purity?" isn't an artificial question. You avoid threats to your moral conceptions by shrinking morality down to your version of its purified essence, and then you "prove" its truth by an act of definition.

The content of conscience isn't innate: what people feel guilty for varies too much between cultures. But having a conscience is innate. In general, humans have pro-social tendencies—the development of conscience is an example—but the specific content is largely due to experience.

    Even restricting morality to utility, that equation would be far from providing the basis for an objective morality. The big question is always "whose utility?" Utilitarians say everyone's utility is to be considered equally when summing utilities, but there's nothing in human nature or the world selecting that formula.

  4. I should apologize for not defining utility. You seem to think that when I said “utility” I meant the "harm" portion of Haidt's Foundations. That's a natural mistake, as harm is among the most important parts of utility. But I'm a preference utilitarian, not a pleasure utilitarian. I tend to think of utility as more along the lines of "the sum total of all human values and preferences." i.e. when you maximize utility you maximize preference satisfaction.

Even if you hate my definition of morality for some reason, that is not really relevant to my point. These are my core points:
    1. Most humans have an innate moral conscience that makes them value other people and want to help other people achieve their desires.
    2. It is possible to systematize the implementation of that desire by developing a code of behavior that lets them achieve that desire.
    3. The sum total of human desires can be called “utility” and the system for implementing them effectively can be called utilitarianism.

    Are any of these statements empirically false? If they are not then my theory is sound.

If you want to keep using the arrangement of letters "morality" to mean "magic mind control arguments" I really don't care. But when you claim that all moral statements are false you are mixing and matching definitions. Your argument is basically:
    1. Morality consists of magic mind control arguments.
    2. Magic mind control arguments do not exist.
    3. Therefore there is no good reason to care about the preferences of others and want to see them fulfilled, and generally do good things for other people.
    4. Since there is no good reason to care about other people morality must be about something else, like forming habits of behavior that aren't shortsighted.

You switch the definition of "morality" in the middle of your reasoning from "mind control arguments" to "doing good things for other people." Therefore, you “prove” that there is no inherent desire in humans to do good things by proving the blatantly obvious fact that there are no magic mind control arguments. It’s nothing but verbal sleight of hand. That is the central problem with your thesis.

  5. I do use two different senses of "morality," hoping that the context disambiguates the usage. One reason is that the "mind control" argument is something I presume rather than argue, except broadly.

I hold that morality *is* mind control (if I grasp your usage). It's not my stipulated definition but my interpretation of what the term means in moral discourse. When I speak of morality as something helpful and worth encouraging, I mean--when I spell it out more--the principles that a person follows, not the contents of any objective morality.

We agree there are some innate prosocial propensities in humans. Where we disagree is over whether these innate tendencies are systematized by conventional, objective morality. These propensities contribute to our capacity to develop moral principles, but they don't dictate principles. This point is most developed in the 14.1 essay on civic morality and the multifariousness of our moral predilections. The easiest way to see the expression of this indeterminacy of our predilections is that, allowing that we are disposed to help realize other people's preferences, we don't necessarily believe that every person's utility counts the same. People vary widely in whose utility they care about.

To me, this is so obvious that I think you and other utilitarians must have a subterranean "mind control" understanding of morality--you must really believe that certain moral principles mysteriously compel behavior, even against their bearers' interests. This mind-control view is inherent in our moral discourse--it, too, is probably an innate predilection--and it will infect anyone who doesn't question it. You, I contend, could think morality only systematizes what's already given innately because you conceive morality as giving rise to a magical "ought." Without that preconception, how could you not see that our caring about others' preferences doesn't imply that every person's preferences count the same?

    14.1 also points out that we have moral inclinations that don't involve realizing the preferences of others. This narrowness is another failing of utilitarianism, and again, you deny some persons' moral principles the status of being moral because, according to my diagnosis, you aren't prepared to grant "mind control" powers to those other principles.

  6. "I hold that morality *is* mind control (if I grasp your usage). It's not my stipulated definition but my interpretation of what the term means in moral discourse."
    I concur that there are a lot of philosophers who think morality is the same thing as "mind control" (I think they're called motivational internalists). But I don't think that is the common definition of morality among average people. When you want to show that someone is a good person in a work of fiction you don't show them being unusually vulnerable to mind control, you show them being kind to people and helping others.

    I think the comic book storyline "Final Crisis" provides the best example of why "mind control" is not the common definition of morality. In that story Darkseid, an alien despot, discovers an irrefutable "mind control" argument that everyone ought to be his slaves. Anyone who reads the argument instantly becomes a mindless drone serving him (they never show the argument of course). The various superheroes coordinate an effort to stop Darkseid and wipe the argument from people's minds.

    Throughout the story Darkseid is clearly meant to be the bad guy and the heroes fighting him are clearly the good guys. It is demonstrated that Darkseid is evil by showing that he makes people do awful things when they submit to him. When he is defeated and the argument is suppressed it is a good thing. I think most people reading the story would interpret it this way. That indicates that most people think morality is about doing good things, not about mind control.

    "The easiest way to see the expression of this indeterminacy of our predilections is that, allowing that we are disposed to help realize other people's preferences, we don't necessarily believe that every person's utility counts the same"

    Actually, you yourself state in your essay that morality consists of fairness in addition to Haidt's other pillars. It is that pillar of fairness that makes valuing people's utility equally a moral thing to do.

    "14.1 also points out that we have moral inclinations that don't involve realizing the preferences of others. "

    No it doesn't. Welfare, purity, loyalty, fairness, and respect are all preferences. Preference utilitarianism encompasses all of them. You're mistaking me for a pleasure utilitarian.

I don't deny that Haidt's other pillars of morality--purity, loyalty, and respect--are important. I admire fair, pure, loyal, and respectful people. I do think welfare is more important and should be given priority when those other values conflict with it. The people Haidt interviewed seemed to agree: they tried to justify their moral judgements by trying to find ways the behaviors they disapproved of were harmful and expressed "moral dumbfounding" when they couldn't.

    I don't think morality gives rise to a magical "ought." I think that morality is something (non-sociopathic) people innately desire, but their efforts to achieve that desire are vulnerable to selfishness, poor impulse control, and self deception.

To make an analogy, the factual statement "exercise is important to health" is instantly persuasive to anyone with an intrinsic desire to be healthy, but has no mind control effects on someone who doesn't care about health. However, due to poor impulse control and shortsightedness, many people often don't exercise, even though they want to on some level. Because people like having high self esteem, they may use self deception to trick themselves into thinking that exercise isn't that important to health. However, none of that means that the statement "Exercise is important to health" isn't a fact, or that those people don't value health. It just means that they are screwups.

  7. Hi Evan,

    Sorry for the delay. I like to wait until I have time to think about how to respond.

Preference utilitarianism doesn't include the purity ethic (for example): like every norm, the purity norm expresses someone's preference, but the preference it expresses doesn't correspond in strength with the norm's weight, as preference utilitarianism requires. The easiest way to see this is that someone imbued with a strong moral preference for purity won't be moved to renounce his moral preference because few are offended by the impurity. Even he might not be offended, yet he will insist on the standard. Welfare is the only Haidt factor that works in any kind of utilitarian fashion.

I don't think moral standards are strictly genetically determined, because I think my habit theory--based on ego-depletion theory, or decision fatigue--explains moral standards more plausibly. But let's go straight to the meta-ethical point. What if our moral principles were determined by our genes? What if this mix of species-specific moral traits included tendencies to xenophobia and racism? Would you then say these tendencies were part of morality--that they are "good" or what one "ought" to do? Anticipating a negative answer, I say this shows that whatever genetic determinism offers, it isn't objective morality.

My guess at your meaning of "mind control" was probably wrong. I didn't realize you meant the term so literally. The mind control is of the sort where you're controlled by a viral meme, not by another particular person. My newest essay in this series might clarify my sense of "mind control," to the extent the term is a decent metaphor.

