Theory on framework issues

Thursday, March 7, 2013

14.2.1. The habit theory of morality, moral influence, and moral evolution

Contrasting with all forms of moral realism, the habit theory of morality recognizes no “terminal moral values,” since it holds that transgression of an agent’s principles of integrity is necessary to allow their adjustment to circumstances. By conceiving of moral principles as principles of integrity, prosocial habits that serve as indispensable self-control devices in a psychological economy where willpower is an exceedingly scarce resource, the theory uniquely explains—without recourse to group selectionism—how humans could evolve a group-minded morality.

Group selectionism is the minority view among evolutionists that natural selection in humans occurs in the manner of eusocial species, at the level of groups, not just genes. Eusocial species comprise primarily the social insects, whose hives’ genetic commonality permits group selection, which in their case—unlike the human—reduces to the gene level. Social psychologist Jonathan Haidt, in his recent book The Righteous Mind (2012), frames the case for human group selection with the aphorism “Humans are 90% chimpanzee and 10% bee.” Haidt observes that most moral arguments are hypocritical, aiming to impress or control others, with agents often ignoring standards when they can escape punishment for transgressions. Yet Haidt acknowledges that humans occasionally behave selflessly, as when a soldier takes huge personal risks for his fellows or zealots lose themselves in moral or political causes. Haidt thinks these phenomena inexplicable at the gene or individual level, where altruism that fails to serve the agent’s interests would face strongly adverse selection pressures. Group selectionism faces a standard objection: free riders inevitably exploit group-level altruism, so any group-selectionist theory must include a mechanism for punishing them. Human societies curtail free riding by social approval and disapproval, including material rewards and punishments.

According to Haidt (and other strict-adaptationist theorists), the proclivity to reward and punish transgressions against group interest must arise through group selection, because otherwise approval and disapproval meted out in the group interest would be just another form of self-sacrifice. The habit theory of morality instead treats moral approval and disapproval as expressing the same habit set used for self-control, as illustrated by the habit theory of civic morality: U.S. citizens rehearse habits of frugality in their personal lives by demanding that the government cut spending. The same equivalence describes moral suasion directed toward individuals.

Ironically, moral hypocrisy itself provides evidence that self-control and moral suasion exercise a unified moral habit. If hypocritical demands were purely deceptive, it’s hard to see how they could serve as a costly signal; carrying no costs, moral hypocrisy would have no value as a signal. Hypocrisy has no point if anyone can be a hypocrite cheaply. But if, in engaging in moral suasion, agents rehearse (practice) the principles of integrity that they habitually apply to themselves, then the cost of demanding more morality than you want to give is becoming more moral than you wish.

The workings become clearer and more plausible with more concreteness about the structure of the moral habits (or principles of integrity), and Alan P. Fiske’s relationship-regulation theory integrates well with the habit theory. According to Fiske’s model, systems of moral principles—Unity, Hierarchy, Equality, and Proportionality—are activated when their associated social relationships are “constituted.” (T. S. Rai & A. P. Fiske (2011), “Moral psychology is relationship regulation,” Psychological Review, 118: 57–75.) Hierarchical principles, for example, are activated when the appropriate social relations of Authority Ranking are constituted; when an agent is involved in an authoritarian relationship, such as that between employees and their boss, the corresponding Hierarchical principles of unconditional submission and conditional protection dominate. To restate in habit-theory terms, negotiating hierarchical relationships motivates agents to form habits based on Hierarchical principles. Most importantly, the Hierarchical structure is a coherent whole, including facets regulating both self and others. In the habit theory, other-directed morality is a spandrel deriving from the primary adaptive value of self-control.

Monday, February 18, 2013

14.1.1. Utilitarianism twice fails

It seems almost self-evident that (barring foreign subjugation) a government will care about the wants of (some of) its citizens and nothing else: no other object of concern is plausible. If governments concern themselves with the wants of noncitizens, that will be only because citizens desire their well-being. The now platitudinous insight that the only possible basis for government policy is people’s wants can be attributed to utilitarianism, which gets credit in its stronger form for the apparent success of weaker claims. 

Another reasonable claim derives from utilitarianism: citizens’ wants should count equally. This seems only fair in a democracy, where one citizen gets one vote. Few today would deny the principle that public policy should serve the greatest good of the greatest number, which may seem to contradict my claim that no general moral principle governs public policy; but in practice, the consequences of this limited utilitarianism are thin indeed, leaving ample room for ideology. I’ll call this public-policy formula thin utilitarianism: the greatest good for the greatest number of citizens, weighting their welfare equally.


First, I’ll consider whether thin utilitarianism succeeds on its own terms by providing a practical guide to public policy. Second, I’ll examine how this deceptively appealing guide to public policy transmogrifies into the monster of full-blown utilitarianism, a form of moral realism. The first failure constrains even casual use of thin utilitarianism; the second impugns utilitarianism as a general ethical theory.


1. Non-negotiable conflicts between subagents undermine thin utilitarianism

Although simple economic models attributing conduct to rational self-interest require that agents assign consistent utilities to outcomes, agents are inconsistent. One example of inconsistent utility assignment is the endowment effect, where agents assign more value to property they own than to the same property when they don’t own it. The inconsistency considered here is stronger than the endowment effect, which we can surmount with effort, as professional traders must. Despite the endowment effect, there remains an answer to how much utility an outcome affords: the endowment effect is a bias, which willpower or habit may neutralize.

The conflict between subagents within a single person, on the other hand, can’t be resolved by means of a common criterion, such as market price, since the two subagents pursue different ends. Which subagent dominates depends on situational and personological factors that elicit one or the other, not on bias. Construal-level theory reveals such a conflict between intrapersonal subagents, near-mode and far-mode: integrated mindsets applied to matters experienced at fine or broad granularities. The modes (or “construal levels”) differ in that far-mode is more future-oriented and principled; near-mode, present-oriented and contextual. Far-mode and near-mode are elicited by the way social choices are made: voting elicits far-mode; market choices, near-mode. The utility of a choice thus depends on construal level.

Take a policy choice: how much wealth should be spent on preventive medicine? There are two basic ways of allocating resources to medical care, political process and the market, socialized medicine being an example of the first, private medicine of the second. Socialized medicine makes allocating funds for medical care a political decision; the market makes it each consumer’s personal choice. When you compare the utility of choices made by political process with those made on the market, you should expect that when people choose politically, they use the far-mode thinking encouraged by voting, whereas when they make purchases, they use the near-mode thinking encouraged by the market. The preventive-care expenditure will be higher under socialized medicine because political process elicits far-mode, which is concerned with future health; people will be more miserly with preventive care under private medicine, where the decision to spend is made by consumer choice in near-mode, which cares more about the present. In short, people favor spending more on preventive care when they vote to tax themselves than when they buy it on the market. Which outcome provides the greater utility—more preventive care or more recreation—is relative to construal level.

The same indeterminacy of utility occurs when comparing decisions made under different political processes, such as local versus central. Local decisions will be near-mode, central decisions far-mode. Assuming socialized medicine, less funding would be available if it were subject to state rather than federal control. Which provides more utility depends on whether the consequences are evaluated in near-mode or far-mode; no thin-utilitarian criterion applies.

Some utilitarians will protest that we should measure experiences rather than wants. The objection misses the argument’s point, which is that utility is relative to mode, a conclusion easiest to see in the public-choice process, where the alternatives can be delimited. If the conclusion that utility depends on construal level holds, the same indeterminacies arise in evaluating experience. That apart, when utilitarianism is applied to public policy, present wants rather than experienced satisfaction are the criterion; agents necessarily choose based on present wants, whether in the market or the political process.

2. Full-blown utilitarianism stands convicted of moral realism

Full-blown utilitarians are necessarily moral realists, but increasingly they deny it. While moral realism is widely recognized as absurd, utilitarianism seems to some an attractive ethical philosophy. For the sake of intellectual respectability, utilitarians can appear to reject anachronistic moral realism while practicing it philosophically.

Full-blown utilitarianism often obscures its differences with thin utilitarianism, which is a questionable doctrine but one in accord with ordinary common sense. Full-blown utilitarianism emerges from thin utilitarianism by the misdirection of subjecting ethical premises to the test of simplicity, a test appropriate exclusively to realist theories, because simplicity serves truth. A classic illustration: Aristotle theorized that everything on earth that goes up must come down; Newton set out the theory of gravity, which applies to all objects, not just terrestrial ones, and which predicts that objects can escape the earth’s gravitational field by traveling fast enough. Scientists confidently bet on Newton well before rockets were invented, and their confidence was vastly increased by the simplicity of Newton’s theory, which made correct predictions concerning all objects. Although philosophers have variously explained the correlation between simplicity and truth, they generally agree that simplicity signals truth. Unless utilitarians can justify it otherwise, searching for a simple moral theory means searching for a true theory.

The full-blown utilitarian seeks a misplaced simplicity by insisting that the beneficiary reference group comprise all entities that can experience happiness, a much simpler criterion than “current citizens,” including future generations of humans and even beasts, whose existence depends on policy; thin utilitarianism, by contrast, is a democratic convention, serving only the wants of currently existing citizens. Because they must incorporate future generations into the reference group, utilitarian philosophers have had to accept that a policy-dependent reference group entails a dilemma for the interpretation of full-blown utilitarianism, with unattractive consequences at both horns, which realize radically different ideals. In one version, you maximize the average utility obtained by the whole population; in the other, you sum the utilities. The interpretations seem almost equally unattractive: the averaging view says that one supremely happy human is better than a billion very happy ones; the adding view implies that a hundred trillion lives barely worth living are better than a billion happy ones.
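The divergence between the two horns is simple arithmetic. A minimal sketch (the utility values and population sizes below are my own illustrative inventions, scaled down for convenience):

```python
# Toy illustration of how the averaging and adding (total) interpretations
# of full-blown utilitarianism rank populations differently.
# All numbers are invented and scaled down for convenience.

def average_utility(population):
    """Averaging view: maximize mean utility per member."""
    return sum(population) / len(population)

def total_utility(population):
    """Adding view: maximize the sum of utilities."""
    return sum(population)

one_blissful = [100.0]            # a single supremely happy person
many_happy = [80.0] * 1_000       # stand-in for "a billion very happy people"
vast_barely = [0.1] * 1_000_000   # stand-in for vastly more lives barely worth living

# The averaging view prefers the lone blissful person over the happy many...
assert average_utility(one_blissful) > average_utility(many_happy)
# ...while the adding view prefers the vast, barely-happy population.
assert total_utility(vast_barely) > total_utility(many_happy)
```

The two criteria agree on nothing here: each horn reverses the other’s ranking, which is why the dilemma cannot be evaded by picking the interpretation with the less repellent consequence in a given case.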

To apply a utilitarian standard to scenarios so distant from thin utilitarianism, accepting their consequences because of simplicity’s demands, is to treat moral premises as truths and to practice moral realism, despite contrary self-description. Those agreeing that moral realism is impossible must reject full-blown utilitarianism.

Friday, October 12, 2012

14.4. The deeper solution to the mystery of moralism—Morality and free will are hazardous to your mental health

The complex relationship between Systems 1 and 2 and construal level

The distinction between pre-attentive and focal-attentive mental processes has permeated cognitive psychology for some 35 years. In the past half-decade, another cognitive dichotomy specific to social psychology has emerged: processes of abstract construal (far cognition) versus concrete construal (near cognition). This essay will theorize about the relationship between these dichotomies to clarify further how believing in the existence of free will and in the objective existence of morality can thwart reason by causing you to choose what you don’t want.

The state of the art on pre-attentive and focal-attentive processes is Daniel Kahneman’s book Thinking, Fast and Slow, where he calls pre-attentive processes System 1 and focal-attentive processes System 2. The reification of processes into fictional systems also resembles Freud’s System Cs. (Conscious) and System Pcs. (Preconscious). I’ll adopt the language of System 1 and System 2, but readers can apply their understanding of the preconscious–conscious, pre-attentive–focal-attentive, or automatic–controlled processes dichotomies. They name the same distinction, in which System 1 consists of processes occurring quickly and effortlessly, in parallel, outside awareness; System 2 consists of processes occurring slowly and effortfully in sequential awareness, where awareness in this context refers to the contents of working memory, rather than raw experience, and accompanies System 2 activity.

To integrate Systems 1 and 2 with construal-level theory, note that System 2—the conscious part of our minds—can perform any of three routines in making a decision about taking some action, such as whether to vote in an election, a good example not just for its timeliness but also for its links to our main concern with morality: voting is a clear example of an action without tangible benefit. The potential voter might:

Case 1. Make a conscious decision to vote based on applying the principle that citizens owe a duty to vote in elections.
Case 2. Decide to be open to the candidates’ substantive positions and vote only if either candidate seems worthy of support.
Case 3. Experience a change of mind between 1 and 2.

The preceding were examples of the three routines System 2 can perform:

Case 1. Make the choice.
Case 2. “Program” System 1 to make the choice based on automatic criteria that don’t require sequential thinking.
Case 3. Interrupt System 1 in the face of anomalies.

When System 2 initiates action, whether it retains the power to decide or passes it to System 1 is the difference between concrete and abstract construal. Case 2 is key to understanding how Systems 1 and 2 work together to produce the effects construal-level theory predicts. Keep in mind that the unconscious, automatic System 1 includes not just hardwired patterns but also skilled habits. Meanwhile, System 2 is notoriously “lazy,” unwilling to interrupt System 1 as in Case 3; yet despite the perennial biases that plague System 1 when it is left to have its way, the highest levels of expertise also occur in System 1.
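The three routines can be caricatured as a small control loop. This is strictly a toy sketch of the division of labor described above, not a claim about the psychological literature; every function name and situation key is my own illustrative invention:

```python
# Toy model of the three routines System 2 can perform: decide directly
# (Case 1), delegate the choice to System 1's automatic criteria (Case 2),
# or interrupt System 1 in the face of anomalies (Case 3).

def system1_habit(situation):
    """Fast, automatic choice from a trained habit; no sequential thinking."""
    return situation.get("habitual_choice", "abstain")

def system2_deliberate(situation):
    """Slow, effortful, sequential weighing of the candidates' merits."""
    return "vote" if situation.get("worthy_candidate") else "abstain"

def decide(situation, delegated=True):
    if not delegated:
        # Case 1: System 2 makes the choice itself.
        return system2_deliberate(situation)
    # Case 2: System 1 makes the choice on automatic criteria.
    choice = system1_habit(situation)
    if situation.get("anomaly"):
        # Case 3: System 2 interrupts System 1 and takes over.
        choice = system2_deliberate(situation)
    return choice

# A dutiful habit votes automatically; an anomaly hands control back to
# effortful deliberation, which here finds no worthy candidate.
assert decide({"habitual_choice": "vote"}) == "vote"
assert decide({"worthy_candidate": True}, delegated=False) == "vote"
assert decide({"habitual_choice": "vote", "anomaly": True}) == "abstain"
```

The `delegated` flag marks the construal-level difference the paragraph above describes: whether System 2 retains the decision or offloads it.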

A delegate System 1 operates with holistic patterns typifying far cognition. This mode is far because we offload distant matter to System 1 but exercise sequential control under System 2 as immediacy looms—although there are many exceptions. It is critical to distinguish far cognition from the lazy failure of System 2 to perform properly in Case 3, as such failure isn’t specific to mode. Far cognition, System 1 acting as delegate for System 2, is a narrower concept than automatic cognition, but far cognition is automatic cognition. Near cognition admits no easy cross-classification.

Belief in free will and moral realism undermines our “fast and frugal heuristics”

The two most important recent books on the cognitive psychology of decision and judgment are Thinking, Fast and Slow by Daniel Kahneman and Gut Feelings: The Intelligence of the Unconscious by Gerd Gigerenzer, and both authors insist on the contrast between their positions, although the conflicts aren’t obvious. Kahneman explains System 1 biases as due to employing mechanisms outside their evolutionary range of usefulness; Gigerenzer describes “fast and frugal heuristics” that sometimes misfire to produce biases. Where these half-empty and half-full positions on heuristics and biases really differ is in their overall appraisal of near and far processes: Gigerenzer is a far thinker and Kahneman a near thinker, and each is naturally biased toward his preferred mode. Far thought shows more confidence in fast-and-frugal heuristics, since it offloads to System 1, whose province is to employ them.

The fast-and-frugal-heuristics way of thinking helps in understanding the effects of belief in moral realism and free will: they cause System 2 to supplant System 1 in decision-making. When we apply principles of integrity to regulate our conduct, we sometimes do better in far mode, where System 2 offloads the task of determining compliance to System 1. This doesn’t hold if you have a principle of integrity that includes an absolute obligation to vote; then you act as in Case 1, on a conscious decision. But principles of integrity do not really take this absolute form, an illusion created by moral realism. A principle of integrity flexible enough for actual use might favor voting (based, say, on a general principle embracing an obligation to perform duties) but disfavor it for “lowering the bar” if there’s only a choice between the lesser of evils. The art of objectively applying such a principle depends on an honest appraisal of the strength of your commitment to each component virtue, a feat System 2 is incapable of performing; when it can be accomplished, it’s due to System 1’s unconscious skills. Principles of integrity are applied more accurately in far-mode than in near-mode. [Hat tip to Overcoming Bias for these convenient phrases.]

But beliefs in moral realism and free will impel moral actors to apply their principles in near-mode because these beliefs hold that moral conduct results from freely willed acts. I’m not going to thoroughly defend the premise here, but this thought experiment might carry some persuasive weight. Read the following in near mode, and introspect your emotions:


Sexual predator Jerry Sandusky will serve his time in a minimal security prison, where he’s allowed groups of visitors five days a week.


Some readers will experience a sense of outrage. Then remind yourself: There’s no free will. If you believe the reminder, your outrage will subside; if you’ve long been a convinced and consistent determinist, you might not need to remind yourself. Morality inculpates based on acts of free will: morality and free will are inseparable.

A point I must emphasize because of its novelty: it’s System 1 that ordinarily determines what you want. System 2 doesn’t ordinarily deliberate about the subject directly; it deliberates about relevant facts, but in the end, you can only intuit your volition. What beliefs in moral realism and free will do is nothing less than change the architecture of decision-making. When we practice principles of integrity and internalize them, they and nonmoral considerations co-determine our System 1 judgments; whereas according to moral realism and free will, moral good is the product of conscious free choice, so System 2 contrasts its moral opinion with System 1’s intuition, for which System 2 compensates—and usually overcompensates. For the voter who had to weigh the duty to vote against the duty to avoid “lowering the bar” when both candidates promote distasteful or vacuous programs, System 2 can prime and program System 1 by studying the issues, but the multifaceted decision itself is best made by System 1. What happens when System 2 tries to decide? System 2 makes the qualitative judgment that System 1 is biased one way or the other and corrects it, implicating the overcompensation bias, by which conscious attempts to counteract biases usually overcorrect. A voter who thinks correction is needed for a bias toward shirking duty will vote without really wanting to, all things considered; a voter who suspects a bias toward “lowering the bar” will be excessively purist. Whatever standard the voter uses will be taken too far.

Belief in moral realism and free will biases practical reasoning

This essay has presented the third of three ways that belief in objective morality and free will causes people to do other than what they want:

  1. It retards people in adaptively changing their principles of integrity.
  2. It prevents people from questioning their so-called foundations.
  3. It systematically exaggerates the compellingness of moral claims.

Monday, April 9, 2012

14.3. Unraveling the mystery of morality: The unity of comprehension and belief explains moralism and faith

Supposedly objective moral judgments—as opposed to personal standards adopted as the basis of a principled integrity—always issue in falsehood; consequently, moral discourse is irrational. Knowing the rational purpose that principles of integrity serve might help people reject moralism—conveying this knowledge is the point of 14.0, 14.1, and 14.2—but it doesn’t fully explain objective morality’s wide acceptance, since objective moralism irrationally rigidifies the principles moralists adopt. This essay unravels the mystery of morality.

To understand moralism’s near-universal grip, I rely on social psychologist Daniel T. Gilbert’s findings on the relationship between comprehension and belief, what I call the unity of comprehension and belief. Gilbert’s essential findings are:
  1. Merely to comprehend a message, you must suspend disbelief and accept the message as true.
  2. To disbelieve a message, once understood (hence believed), you must later decide to reject it.

The mechanism of moralism

Comprehension requires suspending disbelief, a radical, counter-intuitive finding. The reason objective morality is a viral meme can now be fathomed: hearing and understanding a moralistic viewpoint in childhood—before absorbing or constructing concepts that could inoculate against the virus—implants an uncontestable moral conviction, protected by an unconditional reluctance to grasp conflicting ideas, including morality’s logical deconstruction, because purportedly objective moralities dictate not only what people ought to do but also what they ought to believe: a moral person “knows the difference between right and wrong.” Just as properly prudential persons avoid drifting into irrational beliefs, so properly moral persons avoid drifting into immoral beliefs. But the consequences differ. To avoid irrational belief, you update the information it’s based on and change the belief accordingly; to remain moral, believers in objective morality must abjure changes to their foundational beliefs. The unity of comprehension and belief implies that if persons believe they ought to continue their moral course, they will unconsciously avoid understanding messages refuting their beliefs about morality’s demands and its nature.

The unity of comprehension and belief leaves no room to entertain views that narrow the application of moralistic principles but leaves some latitude to entertain positions that broaden it, since adding moral claims doesn’t necessarily contradict implanted beliefs. This implication conflicts with the observation that moralism’s scope—driven by free will’s death agony—is declining in the Western world. But what is waning isn’t moralism’s scope but its intensity, which the free-will myth fuels. While rejecting free will violates objective moralities, which incorporate moral blame, the declining influence of free will isn’t due to the doctrine’s widespread rejection. Free will’s influence declines with increased understanding of the specific circumstances influencing outcomes. If we had a complete determinist theory, people might believe in free will yet see no room for its exercise. Without rejecting the doctrine itself, people consider actual conduct variably responsive to free will—variably accountable to morality.

Faith and fanaticism

Religious faith works in exactly the manner of morality. Moralists can’t understand a refutation of moral realism because to reject morality—even momentarily—is to be immoral. For the faithful, arguments that would undermine their faith are self-censored, since entertaining them even momentarily would be faithless; religious faith shares with secular morality the conviction that one ought to believe.

When believers relinquish religion, it’s often because they’ve been indoctrinated concurrently with a conflicting moralism; perhaps personal tragedy makes the contradiction vivid by raising questions about God’s justness and benevolence. In mainly such manner—that of superseding fideistic convictions with moralistic convictions—is a dying theism replaced by a secular moralism.

Some political ideologies carry the same intransigence as moralism and faith. When the obligatoriness of politics focuses on states of personal consciousness—whether raising it on the left or purifying it on the right—beliefs that would lower or pollute consciousness must not merely be eventually rejected but must remain uncomprehendingly unreceived, since the only preventive against ever holding a belief is refusing to understand it. The only hope for the fanatical, the faithful, and the moralistic is the shock of surprise when they reach unexpected conclusions from unrelated concepts.

Sunday, January 29, 2012

14.2. What's morality for?—Integrity versus conformity

This series’ topic has been the biological function of moral principles, in the sense in which the circulation of blood is the biological function of the heart. (See Ruth G. Millikan (1984), Language, Thought, and Other Biological Categories.) I claim moral principles function to create habits that minimize decision fatigue by making automatic the otherwise ego-depleting subordination of short-term to long-term interests. Automatization comes at the cost of choices that, from a self-interested perspective, are less than best—when you omit considering the usefulness of the moral habit itself. Although automatization is costly, it’s worth its price, as shown by the harsh consequences psychopaths face due to their inability to adopt moral principles and form habits of integrity.

I introduced the habit theory of morality to explain—without supposing that moral judgments are objectively true—why we do what we ought, but most moralists with naturalistic world views see moral principles differently. Their view—as will be seen—doesn’t successfully explain why we ought to act in accord with any moral principles, and it loses on general merits as an explanation. According to my habit theory, the only incentives for conforming to moral principles are avoiding effortful decision-making, for your present benefit, and (much more importantly) strengthening, or at least not weakening, the habits constituting moral character traits, for your future benefit. The moral sentiment of guilt is anxiety about your moral integrity, so the incentive to avoid guilt is only as strong as the threat to the prudential need to maintain integrity.

The dominant conception, on the other hand, is that ultimately we conform to moral principles to avert guilt, conceived as an automatic reaction to our moral transgressions. Moral principles, on the dominant conception, are installed during childhood socialization as an internal policeman serving the greater society. Freud’s theory of the super-ego is often taken as a prototype of this conception, although the super-ego is a mostly unconscious structure responsible for neurotic guilt, whereas the principles of explicit morality reside in Freud’s less-discussed ego-ideal. The role of moral affects as prime movers of explicit morality can be seen more clearly in neobehaviorist theories about learning moral values. John Dollard and Neal E. Miller explained moral values as classically conditioned responses, which the culture can arrange because they’re formed by the mere temporal contiguity of stimuli. (Dollard & Miller (1963), Personality and Psychotherapy: An Analysis in Terms of Learning, Thinking, and Culture.) Philosopher John S. Wilkins incidentally expresses this conception of conscience: “If you ever contemplated a murder, you would dread the horrible memory of your victim’s last moments or lifeless corpse.”
A theory of moral principles as society’s beachhead within the individual might explain how moral principles influence behavior, but it doesn’t explain why we should conform to them, since on this theory they only impede their bearers. The theory unwittingly implies you should try to escape the grip of any moral principles, which offer nothing but guilt pangs; it doesn’t counsel cultivating your moral character. The dominant theory also defies evolutionary considerations, since today’s moralistic tenets purport to benefit humanity at large rather than kin. Society’s beachhead prevails as the leading conception of morality much as the notion of evolutionary selection of entire species for complex adaptive traits persists in the popular mind.

Next in series: "Unraveling the mystery of morality: The unity of comprehension and belief explains moralism and faith"

Wednesday, January 11, 2012

14.1 A habit theory of civic morality

The morality that primarily concerns legal theory is civic morality, the morality used to ground political argument. Even the possibility of politics can seem hard to understand without the existence of objective moral judgments. How can we agree on or even argue about fundamental policies without an external standard of correctness? The answer is twofold. First, we don’t necessarily agree or even argue. We assemble majorities or effective pluralities not necessarily underwritten by fundamental agreement. Second, when many citizens agree on the applicable morality, their convergence—much as judicial agreement obtains despite the absence of any theory of constitutional interpretation—is due to influences other than correspondence with an external standard. This essay will propose a mechanism responsible for a limited moral convergence.

The contrary view—that morality is real and moral facts true—probably arose with the universalistic religions. When religion began to wane and its natural moral laws became uncompelling, the movement for legal codification partly filled the breach. A universalistic written law supported the illusion that political disagreement would be resolved under common moral premises.

The need for an illusory political morality survives in some small part because of the absence of challenge from an alternative theory of political morality. The main candidate explanation is common moral indoctrination within a culture, but indoctrination fails as a source of moral agreement in politics because people do not automatically apply the morals they are taught, even if they believe them true. Consider Biblical morality and the extent to which people choose the teachings they find convenient and disregard the rest. Neither is self-interest an adequate explanation, since false consciousness is widespread, and the ways a person can slice his self-interest in moral terms are limitless. A third explanation, depth psychology, may explain why moral precepts are sometimes applied inaccurately (say, why a truth teller is thought a liar), but it doesn’t explain the terms on which moral judgments are made: why truth telling is or isn’t the criterion in the first place.

Since few people lead lives centered on politics, the habit theory of explicit morality must use the moral habits important in citizens' ordinary lives to explain the moralizing they apply to politics. Per 14.0, personal morality is a tool for creating and strengthening habits of forgoing narrow, short-term self-interest. Political morality usually favors the same habits useful in personal (nonpolitical) life. This practice doesn't make for intelligent politics, since personal morals, being habits that serve quotidian needs, are often ill-adapted to politics. Some examples will be considered; note, though, that the direction of influence can reverse at those rare historical junctures when masses of people become deeply involved in politics.

Recently, social psychologists have studied the differing political morals of liberals and conservatives. Five bases for political reasoning have emerged: liberals stress values of welfare (or harm avoidance) and fairness; conservatives weight those factors less and add values of loyalty, purity, and respect. Some clues about how these values emerge from ordinary life come from societies where the dominant values belong to the conservative cluster. In these traditional agrarian societies, respect and subordination loom large in most people's lives. Although the demographic correlates of liberal and conservative political thinking haven't been mapped in modern societies, the habit theory predicts that different political moralities are bolstered by different styles of life, which make one moral system or another personally adaptive. Geographical mobility, for instance, may make habits of loyalty less advantageous; that conservatives tend to deride liberals and radicals as "rootless cosmopolitans" bears this out. Another possible connection is that rural life requires more concern with personal cleanliness; hence, habits of purity are stronger.

One of the weird developments in contemporary politics is the huge Tea Party movement within the middle class to reduce the federal deficit, a movement that hates President Obama, more than for any other reason, because he spent a great deal of federal money trying to stimulate the economy. Although polls report that citizens worry more about jobs and the economy than about the deficit, the sheer degree of concern with this technical question of macroeconomics is staggering. This isn't a classic tax revolt: taxes haven't been unusually high. What's weird about caring so much about the deficit is that most economists think austerity will worsen the struggling economy. The concern with frugality is a reflex, a moral habit forged in the personal battle to control the family budget and extended to politics as a means of strengthening the habit by rehearsing it. On the habit theory, you shouldn't expect such a habit to suit government: it arises in personal life and extends to politics as another arena, outside the personal realm, in which to practice frugality.

Another movement, Occupy Wall Street, espouses a morality emphasizing principles of fairness, which conflicts with today's welfarist morality, dominant across the political spectrum, that pursues the total good of all without regard for distribution. The Occupy Movement contends that the distributions of wealth and income are unfair. A certain middle-class sector seems drawn to the movement; some have termed it the lower echelon of the elite. Many supporters are members of guild-like professional associations (but not trade unions). Guild membership fashions daily moral habits whose purpose is to avoid transgressing norms that proscribe unfair competition. This contrasts with the habits useful to the business executive, whose life creates different ethical sensibilities. Functioning in a team and a hierarchy at the same time, executives must show loyalty to superiors. They must resist appetites to sabotage the boss, and they must even take the rap for him. Steve Jobs is called a genius; Jonathan Ive, the real designer of popular Apple products, prospered only because he suppressed his resentment that Jobs stole the credit. This businessman's morality demands intensely loyal partisanship.

Citizens practice their personal moralities in the public sphere because practicing habits useful in their personal lives benefits them personally. But personal moral habits are maladaptive for politics, a reality most obvious when intelligent politics is most necessary. At some point, if the economy fails to recover, moralities adapted to politics will become ascendant—despite the maladaptiveness of political morality for personal life. Then, personal morality will necessarily suffer, as in the case of an intensely political British communist sect where it is said, “You can trust a comrade with your money or your life, but you can’t trust a comrade with your books or your wife.”

Thursday, December 15, 2011

14.0 Why do what you "ought"?—A habit theory of explicit morality

Moral judgments are always false
Ultimate moral judgments are always false: not false the way "Santa Claus exists" is false, but the way "Green grows" is. Such a judgment is false because it is illogical, using concepts outside their range of application. As to "Green grows," only particular things, not properties of things, can grow; analogously, something is "good" only with respect to some purpose. A good hammer is good for hammering; a good move in a game is good for winning; but nothing is simply "good." What is a good man or a good deed? Good for what?

At the turn of the 20th century, G. E. Moore developed Hume's conclusion that what's good or obligatory, what ought to be done, can't be derived from what is. Moore made the reasoning behind Hume's discovery intuitive and showed that moral claimants commit a logical confusion, although he didn't regard his argument as refuting moral realism, the objective existence of moral facts. Moore argued that moral judgments, claims about what one ought to do, can't be restated as factual. Take any moral platitude, such as you ought not kill, or you ought to treat others as you want them to treat you: you can always ask the further question of why it is true, but the question has no meaningful answer. No facts can ground moral ultimates, since, if they did, the moral platitude wouldn't be ultimate: it would surrender its ultimacy to whatever moral principle links the platitude to factual truth.

The higher reaches of ethical philosophy (meta-ethics) preoccupy themselves with finding a naturalistic response to Moore's demonstration. One proposed solution is to identify morality with a purportedly innate moral orientation, but this answer doesn't rebut Moore; you can still ask why we ought to do what our instinctual impulses demand. This holds regardless of whether the innate morality is conceived as sparse (for example, starvation is bad), all-encompassing (human flourishing is good), or abstract (whatever complex function our brains "compute" in moral judgments). What we tend to do is no moral argument for what we ought to do; what we inevitably do is even more obviously irrelevant to the moral question.

Explicit morality is a tool for forming habits
An unrecognized problem with rejecting the existence of moral facts conduces to the overwhelming intellectual resistance to Moore's almost obvious conclusion. If moral judgments are unnatural, false precisely in that they can neither imply nor be implied by facts, how can moral beliefs play any role in directing behavior? People seem to accept moral realism because they think morality plays a role in their natural lives (they donate to charity because they ought to), but why would someone do something merely because he thinks he ought to? The apparent answer is that he's hypnotized by language: having learned to do B to get A, he does B when he "ought to," failing to notice that he has misused "ought" by omitting the context for B's efficacy, previously set by A. That error would leave him without any way to decide how hard to strive for B. If you do B because you should, B being a means to A, the effort you devote to B and the sacrifices you endure for it depend on how much you want A. If you do B simply because you ought to, how hard do you work at B? How much do you donate to charity because it's simply what you ought to do? That people make these decisions suggests they have some way to decide how much weight to give morality. A paradox then arises: the moral judgment gains its force from seeming like an instrumental judgment, yet it lacks just what makes apportioning effort possible. Moral judgments must serve some natural, directive function; even moral hypocrisy works only because morality can have some directive effect, which must therefore be reconciled with rejecting moral facts.

The perplexity is rooted in a bias favoring belief and desire over habit in explaining behavior. In its basic function, explicit morality is a tool for using force of habit to resist the temptation of narrow self-interest. Consider a typical temptation: students in a packed room taking a multiple-choice test; one student peeks at his neighbor's answers, another doesn't. Or friends tell one another "true" stories; one embellishes the facts, another doesn't. Much ethical behavior is automatic: often, people will avoid cheating or will tell the truth without any thought as to the options. If you want to be a person whose practical morality excludes cheating or telling false stories, you are best off forming the habit. Deciding to take a short-term loss is hard, energy consuming, and unpleasant, and it becomes harder, more energy consuming, and more unpleasant the more often you must decide. Honest people, whatever the lengths and limits of their honesty, are people who have made a habit of honesty. Their honesty is the habit of honesty. The terms of your explicit morality define the kind of person you want to become, the choice itself being without moral foundation. (Which is not to say it is "freely" chosen.)

Different moral strokes for different moral folks
Regardless of its content, morality takes different forms. Explicit morality can consist of specific commands, usually negative, such as the Ten Commandments (deontology); it can consist of general goals, such as create the greatest happiness or welfare (consequentialism); or it can consist of virtue prescriptions, such as wisdom, honesty, and generosity (virtue ethics). Given that explicit morality is a species-specific self-control method, not in any sense a set of truths, we can ask what form of morality most effectively serves that purpose. Most people’s explicit morality contains a mixture of these forms. Someone might apply deontology to serious criminal acts, consequentialism to resolving conflicts, and virtue ethics to personal decisions. The advantage of a unitary system is avoiding uncertainty around the edges, from which the agent may suffer both longer decision time and more numerous opportunistic, rationalized judgments.

Which form of explicit morality should dominate to best realize morality’s function depends on one’s central life ambitions. One surrounded with self-endangering temptations to break the law might benefit from deontology; one whose life is involved in balancing the conflicting demands of others—say, a politician, at least of the conventional sort—may benefit from strengthening his consequentialist tendencies; one oriented toward a largely internalized standard of excellence—an academic or, even more so, an artist—may be served best by virtue ethics.

The personal cost of moral realism is inflexibility in choice of moral framework. The inculcation of deontology often accompanies social oppression, one of the reasons religion can serve as the “opium of the people.” Pressures stifling intellectuals may be imbued with consequentialism. Wage earners indoctrinated in virtue ethics may seek to become model employees, despite better serving their greater interest with a morality focused on consequences. Since explicit morality is a tool, as with other tools, form follows function.
