Theory on framework issues

Monday, April 15, 2013

11.5. Why do we confuse belief and opinion?: A construal-level-theory analysis. THE CONFUSION BETWEEN BELIEF AND OPINION AND THE NATURES OF FANATICISM AND PHILISTINISM. PART 6.

Why do we confuse the epistemic attitudes opinion and belief, which—to serve the distinct functions of deliberation and action—should be based, respectively, on our own thinking and on that of our epistemic superiors and peers? We confuse them because we’re prone to see our own beliefs as being more like original opinions than they are, so we forgo the distinction and treat the two attitudes identically, whether as opinion or as belief. Construal-level theory provides the explanatory concepts. We use a distinctively global way of thinking—abstract construal or far-mode—when we contemplate the future and the psychologically distant; and we use a distinctively narrow way of thinking—concrete construal or near-mode—when we act in the present and on the psychologically near. Appraisals of belief, unlike appraisals of opinion, result from abstract construal; but when we think of our own (hence psychologically near) beliefs, we construe concretely, eliminating construal level as the cue that would distinguish belief from opinion and let us assess each on its proper grounds.

Belief is more psychologically distant than opinion because one’s own person is near and others’ are distant, and belief incorporates (averages) others’ opinions, which are disregarded in rational opinion formation. Construal-level theory would predict that we perceive others as acting on their beliefs and ourselves as acting on our opinions; that is, we see ourselves as engaged in deliberation while we see our counterparts engaged in action. A spiral ensues in which parties to deliberation misperceive each other as advancing an agenda rather than deliberating in good faith, forcing each to reciprocate, since the party seen as engaged in action gains control by benefiting from the distraction.

Beliefs and opinions are different entities, not just different functions that entities serve. We typically ascribe beliefs to others to describe and predict their conduct. Beliefs are far-mode constructions that must be grounded in an inbuilt template—since belief ascription is humanly universal—an idealization, which reality only approximates. If you believe that “She’s telling the truth,” your actions will comport with her truthfulness, but only to a point. Belief is a matter of degree, based on how closely the template and reality match.

Belief is a primitive intuition regarding others, but applying the concept of belief to oneself doesn’t come naturally. Ferreting out the contours of unarticulated belief is what gives insight-based psychotherapy its power. Although the thinking comes easily that another agent is deceiving himself about what he really believes—others’ beliefs proven by behavior more than words—the agent himself often rejects belief ascriptions contradicting the words he tells himself. Those words are usually his opinions: our ordinary unwillingness to attribute an opinion to someone who can’t express it shows that opinions are closely tied to particular words. Opinion is the construct more suitable for possible neurobiological reduction, whereas belief is a family-relations concept, not a sharply delineated entity.

Friday, March 29, 2013

20.0. Buridan’s ass and the psychological origins of objective probability

The medieval philosopher Buridan reportedly constructed a thought experiment to support his view that human behavior is determined rather than “free”—hence that rational agents can’t choose between two equally good alternatives. In the Buridan’s Ass Paradox, an ass finds itself between two equal, equidistant bales of hay, noticed simultaneously; the bales’ distance and size are the only variables influencing the ass’s behavior. Under these idealized conditions, the ass must starve, its predicament indistinguishable from that of a physical object suspended between opposite forces, such as a planet that neither falls into the sun nor escapes into outer space. (Since the ass served Buridan as a metaphor for the human agent, in what follows I speak of “ass” and “agent” interchangeably.)

Computer scientist Leslie Lamport formalized the paradox as “Buridan’s Principle,” which states that the ass will starve if it is situated in a range of possibilities including midpoints where two opposing forces are equal and it must choose within a sufficiently short time span. We assume, based on a principle of physical continuity, that the larger one bale of hay is compared to the other, the faster the ass can decide. Since this holds on the left and on the right, at the midpoint, where the bales are equal, symmetry requires an infinite decision time. Conclusion: within some range of bale comparisons, the ass will require a decision time greater than any given bounded time interval. (For rigorous treatment, see Buridan’s Principle (1984).)
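The continuity argument can be made vivid with a toy model of my own devising (an illustration, not Lamport’s formalism): a noiseless accumulator in which evidence for the larger bale builds at a rate proportional to the size difference. Decision time then grows without bound as the bales approach equality, and at exact equality no decision ever occurs.

```python
def decision_time(left, right, threshold=1.0, dt=0.001, max_time=1000.0):
    """Toy noiseless accumulator: evidence drifts toward the larger bale
    at a rate equal to the size difference; the ass commits when the
    accumulated evidence crosses the threshold."""
    evidence, t = 0.0, 0.0
    drift = right - left  # net pull toward the right bale
    while abs(evidence) < threshold:
        t += dt
        if t > max_time:
            return None  # the Buridan regime: no decision in bounded time
        evidence += drift * dt
    return t

# As the size difference shrinks tenfold, decision time grows roughly tenfold.
for diff in (1.0, 0.1, 0.01):
    print(diff, round(decision_time(1.0, 1.0 + diff), 3))
print(decision_time(1.0, 1.0))  # equal bales: no decision (starvation)
```

The model is, of course, a caricature (real decisions are noisy), but the qualitative point survives: under continuity, some inputs force arbitrarily long decision times.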

Buridan’s Principle is counterintuitive, as Lamport discovered when he first tried to publish. Among the objections to Buridan’s Principle summarized by Lamport, the main one provides an insight into the source of the mind-projection fallacy, which treats probability as a feature of the world. The most common objection is that when the agent can’t decide, it may use a default metarule. Lamport points out that this substitutes another decision subject to the same limits: the agent must decide that it can’t decide. My point differs from Lamport’s, who proves that binary decisions in the face of continuous inputs are unavoidable and that, with minimal assumptions, they preclude deciding in bounded time; I draw a stronger conclusion: no decision is substitutable when you adhere strictly to the problem’s conditions, which specify that the agent be equally balanced between the options. Any inclination to substitute a different decision is a bias toward making the decision that the substitute decision entails. In the simplest variant, the ass may use the rule: turn left when you can’t decide, potentially trapping it in the limbo of deciding whether it can’t decide. And if the ass has a metarule resolving conflicts in favor of the left, it has an extraneous bias.

Lamport’s analysis discerns a kind of physical law; mine elucidates the origins of the mind-projection fallacy. What’s psychologically telling is that the most common metarule is to decide at random. But if by random we mean only apparently random, the strategy still doesn’t free the ass from its straitjacket. An agent that flips a coin is, in fact, biased toward whatever the coin will dictate (bias, here, meaning an inclination to use means causally connected with a certain outcome), for the coin flip’s apparent randomness is due only to our ignorance of microconditions; only truly random responding would allow the agent to circumvent the paradox’s conditions. The theory that the agent might use a random strategy expresses the intuition that the agent could turn either way. It seems to offer a route by which the opposites of functioning according to physical law and acting “freely” in perceived self-interest are reconciled.

This false reconciliation comes through confusing two kinds of symmetry: the epistemic symmetry of “chance” events and the dynamic symmetry in the Buridan’s ass paradox. If you flip a coin, the symmetry of the coin (along with your lack of control over the flip) is what makes your reasons for preferring heads and tails equivalent, justifying assigning each the same probability. We encounter another symmetry with Buridan’s ass, where we also have the same reason to think the ass will turn in either direction. Since the intuition of “free will” precludes impossible decisions, we construe our epistemic uncertainty as describing a decision that’s possible but inherently uncertain.

When we conceive of the ass as a purely physical process subject to two opposite forces (which, of course, it is), then it’s obvious that the ass can be “stuck.” What miscues intuition is that the ass need not be confined to one decision rule. But if by hypothesis it is confined to one rule, the rule may preclude decision. This hypothetical is made relevant by the necessity of there being some ultimate decision rule.

The intuitive physics of an agent that can’t get stuck entails: a) two equal forces act on an object producing an equilibrium; b) without breaking the equilibrium, an additional natural law is added specifying that the ass will turn. Rather than conclude this is impossible, intuition “resolves” the contradiction through conceiving that the ass will go in each direction half the time: the probability of either course is deemed .5. Confusion of kinds of symmetry, fueled by the intuition of free will, makes Buridan’s Principle counterintuitive and objective probabilities intuitive.

How do we know that reality can’t be like this intuitive physics? We know because realizing a and b would mean that the physical forces involved don’t vary continuously. It would make an exception, a kind of singularity, of the midpoint.  

Thursday, March 7, 2013

14.2.1. The habit theory of morality, moral influence, and moral evolution

Contrasting with all forms of moral realism, the habit theory of morality recognizes no “terminal moral values,” since it holds that transgression of an agent’s principles of integrity is necessary to allow their adjustment to circumstances. By conceiving moral principles as principles of integrity (prosocial habits serving as indispensable self-control devices in a psychological economy where willpower is an exceedingly scarce resource), it uniquely explains—without recourse to group selectionism—how humans could evolve a group-minded morality.

Group selectionism is the minority view among evolutionists that natural selection in humans occurs in the manner of eusocial species, at the level of groups, not just genes. Eusocial species comprise primarily the social insects, whose hives’ genetic commonality permits group selection, which in their case—unlike the human—reduces to the gene level. Social psychologist Jonathan Haidt in his recent book The Righteous Mind (2012) frames the case for human group selection with the aphorism “Humans are 90% chimpanzee and 10% bee.” Haidt observes that most moral arguments are hypocritical, aiming to impress or control others, agents often ignoring standards when they can avoid punishment for transgressions. Yet Haidt acknowledges that humans occasionally behave selflessly, as when a soldier takes huge personal risks for his fellows or zealots lose themselves in moral or political causes. Haidt thinks these phenomena inexplicable at the gene or individual level because such altruism would subject agents to strongly adverse selection pressures, since it fails to serve their individual interests. Group selectionism is subject to a standard objection: inevitable exploitation by free riding, which group-selectionist theory must contain a mechanism to punish. Human societies curtail free riding by social approval and disapproval, including material rewards and punishments.

According to Haidt (and other strict-adaptationist theorists), the proclivity to reward and punish transgression against group interest must arise through group selection because otherwise approval and disapproval meted out in the group interest would be another form of self-sacrifice. The habit theory of morality treats moral approval and disapproval as expressing the same habit set used for self-control, illustrated by the habit theory of civic morality: U.S. citizens practice habits of frugality in their personal lives by demanding the government cut spending. The same equivalence describes moral suasion directed toward individuals.

Ironically, moral hypocrisy provides evidence that self-control and moral suasion exercise a unified moral habit. If hypocritical demands are seen as purely deceptive, it’s hard to see how they could serve as a costly signal; carrying no costs, moral hypocrisy would have no value as a signal. Hypocrisy has no point if anyone can be a hypocrite cheaply. But if, in engaging in moral suasion, agents rehearse (practice) principles of integrity that they habitually apply to themselves, the cost of demanding more morality than you want to give is becoming more moral than you wish.

The workings become clearer and more plausible with more concreteness about the structure of the moral habits (or principles of integrity), and Alan P. Fiske’s relationship-regulation theory integrates well with the habit theory. According to Fiske’s model, systems of moral principles are activated when their associated social relationships are “constituted,” where the systems of moral principles are Unity, Hierarchy, Equality, and Proportionality. (T.S. Rai and A.P. Fiske, Moral psychology is relationship regulation (2011) Psychological Review, 118: 57–75.) Hierarchical principles, for example, are activated when the appropriate social relations of Authority/Ranking are constituted, so when an agent is involved in an authoritarian relationship, such as between employees and their boss, the corresponding Hierarchical principles of unconditional submission and conditional protection dominate. To restate in habit-theory terms, negotiating hierarchical relationships motivates agents to form habits based on Hierarchical principles. Most importantly, the Hierarchical structure is a coherent whole, including facets involving regulation of both self and others. In the habit theory, other-directed morality is a spandrel deriving from the primary adaptive value of self-control.

Monday, February 18, 2013

14.1.1. Utilitarianism twice fails

It seems almost self-evident that (barring foreign subjugation) a government will care about the wants of (some of) its citizens and nothing else: no other object of concern is plausible. If governments concern themselves with the wants of noncitizens, that will be only because citizens desire their well-being. The now platitudinous insight that the only possible basis for government policy is people’s wants can be attributed to utilitarianism, which gets credit in its stronger form for the apparent success of weaker claims. 

Another reasonable claim derives from utilitarianism: citizens’ wants should count equally. This seems only fair in a democracy, where one citizen gets one vote. Few today would deny the principle that public policy should serve the greatest good of the greatest number, which may seem to contradict my claim that no general moral principle governs public policy; but in practice, the consequences of this limited utilitarianism are thin indeed, leaving ample room for ideology. I’ll call this public-policy formula thin utilitarianism: the greatest good for the greatest number of citizens, weighting their welfare equally.


First, I’ll consider whether thin utilitarianism succeeds on its own terms by providing a practical guide to public policy. Second, I’ll examine how this deceptively appealing guide to public policy transmogrifies into the monster of full-blown utilitarianism, a form of moral realism. The first constrains even casual use of thin utilitarianism; the second impugns utilitarianism as a general ethical theory.


1. Non-negotiable conflicts between subagents undermine thin utilitarianism

Although simple economic models attributing conduct to rational self-interest require that agents assign consistent utilities to outcomes, agents are inconsistent. One example of inconsistent utility assignment is the endowment effect, where agents assign more value to property they own than to the same property they don’t own. The inconsistency considered here is stronger than the endowment effect, which we can surmount with effort, as professional traders must do. Despite the endowment effect, there is an answer to how much utility an outcome affords, the endowment effect being a bias, which willpower or habit may neutralize.

The conflict between subagents within a single person, on the other hand, can’t be resolved by means of a common criterion, such as market price, since the two subagents pursue different ends. Which subagent dominates depends on situational and personological factors that elicit one or the other, not on bias. Construal-level theory reveals such a conflict between intrapersonal subagents, near-mode and far-mode: integrated mindsets applied to matters experienced at fine or broad granularities. The modes (or “construal levels”) differ in that far-mode is more future-oriented and principled, near-mode more present-oriented and contextual. Far-mode and near-mode are elicited by the way social choices are made: voting elicits far-mode, market choices near-mode; the utility of a choice depends on construal level.

Take a policy choice: how much wealth should be spent on preventive medicine? There are two basic ways of allocating resources to medical care, political process and the market: socialized medicine is an example of political process, private medicine of the market. Socialized medicine makes allocating funds for medical care a political decision; the market makes it each consumer’s personal choice. When you compare the utility of choices made by political process with those made on the market, you should expect to find that when people choose politically, they use the far-mode thinking encouraged by voting, whereas when they make purchases, they use the near-mode thinking encouraged by the market. The preventive-care expenditure will be higher under socialized medicine because political process elicits far-mode, which is concerned with future health. People will be more miserly with preventive care under private medicine, where the decision to spend is made by consumer choice in near-mode, in which we care more about the present. People favor spending more on preventive care when they vote to tax themselves than when they buy it on the market. Which outcome provides the greater utility—more preventive care or more recreation—is relative to construal level.

The same indeterminacy of utility occurs when comparing decisions made under different political processes, such as local versus central. Local decisions will be near-mode, central decisions far-mode. Assuming socialized medicine, less funding would be available if it were subject to state rather than federal control. Which provides more utility depends on whether the consequences are evaluated in near-mode or far-mode; no thin-utilitarian criterion applies.

Some utilitarians will protest that we should measure experiences rather than wants. The objection misses the argument’s point, which is that utility is relative to mode, a conclusion easiest to see in the public-choice process because there the alternatives may be delimited. If the conclusion that utility depends on construal level holds, the same indeterminacies arise in evaluating experience. That apart, when utilitarianism is applied to public policy, present wants, rather than experienced satisfaction, are the criterion; agents necessarily choose based on present wants, whether on the market or in the political process.

2. Full-blown utilitarianism stands convicted of moral realism

Full-blown utilitarians are necessarily moral realists, but increasingly they deny it. While moral realism is widely recognized as absurd, utilitarianism seems to some an attractive ethical philosophy. For the sake of intellectual respectability, utilitarians can appear to reject anachronistic moral realism while practicing it philosophically.

Full-blown utilitarianism often obscures its differences with thin utilitarianism, which is a questionable doctrine but one in accord with ordinary common sense. It emerges from thin utilitarianism by the misdirection of subjecting ethical premises to the test of simplicity, a test appropriate exclusively to realist theories, because simplicity serves truth. A classic illustration: Aristotle theorized that everything on earth that goes up comes down; Newton set out the theory of gravity, which applies to all objects, not just terrestrial ones, and which predicts that objects can escape the earth’s gravitational field by traveling fast enough. Scientists confidently bet on Newton well before rockets were invented, and their confidence was vastly increased by the simplicity of Newton’s theory, which made correct predictions concerning all objects. Although philosophers have explained the correlation between simplicity and truth in various ways, they generally agree that simplicity signals truth. Unless utilitarians can otherwise justify it, searching for a simple moral theory means searching for a true theory.

The full-blown utilitarian seeks a misplaced simplicity by insisting that all entities that can experience happiness, a much simpler criterion than “current citizens,” serve as the beneficiary reference group—including future generations of humans and even beasts, whose existence depends on policy; thin utilitarianism, by contrast, is a democratic convention, serving only the wants of currently existing citizens. Because they must incorporate future generations into the reference group, utilitarian philosophers have had to accept that using a policy-dependent reference group entails a dilemma in the interpretation of full-blown utilitarianism, with unattractive consequences at both horns, which realize radically different ideals. In one version, you maximize the average utility obtained by the whole population; in the other, you sum the utilities. These interpretations seem almost equally unattractive: the averaging view says that one supremely happy human is better than a billion very happy ones; the adding approach implies that a hundred trillion miserable wretches are better than a billion happy people.
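The two horns can be checked with back-of-the-envelope arithmetic. The utility figures below are purely illustrative inventions, not anything drawn from the utilitarian literature:

```python
# Each scenario: (number of people, identical utility per person).
# The numbers are illustrative stand-ins for "supremely happy,"
# "very happy," and "barely worth living."
scenarios = {
    "one supremely happy person":      (1,      100.0),
    "a billion very happy people":     (10**9,   10.0),
    "100 trillion miserable wretches": (10**14,   0.001),
}

def average_utility(count, per_person):
    return per_person  # everyone is identical, so the average is the per-person figure

def total_utility(count, per_person):
    return count * per_person

for name, (n, u) in scenarios.items():
    print(f"{name}: average={average_utility(n, u)}, total={total_utility(n, u):.0f}")
```

The averaging view ranks the lone ecstatic person first, while the summing view ranks the hundred trillion wretches first (10**14 × 0.001 = 10**11 total utils against 10**10 for the billion happy people): each horn reverses the other’s verdict.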

To apply a utilitarian standard to scenarios so distant from thin utilitarianism, accepting their consequences because of simplicity’s demands, is to treat moral premises as truths and to practice moral realism, despite contrary self-description. Those agreeing that moral realism is impossible must reject full-blown utilitarianism.

Friday, January 25, 2013

19.2. Infinitesimals: Another argument against actual infinite sets

Argument
My argument from the incoherence of actually existing infinitesimals has the following structure:

1. Infinitesimal quantities can’t exist;
2. If actual infinities can exist, actual infinitesimals must exist;
3. Therefore, actual infinities can’t exist.

Although Cantor, who invented the mathematics of transfinite numbers, rejected infinitesimals, mathematicians have continued to develop analyses based on them, as mathematically legitimate as transfinite numbers; but few philosophers try to justify actual infinitesimals, which have some of the characteristics of zero and some of positive numbers. When you add an infinitesimal to a real number, it’s like adding zero. But when you multiply an infinitesimal by infinity, you sometimes get a finite quantity: the points on a line are of infinitesimal dimension, in that they occupy no space (as if they were of zero magnitude), yet compose lines finite in extent.

Few advocate actual infinitesimals because an actually existing infinitesimal is indistinguishable from zero. For however small a quantity you choose, it’s obvious that you can make it yet smaller. The role of zero as a boundary accounts for why it’s obvious you can always reduce a quantity. If I deny you can, you reply that since you can reduce it to zero and the function is continuous, you necessarily can reduce any given quantity—precluding actual infinitesimals. When I raise the same argument about an infinite set, you can’t reply that you can always make the set bigger; if I say add an element, you reply that the sets are still the same size (cardinality). The boundary zero imposes on infinitesimals is the counterpoint to the openness of infinity, but the incoherence of actual infinitesimals suggests that actual infinity is similarly infirm.

Can more be said to establish that the conclusion about actual infinitesimal quantities also applies to actual infinite quantities? Consider again the points on a 3-inch line segment. If there are infinitely many, then each must be infinitesimal. Since there are no actual infinitesimals, there are no actual infinities of points.

But this conclusion depends on the actual infinity being embedded in a finite quantity—although, as will be seen, rejecting bounded infinities alone yields considerable metaphysical mileage. For boundless infinities, consider the number of quarks in a supposed universe containing infinitely many. Form the ratio between the number of quarks in our galaxy and the infinite number of quarks in the universe. The ratio can’t be zero, for then even infinitely many galaxies would form a null proportion of the universal total; nor can it be any real number, for then sufficiently many galaxies would add up to more than the total universe. The ratio must be infinitesimal. Since infinitesimals don’t exist, neither do unbounded infinities (hence infinite quantities in general, these being either bounded or unbounded).
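The trilemma behind the ratio argument can be set out schematically (the notation is mine, not the post’s):

```latex
\[
  r \;=\; \frac{N_{\mathrm{galaxy}}}{N_{\mathrm{universe}}},
  \qquad N_{\mathrm{universe}} \text{ infinite.}
\]
\begin{itemize}
  \item If $r = 0$: even infinitely many galaxy-sized parts sum to a null
        proportion of the whole, yet the galaxies exhaust the universe.
  \item If $r = \epsilon$ for some real $\epsilon > 0$: any
        $n > 1/\epsilon$ galaxies give $n\epsilon > 1$, i.e., more quarks
        than the universe contains.
  \item Hence $r$ is neither zero nor any positive real: it could only be
        an infinitesimal.
\end{itemize}
```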

Infinitesimals and Zeno’s paradox
Rejecting actually existing infinities is what really resolves Zeno’s paradox, and it resolves it by way of finding that infinitesimals don’t exist. Zeno’s paradox, perhaps the most intriguing logical puzzle in philosophy, purports to show that motion is impossible. In the version I’ll use, the paradox analyzes my walk from the middle of the room to the wall as decomposable into an infinite series of walks, each reducing the remaining distance by one-half. The paradox posits that completing an infinite series is self-contradictory: infinite means uncompletable. I can never reach the wall, but the same logic applies to any distance; hence, motion is proven impossible.

The standard view holds that the invention of the integral calculus completely resolved the paradox by refuting the premise that an infinite series can’t be completed. Mathematically, the infinite series of times actually does sum to a finite value, which equals the time required to walk the distance; Zeno’s deficiency is pronounced to be that the mathematics of infinite series was yet to be invented. But the answer only shows that (apparent) motion is mathematically tractable; it doesn’t show how motion can occur. Mathematical tractability comes at the expense of logical rigor because it is achieved by ignoring the distinction between exclusive and inclusive limits. When I stroll to the wall, the wall represents an inclusive limit—I actually reach the wall. When I integrate the series created by adding half the remaining distance, I only approach the limit equated with the wall. Calculus can be developed in terms of infinitesimals, and in those terms, the series comes infinitesimally close to the limit; in this context, we treat the infinitesimal as if it were zero. As we’ve seen, actual infinity and infinitesimals are inseparable, certainly where, as here, the actual infinity is bounded. The calculus solves the paradox only if actual infinitesimals exist—but they don’t.
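The gap between approaching and reaching is visible in the partial sums of Zeno’s series (a simple numerical illustration):

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ...: every finite partial sum falls
# short of 1; the calculus identifies the *limit* of the sequence with
# the sum, which is where the exclusive/inclusive distinction is elided.
def zeno_partial_sum(n_terms):
    return sum(0.5 ** k for k in range(1, n_terms + 1))

for n in (1, 2, 10, 50):
    print(n, zeno_partial_sum(n))
```

After fifty sub-walks the shortfall is 2**-50, minute but still strictly positive; no finite stage of the series “reaches the wall.”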

Zeno’s misdirection can now be reconceived as—while correctly denying the existence of actual infinity—falsely affirming the existence of its counterpart, the infinitesimal. The paradox assumes that while I’m uninterruptedly walking to the wall, I occupy a series of infinitesimally small points in space and time, such that I am at a point at a specific time the same as if I had stopped.

Although the objection to analyzing motion in Zeno’s manner was apparently raised as early as Aristotle, the calculus seems to have obscured the metaphysical project more than illuminated it. Logician Graham Priest (Beyond the Limits of Thought (2003)) argues that Zeno’s paradox shows that actual infinities can exist, by the following thought experiment. Priest asks that you imagine that, rather than walking continuously to the wall, I stop for two seconds at each halfway point. Priest claims the series would then complete, but his argument shows that he doesn’t understand that the paradox depends on the points occupied being infinitesimal. Despite the early recognition that (what we now call) infinitesimals are at the root of the paradox, philosophers today don’t always grasp the correct metaphysical analysis.

Distinguishing actual and potential infinities
Recognizing that infinitesimals are mathematical fictions solidifies the distinction between actual and potential infinity. The reason that mathematical infinities are not just consistent but are useful is that potential infinities can exist. Zeno’s paradox conceives motion as an actual infinity of sub-trips, but, in reality, all that can be shown is that the sub-trips are potentially infinite. There’s no limit to how many times you can subdivide the path, but traversing it doesn’t automatically subdivide it infinitely, which result would require that there be infinitesimal quantities. This understanding reinforces the point about dubious physical theories that posit an infinity of worlds. It’s been argued that some versions of the many-worlds interpretation of quantum mechanics that invoke an uncountable infinity of worlds don't require actual infinity any more than does the existence of a line segment, which can be decomposed into uncountably many segments, but an infinite plurality of worlds does not avoid actual infinity. We exist in one of those worlds. Many worlds, unlike infinitesimals and the conceptual line segments employing them, must be conceived as actually existing.

[Edit September 15, 2013.] Corrected claim that many-worlds theories of quantum mechanics posit an infinity of worlds. Some many-worlds theories do, and some don't. This argument applies only to those versions positing infinite worlds.

Monday, January 21, 2013

19.1. The meaning of “existence”: Lessons from infinity

Based on 19.0. Can infinite quantities exist?

The topic is the concept of existence, not its fact—not why there's something rather than nothing—but the bare concept brings its own austere delights. Philosophical problems arise from our conflicting intuitions, but “existence” is a primitive element of thought because our intuitions of it are so robust and reliable. Of course, we disagree about whether certain particulars (such as Moses) have existed and even about whether some general kinds (such as the real numbers) exist, but disputes don’t concern the concept of existence itself. If Moses’s existence poses any conceptual problem, it concerns what counts as being him, not what counts as existence. Adult readers never seriously maintain that fictitious characters exist; they disagree about whether a given character is fictitious. Even the question of the existential status of numbers is a question about numbers rather than about existence. As will be seen, sometimes philosophers wrongly construe these disputes as being about existence.

When essay 19.0 asked “Can infinite quantities exist?” existence’s meaning wasn’t in play—infinity’s was. Existence is well suited to the role of primitive concept in philosophy because it is so unproblematic, but its unproblematic nature can be thought of as a kind of problem, in that we want to know why this concept is uniquely unproblematic. We would at least like to be able to say something more about it than merely that it’s primitive; but in philosophy, we acquire knowledge by solving problems, and existence provides none but the unhelpful problem of its being unproblematic. The problem of infinity provides, in the end, some purchase on the concept of existence, which concept I assumed in dealing with infinity.

In one argument against actual infinity, I proposed as conceptually possible that separate things might be distinguishable only in respect of their being separate things. If we assume that infinite sets can exist, the implication is the contradiction that an infinite set and its successor—when still another point pops into existence—are the same set, because you can’t distinguish them. (In technical terms, the only information that could distinguish the set and its successor, given that their members are brutely distinguishable, is their cardinality, which is the same—countably infinite—for each set.)

What’s interesting is the role of existence, which imposes an additional constraint on concepts besides the internal consistency imposed by the mathematics of sets. Whereas we are unable to distinguish existing points, we are able—in a manner of speaking—to distinguish points that exist from those that don’t exist. While no proper subsets are possible for existing brutely distinguishable points, the distinction within the abstract set of points between “those” that exist and “those” that don’t exist allows us to extend the successor set by moving the boundary, resulting in contradiction.

If finitude is a condition for existence, we’ve learned something new about the concept of existence. Its meaning is imbued with finitude, with definite quantity. Everything that exists does so in some definite quantity. Existence is that property of conceptual referents such that they necessarily have some definite quantity.

Existence is primitive because almost everyone knows the term and can apply it to the extent they understand what they’re applying it to. The alternative to primitive existence is primitive sensation, as when Descartes derived his existence from his “thinking.” But sensationalism is incoherent; “experiences” inherently lacking in properties (“ineffable”) are conceived as having properties (“qualia”). The heirs of extreme logical empiricism, from Rudolf Carnap to David Lewis, have challenged existence’s primitiveness. Carnap defined existence by the place of concepts in a fruitful theory. Lewis applies this positivist maxim to conclude that all possible worlds exist. Lewis isn’t impelled by an independent theory of logical existence, such as a Platonic theory that posits actually realized idealizations. Rather, the usefulness of possible worlds in logic requires their acceptance, according to Lewis, because that’s all that we mean by “exists.” Lewis is driven by this theory of existence to require infinitely many existing possible worlds, which disqualifies it on other grounds. But the grounds aren’t separate. When you don’t apply the constraints of existence because you deny their intuitive force, you lose just that constraint imposing finitude. The incoherence of sensationalism and actual infinitism argues for a metaphysics upholding the primacy of common-sense existence.

Tuesday, January 1, 2013

19.0. Can infinite quantities exist?

1. The actuality of infinity is a paramount metaphysical issue.

Some major issues in science and philosophy demand taking a position on whether there can be an infinite number of things or an infinite amount of something. Infinity’s most obvious scientific relevance is to cosmology, where the question of whether the universe is finite or infinite looms large. But infinities are invoked in various physical theories, and they seem often to occur in dubious ones. In quantum mechanics, an (uncountable) infinity of worlds is invoked by the “many worlds interpretation,” and anthropic explanations often invoke an actual infinity of universes, which may themselves be infinite. These applications make actual infinite sets a paramount metaphysical problem—if the problem is indeed metaphysical—but the orthodox view is that, being empirical, it isn’t metaphysical at all. To view infinity as a purely empirical matter is the modern view; we’ve learned not to place excessive weight on purely conceptual reasoning. But whether conceptual reasoning can definitively settle the matter differs from whether the matter is fundamentally conceptual.

Two developments have discouraged the metaphysical exploration of actually existing infinities: the mathematical analysis of infinity and the proffer of crank arguments against infinity in the service of retrograde causes. Although some marginal schools of mathematics reject Cantor’s investigation of transfinite numbers, I will assume the concept of infinity itself is consistent. My analysis pertains not to the concept of infinity as such but to the actual realization of infinity. Actual infinity’s main detractor is a Christian fundamentalist crank named William Lane Craig, whose critique of infinity, serving theist first-cause arguments, has made infinity eliminativism intellectually disreputable. Craig’s arguments merely appeal to the strangeness of infinity’s manifestations, not to the incoherence of its realization. The standard arguments against infinity, which predate Cantor, have been well refuted, and I leave the mathematical critique of infinity to the mathematicians, who are mostly satisfied. (See Graham Oppy, Philosophical Perspectives on Infinity (2006).)

2. The principle of the identity of indistinguishables applies to physics and to actual sets, not to everything conceivable.

My novel arguments are based on a revision of a metaphysical principle called the identity of indistinguishables, which holds that two separate things can’t have exactly the same properties. Things are constituted by their properties; if two things have exactly the same properties, nothing remains to make them different from one another. Physical objects do seem to conform to the identity of indistinguishables because physical objects are individuated by their positions in space and time, which are properties, but this is a physical rather than a metaphysical principle. Conceptually, brute distinguishability, that is, differing from all other things simply in being different, is a property, although it provides us with no basis for identifying one thing and not another. There may be no way to use such a property in any physical theory, and we may never learn of such a property and thus never have reason to believe it instantiated, but the property seems conceptually possible.

But the identity of indistinguishables does apply to sets of existing things (actual sets): indistinguishable actual sets are identical. Properties determine actual sets, so you can’t define a proper subset of brutely distinguishable things.
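The set-theoretic counterpart of this claim is the axiom of extensionality: sets with the same members are one set. A minimal Python sketch, using `frozenset` as a stand-in for an actual set determined entirely by its members:

```python
# Extensionality: a set is determined entirely by its members,
# so two "indistinguishable" sets are in fact identical.
a = frozenset({1, 2, 3})
b = frozenset({3, 2, 1})  # same members, differently presented

assert a == b              # indistinguishable sets are equal...
assert len({a, b}) == 1    # ...and collapse to a single set

# By contrast, brutely distinguishable members would offer no property
# by which to carve out a proper subset, as the essay argues.
```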

3. Arguments against actual infinite sets.

A. Argument based on brute distinguishability.

To show that the existence of an actual infinite set leads to contradiction, assume the existence of an infinite set of brutely distinguishable points. Now another point pops into existence. The former and latter sets are indistinguishable, yet they aren’t identical. The proviso that the points themselves are indistinguishable allows the sets to be different yet indistinguishable when they’re infinite, proving they can’t be infinite.

B. Argument based on probability as limiting relative frequency.

The previous argument depends on the coherence of brute distinguishability. The following probability argument depends on different intuitions. Probabilities can be treated as idealizations at infinite limits. If you toss a coin, it will land heads roughly 50% of the time, and it gets closer to exactly 50% as the number of tosses “approaches infinity.” But if there can actually be an infinite number of tosses, contradiction arises. Consider the possibility that in an infinite universe or an infinite number of universes, infinitely many coin tosses actually occur. The frequency of heads and of tails is then infinite, so the relative frequency is undefined. Furthermore, the frequency of rolling a 1 on a die equals the frequency of rolling 2 through 6: both are (countably) infinite. But when there are infinitely many occurrences, relative frequency should equal the probability approached in a finite world. Therefore, infinite quantities don’t exist.
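The limiting-frequency idealization can be illustrated numerically. This sketch is necessarily finite, so it can only show the convergence side of the argument, not the infinite case where the ratio becomes undefined; it assumes a fair coin:

```python
import random

random.seed(42)  # reproducible runs

def relative_frequency(n_tosses):
    """Fraction of heads in n_tosses fair-coin tosses."""
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# At finite n, the relative frequency closes in on 1/2...
for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} tosses: {relative_frequency(n):.4f}")

# ...but in an actually infinite run, heads and tails are each
# countably infinite, and the ratio of infinite frequencies is
# undefined -- the contradiction the essay draws.
```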

4. The nonexistence of actually realized infinite sets and the principle of the identity of indistinguishable sets together imply the Gold model of the cosmos.

Before applying the conclusion that actual infinite sets can’t exist, together with the principle of the identity of indistinguishables, to a fundamental problem of cosmology, caveats are in order. The argument uses only the most general and well-established physical conclusions and is oblivious to physical detail; not being competent in physics, I must abstain even from assessing the weight the philosophical analysis that follows should carry; it may be very slight. While the cosmological model I propose isn’t original, the argument, as far as I can tell, is novel. I am not proposing a physical theory so much as suggesting metaphysical considerations that might bear on physics; it is for physicists to say how weighty these considerations are in light of actual physical data and theory.

The cosmological theory is the Gold model of the universe, once favored by Albert Einstein, according to which the universe undergoes a perpetual cycle of expansion, contraction, and re-expansion. I assume a deterministic universe, such that the cycles are exactly identical: any contraction is thus indistinguishable from any other, and any expansion is indistinguishable from any other. Since there is no room in physics for brute distinguishability, and since no common spatio-temporal framework allows the cycles to be distinguished, they are identical. Thus, although the expansion and contraction process is perpetual, it is also finite; in fact, its number is unity.

The Gold universe—alone, with the possible exception of the Hawking universe—avoids the dilemma of the realization of infinite sets or origination ex nihilo.

(Edited July 25, 2013: Clarified in Section 2, last paragraph and other places, that the identity of indistinguishables applies to actual sets.)

Tuesday, December 18, 2012

17.1. Societal implications of ego-depletion theory and construal-level theory: Ignored transaction costs and proliferation of electoral events (Part 2 of "Philosophical and political implications of ego-depletion theory")

From ego-depletion theory, we should conclude that making choices is far costlier than common sense tells us, and this conclusion is the source of the theory’s societal implications. Nobody expected that decision fatigue at the day’s end would cause judges to deny almost all petitions they heard. According to common sense, the main cost of decision-making is the time it consumes, but ego-depletion research shows that there’s a much greater unnoticed cost:

Decisions become remarkably harder and less competent with each succeeding decision.

Two societal implications are that 1) accepting or declining economic transactions is costlier than we think and 2) electing numerous officials curtails democracy.

Ignored transaction costs
The housing mortgage crisis exemplifies the first implication: commentary has failed to take account of the toll imposed on people who want to buy homes, when pseudo-opportunity taxes their willpower. The structure of “opportunity” is central here: an open offer from varied offerors isn’t subject to once-and-for-all decisive rejection. Instead, a potential borrower may have to wrestle with impulse for months, so that finally accepting a loan becomes a desperate response to the constant drain on scarce willpower.

The harm never considered is how much willpower is drained from those who refrained from borrowing, who successfully resisted the impulse to take a home loan; how the drain on their willpower paralyzed them in making other decisions—having been forced to squander their willpower on resisting loans that should never have been offered. Willpower is a scarce resource, and it is far more costly than almost anyone realizes. It’s the great hidden societal cost of market transactions. And while researchers assume willpower is replenished with a night’s sleep and breakfast, I suspect that longer frames also operate—this is the reason we need weekends and vacations.

The faux-democratic proliferation of electoral events
Like proliferating consumer “choices” that kill happiness and productivity while seeming to enhance them, the proliferation of elections has an analogous paradoxical effect on democracy. Since every choice offered diminishes our ability to make choices, elections for judges and dogcatchers or for the multiple offices required under the U.S. federal system weaken democracy by detracting from the effort citizens devote to any electoral contest.

Although dramatically reforming the American political structure is neither feasible nor high priority, it is well to have a vision of what kind of structure is or isn’t effectively democratic. Ego-depletion theory tells us that the fewer offices for which a citizen votes the better, but construal-level theory offers additional standards. It proposes that
“Seeing the forest” and “seeing the trees” involve integrated mental sets, dubbed far-mode and near-mode because distance of time, place, and person makes us think in terms of forests and nearness in these respects makes us think in terms of trees.
Outcomes will depend on whether the decision is construed in far-mode or near-mode. The theory might be invoked to support a system of checks and balances like the U.S. system, where elections staged at different intervals and over different-sized constituencies induce varying construal levels. At the federal level, elections to the House of Representatives are relatively near-mode, due both to small districts and frequent elections, and presidential elections may be most far-mode, although Senate terms are longer. Near-mode fosters resistance to change, so it is theoretically consistent with construal-level theory that the House has taken so strongly to saying no.

But if the system succeeds in eliciting different construal levels in different government branches, this has come to seem a defect rather than a merit. If government is to deal in broad purposes, far-mode should dominate in formulating policies. If policies are to be implemented intelligently, near-mode should dominate in their local application. How to square this with ego-depletion theory’s moral that the number of contests in which any citizen votes be limited, preferably to a single office? One way to try to accomplish this might be a unicameral parliament with local bureaucracies appointed top down, but this produces an effect opposite to the one intended. Appointments to distant career posts are based on far-mode processes, unlikely to lead to effective near-mode reasoning by the appointees.

Another little-used but in-theory effective means of unifying local government with national government could better secure the appropriate allocation of near and far cognition: indirect election of progressively higher levels of government by local bodies, so that choices are minimized and each delegation is progressively more far-mode. It may be objected that this was part of the defunct scheme originally adopted under the U.S. Constitution, which provided that local government bodies elect U.S. Senators and delegates to the electoral college. But the U.S. Constitutional scheme kept local power from being subordinated to national power by limiting the power of the federal government, whereas in the (unitary rather than federal) system here envisioned, the higher levels dominate the lower despite being selected by them, subordinating near-mode to far-mode while economizing human willpower.

Saturday, December 1, 2012

18.0 Capitalism and socialism express conflicting reciprocity norms: A reinterpretation of Marx’s theory of capitalist decline

Capitalist stagnation
U.S. workers’ wages have stagnated over the last three decades, with state-driven China almost alone internationally in substantially improving popular living standards. While other political economists in Marx’s day had observed a tendency for profit rates—driving production under capitalism—to decline, Karl Marx claimed the decline is inevitable, a claim that forms the conclusion of his three-volume magnum opus, Capital.

Marx’s central argument is counterintuitive but simple. Value consists of labor hours embodied in products. Employers (capitalists) profit by paying laborers for their time, the amount of value paid being less than the amount of socially necessary labor the workers add. With capitalism’s evolution, a declining proportion of the value produced is constituted of labor directly employed and an increasing proportion of labor already concretized in capital goods, since mechanization of production is the fundamental means of increasing economic efficiency, and capital goods contribute to the value of a product to the extent they are consumed in its production. With the increasing organic composition of capital—as proportionately more value is created through capital goods—the rate of profit must fall, since profit is based on exploiting living labor, and the labor already embodied in capital goods has been sold and accounted for.
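In the standard Marxian notation (which the paragraph describes but doesn’t symbolize), the rate of profit is surplus value s over total capital advanced c + v. A hypothetical arithmetic sketch, with illustrative figures rather than Marx’s own, holding the rate of exploitation s/v fixed while the organic composition c/v rises:

```python
def profit_rate(c, v, s):
    """Marxian rate of profit: surplus value s over capital advanced c + v.
    c = constant capital (machinery, materials), v = variable capital (wages)."""
    return s / (c + v)

# Illustrative figures (not Marx's): fix s/v = 1.0 and raise c/v.
v, s = 100, 100
for c in (50, 100, 200, 400):
    print(f"c/v = {c / v:.1f}  ->  profit rate = {profit_rate(c, v, s):.2f}")
# As c/v climbs with s/v fixed, the rate falls: 0.67, 0.50, 0.33, 0.20.
```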

Despite its centrality to Marx’s analysis of why capitalism eventually comes to retard economic progress, the tendency of the rate of profit to decline is far from universally accepted as true even by Marxian commentators. Marxian academics have even questioned it mathematically, but the real issue isn’t the almost-trivial mathematics but its mapping to reality: does the declining Marxian rate of profit entail a declining actual rate of profit?

Conflicting reciprocity norms
That owners of capital (capitalists) profit from a series of “fair” exchanges could be termed the central premise of Capital. Workers exchange their labor time for its value—that is, the laborers’ own price of reproduction. The arrangement is fair under a reciprocity norm according to which commodities trade at their market value. But it is unfair under a reciprocity norm according to which all receive in proportion to their value-producing labor. Although Marx didn’t stress the point, what’s striking is that each antagonist in the historical drama—the social classes workers and capitalists—frames its interests in terms of a simple coherent reciprocity principle, with the difference that the workers favor a ratio derived from production and the capitalists from distribution. (See Alan Page Fiske, Structures of Social Life: The Four Elementary Forms of Human Relations (1991) [“equality matching” and “market pricing,” but Fiske, while discussing Marx, doesn’t link equality matching to the labor theory of value].)

The market’s function
Attacks on the soundness of Marx’s law of the tendency of the rate of profit to decline derive from its seeming impossibility—this, in turn, due to not seeing the connection between “profit” as defined in the theory and in the ledger. We must start from fundamentals. According to Marx, civil society exists to allow human cooperation in the labor process. Civilization is built on accomplishing this by fostering the accumulation of economic resources by a few; capitalism was the form economies came to take at the onset of the industrial revolution. Like other economic systems that followed the agricultural revolution, it arose and became ascendant because of its efficiency in extracting value from labor, but it accomplishes this with a progressive enlargement of the value diverted to augmenting industrial machinery rather than to directly producing more products for consumption, which gradually changes the tasks presented. As the contribution of machinery grows relative to the direct contribution of laborers, the basic economic task besetting society changes from producing value from labor to realizing the value embodied in machinery.

But the capitalist market continues to be a system adapted to extracting value from labor. Insofar as profit represents a gain in value accruing to the capitalist class as a whole, it comes from the value contributed by the laborers. As the production process has progressively less proportionate need for laborers, it becomes harder to profit sufficiently at their expense.

Limits of state action under capitalism
It might be thought that this Marxian profit is a reification. Who’s to say it is the proper abstraction for understanding capitalist motivation, rather than, say, the concept of “interest,” favored by the ultracapitalist “Austrian school”? One response that denies the centrality of Marxian profit (technically, surplus value) is that state action can co-opt the market to new ends. Since the market tends to overproduce capital, adroit government spending might redirect it to produce more consumer goods. This is the essence of Keynes’s policies. The most obvious problem is that unprofitable spending is competitively inefficient and is only sustainable in huge nation states with considerable economic autonomy; otherwise, it may cause a nation’s industries to fail against international competitors, who free-ride on the increased buying power of the local population. International economic competition sets limits on a country’s ability to use Keynesian policies or any policies involving state subsidy. Yet periods aren’t rare when one capitalist power dominates and is subject to diminished international competition. Also, if Keynesian policies are directed to creating positive externalities or “public goods” favoring profitability, their benefit may outweigh their harm to profits.

But insuperable obstacles to using government to redirect the market keep Marxian surplus value a good first approximation to balance-sheet profit. While a government-regulated market is often thought to provide the best of capitalism and socialism, in an important sense it provides the worst of each, in that the state attempts to regulate in ignorance of the facts, a company’s plans being a closely guarded commercial secret. But the more fundamental problem with government-directed capitalism is that it amounts to the government’s adding to some capitalists' profits at the expense of other capitalists. Where political power follows economic power, the political unity of the class of capitalists depends on their shared economic interests. Private property is the means of coordinating individuals into a social class sufficiently unified to legitimize government. This unity depends on allocating wealth according to the dominant reciprocity norm, based on market exchange rates.

The result is that only limited government intervention can please the capitalist class. Rather than contributing to the profits of the class as a whole, government policy must use incentives that advantage parts of the class at the expense of other parts. To create an incentive sufficient to replace Marxian profit with a differently constituted balance-sheet profit would involve huge wealth transfers within the capitalist class, calling the system’s legitimacy into question by undermining its broad support by the dominant social class.

Corrected on January 2, 2013: the organic composition of capital increases rather than declines. The point is purely terminological and the result of my still not grasping why machinery is termed "organic." Thanks to a correspondent, who corrected me.

Friday, November 2, 2012

17.0. Akrasia explained. Part 1 of "Philosophical and political implications of ego-depletion theory"

A recent empirical theory in social psychology solves the problem of weakness of the will (akrasia). Before Roy F. Baumeister developed ego-depletion theory, philosophers and psychologists (besides Freud) hadn’t seen the need for an energy construct to explain the limits of the ego’s ability to exercise control. Ego-depletion theory’s message is that practical rationality consists in allocating the brain’s minuscule energy supply that fuels our capacity to decide.

The problem of akrasia (weakness of will)
Weakness of the will (akrasia) has remained an unsolved philosophical problem since articulated by the ancients. Why do we accord the present moment more importance than the future, when rationality demands an Archimedean impartiality between our “present and future selves”? Why would you judge an action rational, as in your interest all things considered, yet not perform it?

The problem of akrasia is one of reconciliation with our commonsense introspective knowledge that, despite our failings, we clearly can (sometimes) make ourselves do things that we believe are the better choices. We make (some) rational choices and conform our behavior accordingly. What stops us from doing so consistently and biases us for the immediate? Something limits our ability to decide, but only Freud previously formulated that the limitation consists of the leaden weight of decisions recently made.

Attempted Solutions
Decision theory redefines rewardingness to include, as a feature, the time at which utility is experienced. The utility function describes the decreased value of the same object enjoyed later, but it doesn’t render the discounting rational. To the extent the timing of experience is relevant in no other respect, it isn’t rational to discount time. Why does the rational-choice assumption, that we will do what’s best for ourselves, fail so miserably when the rewards happen to occur later?
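The essay leaves the utility function unspecified; a standard textbook form (an assumption here, not the essay’s) is exponential discounting, which records the devaluation of later rewards without justifying it:

```python
def discounted_utility(value, delay, rate=0.10):
    """Exponential discounting: value / (1 + rate) ** delay.
    The 10% per-period rate is an illustrative assumption."""
    return value / (1 + rate) ** delay

# The identical reward, valued from today at increasing delays:
for delay in (0, 1, 5, 10):
    print(f"delay {delay:>2}: worth {discounted_utility(100, delay):6.2f}")

# The function only describes the devaluation of delayed rewards;
# nothing in it shows the devaluation to be rational -- the essay's point.
```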

I can recollect only a single theory of why we discount future selves: Derek Parfit takes the language of “future selves” literally to maintain that they present to us as the same, in principle, as other persons’ selves, with which we identify only according to the degree of their relatedness. The cost of Parfit’s move is the concept of personhood. But while personhood doesn’t deserve to be part of our ontology, it’s a useful fiction over most of our personal transactions, failing conspicuously with regard to time discounting. We should aim to discover why.

Philosophers seem not to have taken notice of what would appear to be a scientific solution to the problem of why somewhat rational beings are so akrasic. The simple empirical answer offered is that we have a tiny daily ration of willpower. Rational beings can overcome the tendency to discount time by exerting willpower, but they can only do it a few times a day. Our rationality is limited by our ability to exercise willpower, which is based on a measurable physical energy source, partly replenished by consuming glucose. The toll decision fatigue takes is shown by judges’ declining performance on real-life cases: favorable decisions fell from about 70% at the beginning of the day to about 10% at its end.

Ego depletion and free will
Why is there resistance to recognizing that decisions become harder the more of them you make? While decision fatigue comports with the introspection that we can make ourselves do some things, it conflicts with our intuition that we can always do what we want. Ego-depletion phenomena present yet another breakdown of the concept of compatibilist free will.

Next essay in series: Implications for the structure of government, welfare economics, and even psychotherapy.

Friday, October 12, 2012

14.4. The deeper solution to the mystery of moralism—Morality and free will are hazardous to your mental health

The complex relationship between Systems 1 and 2 and construal level

The distinction between pre-attentive and focal-attentive mental processes has permeated cognitive psychology for some 35 years. In the past half-decade, another cognitive dichotomy specific to social psychology has emerged: processes of abstract construal (far cognition) versus concrete construal (near cognition). This essay will theorize about the relationship between these dichotomies to clarify further how believing in the existence of free will and in the objective existence of morality can thwart reason by causing you to choose what you don’t want.

The state of the art on pre-attentive and focal-attentive processes is Daniel Kahneman’s book Thinking, Fast and Slow, where he calls pre-attentive processes System 1 and focal-attentive processes System 2. The reification of processes into fictional systems also resembles Freud’s System Cs. (Conscious) and System Pcs. (Preconscious). I’ll adopt the language of System 1 and System 2, but readers can apply their understanding of the preconscious–conscious, pre-attentive–focal-attentive, or automatic–controlled processes dichotomies. They name the same distinction, in which System 1 consists of processes occurring quickly and effortlessly in parallel, outside awareness, and System 2 consists of processes occurring slowly and effortfully in sequence, within awareness, where awareness in this context refers to the contents of working memory rather than raw experience and accompanies System 2 activity.

To integrate Systems 1 and 2 with construal-level theory, we note that System 2—the conscious part of our minds—can perform any of three routines in making a decision about taking some action, such as whether to vote in an election, a good example not just for timeliness but also for linkages to our main concern with morality: voting is a clear example of an action without tangible benefit. The potential voter might:

Case 1. Make a conscious decision to vote based on applying the principle that citizens owe a duty to vote in elections.
Case 2. Decide to be open to the candidates’ substantive positions and vote only if either candidate seems worthy of support.
Case 3. Experience a change of mind between 1 and 2.

The preceding were examples of the three routines System 2 can perform:

Case 1. Make the choice.
Case 2. “Program” System 1 to make the choice based on automatic criteria that don’t require sequential thinking.
Case 3. Interrupt System 1 in the face of anomalies.

When System 2 initiates action, whether it retains the power to decide or passes it to System 1 is the difference between concrete and abstract construal. Case 2 is key to understanding how Systems 1 and 2 work to produce the effects construal-level theory predicts. Keep in mind that the unconscious, automatic System 1 includes not just hardwired patterns but also skilled habits. Meanwhile, System 2 is notoriously “lazy,” unwilling to interrupt System 1, as in Case 3; but despite the perennial biases that plague System 1 when we let it have its way, the highest levels of expertise also occur in System 1.

A delegate System 1 operates with holistic patterns typifying far cognition. This mode is far because we offload distant matter to System 1 but exercise sequential control under System 2 as immediacy looms—although there are many exceptions. It is critical to distinguish far cognition from the lazy failure of System 2 to perform properly in Case 3, as such failure isn’t specific to mode. Far cognition, System 1 acting as delegate for System 2, is a narrower concept than automatic cognition, but far cognition is automatic cognition. Near cognition admits no easy cross-classification.

Belief in free will and moral realism undermine our “fast and frugal heuristics”

The two most important recent books on the cognitive psychology of decision and judgment are Thinking, Fast and Slow by Daniel Kahneman and Gut Feelings: The Intelligence of the Unconscious by Gerd Gigerenzer, and both authors insist on the contrast between their positions, although the conflicts aren’t obvious. Kahneman explains System 1 biases as due to employing mechanisms outside their evolutionary range of usefulness; Gigerenzer describes “fast and frugal heuristics” that sometimes misfire to produce biases. Where these half-empty versus half-full positions on heuristics and biases really differ is in their overall appraisal of near and far processes: Gigerenzer is a far thinker and Kahneman a near thinker, and both are naturally biased toward their preferred modes. Far thought shows more confidence in fast-and-frugal heuristics, since it offloads to System 1, whose province is to employ them.

The fast-and-frugal-heuristics way of thinking helps in understanding the effects of moral realism and free will: they cause System 2 to supplant System 1 in decision-making. When we apply principles of integrity to regulate our conduct, sometimes we do better in far mode, where System 2 offloads the task of determining compliance to System 1. Not if you have a principle of integrity that includes an absolute obligation to vote; then you act as in Case 1: based on a conscious decision. But principles of integrity do not really take this absolute form, an illusion created by moral realism. A principle of integrity flexible enough for actual use might favor voting (based, say, on a general principle embracing an obligation to perform duties) but disfavor it for “lowering the bar” if there’s only a choice between the lesser of evils. The art of objectively applying this principle depends on your honest appraisal of the strength of your commitment to each component virtue, a feat System 2 is incapable of performing; when it can be accomplished, it’s due to System 1’s unconscious skills. Principles of integrity are applied more accurately in far-mode than near-mode. [Hat Tip to Overcoming Bias for these convenient phrases.]

But beliefs in moral realism and free will impel moral actors to apply their principles in near mode, because these beliefs hold that moral conduct results from freely willed acts. I won’t thoroughly defend this premise here, but a thought experiment might carry some persuasive weight. Read the following in near mode, and introspect your emotions:


Sexual predator Jerry Sandusky will serve his time in a minimum-security prison, where he’s allowed groups of visitors five days a week.


Some readers will experience a sense of outrage. Then remind yourself: there’s no free will. If you believe the reminder, your outrage will subside; if you’ve long been a convinced and consistent determinist, you might not need to remind yourself at all. Morality inculpates based on acts of free will: morality and free will are inseparable.

A point I must emphasize because of its novelty: it’s System 1 that ordinarily determines what you want. System 2 doesn’t ordinarily deliberate about the subject directly; it deliberates about relevant facts, but in the end, you can only intuit your volition. What beliefs in moral realism and free will do is nothing less than change the architecture of decision-making. When we practice principles of integrity and internalize them, they and nonmoral considerations co-determine our System 1 judgments; according to moral realism and free will, however, moral good is the product of conscious free choice, so System 2 contrasts its moral opinion with System 1’s intuition and compensates for the difference, usually overcompensating.

Consider again the voter who must weigh the duty to vote against the duty to avoid “lowering the bar” when both candidates promote distasteful or vacuous programs. System 2 can prime and program System 1 by studying the issues, but the multifaceted decision itself is best made by System 1. What happens when System 2 tries to decide? It makes the qualitative judgment that System 1 is biased one way or the other and corrects it, implicating the overcompensation bias, by which conscious attempts to counteract biases usually overcorrect. A voter who thinks correction is needed for a bias toward shirking duty will vote without really wanting to, all things considered; a voter who thinks correction is needed for a bias toward “lowering the bar” will be excessively purist. Whatever standard the voter uses will be taken too far.

Belief in moral realism and free will biases practical reasoning

This essay has presented the third of three ways that belief in objective morality and free will causes people to do other than what they want:

  1. It retards people in adaptively changing their principles of integrity.
  2. It prevents people from questioning their so-called foundations.
  3. It systematically exaggerates the compellingness of moral claims.
