Utilitarianism assumes, asserts, or otherwise relies on conditions that facilitate the successful implementation of its theories. The merits and potential of utilitarianism rely on the satisfactory justification of these assumptions. Perhaps the greatest premise of utilitarianism is that it offers a unifying principle that serves as a (relatively objective) foundation for morality. If this foundation is to stand on its own legitimacy and not merely serve as a veil for underlying deontological or theological assertions, then philosophers must carefully examine each of the following assumptions.

Something Matters

Ethicists in general, and utilitarians in particular, often ask people to change their behavior for some greater purpose that is linked to concepts of right and wrong. Traditional views suggest that right and wrong are defined by or embodied in divine beings. An early examination of this topic is found in Plato's Euthyphro[1], in which Socrates asks whether God loves things because they are good or whether things are good because God loves them. Consequentialists suggest that right and wrong can be evaluated by observing consequences in the physical universe.

In the first case, ethicists who assert that right and wrong exist independently of human existence must rely on revelation or some other process that allows humans to know right and wrong. Some theologians might argue that right and wrong are defined by a god who communicates those ideas to humans, while others might argue that right and wrong exist independently of a god or that the two concepts are one and the same. Deontologists and virtue ethicists likewise might rely on a human faculty that allows discovery of right and wrong (e.g., intuition). In any of these cases, those who argue that right and wrong exist independently of human existence would need to assert the existence of a human faculty capable of identifying them. Contrary to each of these positions is the hypothesis that there is no morality, or that the exercise of power in the absence of conventional morality is itself the best thing (similar to social Darwinism).

Consequentialists and utilitarians have more freedom in defining and understanding concepts of right and wrong. They have the option of constructing ideas of right and wrong based on inferences from empirical assessment of preferences. According to Jeremy Bentham's version of utilitarianism, humans are compelled to direct their behavior according to their individual perception of pleasure and pain (it could be argued that Bentham's intent was to define pleasure and pain as the set of all things that drive behavior). From this he argued that legislators should consider only pleasure and pain when formulating laws (see chp. 3). Although utilitarians have the option to derive their understanding of right and wrong from their understanding of pleasure and pain, it is unclear whether they are compelled to exclude the possibility that right and wrong exist independently of human existence.

Individual-Level Measurement is Possible

The broadest assumption of utilitarian calculus is that measurement of an individual's happiness is possible. This is the assumption that, for a given event, the net effect of positive and negative repercussions that affect one person (or being) can be identified and assigned a value that is adequately precise. The value of some events might be clearly different from zero, as in the case of being tortured or winning a large amount of money. The value of other events might be less clear or perhaps undifferentiable from the value of similar events. Measurement of an individual's happiness could theoretically be performed by the individual in question or by an observer, depending on additional assumptions.

Aggregation is Objective

After an individual's happiness has been calculated, the task of aggregating individual scores into a grand total requires additional assumptions. Jeremy Bentham's version of utilitarianism counts everyone's happiness equally (chp. 1), meaning that one class of people is not given preference over another. Other versions of utilitarianism could propose that one person's utility could be weighted more heavily than another's, although such suggestions are rarely endorsed by utilitarians.

Apart from weighting the scores of each individual, attempts to maximize happiness (in the strictest sense) would rely on a consistent, objectively constant scale by which all happiness is measured and aggregated. If the scale is not objective, then the scale used by one observer could be different from the scale used by another observer, and maximization according to one scale would likely not correspond to the maximum on the other scale. A possible solution to the problem is to recognize the limits of precision in the analysis (i.e., to acknowledge error and perhaps to estimate the degree of that error).
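A minimal numeric sketch (in Python, with entirely invented scores and a hypothetical calibration difference) illustrates the problem described above: when two observers use slightly different scales, the policy that maximizes aggregate happiness on one scale need not maximize it on the other.

```python
def aggregate(scores, weights=None):
    """Sum individual utility scores, optionally weighting each person."""
    if weights is None:
        weights = [1.0] * len(scores)  # Bentham: count everyone equally
    return sum(w * s for w, s in zip(weights, scores))

# Each row: one policy's utility scores for three individuals (invented).
policies = {
    "A": [5.0, 1.0, -3.0],
    "B": [2.0, 2.0, 0.0],
}

# Observer 1 takes the scores at face value.
totals_1 = {p: aggregate(s) for p, s in policies.items()}

# Observer 2's scale compresses negative experiences (a hypothetical
# calibration difference), modeled here as halving any negative score.
totals_2 = {p: aggregate([x if x >= 0 else x / 2 for x in s])
            for p, s in policies.items()}

best_1 = max(totals_1, key=totals_1.get)  # "B" (4.0 beats 3.0)
best_2 = max(totals_2, key=totals_2.get)  # "A" (4.5 beats 4.0)
```

The two observers disagree about which policy maximizes aggregate happiness even though they start from identical individual reports, which is the objectivity problem in miniature.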

In the case in which each individual evaluates his or her own happiness (perhaps after some training), the assumption of objective aggregation implies that individuals understand how their experiences should be evaluated according to a fixed scale, such as a scale from -10 to +10 that represents the domain from the most grievous unhappiness to the greatest happiness possible. If happiness is rated on a nonlinear scale, then individuals would need to understand how to rate their happiness with respect to the appropriate data-transformation algorithm.
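As a purely illustrative sketch of such a transformation, the following Python function uses a logarithmic compression (an assumption; the text does not specify a functional form, and the `max_raw` ceiling is invented) to map raw hedonic intensity onto the fixed -10 to +10 reporting scale:

```python
import math

def to_report_scale(raw_intensity, max_raw=1000.0):
    """Compress raw intensity in [-max_raw, max_raw] onto [-10, +10].

    Logarithmic compression means equal steps on the reporting scale
    correspond to multiplicative steps in raw intensity.
    """
    sign = 1 if raw_intensity >= 0 else -1
    compressed = math.log1p(abs(raw_intensity)) / math.log1p(max_raw)
    return sign * 10.0 * compressed
```

Under a scheme like this, a self-rating of +5 does not mean "half as intense as +10"; individuals would need to internalize the compression rule for their reports to be aggregable.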

Measurement that does not allow for maximization in the strict sense might (intentionally or otherwise) base decisions on rank-sums. Such systems would not systematically maximize happiness, but might be able to maximize the number of people who are happy. Modifications of such systems might allow for a rough weighting to account for the most extreme types of happiness or unhappiness. In other words, such a system would lack the precision to differentiate among many types of pleasures and pains but might be able to approximate the utility or disutility of torture and extreme pleasure. The degree of precision and the degree of objectivity might be far from perfect but might be better than alternatives.


Agreement on the Time Value of Happiness

One factor that affects the objectivity of aggregating measures of utility is the weight that is applied to the time value of happiness (the degree to which future happiness or unhappiness is discounted or adjusted to account for time delays or uncertainty). In Jeremy Bentham's calculation of utility, duration of a pleasure or pain and its distance in the future affect valuation of the experience. This assumes that the method of discounting the time value of pleasure is either standardized or weighted according to the individual's preference. If there is no standardization, then one person can assign higher (or lower) priority toward immediate gratification than another person would. What makes time valuation more difficult is the evaluation of the current value of indirect consequences.

Psychological research has demonstrated that preference for immediate versus distant reward varies as a relatively persistent individual difference[2][3]. In other words, some people tend to favor immediate gratification and others are willing to delay immediate gratification for what they consider to be a greater reward in the future. People with different time-value dispositions would tend to assign different utility scores for events in the future. If individuals do not accept the same time value of happiness, maximization of aggregate utility (e.g., for society) becomes impossible or inaccurate. As with any lack of an objective measurement scale, attempts to maximize aggregate utility would be compromised, but other solutions might be possible (see Aggregation is Objective (assumption)).
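A small sketch can make the disagreement concrete. Assuming exponential discounting (one common model; Bentham did not specify a functional form, and the rates and reward here are invented), two people with different personal discount rates assign very different present values to the same future reward:

```python
def present_value(utility, years_ahead, annual_discount_rate):
    """Discount a future utility back to the present (exponential model)."""
    return utility / (1 + annual_discount_rate) ** years_ahead

reward = 100.0  # utility of an event 10 years from now (invented)
years = 10

# A person who strongly prefers immediate gratification.
impatient = present_value(reward, years, 0.20)  # about 16.2
# A person willing to wait for future rewards.
patient = present_value(reward, years, 0.02)    # about 82.0
```

Aggregating these two valuations as if they were on a common scale would misrepresent at least one of the two people, which is the standardization problem the text describes.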

Ratio Level of Measurement

Although Jeremy Bentham's version of utilitarianism is sometimes considered subjective because he asserted that one person cannot judge the value of another person's pleasure (chp. 1), his method of calculation asserted that observers (legislators) can evaluate the relevant attributes of an individual's utility and determine positive or negative scores for individuals (chp. 4). The assumption that values of utility can be added and compared with a criterion assumes that the calculations are of ratio level of measurement. A lesser assumption, that the measurements are of ordinal scale, would imply that happiness (within or between people) can be ranked but cannot be aggregated.

Different methods of utilitarian calculus make different assumptions. A method that counts happiness as either +1 for a positive experience or 0 for a negative one (a forced ranking) makes less demanding assumptions than a method that might count one person's happiness as greater than another's (such as counting torture as more negative than a $1 tax). The first method, the rank-sum method, amounts to a sum of ranks that are of ordinal level of measurement and whose aggregation is more closely related to a democratic vote than to a direct measurement of utility. The second method, allowing for one person's happiness to attain a higher score than another's, assumes ratio level of measurement throughout.
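The contrast between the two methods can be sketched as follows (the utilities and policies are invented for illustration; "torture versus a $1 tax" follows the example in the text):

```python
def rank_sum(utilities):
    """Ordinal method: each person contributes +1 if better off, 0 if not."""
    return sum(1 for u in utilities if u > 0)

def ratio_sum(utilities):
    """Ratio-scale method: magnitudes are assumed comparable and additive."""
    return sum(utilities)

# Policy X: three people mildly better off, one person severely harmed.
policy_x = [1.0, 1.0, 1.0, -1000.0]
# Policy Y: everyone pays a small tax.
policy_y = [-1.0, -1.0, -1.0, -1.0]

# Rank-sum prefers X (3 "votes" to 0); ratio-scale aggregation
# prefers Y (-4 to -997), because only it registers the magnitude
# of the extreme harm.
```

The divergence shows why the choice of measurement level is not a technicality: the two methods can recommend opposite policies from the same inputs.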

The appropriateness of measures of utility and their corresponding scales can be evaluated using the representational theory of measurement or the theory of conjoint measurement. Conjoint analysis might be useful in testing the composition rules that are used to add, subtract, multiply or divide numbers that were obtained from empirical measurements and that are presumably of ordinal scale (such as utility ratings or measurement of psychological constructs)[4]. A more recent article presented a new justification for the representational theory of measurement[5], but application to utilitarian theory is lacking.

Economists have developed methods for working with data that are only of ordinal scale. These methods include the construction of indifference curves, Edgeworth boxes, and Pareto frontier graphs. Although the methods can become complicated when many goods are involved, and precision can be low, they can be useful.

Observer Valuations Are Independent of Cognitive Bias

For versions of utilitarianism other than those that use rank-sum aggregation of self-reported happiness, valuations of happiness that are made by observers must be largely free from cognitive bias. Utilitarian systems might assume that individuals or legislators accept the logical task of basing decisions on utility and that these people strive to rid themselves of beliefs that are inconsistent with this task. If the assumption holds, then people would be working toward diminishment of cognitive bias. If the assumption fails, people who claim to be making utilitarian judgments might have rejected the first principle of utilitarianism (the principle of utility) or failed to judiciously rid themselves of their old beliefs.

Even without overt cognitive bias, one observer (legislator) might assign different values to utility than would another. In the presence of cognitive bias, the difference could become substantial. An example is that one person might assign a nearly infinite negative value to a death caused by euthanasia while another assigns a large positive value according to the perceived reduction in suffering. The difference of opinion is attributable to cognitive bias when the decision-makers are either unaware of or mistaken about the origins of their beliefs while they argue that their beliefs are either objective or derived from an unbiased intuition. As the biases grow, the system becomes increasingly similar to theological, deontological, or virtue-based morality disguised in a thin wrapper of allegedly objective measures of utility.

The challenge of debiasing observers is one that has long plagued researchers. The lay person can look toward political divisiveness and polarization as an indication of the degree to which humans tend to adhere to a worldview that alters their interpretation of new evidence. Some of the mental processes behind valuations are described by research on cognitive dissonance, which has documented the tendency of people to alter valuations to resolve conflicting beliefs and to do so in a self-serving manner[6][7]. The implication is that unconscious processes impede the ability of decision-makers both to make objective evaluations and to acknowledge the internal, mental processes that underlie those evaluations.

Another line of research suggests a structure to human values. The structure of values itself is not problematic, but lack of insight can lead people to assume that their value orientation is revealed to them through a process that connects them to objective information; research supports an alternative hypothesis. Shalom Schwartz found a relatively consistent structure to human values such that the perceived value of power was correlated with the values of security and achievement and inversely correlated with a collection of traits called universalism, which includes valuing equality, social justice, and protection of the environment[8][9]. Universalism is closely related to the perceived value of benevolence and self-direction. Somewhat orthogonal to these values were those for tradition, hedonism, and stimulation. Similar analysis can be performed on a variety of other dispositions. One example is the structure of right-wing authoritarianism: one study suggested a three-factor structure comprising Conventionalism, Aggression, and Submission[10]. Some part of this and other dispositions arises from physiological origins, as is the case with empathy[11] and aggression[12].

Given the origins of and persistence of some dispositions to value one thing over another, and considering the difficulties of obtaining insight into our own valuations[13], reaching consensus on valuation would be difficult. For utilitarianism to maximize the legitimacy of the objective aggregation of utility, it must advance observers' ability to acknowledge the nature of their preferences and perhaps must enable them to free themselves from unwarranted biases that underlie the valuation process.

The Is-Ought Problem

A challenge for all moral philosophers is the is-ought problem (also known as fact-value entanglement, Hume's guillotine, the naturalistic fallacy, or G.E. Moore's open question). Many philosophers, including Hume, have argued that observations of the physical universe cannot lead to an ethical valuation of right or wrong. In this way, the is-ought problem is a problem of the transition from empirical evidence to ethical valuation. Hampshire suggested that this "unbridgeable separation between moral judgments and factual judgments...must be the starting-point of moral philosophy"[14] (p. 466).

Solutions to some forms of the is-ought problem can be found in truth-functional propositions, such as saying that if individuals want to maximize their pleasure, then they must avoid adopting self-mutilation as a pastime. The problem becomes more challenging when trying to justify the first principle of such systems (e.g., justify that people must act altruistically). The challenge for utilitarianism is the degree to which people can objectively assign a numeric value to utility.

The transition from empirical evidence to ethical valuation might be different for utilitarianism as a personal philosophy as opposed to utilitarianism as a political philosophy. The difference lies in the way that utility ratings (preferences) are treated. Psychologists often consider stated preferences as empirical evidence or as evidence for latent traits. Utilitarianism as a personal philosophy might attain internal consistency by simply retaining preferences (based on perceived pleasure and pain of self and others) as ordinal-scale rankings. Additional difficulties arise when legislators attempt to measure and add the utility of others, thereby complicating the transition from empirical evidence to ethical valuation. Efforts to reduce the problems with this transition include adopting only a minimum set of the most defensible values (as is intended by negative utilitarianism and perhaps libertarian political philosophy), although the choice of such approaches is itself affected by an implicit statement of values.

Sam Harris argued that science will increasingly enable people to answer ethical questions[15] (p. 28). His founding principle is that the well-being of conscious beings is the criterion upon which ethical questions can be evaluated (p. 32). He appeared to rely on an a priori argument: "we can know, through reason alone, that consciousness is the only intelligible domain of value" (p. 32). In other words, he asserted that if no conscious being is affected by an act in any way, then the act is not morally relevant, otherwise it is morally relevant (see also p. 62). He suggested that focusing our attention on human cooperation is of the greatest importance and that increasing scientific understanding of the brain will help people to evaluate the emotionally relevant components of morality (p. 55-56, 64). Harris did not offer a full explanation of how to count or weight positive versus negative mental states to resolve the measurement problem, but he suggested that our scientific understanding of these things will improve.

Embedded in the framing of the is-ought problem is the assertion that there exists some attribute of right and wrong that is beyond the domain of the physical universe (and therefore beyond the domain of consequentialism). A moral realist approach to the problem would be not to assume that any such things exist and instead to evaluate actions on their observable consequences. In this way, traditional views of right and wrong might carry with them too many vestiges of old belief-systems to be used to discuss scientifically preferred actions. Consequently, a moral realist might determine the preferred course of action by using the chosen criteria (such as happiness or well-being) instead of trying to map that measure to a scale of goodness or rightness. Without the need to express a measurement on a scale of goodness or rightness, there would be no need to address the is-ought problem except perhaps to justify the first principle of the system (e.g., that actions should be judged according to their impact on happiness).

Primary Commitment to Truth

If humans are to recognize and objectively view their own values and physiological dispositions toward empathy, aggression, and the like, then they might first need a collection of skills, beliefs, dispositions, or other attributes that facilitate the process. Prichard suggested that the solution to endless debate between moral philosophers is to first find a criterion of knowledge that allows philosophers to distinguish between true ideas and false ones[16]. Success of the endeavor will be determined by future research.

It seems that if humans are to view their pre-existing values and dispositions objectively (or nearly objectively), they must have a prior commitment to truth that would compel them to pursue, to the best of their ability, an objective understanding of their existing values and dispositions. That commitment must be strong enough to compel people to free themselves from beliefs or values that are not supported by an adequate foundation. Some have asked for a justification for assuming or requiring a commitment to truth, and Popper suggested that those who adopt a rational attitude have tentatively or from force of habit adopted "an irrational faith in reason"[17] (p. 231). He also considered this to be a moral choice because the adoption of an attitude toward reason would affect how societies divide themselves into leaders and the led, or perhaps masters and slaves, and this is clearly a moral decision (p. 232; for an overview of Popper's informal moral philosophy, see Koertge, N. (2007). The Moral Underpinnings of Popper's Philosophy).

References

  1. Plato (380 BCE). Euthyphro. Translated by Benjamin Jowett.
  2. Zimbardo, P. G., & Boyd, J. N. (1999). Putting Time in Perspective: A Valid, Reliable Individual-Differences Metric. Journal of Personality and Social Psychology, 77(6), 1271-1288.
  3. Strathman, A., Gleicher, F., Boninger, D. S., & Edwards, C. S. (1994). The Consideration of Future Consequences: Weighing Immediate and Distant Outcomes of Behavior. Journal of Personality and Social Psychology, 66(4), 742-752.
  4. Krantz, D. H., & Tversky, A. (1971). Conjoint-Measurement Analysis of Composition Rules in Psychology. Psychological Review, 78(2), 151-169. DOI: 10.1037/h0030637
  5. Narens, L. (2002). A Meaningful Justification for the Representational Theory of Measurement. Journal of Mathematical Psychology, 46, 746-768. DOI: 10.1006/jmps.2002.1428
  6. Festinger, L. (1957). A Theory of Cognitive Dissonance. Stanford, CA: Stanford University Press.
  7. Aronson, J., Cohen, G., & Nail, P. R. (1999). Self-Affirmation Theory: An Update and Appraisal. In Harmon-Jones, E., & Mills, J. (Eds.), Cognitive Dissonance: Progress on a Pivotal Theory in Social Psychology. Washington, DC: American Psychological Association.
  8. Schwartz, S. H. (1992). Universals in the Content and Structure of Values: Theoretical Advances and Empirical Tests in 20 Countries. In Zanna, M. P. (Ed.), Advances in Experimental Social Psychology, 25, pp. 1-62. San Diego, CA: Academic Press.
  9. Schwartz, S. H. (1994). Are There Universal Aspects in the Structure and Contents of Human Values? Journal of Social Issues, 50(4), 19-45. DOI: 10.1111/j.1540-4560.1994.tb01196.x
  10. Mavor, K. I., Louis, W. R. & Sibley, C. G. (2010). A bias-corrected exploratory and confirmatory factor analysis of right-wing authoritarianism: Support for a three-factor structure, Personality and Individual Differences 48, 28-33. DOI: 10.1016/j.paid.2009.08.006
  11. Decety, J. & Jackson, P. L. (2004) The Functional Architecture of Human Empathy. Behavioral and Cognitive Neuroscience Reviews 3, 71-100. DOI: 10.1177/1534582304267187
  12. Miczek, K. A., de Almeida, R. M. M., Kravitz, E. A., Rissman, E. F., de Boer, S. F., & Raine, A. (2007). Neurobiology of Escalated Aggression and Violence. The Journal of Neuroscience, 27(44), 11803-11806. DOI: 10.1523/JNEUROSCI.3500-07.2007
  13. Nisbett, R. E., & Wilson, T. D. (1977). Telling More Than We Can Know: Verbal Reports on Mental Processes. Psychological Review, 84(3), 231-259.
  14. Hampshire, S. (1949). Fallacies in Moral Philosophy. Mind, New Series, 58(232), 466-482.
  15. Harris, S. (2010). The moral landscape: How science can determine human values. New York, NY: Free Press
  16. Prichard, H. A. (1912). Does Moral Philosophy Rest on a Mistake? Mind, 21(81), 21-37. (p. 34)
  17. Popper, K. R. (1963). The open society and its enemies, vol II. (4th ed.). Princeton, NJ: Princeton University Press