
Ethics: Significance of Two Systems

Slides

Notes

Argument Outline

Why is the two systems theory of ethical cognition significant? Because it conflicts with the widespread use, in philosophy, of not-justified-inferentially premises in arguments intended to provide knowledge of the truth of their conclusions.

  1. Ethical judgements are explained by a dual-process theory, which distinguishes faster from slower processes.

  2. Faster processes are unreliable in unfamiliar situations.

  3. Therefore, we should not rely on faster processes in unfamiliar situations [from 2].

  4. When philosophers rely on not-justified-inferentially premises, they are relying on faster processes.

  5. We have reason to suspect that the moral scenarios and principles philosophers consider involve unfamiliar situations.

  6. Therefore, not-justified-inferentially premises about particular moral scenarios, and debatable principles, cannot be used in ethical arguments where the aim is to establish knowledge of their conclusions [from 3, 4 and 5].

Case Study: Thomson’s Method of Trolley Cases

To see why the conclusion of the argument above is significant, we need to consider the way many philosophers approach ethics.

Consider Thomson (1976) on what she calls ‘the trolley problem’:

‘why is it that Edward may turn that trolley to save his five, but David may not cut up his healthy specimen to save his five? I like to call this the trolley problem, in honor of Mrs. Foot’s example’ (Thomson, 1976, p. 206).

Foot (1967) had earlier suggested that it is at least in part because duties not to harm rank above duties to help. To counter this suggestion, Thomson adds a further trolley case:

‘Frank is a passenger on a trolley whose driver has just shouted that the trolley’s brakes have failed, and who then died of the shock. On the track ahead are five people; the banks are so steep that they will not be able to get off the track in time. The track has a spur leading off to the right, and Frank can turn the trolley onto it. Unfortunately there is one person on the right-hand track. Frank can turn the trolley, killing the one; or he can refrain from turning the trolley, letting the five die’ (Thomson, 1976, p. 207).

Frank’s case is constructed in such a way that (according to Thomson1) if he does nothing, he fails to help; whereas if he turns the trolley, he harms one person in order to help five. His choice is between harming one or helping five. Thomson infers:

‘By her [Foot’s] principles, Frank may no more turn that trolley than David may cut up his healthy specimen’ (Thomson, 1976, p. 207).2

Thomson responds by relying on what appears to be an empirical claim:

‘Yet I take it that anyone who thinks Edward may turn his trolley will also think that Frank may turn his’ (Thomson, 1976, p. 207).

It is possible to interpret Thomson as offering this as a normative claim (anyone must take it to be so). Alternatively, she might take her position to be relevant only to those who agree with her on this point. So there is no obvious commitment to an empirical claim here.

In any case, Thomson takes the pattern of judgements about what David, Edward and Frank should do to justify rejecting Foot’s view3 in favour of her own:

‘what matters in these cases in which a threat is to be distributed is whether the agent distributes it by doing something to it, or whether he distributes it by doing something to a person’ (Thomson, 1976, p. 216).

If the above loose reconstruction of Greene’s argument is correct, Thomson’s method of trolley cases is misguided because it relies on not-justified-inferentially premises about particular moral scenarios.

Further Implication

The loose reconstruction of Greene’s argument, if successful, also implies the falsity of Audi’s view about ethics:

‘Episodic intuitions […] can serve as data […] beliefs that derive from them receive prima facie justification’ (Audi, 2015, p. 65).

The above argument does not favour one type (e.g. deontological vs consequentialist) of ethical theory, nor one approach to doing ethics (e.g. case-based vs systematic).4 (We will eventually consider whether further arguments succeed in establishing either such favouritism.)

The above argument does not imply that philosophers should give up on arguments involving not-justified-inferentially premises about particular moral scenarios. Aristotelian theories of the physical, although much less useful than the successors that arose when scientists moved away from reliance on not-justified-inferentially premises, remain useful in some situations. And in the case of ethics, there may be no better alternative approach.

The above argument implies that when using arguments involving not-justified-inferentially premises about particular moral scenarios, the aim should not be to establish knowledge of their conclusions. Instead it might be to characterise aspects of moral cognition (as Kozhevnikov & Hegarty (2001) use an Aristotelian theory of the physical to characterise physical cognition). Or the aim might be to understand what consistency with certain judgements would require.

Generalisation to Other Domains

Can the loose reconstruction of Greene’s argument concerning ethics be generalised to other domains? On the face of it, none of the arguments for the premises relies on features that are specific to ethics.

Alternative Reconstructions

Kumar & Campbell (2012) provide an alternative reconstruction of Greene’s argument (which, helpfully, both refines and critiques Berker (2009)’s earlier reconstruction; Kumar and Campbell are probably easier to understand). They analyse Greene’s argument as a debunking argument. This means that (a) it depends on premises about which factors are morally relevant; and (b) it is open to the response that facts about which factors explain judgements are ethically irrelevant (see Rini, 2017, p. 1443).5

Why bother with my loose reconstruction when we could just borrow Kumar & Campbell (2012)’s? While their reconstruction may be more faithful to the original (Greene, 2014), my loose reconstruction does not depend on premises about which factors are morally relevant, nor does it require the premise that facts about which factors explain why certain judgements are made are ethically relevant. This enables the loose reconstruction to avoid some objections.

Glossary

automatic : On this course, a process is _automatic_ just if whether or not it occurs is to a significant extent independent of your current task, motivations and intentions. To say that _mindreading is automatic_ is to say that it involves only automatic processes. The term ‘automatic’ has been used in a variety of ways by other authors: see Moors (2014, p. 22) for a one-page overview, Moors & De Houwer (2006) for a detailed theoretical review, or Bargh (1992) for a classic and very readable introduction.
cognitively efficient : A process is cognitively efficient to the degree that it does not consume working memory and other scarce cognitive resources.
David : ‘David is a great transplant surgeon. Five of his patients need new parts—one needs a heart, the others need, respectively, liver, stomach, spleen, and spinal cord—but all are of the same, relatively rare, blood-type. By chance, David learns of a healthy specimen with that very blood-type. David can take the healthy specimen's parts, killing him, and install them in his patients, saving them. Or he can refrain from taking the healthy specimen's parts, letting his patients die’ (Thomson, 1976, p. 206).
debunking argument : A debunking argument aims to use facts about why people make a certain judgement together with facts about which factors are morally relevant in order to undermine the case for accepting it. Königs (2020, p. 2607) provides a useful outline of the logic of these arguments (which he calls ‘arguments from moral irrelevance’): ‘when we have different intuitions about similar moral cases, we take this to indicate that there is a moral difference between these cases. This is because we take our intuitions to have responded to a morally relevant difference. But if it turns out that our case-specific intuitions are responding to a factor that lacks moral significance, we no longer have reason to trust our case-specific intuitions suggesting that there really is a moral difference. This is the basic logic behind arguments from moral irrelevance’ (Königs, 2020, p. 2607).
Edward : ‘Edward is the driver of a trolley, whose brakes have just failed. On the track ahead of him are five people; the banks are so steep that they will not be able to get off the track in time. The track has a spur leading off to the right, and Edward can turn the trolley onto it. Unfortunately there is one person on the right-hand track. Edward can turn the trolley, killing the one; or he can refrain from turning the trolley, killing the five’ (Thomson, 1976, p. 206).
fast : A fast process is one that is to some interesting degree cognitively efficient (and therefore likely also to some interesting degree automatic). These processes are also sometimes characterised as able to yield rapid responses.
Since automaticity and cognitive efficiency are matters of degree, it is only strictly correct to identify some processes as faster than others.
The fast-slow distinction has been variously characterised in ways that do not entirely overlap (even individual authors have offered differing characterisations at different times; e.g. Kahneman, 2013; Morewedge & Kahneman, 2010; Kahneman & Klein, 2009; Kahneman, 2002): as its advocates stress, it is a rough-and-ready tool rather than an element in a rigorous theory.
Frank : ‘Frank is a passenger on a trolley whose driver has just shouted that the trolley's brakes have failed, and who then died of the shock. On the track ahead are five people; the banks are so steep that they will not be able to get off the track in time. The track has a spur leading off to the right, and Frank can turn the trolley onto it. Unfortunately there is one person on the right-hand track. Frank can turn the trolley, killing the one; or he can refrain from turning the trolley, letting the five die’ (Thomson, 1976, p. 207).
loose reconstruction : (of an argument). A reconstruction which prioritises finding a correct argument for a significant conclusion over faithfully representing the argument being reconstructed.
not-justified-inferentially : A claim (or premise, or principle) is not-justified-inferentially if it is not justified in virtue of being inferred from some other claim (or premise, or principle).
Claims made on the basis of perception (_That jumper is red_, say) are typically not-justified-inferentially.
Why not just say ‘noninferentially justified’? Because that can be read as implying that the claim is justified, noninferentially. Whereas ‘not-justified-inferentially’ does not imply this. Any claim which is not justified at all is thereby not-justified-inferentially.
signature limit : A signature limit of a system is a pattern of behaviour the system exhibits which is both defective given what the system is for and peculiar to that system. A signature limit of a model is a set of predictions derivable from the model which are incorrect, and which are not predictions of other models under consideration.
slow : converse of fast.
trolley problem : ‘Why is it that Edward may turn that trolley to save his five, but David may not cut up his healthy specimen to save his five?’ (Thomson, 1976, p. 206).
unfamiliar problem : An unfamiliar problem (or situation) is one ‘with which we have inadequate evolutionary, cultural, or personal experience’ (Greene, 2014, p. 714).

References

Audi, R. (2015). Intuition and Its Place in Ethics. Journal of the American Philosophical Association, 1(1), 57–77. https://doi.org/10.1017/apa.2014.29
Bargh, J. A. (1992). The Ecology of Automaticity: Toward Establishing the Conditions Needed to Produce Automatic Processing Effects. The American Journal of Psychology, 105(2), 181–199. https://doi.org/10.2307/1423027
Berker, S. (2009). The Normative Insignificance of Neuroscience. Philosophy & Public Affairs, 37(4), 293–329. https://doi.org/10.1111/j.1088-4963.2009.01164.x
Feigenson, L., Dehaene, S., & Spelke, E. S. (2004). Core systems of number. Trends in Cognitive Sciences, 8(7), 307–314. https://doi.org/10.1016/j.tics.2004.05.002
Foot, P. (1967). The problem of abortion and the doctrine of the double effect. Oxford Review, 5, 5–15.
Greene, J. D. (2014). Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics. Ethics, 124(4), 695–726. https://doi.org/10.1086/675875
Hogarth, R. M. (2010). Intuition: A Challenge for Psychological Research on Decision Making. Psychological Inquiry, 21(4), 338–353. https://doi.org/10.1080/1047840X.2010.520260
Kahneman, D. (2002). Maps of bounded rationality: A perspective on intuitive judgment and choice. In T. Frangsmyr (Ed.), Les Prix Nobel 2002 (pp. 416–499). Stockholm, Sweden: Nobel Foundation.
Kahneman, D. (2013). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: A failure to disagree. American Psychologist, 64(6), 515–526. https://doi.org/10.1037/a0016755
Königs, P. (2020). Experimental ethics, intuitions, and morally irrelevant factors. Philosophical Studies, 177, 2605–2623.
Kozhevnikov, M., & Hegarty, M. (2001). Impetus beliefs as default heuristics: Dissociation between explicit and implicit knowledge about motion. Psychonomic Bulletin & Review, 8(3), 439–453. https://doi.org/10.3758/BF03196179
Kumar, V., & Campbell, R. (2012). On the normative significance of experimental moral psychology. Philosophical Psychology, 25(3), 311–330.
Low, J., Apperly, I. A., Butterfill, S. A., & Rakoczy, H. (2016). Cognitive Architecture of Belief Reasoning in Children and Adults: A Primer on the Two-Systems Account. Child Development Perspectives, 10(3), 184–189. https://doi.org/10.1111/cdep.12183
McCloskey, M., Caramazza, A., & Green, B. (1980). Curvilinear Motion in the Absence of External Forces: Naive Beliefs about the Motion of Objects. Science, 210(4474), 1139–1141. https://doi.org/10.2307/1684819
Moors, A. (2014). Examining the mapping problem in dual process models. In Dual process theories of the social mind (pp. 20–34). Guilford.
Moors, A., & De Houwer, J. (2006). Automaticity: A Theoretical and Conceptual Analysis. Psychological Bulletin, 132(2), 297–326. https://doi.org/10.1037/0033-2909.132.2.297
Morewedge, C. K., & Kahneman, D. (2010). Associative processes in intuitive judgment. Trends in Cognitive Sciences, 14(10), 435–440. https://doi.org/10.1016/j.tics.2010.07.004
Nagel, T. (1997). The last word. Oxford: Oxford University Press.
Railton, P. (2014). The Affective Dog and Its Rational Tale: Intuition and Attunement. Ethics, 124(4), 813–859. https://doi.org/10.1086/675876
Rawls, J. (1999). A Theory of Justice (Revised edition). Cambridge, Mass: Harvard University Press.
Rini, R. A. (2017). Why moral psychology is disturbing. Philosophical Studies, 174(6), 1439–1458. https://doi.org/10.1007/s11098-016-0766-4
Singer, P. (1974). Sidgwick and Reflective Equilibrium. The Monist, 58(3), 490–517. https://doi.org/10.5840/monist197458330
Thomson, J. J. (1976). Killing, Letting Die, and The Trolley Problem. The Monist, 59(2), 204–217. https://doi.org/10.5840/monist197659224

Endnotes

  1. This qualification is necessary because there is a tricky issue about which, if any, omissions are actions. If Frank’s refraining from turning the trolley is an action which harms the five, then Frank’s choice is between harming one and harming five and so his case does not work against Foot in the way Thomson intends. 

  2. Here Thomson appears to misrepresent Foot’s position. Foot (1967, p. 17) stresses, ‘I have not, of course, argued that there are no other principles.’ But the key issue is not whether Foot is right but whether the principle that duties not to harm rank above duties to help can justify the pattern of judgements.

  3. Note that Thomson is rejecting only Foot’s answer to the trolley problem. Thomson (1976, p. 217) concedes, ‘Mrs. Foot and others may be right to say that negative duties are more stringent than positive duties.’ 

  4. The loose reconstruction may appear to favour systematic over case-based approaches to ethics because its conclusion concerns judgements about particular moral scenarios. This appearance is misleading. The conclusion is framed in this way for simplicity. The argument can be straightforwardly generalised to cover not-justified-inferentially premises about moral principles too. 

  5. In this passage, Rini cites Nagel (1997, p. 105) in support of the view that discoveries about moral psychology cannot ‘change our moral beliefs’. Note that the paragraph she cites from ends with a much weaker claim opposing ‘any blanket attempt to displace, defuse, or subjectivize’ moral concerns. Further, Nagel’s essay starts with the observation that moral reasoning ‘is easily subject to distortion by morally irrelevant factors … as well as outright error’ (Nagel, 1997, p. 101). So while one of Nagel’s assertions supports Rini’s interpretation, it is unclear to me that Rini is right about Nagel’s considered position. But I could easily be wrong.