Conclusion: Six Questions
Notes
We started by asking six questions. In conclusion, we review what we have discovered about each.
In Which Domains Is There Substantial Evidence for a Two Systems Theory?
By substantial evidence we mean evidence from multiple studies, conducted in different labs and using different approaches.
We have seen that there is substantial evidence for two systems theories of mindreading (in Mindreading: Signature Limits, and Development) and ethics (in Ethical Cognition). We have also seen some evidence for two systems theories of physical cognition (in Speed-Accuracy Trade-Offs (in Physical Cognition)). In no case is the evidence sufficient to entirely rule out alternative, one system theories.
There are also many other cases we did not consider (as listed in The Core Idea). In some of these cases a two systems theory is well established (e.g. memory, number and instrumental behaviour).
How Are the Two Systems Distinguished?
Our approach was to consider a stripped-down, core claim about processes which differ in how fast they are. Further respects in which systems can be distinguished—by appeal to automaticity, say—are captured by auxiliary hypotheses (see The Core Idea).
We saw that two systems for mindreading can be distinguished by (i) the different degrees to which they are automatic (see Mindreading: Automaticity) and (ii) the different models of minds and actions they employ (see Mindreading: Signature Limits, and Development).
Two systems for physical cognition can be distinguished by the range of models of the physical which can characterise their operations. One system appears limited to impetus mechanics while the other is more flexible. The more limited system also appears to be partly responsible for representational momentum and perhaps other broadly perceptual effects, and thereby to influence the overall phenomenal character of experiences (see Speed-Accuracy Trade-Offs (in Physical Cognition)).
In the ethical domain, two systems theories are widely accepted but there is much uncertainty about what distinguishes the two systems. In our view this is a significant open challenge (see Ethical Cognition).
What, If Any, Kind of Unity Is There Across Domains?
We did not identify substantial evidence for any hypothesis which generates readily testable predictions and which could be used to characterise features of the two systems across different domains.
Features commonly conjectured to distinguish the two systems across domains include automaticity, informational encapsulation and domain specificity. We observed that there is little evidence to support these conjectures. For instance, evidence for or against automaticity is hard to identify in the domains of ethical and physical cognition.
Why Are There Two Systems?
- speed–accuracy trade-offs: different challenges call for different trade-offs between speed and accuracy, and having more than one system allows for radically different trade-offs. (This was illustrated in The Core Idea.)
- learning and development: having more than one system, where the fast system is relatively unchanging over development, can provide an optimal balance between reliably meeting everyday practical needs and making it possible to pursue learning where there is a high risk of error but also a large potential reward. (This was illustrated in Mindreading: Signature Limits, and Development.)
- phylogeny and culture: the historical emergence of writing was a consequence of a slow system building on abilities made possible by some fast systems. As this suggests, some things are best provided phylogenetically (or at least through learning processes that do not depend on large-scale cooperative cultural projects), while others can be provided through such large-scale cooperative cultural projects.
When, If Ever, Are Two Systems Better Than One?
If you are building a survival system you want quick and dirty heuristics that are good enough to keep it alive: you don’t necessarily care about the truth. If, by contrast, you are building a thinker, you want her to be able to think things that are true irrespective of their survival value. This cuts two ways. On the one hand, you want the thinker’s thoughts not to be constrained by heuristics that ensure her survival. On the other hand, in allowing the thinker freedom to pursue the truth there is an excellent chance she will end up profoundly mistaken or deeply confused about the nature of physical objects. If she turns to philosophy, she may even end up convincing herself that nothing exists apart from her. So you don’t want thought contaminated by survival heuristics and you don’t want survival heuristics contaminated by thought. Or if some contamination is inevitable, you at least want to limit it. This is beautifully achieved by giving your thinker two (or more) systems, one fast and the other slow. Providing, of course, that the two are not directly connected but rather linked only very loosely, via intentional isolators like metacognitive feelings.
How, If At All, Do the Two Systems Interact? What Are the Barriers to Interaction Between Them?
Because of how we characterised what it is for systems to be distinct, there is a tension between postulating two (or more) systems and postulating interactions between them. We suggested that the distinctness of systems consists in there being processes which differ in which conditions influence whether they occur and in which outputs they generate (in The Core Idea). As the scope for interaction increases, the grounds for distinguishing systems weaken.
In both mindreading and physical cognition, we saw that it is possible for distinct processes to yield incompatible outputs in response to a single stimulus. Importantly, in the case of mindreading we also saw that this can work both ways: there are situations in which fast processes support correct responses while slow processes support incorrect responses; and conversely (see Mindreading: Signature Limits, and Development). This suggests that there are barriers to interaction between systems, and perhaps that the representations they operate over are not inferentially integrated.
One conjecture, which we did not explore in depth, is that fast and slow processes operate over representations which differ in format. Such a difference in format would be a barrier to interaction and may explain the lack of inferential integration.
We saw that it is possible for a fast process to influence a slow one indirectly and asynchronously if the fast system can modify the overall phenomenal character of experiences (see Speed-Accuracy Trade-Offs (in Physical Cognition)). This provides one model for understanding interactions between fast and slow systems.
It is also possible that metacognitive feelings provide a way for fast processes to influence slow processes synchronously (see Metacognitive Feelings: How Do Fast and Slow Processes Interact?).
Glossary
Since automaticity and cognitive efficiency are matters of degree, it is only strictly correct to identify some processes as faster than others.
The fast-slow distinction has been variously characterised in ways that do not entirely overlap (even individual authors have offered differing characterisations at different times; e.g. Kahneman, 2013; Morewedge & Kahneman, 2010; Kahneman & Klein, 2009; Kahneman, 2002): as its advocates stress, it is a rough-and-ready tool rather than an element in a rigorous theory.