Mindreading in adults is sometimes entirely a consequence of relatively automatic processes and sometimes not. Further, automatic and nonautomatic mindreading processes are independent in this sense: different conditions influence whether they occur and which ascriptions they generate.
Mindreading is the process of identifying a mental state as a mental state that some particular individual (another person or yourself) has. To say someone has a theory of mind is another way of saying that she is capable of mindreading.1
Wimmer & Perner (1983) set out to determine when humans can know facts about others’ beliefs. They told children a story like this:
‘Maxi puts his chocolate in the BLUE box and leaves the room to play. While he is away (and cannot see), his mother moves the chocolate from the BLUE box to the GREEN box. Later Maxi returns. He wants his chocolate.’
They then asked the children, ‘Where will Maxi look for his chocolate?’
The core feature of a standard false belief task is this:
‘[t]he subject is aware that he/she and another person [Maxi] witness a certain state of affairs x. Then, in the absence of the other person the subject witnesses an unexpected change in the state of affairs from x to y’ (Wimmer & Perner, 1983, p. 106).
The task is designed to measure the subject’s sensitivity to the probability that Maxi will falsely believe x to obtain.
A model is a way the world could logically be, or a set of ways the world could logically be.
We can contrast a fact model of minds and actions with a belief model.
On the fact model, it is facts about where things are which explain an agent’s actions.
On the belief model, it is an agent’s beliefs about where things are which explain her actions.
False belief tasks can be used to distinguish the hypothesis that a subject is using a fact model from the hypothesis that she is using a belief model of minds and actions.
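The contrast between the two models can be sketched in code. This is purely an illustration and not drawn from the sources: the function names, the string representation of locations, and the way each model is reduced to a single prediction rule are all my own assumptions, made only to show why the two hypotheses come apart exactly when an agent’s belief is false.

```python
# Hypothetical sketch: how a fact model and a belief model generate
# different predictions about where Maxi will look for his chocolate.
# (Names and representations are assumptions, not the authors' own.)

def fact_model_prediction(actual_location: str) -> str:
    """Fact model: actions are explained by where things actually are,
    so the agent is predicted to search the object's actual location."""
    return actual_location

def belief_model_prediction(agents_belief: str) -> str:
    """Belief model: actions are explained by where the agent believes
    things are, even when that belief is false."""
    return agents_belief

# The Maxi story: the chocolate is moved from the BLUE box to the GREEN
# box while Maxi is away, so his belief no longer matches the facts.
actual = "GREEN box"
maxis_belief = "BLUE box"  # formed before the unseen move

print(fact_model_prediction(actual))          # GREEN box
print(belief_model_prediction(maxis_belief))  # BLUE box
```

So long as beliefs are true, the two models predict the same search behaviour; only a false belief scenario like Maxi’s pulls the predictions apart, which is why the task can distinguish which model a subject is using.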
A process is automatic to the degree that whether it occurs is independent of its relevance to the particulars of the subject’s task, motives and aims.
There is evidence that some mindreading in human adults is entirely a consequence of relatively automatic processes (Kovács, Téglás, & Endress, 2010; Schneider, Bayliss, Becker, & Dux, 2012; Wel, Sebanz, & Knoblich, 2014; Edwards & Low, 2017; Edwards & Low, 2019), and that not all mindreading in human adults is (Apperly, Back, Samson, & France, 2008; Apperly et al., 2010; Wel et al., 2014).
Qureshi, Apperly, & Samson (2010) found that automatic and nonautomatic mindreading processes are differently influenced by cognitive load, and Todd, Cameron, & Simpson (2016) provided evidence that adding time pressure affects nonautomatic but not automatic mindreading processes.
There is also limited evidence that people are unaware of automatic belief tracking processes:
‘Participants never reported belief tracking when questioned in an open format after the experiment (“What do you think this experiment was about?”). Furthermore, this verbal debriefing about the experiment’s purpose never triggered participants to indicate that they followed the actor’s belief state’ (Schneider et al., 2012, p. 2)
Level-1 perspective-taking in the Samson ‘dot task’ does not appear to be more automatic than Level-2 perspective-taking (Todd, Cameron, & Simpson, 2020).2 This finding is puzzling if we take the evidence for automatic belief-tracking at face value: why would belief-tracking but not Level-1 perspective-taking be automatic? Todd et al.’s finding is also incompatible with, and therefore evidence against, the conjecture that automatic belief-tracking processes rely on minimal theory of mind, because minimal theory of mind involves Level-1 perspective-taking.
Christensen and Michael argue that the dual process theory is less well supported overall than an alternative:
‘A cooperative multi-system architecture is better able to explain infant belief representation than a parallel architecture, and causal representation, schemas and models provide a more promising basis for flexible belief representation than does a rule-based approach of the kind described by Butterfill and Apperly’ (Christensen & Michael, 2016; see also Michael & Christensen, 2016; Michael, Christensen, & Overgaard, 2013).
According to an influential definition offered by Premack & Woodruff (1978, p. 515), for an individual to have a theory of mind is for her to ‘impute mental states to himself _and_ to others’ (my italics). (I have slightly relaxed their definition by changing their ‘and’ to ‘or’ in order to allow for the possibility that there are mindreaders who can identify others’ but not their own mental states.)
These authors comment:
‘not only did we consistently observe that altercentric interference was weaker when the avatar’s perspective was less relevant to participants’ task goal; we also consistently failed to observe any evidence of altercentric interference in L1-VPT in these conditions’ (Todd et al., 2020, p. 16).
‘reducing the goal-relevance of a cartoon avatar’s perspective weakened both Level-1 and Level-2 visual perspective calculation. … both Level-1 and Level-2 visual perspective calculation may be dependent on having a (remote) goal to process a target agent’s perspective’ (Todd et al., 2020, p. 18).