Heuristics, Biases and the Psychology of Reasoning: State of the Art

Abstract

Investigations on heuristics and biases have had a great impact on the study of reasoning and related higher cognitive processes, such as judgment and decision making. Specifically, research in the cognitive psychology of reasoning has revealed that people frequently activate mental shortcuts, or heuristics, to make inferences. These are non-logical strategies that can lead subjects to commit systematic deviations from normative principles, that is, cognitive biases. The key objective of this paper is to present some of the most relevant theories on heuristics and biases in reasoning, focusing on the dual process theories of deduction. According to these theories, there are two kinds of thinking: Type 1, which is automatic, unconscious, implicit, fast and effortless, and Type 2, which is reflective, controlled, conscious, explicit, slow and effortful. Much debate on these theories has focused on the relationship between the two types of processes and the underlying factors that could trigger one or the other. In this regard, different dual-process theories propose distinct answers to these questions. Findings in the literature show that the likelihood of activation of Type 1 and Type 2 processes has important consequences for reasoning, both in experimental laboratory tasks and in everyday situations. Recent empirical investigations that have studied the critical role that intuitive and deliberative processes play in different professional areas are also presented. It is a key question that future research continue to study the underlying procedures that professionals activate for reasoning and decision making.

Martín, M. and Valiña, M. (2023) Heuristics, Biases and the Psychology of Reasoning: State of the Art. Psychology, 14, 264-294. doi: 10.4236/psych.2023.142016.

1. Introduction

Since the 1970s, a body of research led by Kahneman and Tversky has shown that people frequently use fast cognitive strategies for reasoning and judgment. These authors defended that subjects often make predictions and probability judgments based on the activation of intuitive mechanisms or heuristics. For example, people may make a probabilistic judgment based on the degree of similarity between a process or model and some example or event related to that model (representativeness heuristic). Furthermore, people may evaluate the probability of an uncertain event occurring based on the ease with which they can retrieve from memory situations related to that event (availability heuristic). Also, the ease of imagining alternative scenarios related to a specific situation can influence a probability judgment (simulation heuristic). Because these strategies are non-logical mechanisms, they are responsible for suboptimal deviations from normative principles, that is, cognitive biases. For example, people might think that their actions can influence situations that are the result of chance (as in the context of gambling), or they might overestimate the frequency with which a very salient event may occur (such as a plane crash or a terrorist attack).

The study of heuristics and biases has been a key topic in the cognitive psychology of reasoning for over four decades now (e.g.: Gigerenzer et al., 1999 ; Gilovich et al., 2002 ; Kahneman et al., 1982 ; Kahneman & Tversky, 1973, 1996, 2000 ; Tversky & Kahneman, 1974, 1983, 1986 ). See also Hertwig & Todd (2002) ; Mumford & Leritz (2005) . In the first papers ( Tversky & Kahneman, 1974 ; Kahneman & Tversky, 1982 ), there was no explicit differentiation in dual-process terms (see for example Cortada de Kohan & Macbeth, 2006 ; Gilovich et al., 2002 ). Later, Kahneman & Frederick (2002) began to use System 1 to refer to intuitive answers to judgment problems and System 2 for the monitoring of the quality of those responses. In fact, Frederick (2005) designed the Cognitive Reflection Test (CRT) in order to analyze the intuitive or reflective nature of subjects' performance on three specific tasks. Moreover, Kahneman (2011) related his theory to reasoning and judgment in social and cognitive psychology (see also Kahneman et al., 2021 ). Reviewing these theories within the framework of judgment and decision making is beyond the scope of the current paper.

The main objective of this work is to explain the principal dual process theories of deduction. Firstly, the roots of these theories will be presented. Then, within the dual process framework, the dual process theories and some empirical investigations that support them will be presented. Later, debiasing studies that have analyzed the keys to reducing or avoiding cognitive biases will be shown. In addition, studying the origins and the underlying mechanisms of heuristics and biases is essential to understanding human behavior, both in experimental laboratory tasks and in everyday situations. Specifically, some recent lines of research that have studied the role of intuitive-deliberative processes in political and judicial contexts will be presented, as will investigations on subjects' attitudes towards science and unwarranted beliefs (such as religion, fake news, etc.). Finally, this work ends with the presentation of some open questions and concluding remarks on heuristics, biases and the dual process theories of reasoning.

The closest roots of the aforementioned theories are the "Dual Process Hypothesis" ( Wason & Evans, 1975 ) and the "Two-Factor Theory" ( Evans, 1982 ; see for example Frankish & Evans, 2009 , for a historical overview). On one hand, one of the first objectives of Wason & Evans was to explain the discrepancy between participants' behavior on a conditional reasoning task and their introspective reports about how they had solved it. According to these authors, the differences reflected "some form of dual processing between behavior and conscious thought" (p. 141). More recently, Mercier & Sperber (2011) proposed a new explanation for this discrepancy between reasoning and justification in terms of the Argumentative Theory of Reasoning: reasoning is a justification based on argumentation. On the other hand, the two-factor theory ( Evans, 1982 ) defended the influence of logical and non-logical factors on reasoning. Some years later, this author proposed a new theoretical perspective for explaining the cognitive processes that underlie reasoning: the Heuristic-Analytic Theory.

2. The Heuristic-Analytic Theory: A Bridge between the Two Factor Theory and the Dual Process Theory

Evans (1984, 1989) extended the previous explanations in the Heuristic-Analytic Theory (see for example Evans, 2004, 2008, 2013 , for reviews). The author located the origin of reasoning biases in heuristic processes. In this theory, heuristic processes are preconscious: they represent relevant information and retrieve and add knowledge from memory (according to linguistic, semantic and/or pragmatic cues). Subjects then reason with these personalized representations in a second, analytic phase. Participants might make mistakes if, in the heuristic phase, they select logically irrelevant information or fail to take relevant information into account when reasoning in the analytic phase.

In a review of the Heuristic-Analytic Theory ( Evans, 2006, 2007a ), it is proposed that such errors can occur in the heuristic or the analytic process. Therefore, both types of processing can be influenced by participants' beliefs, empirical knowledge or experience. In this sense, Stanovich (1999) described the fundamental computational bias in order to explain the tendency to contextualize problems with reference to prior knowledge. Nevertheless, this author observed that subjects high in general intelligence could avoid pragmatic influences and reason in a logical way on different reasoning tasks. Moreover, Evans (2006) presented the fundamental heuristic bias and the fundamental analytic bias to explain the role of Type 1 and Type 2 processing in the causes of cognitive biases. Consequently, Type 1 processing can lead to correct answers and Type 2 to biases, depending on the circumstances ( Evans, 2007a, 2007b ; Evans & Stanovich, 2013 ). This revised heuristic-analytic theory included three principles of hypothetical thinking (see Evans et al., 2003 ): singularity (reasoners consider only a single hypothetical possibility at a time), relevance (subjects focus on the most relevant, plausible or probable model, depending on the context) and satisficing (people accept the current model if it is satisfactory).

In general terms, the dual process theory of thinking and reasoning defends the existence of two processes. Type 1 processes provide fast, intuitive and heuristic responses. These are automatic and context dependent, so they can add premises to reasoning tasks through pragmatic implicatures and background knowledge. Type 2 processes are slow, deliberative and analytic. These are context independent and responsible for monitoring performance and for the allocation of working memory, attentional resources or mental simulation. Type 2 processes can be activated, for example, when Type 1 processes do not provide a response or, for different reasons, suggest conflicting answers (see Thompson, 2014 ).

Within this dual process framework, much debate has focused on some of the following questions: is there one dual process theory, or several? Which features differentiate the two processes? How do the two processes interact? The answers to these questions lead to the differentiation of several dual process theories. Let's see some of them in the following sections.

3. Dual Process Theory or Dual Process Theories?

The Dual Process Theory ( Evans & Over, 1996 ) explains the interaction between tacit and explicit thought processes in the development of human thinking. Both correct performance and biases are explained. But what are the differences between Type 1 and Type 2? Is there one process or two? This question has been debated for more than three decades. The answer to it leads to two different theoretical lines: the "Single Process Theory" and the "Dual Process Theory". These are two general explanations corresponding to quantitative vs. qualitative models of thinking (see De Neys, 2021 , for a review).

On one hand, the "Single Process Theory" proposes that there is only a single type of reasoning, which exists on a continuum of speed and complexity. This process varies continuously from unconscious to conscious, intuitive to deliberative, automatic to controlled, or fast to slow. Thus, this type of theory argues for a single process, and the difference between Type 1 and Type 2 reasoning is only quantitative (e.g.: Dewey, 2022 ; Keren & Schul, 2009 ; Osman, 2004, 2013 , among others). On the other hand, the Dual Process Theory (see for example Bago & De Neys, 2017, 2020 ; Evans & Stanovich, 2013 ; Kahneman, 2011 ; Sloman, 1996 ; Smith & DeCoster, 1999 ) proposes two qualitatively distinct types of reasoning: Systems 1 and 2 ( Stanovich & West, 2000 ), Type 1 and Type 2, intuitive and deliberative, or quick and slow (see Evans & Stanovich, 2013 , for a review of the different attributes related to both processes; see also Kelly & Barron, 2022 , for a more general perspective on dual systems of reasoning in animals and AI).

Initially, the dual process theories arose from studies in deductive reasoning and "now form part of a more general set of theories of higher cognition" ( Evans, 2012: p. 115 ). The way in which the processes interact has also been widely discussed. In this sense, another important issue that has been raised for debate is: how do Type 1 and Type 2 processes operate? Let's see the different proposals in the next section.

4. The Serial Default Interventionist and the Parallel-Competitive Processing Models

How do Type 1 and Type 2 processes structurally interact? The answer to this question leads to two distinct explanations: the Serial Default Interventionist Model and the Parallel-Competitive Processing Model (see for example Evans, 2007b ; Evans & Stanovich, 2013 or Handley & Trippas, 2015 , for a comparison between both models; see also Table 1).

On one hand, in the fields of reasoning and decision making, the serial Default Interventionist Model (e.g.: Evans, 2006, 2007b ; Kahneman, 2011 ; Kahneman & Frederick, 2002 ; Stanovich, 1999, 2011 ) involves the cueing of default responses by the heuristic system, which may or may not be modified by the later intervention of the analytic system. Thus, the analytic system acts as a filter on the intuitive heuristic answers. Two key features distinguish the two types of reasoning: Type 1 processes do not demand working memory resources, whereas Type 2 processes require working memory. Moreover, Type 2 processing is related to hypothetical thinking and cognitive decoupling ( Stanovich, 2009 ). From this view, humans are cognitive misers ( Kahneman, 2011 ) and, consequently, the majority of subjects' behavior is controlled by effortless Type 1 processes. Working memory and cognitive decoupling would be activated only when the results of Type 1 processes need to be inhibited.

Later, Evans (2019) proposed the "Default Interventionist Model-Revised", in which different factors that set the degree of effort were specified: 1) motivational factors (for example thinking dispositions or the feeling of rightness, FOR); 2) situational factors (context, time available, competing tasks, etc.); and 3) cognitive resources (such as working memory or mindware); see Table 1.
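The serial control flow of the default interventionist model can be sketched as a toy simulation. This is a minimal illustration only, not an implementation from any of the cited papers: the function names, the effort computation and the threshold are all assumptions made for the sketch.

```python
# Toy sketch of the serial default-interventionist model: Type 1 cues a
# default response; Type 2 intervenes only when enough effort is engaged.

def type1_default(problem):
    """Fast, effortless heuristic response (no working memory cost)."""
    return problem["heuristic_answer"]

def type2_override(problem):
    """Slow, effortful analytic response (requires working memory)."""
    return problem["normative_answer"]

def respond(problem, motivation, cognitive_resources, threshold=1.0):
    default = type1_default(problem)
    # Whether the analytic system intervenes depends on motivational,
    # situational and resource factors (cf. the revised model).
    effort = motivation * cognitive_resources
    if effort >= threshold:
        return type2_override(problem)  # analytic filter overrides the default
    return default                      # cognitive miser: the default stands

problem = {"heuristic_answer": "believable", "normative_answer": "invalid"}
print(respond(problem, motivation=0.4, cognitive_resources=1.0))  # believable
print(respond(problem, motivation=1.5, cognitive_resources=1.0))  # invalid
```

Note that in this sketch the default answer is always computed first and only optionally replaced afterwards; this seriality is precisely what distinguishes the default interventionist model from the parallel-competitive one.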

On the other hand, other authors have defended that Type 1 and Type 2 processes operate in parallel and compete for control of the subject's answer. This is the Parallel-Competitive Processing Model (e.g.: Sloman, 1996 ; Smith & DeCoster, 1999 ). These processes, typically described as associative and rule-based, may produce conflicting answers. Some researchers (for example Trippas & Handley, 2018 ) have proposed that the parallel model makes it possible to explain the interaction between structure (logical validity) and knowledge (conclusion believability).

From a different perspective, multiple empirical studies have defended that the key factor explaining the interaction between Type 1 and Type 2 processes is emotion. Some of these investigations analyzed whether emotions or mood increase people's tendency to activate heuristic or analytic processing when reasoning (see for example Perham & Rosser, 2012 ; Tomljenovic & Bubic, 2021 ; see Blanchette et al., 2018 , for a review).

Furthermore, some empirical findings have also revealed the key role of subjects' confidence in performing their tasks. This is a variable that can modulate correct performance or biased reasoning (e.g.: the underconfidence and overconfidence biases); see for example Macbeth et al. (2009) ; Macbeth et al. (2010) , among others.

Our interest now is to present a metacognitive perspective on the dual process theory, one that highlights the role of emotions and feelings of confidence in the probability of activation of Type 2 processes. Let's see it.

5. Dual Process Theory: The Metacognitive Perspective

Thompson (2009) ; Thompson et al. (2011) have defended that subjects' reasoning is modulated by the regulation of emotions. These authors have proposed the "Metacognitive Perspective of the Dual Process Theory" (see Table 1). Specifically, this model explains that the key factors in subjects' responses are the reasoners' metacognitive intuitions and the feeling of rightness (FOR). The FOR is the monitoring process that modulates the probability of activation of Type 2 reasoning. Nevertheless, "it is an open empirical question how the monitoring processes differ" ( Ackerman et al., 2020: p. 20 ).

Using a two-response paradigm (see Table 2), Thompson et al. (2011) gave subjects a reasoning task and asked them to give a fast response without deliberation. Afterwards, they rated their FOR for this answer. Then reasoners had to think about the problem for as long as they needed, with the opportunity to change the initial response. The authors observed that the feeling of rightness (FOR) modulated the decision to change the initial answer. Specifically, results showed that the lower the initial FOR, the more time subjects took rethinking the response and the more likely they were to change it; and the faster the initial answer, the higher the FOR. One year later, Thompson & Morsanyi (2012) confirmed that a key determinant of the activation of Type 2 processes was the affective response that accompanies Type 1 processing.
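This pattern of results can be summarized in a toy numerical sketch. The linear relationships and the constants below are illustrative assumptions for the sketch, not fitted values from Thompson et al. (2011):

```python
# Toy illustration of the two-response paradigm findings: faster initial
# answers carry a higher feeling of rightness (FOR), and a lower FOR leads
# to more rethinking time and a higher probability of changing the answer.

def feeling_of_rightness(initial_answer_seconds):
    """Faster initial answers are accompanied by a higher FOR (0..1)."""
    return max(0.0, 1.0 - 0.1 * initial_answer_seconds)

def rethinking(for_rating):
    """Lower FOR -> longer rethinking and higher change probability."""
    rethink_seconds = 10.0 * (1.0 - for_rating)
    p_change = 1.0 - for_rating
    return rethink_seconds, p_change

fast_for = feeling_of_rightness(initial_answer_seconds=1.0)  # high FOR
slow_for = feeling_of_rightness(initial_answer_seconds=8.0)  # low FOR
assert fast_for > slow_for
assert rethinking(slow_for)[0] > rethinking(fast_for)[0]  # more rethinking
assert rethinking(slow_for)[1] > rethinking(fast_for)[1]  # more changes
```

The sketch captures only the direction of the reported relationships; the actual study measured FOR ratings, rethinking times and answer changes empirically.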

Are emotional factors, then, the key that explains the interaction between Type 1 and Type 2 processes? Some authors have suggested that it is not clear what leads from automatic and intuitive Type 1 reasoning to the activation of effortful Type 2 reasoning. Also, some empirical results (such as Stanovich, 2009 ) indicated low correlations between measures of intelligence and measures of rational thinking dispositions. In this sense, it seemed necessary to propose a new process to explain the Type 1-Type 2 interaction. Let's see some of the Tri-Process Models in the next section.

6. How Many Processes Are Necessary to Explain Reasoning? The Tri-Process Theories

Some authors have proposed that it is necessary to explain the Type 1-Type 2 interaction through a third process ( Evans, 2009 ; Stanovich, 2009, 2012 ) or a third stage ( Pennycook et al., 2015a ).

Table 1. Some of the main dual-process theories in the psychology of reasoning.

In Stanovich's Tri-Process Theory (2009), System 1 includes a set of multiple, different autonomous systems, TASS (see Table 1). Two years later, Stanovich (2011) presented a categorization of these processes in four groups: 1) hard-wired processes; 2) emotional processes; 3) processes that become embedded in our cognitive and behavioural repertoires through overlearning (such as explicit cultural and social habits, or those related to specific knowledge domains); and 4) learned processes, acquired both through deliberate explicit learning (such as formal training) and through implicit learning without conscious awareness. Such learning plays an important role in our skills, perceptions, attitudes and overall behaviour, including biases related to age, socioeconomic status, gender, race, etc. (in this context, see for example Greenwald et al., 2022 ).

These TASS operate in response to specific and heterogeneous stimuli (for example, domain-general processes, domain-specific processing modules, innate inference and decision-making rules, or preattentive processes). System 2 comprises the algorithmic mind and the reflective mind. The algorithmic mind can override the answer produced by System 1 processes and is related to mechanisms such as those demanded in solving the problems of a test of fluid intelligence. It is responsible for mental simulation and hypothetical thought. This mind is related to the capacity for effective Type 2 processing, and it accesses micro-strategies for cognitive operations and production system rules for sequencing behaviors and thoughts. The reflective mind includes control states, and its function is to regulate processing according to higher-level goals. It is related to the disposition to engage in such reasoning, and it accesses general knowledge structures and the person's opinions or beliefs. So, this mind is associated with thinking dispositions (some people are more disposed than others to change their own beliefs, to be open-minded, etc.).

Since the two minds perform different functions in reasoning, they also cause distinct types of reasoning biases, such as: a) biases due to contaminated mindware (the knowledge of the rules or procedures required to solve the task is contaminated); b) mindware gaps (Type 2 rules or algorithms are not available or may not have been learned); c) serial associative cognition with a focal bias (System 2 processes focus on a mental representation that does not permit access to the correct answer, because System 2 processes are unavailable for different reasons, such as the lack of capacity or motivation to make an effort). More recently, Stanovich (2018a) proposed three processing defects: 1) inadequately learned mindware; 2) failure to detect the necessity of overriding the miserly response; and 3) failure to sustain the override process.

In summary, the differences between the algorithmic and the reflective mind correspond to the differentiation, in the measurement of individual differences, between cognitive ability and thinking dispositions; in other words, between the ability to sustain decoupled representations and the regulatory states of the reflective mind, such as the tendency to collect information before making up one's mind ( Evans & Stanovich, 2013 ).

Evans (2009) has also referred to Type 3 processes, similar to Stanovich's reflective mind, whose function is the allocation of resources and the resolution of conflicts between the answers coming from Type 1 and Type 2 processes.

What are the factors that lead to Type 2 engagement? A new perspective on the sources of analytic reasoning was presented by Pennycook et al. (2015a) : the three-stage dual-process model of analytic engagement (see Table 1). One of the authors' main goals was to analyze the bottom-up (i.e., stimulus-triggered) processes that lead to an increase in deliberative thought, independently of top-down factors such as instructional manipulations (e.g.: Evans et al., 2006 ) or individual differences in analytic thinking disposition (e.g.: Stanovich & West, 2008b ).

The authors suggested three different stages of reasoning. In the first stage, problems automatically lead to the activation of distinct initial answers (intuitive responses), with different generation speeds or "fluency" (how quickly they come to mind). This speed modulates the likelihood of conflict detection. Stage 2 is related to conflict monitoring (which relies on the coactivation of potentially competing answers). What makes us think is located in the third stage, specifically, once a conflict has been detected. The final answer is based on the distinction between "rationalization" (the Type 1 output is verified post hoc) and "decoupling" (the Type 1 output is falsified). Cognitive decoupling is a key feature of Type 2 processing ( Stanovich, 2009, 2011 ; see Stanovich et al., 2008 , for a taxonomy of heuristics and biases; see also Blanco, 2022 ).
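The three stages can be sketched as a toy function. This is a minimal illustration under assumed names, fluency values and a conflict threshold, not the authors' implementation:

```python
# Toy sketch of the three-stage model of analytic engagement
# (after Pennycook et al., 2015a). Fluency values and the conflict
# threshold are illustrative assumptions.

def three_stage(intuitions, conflict_threshold=0.3, decouple=False):
    # Stage 1: candidate intuitive answers arrive with different fluency
    # (how quickly each comes to mind); the most fluent one is the default.
    ranked = sorted(intuitions, key=lambda r: r["fluency"], reverse=True)
    initial = ranked[0]
    # Stage 2: conflict monitoring. Conflict is detected when a competing
    # intuition is coactivated at a similar fluency level.
    conflict = (len(ranked) > 1 and
                ranked[0]["fluency"] - ranked[1]["fluency"] < conflict_threshold)
    if not conflict:
        return initial["answer"]  # Type 2 never engages
    # Stage 3: deliberation. Rationalization verifies the Type 1 output
    # post hoc; decoupling falsifies it and switches to the competitor.
    return ranked[1]["answer"] if decouple else initial["answer"]

intuitions = [{"answer": "heuristic", "fluency": 0.9},
              {"answer": "logical", "fluency": 0.8}]
print(three_stage(intuitions))                 # heuristic (rationalization)
print(three_stage(intuitions, decouple=True))  # logical (decoupling)
```

In the sketch, deliberation is reached only when the two candidate answers come to mind at similar speeds; a clearly dominant intuition short-circuits the process at stage 2.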

More recently, Pennycook (2018a) considered that this three-stage model "is certainly incorrect, but it may be correct enough to be useful" (p. 20). In this sense, Pennycook et al. (2015b) had previously analyzed the everyday consequences of subjects' analytic thinking style (related to religion, moral judgments or even smartphone technologies). These authors proposed that the ability to think analytically has an effect on religiosity, paranormal concepts, moral values, creativity and the use of smartphones (for research on these topics, see for example Barr et al., 2015 ). See also Pennycook (2018b) for a review of different investigations showing that analytic thinking is a good predictor of key psychological outcomes in different areas of everyday life (such as conspiratorial beliefs, religious and paranormal beliefs, human morality, creativity, etc.). We will return later to the investigation of these topics.

The proposal of Pennycook et al. (2015a) , the three-stage dual-process model of analytic engagement, has been included in a "new generation of the Dual-Process Theories" and is considered "a prototypical example of a dual-process model 2.0" ( De Neys & Pennycook, 2019: p. 11 ). Specifically, in recent years, there has been a growth of investigations with new experimental designs and novel experimental paradigms (see Table 2): the two-response paradigm (e.g.: Thompson et al., 2011 ); the conflict detection paradigm (e.g.: De Neys & Pennycook, 2019 ); the instructional set paradigm (e.g.: Handley et al., 2011 ); the logic-liking paradigm (e.g.: Morsanyi & Handley, 2012 ) or the two-block paradigm (e.g.: Raoelison et al., 2021 ); see also De Neys (2018) .

Table 2. Some of the most recent experimental paradigms.

Results from these empirical studies have led to new answers to classical questions raised by the traditional dual-process theories. For example: must all errors be fast and all correct responses slow? Does logical reasoning need the activation of Type 2 processes? Is it correct to equate Type 1 processing with bias and Type 2 processing with normatively correct answers? Are Type 1 answers heuristic, logical or both? The answers to these and other questions have also been investigated in the context of this new vision, called "the Dual Process Theories 2.0" ( De Neys, 2018 ). This "next generation of dual-process theories of deduction will have both the rigor and flexibility to explain a wide range of reasoning phenomena" ( Evans et al., 2019: p. 164 ). In this context, let's see the Dual Process-Logical Intuition Model.

7. The Dual Process-Logical Intuition Model: Are Type 1 Answers Heuristic, Logical or Both?

In classical dual process terms, thinking is the result of an interaction between intuitive and deliberative processes; consequently, logical reasoning demands effortful thinking. Nevertheless, in recent years the "Hybrid Models" have defended the possibility of reaching logical answers via Type 1 processes ( De Neys, 2012, 2014 ; Pennycook et al., 2015a, 2015b ; Thompson et al., 2011 ). For example, De Neys (2012) studied the relationship between logical intuitions and the underlying practice-based automatization process that, precisely, gives rise to them. The author proposed that subjects are sensitive to the conflict between their heuristic and normative responses. This conflict detection seems to be empirical evidence that logical and probabilistic knowledge is intuitive and automatically activated when people engage with a reasoning problem.

This new proposal ( De Neys, 2012, 2014 ; De Neys & Pennycook, 2019 ) is a "recent competitor to the serial and parallel model" ( De Neys, 2018: p. 50 ). In this sense, the default interventionist model proposed that heuristic answers are cued by Type 1 thinking but can be overridden and corrected by Type 2 processes. In contrast, the logical intuition model has defended the presence of distinct intuitive responses: heuristic (based on semantic and stereotypical associations) and logical (based on the activation of logical principles); see Table 1. Thus, when people engage in a reasoning task, they could activate fast intuitive logical knowledge and spontaneous access to logical principles (that is, "intuitive logic"). According to this logical-intuition model, logical reasoning does not need the activation of deliberate processes, and the intuitive response is already a logical answer. Morsanyi & Handley (2012) obtained the first empirical evidence of Type 1 logic in syllogistic reasoning tasks. Later, Trippas et al. (2016) supported the implicit nature of the sensitivity to logical validity with conditionals, disjunctions and also syllogisms.

Other authors (such as Stanovich, 2018b ) shared the previous proposal, considering that the autonomous mind includes normative rules and rational strategies that have become automatic through previous practice. In this sense, this "normative mindware" can compete with other, non-normative answers and could be modulated by other factors (for example, the feeling of rightness).

As has been seen, De Neys' research group has defended the presence of distinct intuitive responses: heuristic and logical intuitions. Consequently, logical knowledge is also intuitive and is activated automatically. Analytic thinking will be modulated by the detection of conflict between the two types of intuitive answers. Specifically, conflict detection studies (e.g.: De Neys & Glumicic, 2008 ; De Neys et al., 2008 ) are designed to analyze subjects' processing while solving conflict versions of problems (in which the intuitively cued response is not correct) and no-conflict versions (in which the intuitively cued response is correct). Intuitive reasoning is modulated by the strength of competing intuitions, because different intuitions can vary in their strength or activation level (e.g.: Pennycook et al., 2015a, 2015b ; Thompson et al., 2018 ). If there is a strong difference between the activation levels of the two answers, there will be little conflict in the elaboration of the first response, the subject will not be inclined to modify it and, in consequence, there is no analytic thinking. Nevertheless, if the two intuitive responses have similar activation levels, then conflict arises and the reasoner will engage in deliberation to correct the first, intuitive answer. So, the same System 1 (intuitive) could activate two different types of responses: heuristic and logical.

In summary, the results of the conflict detection investigations have shown that people have both heuristic and normative knowledge, implicit in nature and, therefore, automatically activated. The conflict between the activation levels of the heuristic and logical answers will modulate the activation of System 2.

Another key question is whether subjects detect their biased responses or, on the contrary, are blind to their failed answers. In this context, some researchers have explored bias detection sensitivity. For example, in order to analyze the biased nature of judgments, Morsanyi & Handley (2012) and Trippas et al. (2016) focused on the detection process, using a new logic-liking paradigm (see Table 2). It consists of analyzing whether people discriminate between valid and invalid conclusions through unrelated judgments (such as brightness), for example whether valid inferences are liked more than logically invalid ones. In this sense, people showed a sensitivity to possible conflict between their heuristic judgment and elementary logical principles. Even when subjects did not have to reason with a given task and only had to determine how much they liked a conclusion or how bright they perceived it to be, they implicitly differentiated valid from invalid answers (indicating that they liked valid conclusions more than invalid ones and judged valid conclusions to be brighter than invalid ones). So, reasoners showed sensitivity to logical validity driven by Type 1 processing, seeming to take logical validity into account and to process basic logical characteristics intuitively. In summary, it was observed that people judged the conclusions of logically valid statements to be more valid, more likable and more physically bright than those of invalid statements (see also Šrol, 2022 ; Thompson et al., 2018 , for some recent investigations on conflict detection and individual differences in bias susceptibility). Consequently, these findings seemed to demonstrate that judgments requiring the analysis of features that are independent of the structure of an argument can be modulated by its logical validity.

In general terms, findings from the conflict detection studies seem to demonstrate that people have implicit heuristic and normative knowledge. In this sense, detecting errors appears to occur quite automatically, and System 1 could be smarter than traditionally assumed ( Johnson et al., 2016 ).

Additionally, converging empirical evidence for conflict detection has been obtained using distinct methods, tasks and measures, such as response time ( De Neys & Glumicic, 2008 ); eye tracking and gaze tracking ( De Neys & Glumicic, 2008 ; Morsanyi & Handley, 2012 ); memory probing ( De Neys & Glumicic, 2008 ); confidence measures ( De Neys & Feremans, 2013 ); skin conductance responses ( De Neys, Moyens et al., 2010 ); EEG ( De Neys, Novitskiy et al., 2010 ); or fMRI ( De Neys et al., 2008 ). Thus, different empirical investigations using the instructional set paradigm or the two-response paradigm have supported an "intuitive logic" which may be the default response, processed without deliberation. Nevertheless, although empirical results based on different paradigms, methods and measures have confirmed the existence of this "intuitive logic", recently Howarth et al. (2022) questioned whether these measures really reflect logical intuition. So, they wanted to test the "intuitive logic hypothesis" with a new method, without explicit reference to the logical features of the task. Subjects were instructed to answer regardless of any structural or presentational characteristics of the problem. Specifically, these authors designed two experiments with syllogisms in which subjects had to make random judgments about their logical validity. The results provided evidence for the existence of logical intuitions using a different approach, in which subjects were not instructed to reason logically.

Other recent investigations were designed to analyze the nature of these “logical intuitions”. On the one hand, some authors (e.g., De Neys, 2012 ; De Neys & Pennycook, 2019 ; Handley & Trippas, 2015 ) have defended the existence of intuitive logic effects driven by an assessment of the logical structure of an argument. On the other hand, Ghasemi et al. (2022) have questioned the existence of these “logical intuitions”, arguing that intuitive processes lack any access to logical rules and rely exclusively on superficial problem features to determine a response. Using conditional inference rules, Ghasemi et al. (2022) have suggested that people can draw conclusions intuitively, and that these inferences impact belief judgments, but that they are not logical intuitions. The underlying mechanism of these inferences seems to be the processing of more superficial structural features that happen to align with logical validity (see also Meyer-Grant et al., 2022 , for another recent study that questions the existence of logical intuitions, using conditional and categorical syllogism tasks).

In recent times, some key objectives of De Neys’ research group have been: 1) to analyze the relationship between conflict detection and the temporal stability of subjects’ answers (e.g., Voudouri et al., 2022 ); 2) to investigate the relationship between cognitive capacity and intuitive or deliberate thinking (e.g., Raoelison et al., 2020 ); and 3) to study the effects of practice and training on the two thinking types ( Boissin, Caparos et al., 2022 ).

In relation to the first question, Voudouri et al. (2022) explored the temporal stability of intuitive and deliberative responses, using the two-response paradigm. Subjects were given the same problems twice, in two experimental sessions separated by two weeks. The results showed that performance on the tasks was very stable two weeks later. Moreover, conflict detection in session one was stronger in the cases where subjects changed their response between sessions one and two than when they did not.

In relation to the second question, Raoelison et al. (2020) , using a two-response paradigm, obtained a positive correlation between cognitive capacity and correct intuitive thinking. So, rather than being good at deliberately correcting erroneous intuitions, smart reasoners seemed to have more accurate intuitions. Consequently, the authors ratified the “smart intuitor hypothesis” (see also Svedholm-Häkkinen & Kiikeri, 2022 , for another recent investigation supporting smart intuitions, with everyday arguments and a mouse-tracking methodology).

In relation to the third question, experimental evidence for logical-intuition accounts has frequently come from formal reasoning tasks. These investigations have presented evidence that, for many subjects, some principles of logical reasoning have been automatized to the point of becoming intuitive ( De Neys, 2012, 2018 ; Handley et al., 2011 ). If subjects can automatically activate logical responses, the question is: is it still necessary to propose an effortful Type 2 process? De Neys & Pennycook (2019) considered that the deliberation process may serve different functions depending on the situation: sometimes it serves to override the intuitive response, and sometimes to justify the intuitive heuristic or logical response. In this line, De Neys’ group is interested in analyzing the effects of practice and training on thinking types and cognitive biases (see Boissin, Caparos et al., 2022 , for a recent study). Could practice improve reasoning and prevent biases? This question is related to the debiasing studies, which are explained in the next section.

8. Debiasing Strategies: An Antidote to Cognitive Biases?

One of the main objectives of debiasing research is to analyze whether people can learn to correct erroneous intuitions and to develop correct intuitions through practice and experience. Debiasing studies highlight the importance of training the intuitive System 1, because such interventions could help subjects to consciously correct erroneous intuitions.

First of all, certain prior subject characteristics seem necessary to inhibit biases. Stanovich & West (2008a) proposed the following: 1) subjects have to be aware of their mindware (rules, strategies) previously learned and stored, since these procedures are important for overcoming the bias; 2) people should detect the need for bias override; and 3) they should suppress automatic intuitive answers through decoupling.

Moreover, subjects’ responses to these strategies could be influenced by individual differences in cognitive ability, thinking styles, culture, etc. Some debiasing interventions might even cause the opposite effect (increasing the bias or activating other biases).

Recently, Belton & Dhami (2021) proposed two key elements of a debiasing intervention. First, subjects should receive instructions or training in order to increase their understanding and awareness of cognitive bias. In relation to this, it is important that people be able to identify scenarios in which the intuitive answer is probably biased. Consequently, they could override their intuition and use a correct deliberative strategy. The key question in debiasing would be to enforce deliberate strategies or to suppress impulsivity in specific situations (for example: if somebody sends you a notification asking for your account number because you have won five hundred euros, you should be skeptical, because “it is too good to be true”). Second, the goal of the intervention is to fill mindware gaps by teaching relevant formal rules or concrete strategies to use in a specific task.

In order for debiasing strategies to succeed, it also seems important to be aware of the bias blind spot: people are less aware of their own biases than of those of others, and they assume that they are less susceptible to biases than other people. This tendency could modulate the success of training or instructions.

How do subjects correct their responses and switch from a biased to a correct intuitive answer? Precisely this issue of automatization has come to the forefront of dual-process theorizing. It seems that people with experience are more likely to use autonomous, non-working-memory-dependent, logical intuitions ( De Neys & Pennycook, 2019 ; Evans, 2019 ; Purcell et al., 2021 ; Raoelison et al., 2021 ; Raoelison et al., 2020 ; Stanovich, 2018a , among others). So, it is frequently accepted that practice leads to better results, obtained faster and with less effort.

These debiasing studies have shown that a short, single-shot explanation of the intuitive bias and the correct solution strategy often helps reasoners produce a correct answer. Once the problem has been explained to subjects, they can solve structurally similar problems later. Nevertheless, “practice is just the tip of the iceberg” ( Varga & Hamburger, 2014: p. 1 ), because “a person’s repertoire of strategies may depend upon many factors, such as cognitive development, experience or formal education” ( Payne et al., 1993: p. 33 ; see also Fischhoff, 1982, 2002 , or Larrick, 2004 , for an explanation of distinct debiasing strategies).

It is important to specify that debiasing techniques that have been successful in one domain may not be effective when applied to other biases or tasks, or even when the intervention targets more general cognitive abilities. In this sense, over the last two decades researchers have been interested in investigating the effects of cognitive-training programs. It seems that these activities do not enhance general cognitive ability, and that such interventions improve performance only on problems similar to the trained task (see Sala & Gobet, 2018 , for several meta-analytic reviews of cognitive training programs). From a more general perspective, see Osman (2021) on unconscious bias training.

In the framework of Dual Process Theories, the “trainer instructor” point of view has suggested that a training intervention on specific tasks could generate insight about the solution strategy, and this training could boost correct intuitive responses. In this context, Boissin, Caparos et al. (2022) recently analyzed whether subjects’ reasoning could be boosted by training that focuses on the underlying problem logic. Using a two-response paradigm, the results showed that short training could debias reasoning at an intuitive level, with subjects solving the tasks using logical principles over stereotypical intuitions. In this sense, a single-shot explanation of the intuitive bias and the correct solution strategy could guide subjects to the correct response (see for example Boissin, Caparos, & De Neys, 2022 ; Purcell et al., 2021 ).

In short, as has already been said, these debiasing studies have shown that once the problem has been explained to subjects, they can later solve structurally similar problems. Is the “trainer instructor”, then, the key to removing or at least reducing reasoners’ biases?

Some experimental works suggested that the nature of this training effect is not clear. Does training help participants correct erroneous stereotypical intuitions through deliberation? Or does it help them develop correct intuitions? Recently, Boissin, Caparos, & De Neys (2022) analyzed this question. They observed the impact of deliberation on the efficiency of debiasing training in which the problem logic (the bat-and-ball task) was explained to participants. The results suggested that deliberation helped reasoners benefit from the training, but it was not essential. Specifically, they varied the degree of possible deliberation during the training session by manipulating time constraints and cognitive load. The authors registered that the less constrained the deliberation, the more participants improved. Even under high time pressure and dual-task load, subjects still showed a significant improvement, and this “intuitive” insight effect persisted over two months.
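The bat-and-ball task used in this training pits the intuitive Type 1 answer against the correct Type 2 answer: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball. The following small sketch is our own illustration of the conflict, not part of the authors’ materials:

```python
# Bat-and-ball task: bat + ball = $1.10; bat = ball + $1.00. Ball price?
total = 1.10
difference = 1.00

# Type 1 (intuitive) response: peel off the salient "$1.00"
intuitive_ball = total - difference        # "10 cents" -- the biased answer

# Type 2 (deliberate) response: solve ball + (ball + 1.00) = 1.10
correct_ball = (total - difference) / 2    # "5 cents"

# Check: only the deliberate answer actually satisfies both constraints
bat = correct_ball + difference
assert abs((bat + correct_ball) - total) < 1e-9

print(f"intuitive: {intuitive_ball:.2f}, correct: {correct_ball:.2f}")
```

With the intuitive "10 cents", the bat would cost $1.10 and the pair $1.20, violating the total; detecting that conflict is precisely what the debiasing training targets.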

The “mental contamination” provoked by biased reasoning has been investigated using mostly formal tasks. Nevertheless, the results of the debiasing studies could also be beneficial, not only in experimental reasoning tasks but also in real-life problems. Work on debiasing has also suggested that contextual factors may create high-risk situations that dispose subjects to commit specific biases when they are reasoning or making decisions. In this context, clinical diagnoses, court decisions, gender and racial biases, political ideologies, religion, moral values, fake news and conspiracy theories on social media are topics recently developed within the framework of dual process theories. The next section presents experimental investigations on some of these topics.

9. Heuristics, Biases and Reasoning in Everyday Contexts

Over the years, numerous experimental studies have revealed that the tendency to give the correct answer or to commit biases in different reasoning tasks can be modulated by distinct variables, such as the content of the problem, the experimental instructions, the time available, the subjects’ knowledge, etc. (see for example Asensio et al., 1990 ; Martín et al., 1998 ; Seoane & Valiña, 1988 ; Valiña, 1988 ; Valiña & De Vega, 1988 ; Valiña & Martín, 2016, 2021 ; Valiña et al., 1999 ; Valiña et al., 2014 ).

The tendency to think analytically has very important consequences both in experimental laboratory tasks and in everyday situations. In recent years, investigations within the framework of dual-process theories of higher cognition and domain-specific cognition have expanded to analytical and intuitive thinking and to the content of beliefs (religious and paranormal beliefs, anti-scientific attitudes, fake news, etc.). These new lines of investigation have grown enormously, and key novel findings have emerged from them. An exhaustive review of these works is outside the scope of this section (see for example Pennycook, 2018b ; Pennycook & Rand, 2021 ). The main aim here is to present some empirical studies that have analyzed the critical role that intuitive and deliberative processes play in different everyday contexts. In this regard, it is important to gain more insight into the origins and underlying mechanisms of heuristics and cognitive biases in order to understand and predict human behavior, not only in formal tasks but also in daily-life scenarios. Consequently, this section focuses on some lines of investigation about the role of intuitive and deliberative thinking in political and judicial contexts. Moreover, some empirical studies related to subjects’ attitudes towards science and unwarranted beliefs (fake news, religious or paranormal beliefs) are presented.

Recently, Lindeman et al. (2023) highlighted two key questions related to intuitive thinking. First, because intuitive information processing is the default mode for people in most contexts ( Evans & Stanovich, 2013 ), we are in principle receptive to similar concrete information. Second, many cognitive biases are common when subjects process information in an intuitive mode. In general terms, people often evaluate evidence and test hypotheses in a manner biased by prior knowledge and beliefs (belief bias), opinions and attitudes (myside bias and motivated reasoning), or even their religious, paranormal and conspiracy beliefs (see for example Baron, 2020 ; Yilmaz, 2021 ).

In the late 1990s, Stanovich’s research group analyzed the relation between individual differences in reasoning and cognitive biases. Their findings showed that the tendency to override various cognitive biases was correlated with individual differences in cognitive ability and thinking dispositions (see Stanovich et al., 2016 , for a review of the main results). Nevertheless, there was an “outlier bias” that did not correlate with cognitive capacity or thinking dispositions: the myside bias ( Stanovich & Toplak, 2022 ; Stanovich & West, 2007, 2008a ; Stanovich et al., 2013 ). It occurs when the subject’s belief is a conviction (or “distal belief”, one that cannot be directly verified by experience). This bias “that divides us” ( Stanovich, 2021a ) embodies our values and is related to one’s own prior opinions, attitudes, emotional commitments, or ego preoccupation. It derives from our general worldviews or, in political terms, from our ideologies ( Stanovich, 2021a ; see also Rocha, 2022 , for a recent publication about thinking styles and politicians).

In this sense, political science studies showed that cognitive sophistication (high cognitive ability, high educational level, knowledge, political awareness, personal dispositions, etc.) not only does not reduce myside bias but increases it. Stanovich (2021a) analyzed the sociopolitical implications of myside bias and studied how to avoid this error, which is particularly strong in “cognitive elites”. In fact, these subjects presented an important “meta-bias” called the “bias blind spot”. For example, West et al. (2012) registered positive correlations between subjects’ level of cognitive sophistication and their likelihood of committing a bias blind spot. In other words, subjects believed that various motivational biases were far more prevalent in others than in themselves. Related to this, in political scenarios people often show ideological blindness (experimental studies related to political ideology and the ability to reason are presented later).

Subjects believe that only they perceive the world objectively. Most cognitive biases in the heuristics and biases literature are in fact negatively correlated with cognitive ability; that is, more intelligent people are less biased. In order to avoid this peculiar bias, we must train “cognitive decoupling” (suppressing the automatic response, abstracting the relevant features, disregarding the context, and enabling hypothetical reasoning). Moreover, we must practice perspective switching. This is very important and can “allow us to conceptualize the world in new ways” ( Stanovich, 2021a: p. 161 ).

Regarding political and ideological beliefs, cognitive sophistication and science beliefs, Pennycook et al. (2023) observed that one’s level of basic science knowledge was the most consistent predictor of beliefs about science. Moreover, reasoning ability was associated with pro-science beliefs. These authors have therefore suggested that educators and policymakers should focus on increasing basic science literacy and critical thinking (see for example Aini et al., 2021 , for a recent review of cognitive biases in scientific work).

In this context, Lewandowsky & Oberauer (2016) studied the motivated rejection of science, that is, the tendency to reject findings that question one’s basic beliefs or worldview, which these authors explained in terms of motivated cognition. Moreover, the cognitive mechanisms that facilitated the rejection of science (for example, processing data superficially in the direction of the desired interpretation) were registered regardless of political orientation. In addition, general education and scientific literacy did not reduce or prevent the rejection of science but rather increased the polarization of opinions along partisan lines. Later, Lewandowsky & Oberauer (2021) conducted a further investigation into the relationship between people’s political views and their attitudes towards science (such as attitudes towards vaccination or climate change).

Related to these topics, the research by Tappin et al. (2021) examined whether cognitive sophistication magnifies politically motivated reasoning (that is, reasoning modulated by the need to reach conclusions congenial to one’s political group identity). The authors registered a direct effect of political group identity on reasoning, but cognitive sophistication did not magnify this effect. Previous studies, such as Calvillo et al. (2020) , analyzed ideological belief bias using syllogisms containing political content. Participants (from two ideologies: conservatives and liberals) had to judge the validity of the conclusions presented. The results showed that subjects with different ideologies may accept distinct conclusions from the same evidence. So, political beliefs seemed to provoke biases in subjects’ reasoning.

Political ideology could also affect judges and prosecutors in their daily work. In this sense, another line of investigation concerns heuristics and biases in court decisions. Recent years have seen a growing interest in judicial decision-making. This topic covers issues such as cognitive models of judicial decision-making (e.g., the story model), the impact of extralegal factors on decisions, prejudice (e.g., gender and racial bias), moral judgments, group decision-making, and the comparison of lay and professional judges (see for example Berthet, 2022 ).

Eyal & Eyal (2013) reviewed several studies on cognitive biases relating to elements of the hearing process (considering evidence and information), ruling or sentencing. Their findings suggested that irrelevant factors that should not affect judgment might cause systematic and predictable biases in judges’ decision-making processes. During a trial, judges are presented with evidence; they may ask for additional or different evidence, judge evidence as inappropriate, or decide to give more (or less) weight to certain pieces of evidence and reject others. In this process, such tasks might be affected by several cognitive biases (for example, confirmation bias can affect judges when they hear and evaluate information; motivated reasoning, the non-conscious tendency to reason towards a preferred outcome; etc.).

Awareness of heuristic thinking and of the possible biases affecting judicial decisions seems an important prerequisite for trying to reduce or avoid these biases (see Fischhoff, 1982 , for several techniques to limit biases, including warning people in advance about the existence of a bias, describing its likely direction, illustrating biases to the judges, and providing training, feedback, coaching, etc.).

In summary, investigations on this topic showed that factors beyond the law affect judges in their role. To improve their judgment, it is important that judges, prosecutors and even lay juries know their biases and how these biases work (see for example Fariña et al., 1998 ; Jólluskin et al., 1998 ).

Another recent and important line of investigation on intuitive-deliberative processes looks for the underlying cognitive bases of attitudes towards religion, science and paranormal beliefs, and of subjects’ cognitive susceptibility to fake news. Nowadays, the way in which people receive information has changed radically. The proliferation of the internet and social networks could be affecting how subjects reason, make decisions and, ultimately, how they behave towards science and unwarranted beliefs (fake news, religious or paranormal beliefs, etc.).

In this sense, for example, Lindeman et al. (2023) designed an investigation to study “misbeliefs” related to anti-scientific attitudes and conspiracy beliefs. They analyzed the conceptual factors and mechanisms that could lead to specific attitudes (such as anti-vaccination). The results indicated that the same background cognitive factors lead both to a general susceptibility towards anti-scientific beliefs and to other unwarranted beliefs, such as fake news, paranormal and conspiracy beliefs. Moreover, the strongest relationships with anti-vaccination attitudes were registered for poor scientific literacy, intuitive thinking, religious and non-religious supernatural beliefs, and ontological confusions. In line with previous results, it has been shown that epistemically suspect beliefs were more strongly associated with an intuitive than with an analytic thinking style.

Another investigation, by Rizeq et al. (2021) , analyzed three domains of “contaminated mindware” (paranormal beliefs, conspiracy beliefs and anti-science attitudes). Predictors included cognitive ability, cognitive reflection, and the dispositional tendency towards actively open-minded thinking; all of them were significantly correlated with the three aforementioned domains. For other studies on these topics, see for example Barr et al. (2015) ; Pennycook et al. (2015b) ; Šrol (2022) . See also Sanz Blasco & Carro de Francisco (2019) for a theoretical review.

The results from these lines of investigation have key practical implications for future studies and for learning techniques to improve reasoning and decision making in different everyday scenarios. Analyzing the strategies subjects use to test hypotheses (see for example Carretero, 1980 ) or elucidating the cognitive aspects of subjects’ attitudes may help guide future effective educational interventions aimed at improving, for example, public health and, in general, public policy. These programs could help reasoners by encouraging them to reduce or even avoid the biased thinking that can be present in laboratory contexts and also in daily-life situations. In this line, it is a key issue to provide suitable instructional environments and to emphasize positive views of and attitudes towards scientifically warranted beliefs (on health issues, climate change, etc.).

Much of the literature on heuristics and thinking biases relates to issues of rationality (see for example Evans, 2021 ; Fiedler et al., 2021 ; Goel, 2022 ; Hahn & Harris, 2014 ; Kahneman & Tversky, 1983 ; Stanovich, 1999, 2021b ; Stanovich et al., 2008 ; Stanovich & West, 2000 ; Viale, 2021 ). In this sense, as Pinker (2021) proposed, just as citizens should understand the basics of history, science, and the written word, they should master the intellectual tools of accurate reasoning, such as logic and critical thinking, which are essential for calibrating risky decisions and evaluating doubtful statements in our lives. See also Carretero & Sobrino López (2020) ; Wagner (2022) .

10. The Dual Process Theories: Some Open Questions and Concluding Remarks

Some of the key unresolved questions around the Dual Process Theories are the following: 1) what makes us think? 2) how do intuitive and analytic processes operate? 3) is the interaction between the two processes sequential or parallel? 4) what are the key factors that determine the intervention of Type 2 processes: cognitive ability, rational thinking dispositions, instructions, the time available? 5) is reasoning the implementation of concrete beliefs, abstract structures, or something else? 6) what is the nature of intuition: heuristic, logical, or both? 7) what are the mechanisms underlying heuristics and biases? In relation to these questions, some final remarks are presented next.

More research is needed to understand how the two types of thought work ( Evans, 2018 ), so there is still much work to be done. In this sense, for example, the distinction between explicit and implicit processes is an important question, because “it can improve the understanding of some phenomena typical of human thought... implicit processes could explain the acquisition of beliefs or stereotypes capable of influencing subsequent judgments” ( Tubau & López Moliner, 1998: p. 20 ).

Moreover, it would be useful for people to be familiar with intuitive biases and to learn debiasing strategies to avoid them. This should start from as early as school age, because many reasoning biases grow from childhood to early adolescence, when intuition-based reasoning develops ( Lindeman et al., 2023 ). Additionally, it is also key to highlight the role of adaptive primary knowledge (acquired quickly and effortlessly) and secondary knowledge (which requires time and cognitive resources and is hardly motivating) in relation to the two types of reasoning processes (see for example Lespiau & Tricot, 2022 ).

Investigating the origins and underlying mechanisms of heuristics and cognitive biases may help find better ways to predict their occurrence. This has important consequences at many practical levels, in daily-life situations and subjects’ attitudes. Besides, intuitive and reflective beliefs provide key cues for everyday human activities and could be beneficial for reasoning (see Trémolière & Lespiau, 2022 , for an analysis of the two types of beliefs in the framework of Dual Process Theories). Additionally, professionals in the real world frequently operate under limited time or work overload, and these factors could modulate their behaviour. Thus, it would be very important for them to learn strategies to make better inferences and decisions. Consequently, future investigations should continue studying how doctors, judges, or even politicians choose between distinct heuristics when they are reasoning and making decisions.

Some authors highlight that there are gaps in the literature ( Pennycook, 2018a ), and it might be necessary to re-think some of the fundamental assumptions of the original Dual Process Theory ( De Neys, 2018 ). Nevertheless, the existence of dual processes or systems of thought is “one of the most widespread and influential theoretical ideas in contemporary cognitive psychology” ( Rhodes et al., 2020: p. 185 ).

Future research should continue to explore the mechanisms underlying the processes of reasoning and decision making, so heuristics and cognitive biases should continue to be investigated in depth. Most importantly, in the real world professionals have to reason and make decisions under conditions of incomplete information, fatigue or very limited time; consequently, insights into the interplay between their environments and heuristics should be applied to help them make better inferences and decisions (see also Berthet, 2022 ; Hertwig & Pachur, 2015 ).

In general terms, Dual-Process Theory provides a “valuable high-level framework within which more specific and testable models can be developed” ( Evans, 2018: p. 163 ).

NOTES

*Part of this work was presented at 22nd Conference of the European Society for Cognitive Psychology-ESCOP, held in Lille, France, August, 2022: Martín, M. & Valiña, M.D. (2022) . Reasoning with Heuristics: Theoretical Explanations and Beyond. http://hdl.handle.net/10347/29319.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Ackerman, R., Douven, I., Elqayam, S., & Teodorescu, K. (2020). Satisficing, Meta-Reasoning, and the Rationality of Further Deliberation. In S. Elqayam, I. Douven, J. St. B. T. Evans, & N. Cruz (Eds.), Logic and Uncertainty in the Human Mind: A Tribute to David E. Over (pp. 10-26). Routledge.
https://doi.org/10.4324/9781315111902-2
[2] Aini, R. Q., Bandari, Y. S., Rusmana, A. N., & Ha, N. (2021). Addressing Challenges to a Systematic Thinking Pattern of Scientist: A Literature Review of Cognitive Bias in Scientific Work. Brain, Digital, & Learning, 11, 417-430.
https://doi.org/10.31216/BDL.20210027
[3] Asensio, M., Martín Cordero, J., García-Madruga, J. A., & Recio, J. (1990). “Ningún Iroqués era mohicano”: La influencia del contenido en las tareas de razonamiento lógico. Estudios de Psicología, 43-44, 35-60. https://doi.org/10.1080/02109395.1990.10821142
[4] Bago, B., & De Neys, W. (2017). Fast Logic?: Examining the Time Course Assumption of Dual Process Theory. Cognition, 158, 90-109.
https://doi.org/10.1016/j.cognition.2016.10.014
[5] Bago, B., & De Neys, W. (2020). Advancing the Specification of Dual Process Models of Higher Cognition: A Critical Test of the Hybrid Model View. Thinking & Reasoning, 26, 1-30. https://doi.org/10.1080/13546783.2018.1552194
[6] Baron, J. (2020). Religion, Cognitive Style, and Rational Thinking. Current Opinion in Behavioral Sciences, 34, 64-68. https://doi.org/10.1016/j.cobeha.2019.12.015
[7] Barr, N., Pennycook, G., Stolz, J. A., & Fugelsang, J. A. (2015). The Brain in Your Pocket: Evidence That Smartphones Are Used to Supplant Thinking. Computers in Human Behavior, 48, 473-480. https://doi.org/10.1016/j.chb.2015.02.029
[8] Belton, I. K., & Dhami, M. K. (2021). Cognitive Biases and Debiasing in Intelligence Analysis. In R. Viale (Ed.), Routledge Handbook of Bounded Rationality (pp. 548-560). Routledge. https://doi.org/10.4324/9781315658353-42
[9] Berthet, V. (2022). The Impact of Cognitive Biases on Professionals’ Decision-Making: A Review of Four Occupational Areas. Frontiers in Psychology, 12, Article 802439.
https://doi.org/10.3389/fpsyg.2021.802439
[10] Blanchette, I., Caparos, S., & Trémolière, B. (2018). Emotion and Reasoning. In L. J. Ball, & V. A. Thompson (Eds.), The Routledge International Handbook of Thinking and Reasoning (pp. 57-70). Routledge.
[11] Blanco, F. (2022). Cognitive Bias. In J. Vonk, & T. K. Shackelford (Eds.), Encyclopedia of Animal Cognition and Behavior (pp. 1487-1493). Springer.
https://doi.org/10.1007/978-3-319-55065-7_1244
[12] Boissin, E., Caparos, S., & De Neys, W. (2022). Examining the Role of Deliberation in De-Bias Training. https://ssrn.com/abstract=4106107
https://doi.org/10.2139/ssrn.4106107
[13] Boissin, E., Caparos, S., Voudouri, A., & De Neys, W. (2022). Debiasing System 1: Training Favours Logical over Stereotypical Intuiting. Judgment and Decision Making, 17, 646-690. https://doi.org/10.1017/S1930297500008895
[14] Calvillo, D. P., Swan, A. B., & Rutchick, A. M. (2020). Ideological Belief Bias with Political Syllogisms. Thinking & Reasoning, 26, 291-310.
https://doi.org/10.1080/13546783.2019.1688188
[15] Carretero, M. (1980). Tropezando muchas veces en la misma piedra. Cuadernos de Pedagogía, 67-68, 10-12.
[16] Carretero, M., & Sobrino López, D. (2020). “Fake News” y pensamiento crítico. Iber: Didáctica de las Ciencias Sociales, Geografía e Historia, 101, 4-7.
[17] Cortada de Kohan, N., & Macbeth, G. (2006). Los sesgos cognitivos en la toma de decisiones. Revista de Psicología, 2, 55-72.
https://repositorio.uca.edu.ar/handle/123456789/6131
[18] De Neys, W. (2012). Bias and Conflict: A Case of Logical Intuitions. Perspectives on Psychological Science, 7, 28-38. https://doi.org/10.1177/1745691611429354
[19] De Neys, W. (2014). Conflict Detection, Dual Processes, and Logical Intuitions: Some Clarifications. Thinking & Reasoning, 20, 169-187.
https://doi.org/10.1080/13546783.2013.854725
[20] De Neys, W. (2018). Bias, Conflict, and Fast Logic: Towards a Hybrid Dual Process Future? In W. De Neys (Ed.), Dual Process Theory 2.0 (pp. 28-65). Routledge.
https://doi.org/10.4324/9781315204550-4
[21] De Neys, W. (2021). On Dual and Single Process Models of Thinking. Perspectives on Psychological Science, 16, 1412-1427. https://doi.org/10.1177/1745691620964172
[22] De Neys, W., & Feremans, V. (2013). Development of Heuristic Bias Detection in Elementary School. Developmental Psychology, 49, 258-269.
https://doi.org/10.1037/a0028320
[23] De Neys, W., & Glumicic, T. (2008). Conflict Monitoring in Dual Process Theories of Thinking. Cognition, 106, 1284-1299. https://doi.org/10.1016/j.cognition.2007.06.002
[24] De Neys, W., Moyens, E., & Vansteenwegen, D. (2010). Feeling We’re Biased: Autonomic Arousal and Reasoning Conflict. Cognitive, Affective, & Behavioral Neuroscience, 10, 208-216. https://doi.org/10.3758/CABN.10.2.208
[25] De Neys, W., Novitskiy, N., Ramautar, J., & Wagemans, J. (2010). What Makes a Good Reasoner? Brain Potentials and Heuristic Bias Susceptibility. Proceedings of the 32nd Annual Meeting of the Cognitive Science Society, 32, 1020-1025.
https://escholarship.org/uc/item/1b8261kk
[26] De Neys, W., & Pennycook, G. (2019). Logic, Fast and Slow: Advances in Dual-Process Theorizing. Current Directions in Psychological Science, 28, 503-509.
https://doi.org/10.1177/0963721419855658
[27] De Neys, W., Vartanian, O., & Goel, V. (2008). When Our Brains Detect That We Are Biased. Psychological Science, 19, 483-489.
https://doi.org/10.1111/j.1467-9280.2008.02113.x
[28] Dewey, C. (2022). Metacognitive Control in Single- vs. Dual-Process Theory. Thinking & Reasoning. https://doi.org/10.1080/13546783.2022.2047106
[29] Evans, J. St. B. T. (1982). The Psychology of Deductive Reasoning. Taylor & Francis Group.
[30] Evans, J. St. B. T. (1984). Heuristic and Analytic Processes in Reasoning. British Journal of Psychology, 75, 451-468. https://doi.org/10.1111/j.2044-8295.1984.tb01915.x
[31] Evans, J. St. B. T. (1989). Bias in Human Reasoning: Causes and Consequences. Lawrence Erlbaum Associates, Inc.
[32] Evans, J. St. B. T. (2003). In Two Minds: Dual-Process Accounts of Reasoning. Trends in Cognitive Sciences, 7, 454-459. https://doi.org/10.1016/j.tics.2003.08.012
[33] Evans, J. St. B. T. (2004). History of the Dual Process Theory of Reasoning. In K. Manktelow, & M. C. Chung (Eds.), Psychology of Reasoning. Theoretical and Historical Perspectives (pp. 241-266). Psychology Press.
[34] Evans, J. St. B. T. (2006). The Heuristic-Analytic Theory of Reasoning: Extension and Evaluation. Psychonomic Bulletin & Review, 13, 378-395.
https://doi.org/10.3758/BF03193858
[35] Evans, J. St. B. T. (2007a). Hypothetical Thinking: Dual Processes in Reasoning and Judgement. Psychology Press.
[36] Evans, J. St. B. T. (2007b). On the Resolution of Conflict in Dual Process Theories of Reasoning. Thinking & Reasoning, 13, 321-339.
https://doi.org/10.1080/13546780601008825
[37] Evans, J. St. B. T. (2008). Dual-Processing Accounts of Reasoning, Judgment and Social Cognition. Annual Review of Psychology, 59, 255-278.
https://doi.org/10.1146/annurev.psych.59.103006.093629
[38] Evans, J. St. B. T. (2009). How Many Process Theories Do We Need?: One, Two or Many? In J. St. B. T. Evans, & K. Frankish (Eds.), In Two Minds: Dual Processes and beyond (pp. 33-54). Oxford University Press.
https://doi.org/10.1093/acprof:oso/9780199230167.003.0002
[39] Evans, J. St. B. T. (2010). Thinking Twice. Two Minds in One Brain. Oxford University Press.
[40] Evans, J. St. B. T. (2012). Dual-Process Theories of Deductive Reasoning: Facts and Fallacies. In K. J. Holyoak, & R. G. Morrison (Eds.), The Oxford Handbook of Thinking and Reasoning (pp. 115-133). Oxford University Press.
https://doi.org/10.1093/oxfordhb/9780199734689.013.0008
[41] Evans, J. St. B. T. (2013). Reasoning, Rationality and Dual Processes. Selected Works of Jonathan St. B. T. Evans. Psychology Press. https://doi.org/10.4324/9781315886268
[42] Evans, J. St. B. T. (2018). Dual-Process Theories. In L. J. Ball, & V. A. Thompson (Eds.), The Routledge International Handbook of Thinking and Reasoning (pp. 151-166). Routledge.
[43] Evans, J. St. B. T. (2019). Reflections on Reflection: The Nature and Function of Type 2 Processes in Dual-Process Theories of Reasoning. Thinking & Reasoning, 25, 383-415.
https://doi.org/10.1080/13546783.2019.1623071
[44] Evans, J. St. B. T. (2021). Bounded Rationality, Reasoning and Dual Processing. In R. Viale (Ed.), Routledge Handbook of Bounded Rationality (pp. 185-195). Routledge.
https://doi.org/10.4324/9781315658353-11
[45] Evans, J. St. B. T., Ball, L., & Thompson, V. A. (2019). Belief Bias in Deductive Reasoning. In R. Pohl (Ed.), Cognitive Illusions: Intriguing Phenomena in Thinking, Judgment, and Memory (pp. 154-172). Psychology Press.
https://doi.org/10.4324/9781003154730-12
[46] Evans, J. St. B. T., Handley, S. J., Neilens, H., & Over, D. E. (2010). The Influence of Cognitive Ability and Instructional Set on Causal Conditional Inference. Quarterly Journal of Experimental Psychology, 63, 892-909. https://doi.org/10.1080/17470210903111821
[47] Evans, J. St. B. T., & Over, D. E. (1996). Rationality and Reasoning. Psychology Press.
[48] Evans, J. St. B. T., Over, D. E., & Handley, S. J. (2003). A Theory of Hypothetical Thinking. In D. Hardman, & L. Macchi (Eds.), Thinking: Psychological Perspectives on Reasoning, Judgement and Decision Making (pp. 3-22). Wiley.
[49] Evans, J. St. B. T., & Stanovich, K. E. (2013). Dual-Process Theories of Higher Cognition: Advancing the Debate. Perspectives on Psychological Science, 8, 223-241.
https://doi.org/10.1177/1745691612460685
[50] Peer, E., & Gamliel, E. (2013). Heuristics and Biases in Judicial Decisions. Court Review: The Journal of the American Judges Association, 49, 114-118.
https://digitalcommons.unl.edu/ajacourtreview/422
[51] Fariña, F., Real, S., & Arce, M. (1998). ¿Por qué los jurados pro-culpabilidad reconocen más información sobre el caso que los pro-inocencia? La formulación de la hipótesis de un “procesamiento de verificación”. In M. D. Valiña, & M. J. Blanco (Eds.), I Jornadas de Psicología del Pensamiento (Actas) (pp. 433-441). Cursos e Congresos da Universidade de Santiago de Compostela. N˚ 114. Servicio de Publicacións da Universidade de Santiago de Compostela. http://hdl.handle.net/10347/11960
[52] Fiedler, K., Prager, J., & McCaughey, L. (2021). Heuristics and Biases. In M. Knauff, & W. Spohn (Eds.), The Handbook of Rationality (pp. 159-171). The MIT Press.
https://doi.org/10.7551/mitpress/11252.003.0015
[53] Fischhoff, B. (1982). Debiasing. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under Uncertainty: Heuristics and Biases (pp. 422-444). Cambridge University Press. https://doi.org/10.1017/CBO9780511809477.032
[54] Fischhoff, B. (2002). Heuristics and Biases in Application. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and Biases. The Psychology of Intuitive Judgment (pp. 730-748). Cambridge University Press.
https://doi.org/10.1017/CBO9780511808098.043
[55] Frankish, K., & Evans, J. St. B. T. (2009). The Duality of Mind: An Historical Perspective. In J. St. B. T. Evans, & K. Frankish (Eds.), In Two Minds: Dual Processes and Beyond (pp. 1-29). Oxford University Press.
https://doi.org/10.1093/acprof:oso/9780199230167.003.0001
[56] Frederick, S. (2005). Cognitive Reflection and Decision Making. Journal of Economic Perspectives, 19, 25-42. https://doi.org/10.1257/089533005775196732
[57] Ghasemi, O., Handley, S., Howarth, S., Newman, I. R., & Thompson, V. A. (2022). Logical Intuition Is Not Really about Logic. Journal of Experimental Psychology: General, 151, 2009-2028. https://doi.org/10.1037/xge0001179
[58] Gigerenzer, G., Todd, P. M., & The ABC Research Group (1999). Simple Heuristics That Make Us Smart. Oxford University Press.
[59] Gilovich, T., Griffin, D., & Kahneman, D. (2002). Heuristics and Biases. The Psychology of Intuitive Judgment. Cambridge University Press.
https://doi.org/10.1017/CBO9780511808098
[60] Goel, V. (2022). Reason and Less. Pursuing Food, Sex, and Politics. The MIT Press.
https://doi.org/10.7551/mitpress/12811.001.0001
[61] Greenwald, A. G., Dasgupta, N., Dovidio, J. F., Kang, J., Moss-Racusin, C. A., & Teachman, B. A. (2022). Implicit-Bias Remedies: Treating Discriminatory Bias as a Public-Health Problem. Psychological Science in the Public Interest, 23, 7-40.
https://doi.org/10.1177/15291006211070781
[62] Hahn, U., & Harris, A. J. L. (2014). What Does It Mean to Be Biased: Motivated Reasoning and Rationality. In B. H. Ross (Ed.), Psychology of Learning and Motivation (Vol. 61, pp. 41-102). Elsevier Academic Press.
https://doi.org/10.1016/B978-0-12-800283-4.00002-2
[63] Handley, S., Newstead, S. E., & Trippas, D. (2011). Logic, Beliefs, and Instruction: A Test of the Default Interventionist Account of Belief Bias. Journal of Experimental Psychology: Learning, Memory and Cognition, 37, 28-43. https://doi.org/10.1037/a0021098
[64] Handley, S., & Trippas, D. (2015). Dual Processes and the Interplay between Knowledge and Structure: A New Parallel Processing Model. In B. H. Ross (Ed.), Psychology of Learning and Motivation (Vol. 62, pp. 33-58).
https://doi.org/10.1016/bs.plm.2014.09.002
[65] Hertwig, R., & Pachur, T. (2015). Heuristics, History of. In J. D. Wright (Ed.), International Encyclopedia of the Social & Behavioral Sciences (Second Edition) (pp. 829-835). Elsevier. https://doi.org/10.1016/B978-0-08-097086-8.03221-9
[66] Hertwig, R., & Todd, P. M. (2002). Heuristics. In V. S. Ramachandran (Ed.), Encyclopedia of the Human Brain (pp. 449-460). Academic Press.
https://doi.org/10.1016/B0-12-227210-2/00162-X
[67] Howarth, S., Handley, S., & Polito, V. (2022). Uncontrolled Logic: Intuitive Sensitivity to Logical Structure in Random Responding. Thinking & Reasoning, 28, 61-96.
https://doi.org/10.1080/13546783.2021.1934119
[68] Johnson, E. D., Tubau, E., & De Neys, W. (2016). The Doubting System 1: Evidence for Automatic Substitution Sensitivity. Acta Psychologica, 164, 56-64.
https://doi.org/10.1016/j.actpsy.2015.12.008
[69] Jólluskin, G., Fariña, F., & Real, S. (1998). Incidencia de la distribución de status en la actividad cognitiva, heurística y recuerdo. In M. D. Valiña, & M. J. Blanco (Eds.), I Jornadas de Psicología del Pensamiento (Actas) (pp. 445-456). Cursos e Congresos da Universidade de Santiago de Compostela. N˚ 114. Servicio de Publicacións da Universidade de Santiago de Compostela. http://hdl.handle.net/10347/11961
[70] Kahneman, D. (2011). Thinking, Fast and Slow. Macmillan.
[71] Kahneman, D., & Frederick, S. (2002). Representativeness Revisited: Attribute Substitution in Intuitive Judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and Biases: The Psychology of Intuitive Judgment (pp. 49-81). Cambridge University Press. https://doi.org/10.1017/CBO9780511808098.004
[72] Kahneman, D., Sibony, O., & Sunstein, C. R. (2021). Noise: A Flaw in Human Judgment. Hachette Book Group.
[73] Kahneman, D., Slovic, P., & Tversky, A. (Eds.) (1982). Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press.
https://doi.org/10.1017/CBO9780511809477
[74] Kahneman, D., & Tversky, A. (1973). On the Psychology of Prediction. Psychological Review, 80, 237-251. https://doi.org/10.1037/h0034747
[75] Kahneman, D., & Tversky, A. (1982). On the Study of Statistical Intuitions. Cognition, 11, 123-141. https://doi.org/10.1016/0010-0277(82)90022-1
[76] Kahneman, D., & Tversky, A. (1983). Can Irrationality Be Intelligently Discussed? The Behavioral and Brain Sciences, 6, 509-510. https://doi.org/10.1017/S0140525X00017246
[77] Kahneman, D., & Tversky, A. (1996). On the Reality of Cognitive Illusions. Psychological Review, 103, 582-591. https://doi.org/10.1037/0033-295X.103.3.582
[78] Kahneman, D., & Tversky, A. (Eds.) (2000). Choices, Values and Frames. Cambridge University Press. https://doi.org/10.1017/CBO9780511803475
[79] Kelly, M., & Barron, A. B. (2022). The Best of Both Worlds: Dual Systems of Reasoning in Animals and AI. Cognition, 225, Article ID: 105118.
https://doi.org/10.1016/j.cognition.2022.105118
[80] Keren, G., & Schul, Y. (2009). Two Is Not Always Better than One: A Critical Evaluation of Two-System Theories. Perspectives on Psychological Science, 4, 533-550.
https://doi.org/10.1111/j.1745-6924.2009.01164.x
[81] Larrick, R. P. (2004). Debiasing. In D. J. Koehler, & N. Harvey (Eds.), Blackwell Handbook of Judgment and Decision Making (pp. 316-337). Blackwell Publishing.
https://doi.org/10.1002/9780470752937.ch16
[82] Lespiau, F., & Tricot, A. (2022). Primary vs. Secondary Knowledge Contents in Reasoning: Motivated and Efficient vs. Overburdened. Acta Psychologica, 227, Article ID: 103610. https://doi.org/10.1016/j.actpsy.2022.103610
[83] Lewandowsky, S., & Oberauer, K. (2016). Motivated Rejection of Science. Current Directions in Psychological Science, 25, 217-222. https://doi.org/10.1177/0963721416654436
[84] Lewandowsky, S., & Oberauer, K. (2021). Worldview-Motivated Rejection of Science and the Norms of Science. Cognition, 215, Article ID: 104820.
https://doi.org/10.1016/j.cognition.2021.104820
[85] Lindeman, M., Svedholm-Häkkinen, A. M., & Riekki, T. J. J. (2023). Searching for the Cognitive Basis of Anti-Vaccination Attitudes. Thinking & Reasoning, 29, 111-136.
https://doi.org/10.1080/13546783.2022.2046158
[86] Macbeth, G., López Alonso, A. O., Razumiejczyk, E., Sosa, R. A., Pereyra, C. I., & Fernández, H. (2009). Sesgos de la calibración en tareas de razonamiento lógico. SUMMA Psicológica UST, 6, 19-30. https://doi.org/10.18774/448x.2009.6.59
[87] Macbeth, G., Razumiejczyk, E., López Alonso, A. O., & Cortada de Kohan, N. (2010). Correlación entre autoestima y calibración en tareas de razonamiento abstracto. Revista CES Psicología, 3, 48-61.
[88] Martín, M., & Valiña, M. D. (2022). Reasoning with Heuristics: Theoretical Explanations and Beyond. In 22nd Conference of the European Society for Cognitive Psychology, ESCOP. http://hdl.handle.net/10347/29319
[89] Martín, M., Carretero, M., Asensio, M., & Valiña, M. D. (1998). Importancia de factores pragmáticos en inferencia condicional: Un estudio cronométrico. In M. D. Valiña, & M. J. Blanco (Eds.), I Jornadas de Psicología del Pensamiento (Actas) (pp. 79-96). Cursos e Congresos da Universidade de Santiago de Compostela. N˚ 114. Servicio de Publicacións da Universidade de Santiago de Compostela.
http://hdl.handle.net/10347/11934
[90] Mercier, H., & Sperber, D. (2011). Why Do Humans Reason? Arguments for an Argumentative Theory. Behavioral and Brain Sciences, 34, 57-74.
https://doi.org/10.1017/S0140525X10000968
[91] Meyer-Grant, C. G., Cruz, N., Singmann, H., Winiger, S., Goswami, S., Hayes, B. K., & Klauer, K. C. (2022). Are Logical Intuitions Only Make-Believe? Reexamining the Logic-Liking Effect. Journal of Experimental Psychology: Learning, Memory, and Cognition. https://doi.org/10.1037/xlm0001152
[92] Morsanyi, K., & Handley, S. J. (2012). Logic Feels So Good-I Like It! Evidence for Intuitive Detection of Logicality in Syllogistic Reasoning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38, 596-616. https://doi.org/10.1037/a0026099
[93] Mumford, M. D., & Leritz, L. E. (2005). Heuristics. In K. Kempf-Leonard (Ed.), Encyclopedia of Social Measurement (pp. 203-208). Elsevier.
https://doi.org/10.1016/B0-12-369398-5/00168-7
[94] Osman, M. (2004). An Evaluation of Dual Process Theories of Reasoning. Psychonomic Bulletin & Review, 11, 998-1010. https://doi.org/10.3758/BF03196730
[95] Osman, M. (2013). A Case Study: Dual Process Theory of Higher Cognition—Commentary on Evans & Stanovich (2013). Perspectives on Psychological Science, 8, 248-252.
https://doi.org/10.1177/1745691613483475
[96] Osman, M. (2021). UK Public Understanding of Unconscious Bias and Unconscious Bias Training. Psychology, 12, 1058-1069. https://doi.org/10.4236/psych.2021.127063
[97] Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The Adaptive Decision Maker. Cambridge University Press. https://doi.org/10.1017/CBO9781139173933
[98] Pennycook, G. (2018a). A Perspective on the Theoretical Foundation of Dual Process Models. In W. De Neys (Ed.), Dual Process Theory 2.0 (pp. 5-27). Routledge.
https://doi.org/10.4324/9781315204550-2
[99] Pennycook, G. (Ed.) (2018b). The New Reflectionism in Cognitive Psychology. Routledge. https://doi.org/10.4324/9781315460178
[100] Pennycook, G., Bago, B., & McPhetres, J. (2023). Science Beliefs, Political Ideology, and Cognitive Sophistication. Journal of Experimental Psychology: General, 152, 80-97.
https://doi.org/10.1037/xge0001267
[101] Pennycook, G., Fugelsang, J. A., & Koehler, D. J. (2015a). What Makes Us Think? A Three Stage Dual-Process Model of Analytic Engagement. Cognitive Psychology, 80, 34-72. https://doi.org/10.1016/j.cogpsych.2015.05.001
[102] Pennycook, G., Fugelsang, J. A., & Koehler, D. J. (2015b). Everyday Consequences of Analytic Thinking. Current Directions in Psychological Science, 24, 425-443.
https://doi.org/10.1177/0963721415604610
[103] Pennycook, G., & Rand, D. G. (2021). The Psychology of Fake News. Trends in Cognitive Sciences, 25, 388-402. https://doi.org/10.1016/j.tics.2021.02.007
[104] Perham, N., & Rosser, J. (2012). “Not Thinking” Helps Reasoning. Current Psychology, 31, 160-167. https://doi.org/10.1007/s12144-012-9140-7
[105] Pinker, S. (2021). Racionalidad. Qué es, por qué escasea y cómo promoverla. Planeta.
[106] Purcell, Z. A., Wastell, C. A., & Sweller, N. (2021). Domain-Specific Experience and Dual-Process Thinking. Thinking & Reasoning, 27, 239-267.
https://doi.org/10.1080/13546783.2020.1793813
[107] Raoelison, M., Keime, M., & De Neys, W. (2021). Think Slow, Then Fast: Does Repeated Deliberation Boost Correct Intuitive Responding? Memory & Cognition, 49, 873-883.
https://doi.org/10.3758/s13421-021-01140-x
[108] Raoelison, M., Thompson, V. A., & De Neys, W. (2020). The Smart Intuitor: Cognitive Capacity Predicts Intuitive Rather than Deliberative Thinking. Cognition, 204, Article ID: 104381. https://doi.org/10.1016/j.cognition.2020.104381
[109] Rhodes, S., Galbraith, N., & Manktelow, K. (2020). Delusional Rationality. In S. Elqayam, I. Douven, J. St. B. T. Evans, & N. Cruz (Eds.), Logic and Uncertainty in the Human Mind (pp. 178-191). Routledge. https://doi.org/10.4324/9781315111902-11
[110] Rizeq, J., Flora, D. B., & Toplak, M. E. (2021). An Examination of the Underlying Dimensional Structure of Three Domains of Contaminated Mindware: Paranormal Beliefs, Conspiracy Beliefs, and Anti-Science Attitudes. Thinking & Reasoning, 27, 187-211.
https://doi.org/10.1080/13546783.2020.1759688
[111] Rocha, R. (Ed.) (2022). Estilos de Pensamiento en Políticos Profesionales. Hacia una Mejor Representación Política Sustantiva en Méjico. UNAM.
[112] Sala, G., & Gobet, F. (2018). Cognitive Training Does Not Enhance General Cognition. Trends in Cognitive Sciences, 23, 9-20. https://doi.org/10.1016/j.tics.2018.10.004
[113] Sanz Blasco, R., & Carro de Francisco, C. (2019). Susceptibilidad cognitiva a las falsas informaciones. Historia y Comunicación Social, 24, 521-531.
https://doi.org/10.5209/hics.66296
[114] Seoane, G., & Valiña, M. D. (1988). Efecto del contenido y microgénesis de la tarea en inferencia condicional. Cognitiva, 1, 271-298.
[115] Sloman, S. A. (1996). The Empirical Case for Two Systems of Reasoning. Psychological Bulletin, 119, 3-22. https://doi.org/10.1037/0033-2909.119.1.3
[116] Smith, E. R., & DeCoster, J. (1999). Associative and Rule-Based Processing: A Connectionist Interpretation of Dual-Process Models. In S. Chaiken, & Y. Trope (Eds.), Dual-Process Theories in Social Psychology (pp. 323-336). The Guilford Press.
[117] Šrol, J. (2022). Individual Differences in Epistemically Suspect Beliefs: The Role of Analytic Thinking and Susceptibility to Cognitive Biases. Thinking & Reasoning, 28, 125-162. https://doi.org/10.1080/13546783.2021.1938220
[118] Stanovich, K. E. (1999). Who Is Rational? Studies of Individual Differences in Reasoning. Psychology Press. https://doi.org/10.4324/9781410603432
[119] Stanovich, K. E. (2009). Distinguishing the Reflective, Algorithmic, and Autonomous Minds: Is It Time for a Tri-Process Theory? In J. St. B. T. Evans, & K. Frankish (Eds.), In Two Minds: Dual Processes and Beyond (pp. 55-88). Oxford University Press.
https://doi.org/10.1093/acprof:oso/9780199230167.003.0003
[120] Stanovich, K. E. (2011). Rationality and the Reflective Mind. Oxford University Press.
https://doi.org/10.1093/acprof:oso/9780195341140.001.0001
[121] Stanovich, K. E. (2012). On the Distinction between Rationality and Intelligence: Implications for Understanding Individual Differences in Reasoning. In K. J. Holyoak, & R. G. Morrison (Eds.), The Oxford Handbook of Thinking and Reasoning (pp. 433-455). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199734689.013.0022
[122] Stanovich, K. E. (2018a). Miserliness in Human Cognition: The Interaction of Detection, Override and Mindware. Thinking & Reasoning, 24, 423-444.
https://doi.org/10.1080/13546783.2018.1459314
[123] Stanovich, K. E. (2018b). How to Think Rationally about World Problems. Journal of Intelligence, 6, Article 25. https://doi.org/10.3390/jintelligence6020025
[124] Stanovich, K. E. (2021a). The Bias That Divides Us: The Science and Politics of Myside Thinking. MIT Press. https://doi.org/10.7551/mitpress/13367.001.0001
[125] Stanovich, K. E. (2021b). Why Humans Are Cognitive Misers and What It Means for the Great Rationality Debate. In R. Viale (Ed.), Routledge Handbook of Bounded Rationality (pp. 196-206). Routledge. https://doi.org/10.4324/9781315658353-12
[126] Stanovich, K. E., & Toplak, M. E. (2022). The Elusive Search for Individual Differences in Myside Thinking. In J. Musolino, J. Sommer, & P. Hemmer (Eds.), The Cognitive Science of Belief. A Multidisciplinary Approach (pp. 465-487). Cambridge University Press. https://doi.org/10.1017/9781009001021.033
[127] Stanovich, K. E., Toplak, M. E., & West, R. F. (2008). The Development of Rational Thought: A Taxonomy of Heuristics and Biases. In R. V. Kail (Ed.), Advances in Child Development and Behavior (Vol. 36, pp. 251-285). Elsevier Academic Press.
https://doi.org/10.1016/S0065-2407(08)00006-2
[128] Stanovich, K. E., & West, R. F. (2000). Individual Differences in Reasoning: Implications for the Rationality Debate? Behavioral and Brain Sciences, 23, 645-665.
https://doi.org/10.1017/S0140525X00003435
[129] Stanovich, K. E., & West, R. F. (2007). Natural Myside Bias Is Independent of Cognitive Ability. Thinking & Reasoning, 13, 225-247.
https://doi.org/10.1080/13546780600780796
[130] Stanovich, K. E., & West, R. F. (2008a). On the Failure of Cognitive Ability to Predict Myside and One-Sided Thinking Biases. Thinking & Reasoning, 14, 129-167.
https://doi.org/10.1080/13546780701679764
[131] Stanovich, K. E., & West, R. F. (2008b). On the Relative Independence of Thinking Biases and Cognitive Ability. Journal of Personality and Social Psychology, 94, 672-695.
https://doi.org/10.1037/0022-3514.94.4.672
[132] Stanovich, K. E., West, R. F., & Toplak, M. E. (2013). Myside Bias, Rational Thinking, and Intelligence. Current Directions in Psychological Science, 22, 259-264.
https://doi.org/10.1177/0963721413480174
[133] Stanovich, K. E., West, R. F., & Toplak, M. E. (2016). The Rationality Quotient: Toward a Test of Rational Thinking. MIT Press.
https://doi.org/10.7551/mitpress/9780262034845.001.0001
[134] Svedholm-Häkkinen, A. M., & Kiikeri, M. (2022). Cognitive Miserliness in Argument Literacy? Effects of Intuitive and Analytic Thinking on Recognizing Fallacies. Judgment and Decision Making, 17, 331-361. https://urn.fi/URN:NBN:fi:tuni-202204203310
https://doi.org/10.1017/S193029750000913X
[135] Tappin, B. M., Pennycook, G., & Rand, D. G. (2021). Rethinking the Link between Cognitive Sophistication and Politically Motivated Reasoning. Journal of Experimental Psychology: General, 150, 1095-1114. https://doi.org/10.1037/xge0000974
[136] Thompson, V. A. (2009). Dual-Process Theories: A Metacognitive Perspective. In J. St. B. T. Evans, & K. Frankish (Eds.), In Two Minds: Dual Processes and Beyond (pp. 171-195). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199230167.003.0008
[137] Thompson, V. A. (2014). What Intuitions Are... and Are Not. In B. H. Ross (Ed.), Psychology of Learning and Motivation (Vol. 60, pp. 35-75). Elsevier Academic Press.
https://doi.org/10.1016/B978-0-12-800090-8.00002-0
[138] Thompson, V. A., & Morsanyi, K. (2012). Analytic Thinking: Do You Feel Like It? Mind & Society, 11, 93-105. https://doi.org/10.1007/s11299-012-0100-6
[139] Thompson, V. A., Pennycook, G., Trippas, D., & Evans, J. St. B. T. (2018). Do Smart People Have Better Intuitions? Journal of Experimental Psychology: General, 147, 945-961. https://doi.org/10.1037/xge0000457
[140] Thompson, V. A., Prowse Turner, J. A., & Pennycook, G. (2011). Intuition, Reason, and Metacognition. Cognitive Psychology, 63, 107-140.
https://doi.org/10.1016/j.cogpsych.2011.06.001
[141] Tomljenovic, H., & Bubic, A. (2021). Cognitive and Emotional Factors in Health Behaviour: Dual-Process Reasoning, Cognitive Styles and Optimism as Predictors of Healthy Lifestyle, Healthy Behaviours and Medical Adherence. Current Psychology, 40, 3256-3264. https://doi.org/10.1007/s12144-019-00268-z
[142] Trémolière, B., & Lespiau, F. (2022). Intuitive and Reflective Beliefs in a Modern World. In J. Musolino, J. Sommer, & P. Hemmer (Eds.), The Cognitive Science of Belief. A Multidisciplinary Approach (pp. 172-190). Cambridge University Press.
https://doi.org/10.1017/9781009001021.012
[143] Trippas, D., & Handley, S. J. (2018). The Parallel Processing Model of Belief Bias: Review and Extensions. In W. De Neys (Ed.), Dual Process Theory 2.0 (pp. 28-46). Routledge.
https://doi.org/10.4324/9781315204550-3
[144] Trippas, D., Handley, S. J., Verde, M. F., & Morsanyi, K. (2016). Logic Brightens My Day: Evidence for Implicit Sensitivity to Logical Validity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42, 1448-1457.
https://doi.org/10.1037/xlm0000248
[145] Tubau, E., & López Moliner, J. (1998). Procesos Implícitos y Explícitos de Pensamiento. In M. D. Valiña, & M. J. Blanco (Eds.), I Jornadas de Psicología del Pensamiento (Actas) (pp. 13-22). Cursos e Congresos da Universidade de Santiago de Compostela. N˚ 114. Servicio de Publicacións da Universidade de Santiago de Compostela.
http://hdl.handle.net/10347/11926
[146] Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, 185, 1124-1131. https://doi.org/10.1126/science.185.4157.1124
[147] Tversky, A., & Kahneman, D. (1983). Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment. Psychological Review, 90, 293-315.
https://doi.org/10.1037/0033-295X.90.4.293
[148] Tversky, A., & Kahneman, D. (1986). Rational Choice and the Framing of Decisions. Journal of Business, 59, 251-278. https://doi.org/10.1086/296365
[149] Valiña, M. D. (1988). Efecto del contenido y microgénesis de la tarea en razonamiento silogístico con cuantificadores probabilísticos: Un estudio cronométrico. Cognitiva, 1, 199-212.
[150] Valiña, M. D., & De Vega, M. (1988). Un estudio experimental del razonamiento cotidiano en tareas de silogismos: Una aproximación pragmática. Cognitiva, 1, 33-62.
[151] Valiña, M. D., & Martín, M. (2016). The Influence of Semantic and Pragmatic Factors in Wason’s Selection Task: State of the Art. Psychology, 7, 925-940.
https://doi.org/10.4236/psych.2016.76094
[152] Valiña, M. D., & Martín, M. (2021). Reasoning with the THOG Problem: A Forty-Year Retrospective. Psychology, 12, 2042-2069. https://doi.org/10.4236/psych.2021.1212124
[153] Valiña, M. D., Martín, M., & Seoane, G. (2014). Importancia del conocimiento pragmático en inferencia condicional: Una aproximación experimental. Suma Psicológica, 21, 81-88. https://doi.org/10.1016/S0121-4381(14)70010-4
[154] Valiña, M. D., Seoane, G., Ferraces, M. J., & Martín, M. (1999). The Importance of Pragmatic Aspects in Conditional Reasoning. The Spanish Journal of Psychology, 2, 20-31.
https://doi.org/10.1017/S1138741600005424
[155] Varga, A. L., & Hamburger, K. (2014). Beyond Type 1 vs. Type 2 Processing: The Tri-Dimensional Way. Frontiers in Psychology, 5, Article 993.
https://doi.org/10.3389/fpsyg.2014.00993
[156] Verschueren, N., Schaeken, W., & d’Ydewalle, G. (2005). A Dual-Process Specification of Causal Conditional Reasoning. Thinking & Reasoning, 11, 239-278.
https://doi.org/10.1080/13546780442000178
[157] Viale, R. (Ed.) (2021). Routledge Handbook of Bounded Rationality. Routledge.
https://doi.org/10.4324/9781315658353
[158] Voudouri, A., Bialek, M., Domurat, A., Kowal, M., & De Neys, W. (2022). Conflict Detection Predicts the Temporal Stability of Intuitive and Deliberate Reasoning. Thinking & Reasoning. https://doi.org/10.1080/13546783.2022.2077439
[159] Wagner, P. A. (2022). Tools for Teaching and Role-Modeling Critical Thinking. Psychology, 13, 1335-1341. https://doi.org/10.4236/psych.2022.138086
[160] Wason, P. C., & Evans, J. St. B. T. (1975). Dual Processes in Reasoning? Cognition, 3, 141-154. https://doi.org/10.1016/0010-0277(74)90017-1
[161] West, R. F., Meserve, R. J., & Stanovich, K. E. (2012). Cognitive Sophistication Does Not Attenuate the Bias Blind Spot. Journal of Personality and Social Psychology, 103, 506-519. https://doi.org/10.1037/a0028857
[162] Yilmaz, O. (2021). Cognitive Styles and Religion. Current Opinion in Psychology, 40, 150-154. https://doi.org/10.1016/j.copsyc.2020.09.014

Copyright © 2024 by authors and Scientific Research Publishing Inc.
This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.