Decomposing the Seduction: The Role of Individual Differences, Relevant Knowledge, and Education on the Seductive Allure of Neuroscience Explanations (SANE Effect)
1. Introduction
Although individual differences are a normal part of human behavior, many people are impressed by the power of scientific explanations. While examples from different disciplines exist, some of the most compelling are explanations grounded in neuroscience, a field that offers the promise of direct insight into the functions and processes of the human brain. Since the early 2000s, researchers have identified and scrutinized a phenomenon known as the Seductive Allure of Neuroscience Explanations (SANE), which refers to the tendency of individuals—especially non-experts—to perceive explanations containing neuroscientific information as more credible and satisfying than those without it, even when the added information is irrelevant or misleading [2]. While the discipline of neuroscience has produced a number of scientific advances, with knowledge increasing at a rapid pace [3], the implications of the SANE phenomenon are far-reaching, affecting science communication [4] [5], education [6]-[11], public policy, and even legal proceedings [12]-[15].
Weisberg et al. (2008) [2] were among the first to systematically explore this effect. They conducted multiple experiments demonstrating that the inclusion of superfluous neuroscience content made otherwise unsatisfactory psychological explanations more convincing. Participants were more likely to judge flawed explanations as scientifically sound when they contained irrelevant neuroscience references. Crucially, this effect was powerful among non-experts, suggesting a knowledge-based boundary condition.
McCabe and Castel (2008) [16] extended these findings by focusing on the role of brain images. In their study, participants rated scientific articles as more credible when they were accompanied by brain scans, even when the scans added no new information. This “neuro-realism” [17] reflects the belief that brain-based explanations are inherently more trustworthy.
Fernandez-Duque et al. (2015) [18] explored whether the SANE effect generalized across different types of explanations and found that the allure of neuroscience is not limited to psychological content. They concluded that the perceived objectivity and precision of neuroscience explanations increase their persuasive power across diverse domains. Rhodes et al. (2014) [19] proposed that the SANE effect is partially driven by the complexity of neuroscience language, which may act as a cue for epistemic authority. When laypeople encounter complex or jargon-filled language, they may infer that the explanation is based on rigorous science, thus increasing its persuasive impact. The SANE effect remains demonstrable, even when potentially relevant confounds are controlled for, including the use of neuroscience terminology or the length of the neuroscience-based text [19] [20].
Often, the SANE effect is considered within the broader scope of research and discussion that includes a contextual examination of cognitive heuristics and biases [21] [22]. Briefly, within this framework, individuals are postulated to rely on a variety of mental shortcuts to make judgments, particularly in complex or unfamiliar domains. The field of neuroscience, with its technical jargon, complex statistical techniques, and visually compelling brain images, may serve as a heuristic cue, suggesting both scientific legitimacy and explanatory depth.
Other scholars have elaborated on this idea. For example, Churchland (1986) [23] and Gazzaniga (2005) [24] have argued that reductionist explanations—those that attempt to explain complex phenomena by referring to simpler underlying mechanisms—have considerable intuitive appeal. Indeed, neuroscience, which quite often describes various psychological states in terms of brain activity, is consistent with such appeal. Consequently, individuals may favor neuroscience explanations because they appear to offer a fundamental, mechanistic account of behavior.
1.1. Psychological Mechanisms
One way the SANE effect can be understood is through the lens of dual-process theories of cognition, which distinguish between fast, intuitive thinking (System 1) and slow, analytical reasoning (System 2) [21] [25]. Neuroscience explanations may appeal to System 1 by offering intuitively satisfying answers that seem authoritative.
Research by Gruber and Dickerson (2012) [4] found that participants were more likely to endorse pseudoscientific explanations when they were presented with scientific-sounding language. The use of “sciency” wording suggests that the mere presence of scientific terminology—especially from neuroscience—can trigger heuristic processing and increase perceived plausibility.
As noted earlier, reductionist thinking contributes to the seductive allure. Simply put, reductionism is the idea that often-complex phenomena can be explained by straightforward, more fundamental parts [23]. Neuroscience offers this kind of explanation by mapping mental processes to neural activity. In a culture that values empirical and mechanistic accounts, this style of reasoning appears especially compelling. For example, Hook and Farah (2013) [26] examined how people interpret neuroscience-based explanations and found that participants often misattribute causality to correlational data, particularly when brain scans are included in the explanations. This further underscores the role of cognitive shortcuts in the acceptance of neuroscientific explanations.
1.2. Personality Traits and Information Evaluation
Previous research has indicated that individual differences in affect, cognitive style, and epistemic motivation can significantly shape how individuals process and interpret scientific information [19]. Given this, several personality variables may be relevant in accounting for variability in susceptibility to the SANE effect. For example, one of the Big Five personality traits, openness to experience, encompasses intellectual curiosity, aesthetic sensitivity, and a preference for novelty and complexity [27]. Individuals who score high on this trait may be more receptive to new scientific paradigms, including neuroscience. However, they may also overvalue complex-sounding explanations [18]. On the other hand, empirical evidence exists that individuals who score high on measures of actively open-minded thinking are less susceptible to being influenced by their prior beliefs [28].
1.3. Boundary Conditions and Moderators of the SANE Effect
While several factors appear to moderate the SANE effect, one that is more obvious and salient is scientific literacy. Farah and Hook (2013) [29] found that individuals with higher levels of scientific understanding were less susceptible to the SANE effect. This suggests that education can mitigate the influence of irrelevant neuroscience information.
In line with this result, Hopkins et al. (2016) [30] replicated these findings with undergraduate students. The results revealed that students who had completed coursework in critical thinking or neuroscience were less likely to be influenced by irrelevant neuroscientific details. Expertise in a relevant field, such as neuroscience or a related discipline that incorporates neuroscience knowledge, minimizes the impact of extraneous neuroscience information [31] [2]. However, when such expertise is missing because training occurred in other fields, the SANE effect persists [32]. Conferred immunity to the SANE effect appears to require at least an undergraduate degree in a major where neuroscience information is part of the pedagogical content [33]. Simply put, this supports the notion that the SANE effect is not universal but contingent on individual differences in areas such as training and cognition.
In addition, whether the SANE effect is present and pronounced may depend on the contextual relevance of the information. With this idea in mind, Im et al. (2020) [34] conducted a series of experiments to test whether the relevance of neuroscience information has an impact on the SANE effect. One key finding was that when participants were explicitly informed that the neuroscience content was irrelevant or unlikely to be relevant, a significant reduction in the SANE effect was observed. The Im et al. study implies that critical engagement with the content can neutralize the bias. Conversely, Fischer and Mundry (2021) [35] found that even when participants were exposed to subtly misleading neuroscience jargon, they still rated explanations as more credible. This suggests that the SANE effect is not easily extinguished and may even operate at an unconscious level.
1.4. Critiques and Challenges
Despite reasonable empirical support, the SANE phenomenon has not gone unchallenged. A number of investigations failed to find evidence of a SANE effect [4] [36]-[42]. Some researchers argue that the effect may be overstated or limited in scope. For example, Michael et al. (2013) [39] failed to replicate the findings of Weisberg et al. (2008) [2] in specific contexts, particularly when the neuroscience explanations were transparently irrelevant. In addition, the impact of the SANE effect on participant ratings of scientific explanations is present when coupled with poorly reasoned or circular explanations but absent when considered in the context of a well-reasoned argument [2] [18] [20] [30] [43] or when the participants disagree with the reported results [44]. Such contradictions emphasize the importance of replication and the examination of methodological rigor, as well as the boundaries of the SANE effect.
Further, given that the effect sizes reported in some studies are relatively modest, such results raise questions about the practical significance of the SANE effect [1] [18] [19]. Thus, while it may be statistically significant, its real-world impact may depend on additional factors such as media framing, cultural attitudes toward science, and the credibility of the source [45]. However, as Funder and Ozer (2019) [46] argue, even small effects can accumulate over time or manifest in consequential decisions, particularly in education, journalism, and policy domains where scientific claims are communicated to the public.
1.5. The Present Research
Building on previous research [1] [47], the present investigation was designed to examine the influence of factors such as academic discipline, academic rank, gender, personality traits, knowledge of psychology and neuroscience, and perceptions of superficial markers of scientific authenticity on the SANE effect.
The purpose of the present study centered around four related research questions. First, the academic rank and academic major of a sample of college students were examined to determine whether, and to what extent, college coursework is associated with the SANE effect, and whether such differences vary as a function of training (science versus nonscience majors) and the amount of academic coursework completed by each respondent. Additional individual-difference characteristics included personality, gender, and strength of religious faith. Further, participants were tested on their knowledge of psychological and neuroscience information.
Second, using the methods and research descriptions reported by Im and colleagues (2017) [1], I examined whether evidence of the Seductive Allure of Neuroscience Explanations (SANE) effect was present among undergraduate and graduate college students enrolled at an Evangelical Christian university. Briefly, the research descriptions were based on popular media articles that involved psychological research to address issues that were broadly educational in nature. Further, the research descriptions varied by whether they included irrelevant neuroscience findings to support the research claims. It was predicted that the perceived scientific credibility would increase as the research descriptions were shaped by the inclusion of differing amounts and types of extraneous neuroscience information. Specifically, consistent with Im et al.’s research, the research descriptions were shaped by the inclusion of four different levels of neuroscience information: (1) a control condition, where the research articles contained only a psychological description of the results and its relevance, (2) a verbal description that contained an irrelevant neuroscience result, (3) the same verbal description paired with a visual feature (a bar or line graph), and (4) the verbal description paired with a detailed feature (a structural brain image with superimposed fMRI data) [2] [16].
Third, using the framework of Krull and Silvera (2013) [47], the role of research topics and tools that are considered more or less “scientific” was included. It was predicted that the superficial features of research content could systematically influence perceptions of scientific credibility. Thus, although the defining hallmark of science lies in its methodology, individuals may be inclined to judge a research endeavor as more scientific based on whether it involves characteristically scientific topics—such as the brain or cancer—and whether it employs tools normally associated with science—such as microscopes or an MRI. In contrast, research involving less traditionally “scientific” topics, such as personality or social interactions, and using less “scientific” tools, such as video games or questionnaires, may be perceived as less scientific.
Lastly, following consideration of the areas described above, an exploration of bivariate correlations was conducted, followed by a series of hierarchical regression analyses using composite scientific credibility scores based on the research descriptions provided by Im and colleagues (2017) [1]. The explicit goal was to elucidate the role of the information presented, including irrelevant information, as well as measures reflecting individual differences among the participants. These included academic major (science versus nonscience), as reflected in each participant’s chosen area of study at university. Last, the influence of academic rank and associated college experiences was considered as a potentially relevant predictor.
2. Method
2.1. Participants
The present study included 423 full-time undergraduate and graduate students enrolled at a D/PU university [48] in South Florida. All participants were recruited through the university’s official LISTSERV. Data collection procedures were consistent with the provisions set by the American Psychological Association (2017) [49] and the U.S. Department of Health and Human Services guidelines for the protection of human subjects (2023) [50]. The final study protocol was approved by the Palm Beach Atlantic University Institutional Review Board (IRB ID#: 2023.11.09-22C). Of the original 423 participants, 16 were excluded due to incomplete data, leading to a final sample size of 407. Of these, 284 were undergraduates and 123 were enrolled in a graduate program.
Table 1 provides detailed demographic information. Participants represented various academic disciplines, and 89.2% identified as Christian. In terms of race, 55% identified as White, followed by 22.6% as Hispanic/Latino and 22.4% as Black, multiracial, or another racial background. This distribution closely reflects the university’s broader undergraduate population, which is approximately 53% White. Last, the proportion of individuals majoring in a science discipline was 48.4% of the sample. However, at the graduate level, only 32.5% of the sample were majoring in a science or health science area.
Table 1. Participant characteristics.

Gender
  Female: 301 (74.0%)
  Male: 106 (26.0%)
Race/Ethnicity
  Hispanic/Latina: 92 (22.6%)
  Black, Multiracial, Other: 91 (22.4%)
  White: 224 (55.0%)
Academic Rank
  Undergraduate: 284 (69.8%)
    Lower-Level: 148 (36.4%)
    Upper-Level: 136 (33.4%)
  Graduate: 123 (30.2%)
Area of Study
  Nonscience: 210 (51.6%)
  Science: 197 (48.4%)

Note. Values represent n (proportion of sample).
2.2. Materials and Procedure
In order to maximize response rates and collect data from students, invitations to participate were initially sent at the beginning of the third week of the semester, with follow-up emails sent two more times at two-week intervals. All email invitations were sent via the official student Listserv. At the completion of the survey, students were invited to inquire about the study outcome by emailing or phoning the primary investigator.
2.2.1. Five-Factor Model of Personality (TIPI)
Because I anticipated that various personality traits might serve as meaningful factors in the SANE effect, all participants completed the Ten Item Personality Inventory (TIPI) [51]. The TIPI assesses the five domains of the Five-Factor Model (FFM) using two items per domain. Despite its brevity, the TIPI demonstrates acceptable reliability and validity when compared with longer FFM measures, particularly in terms of convergent validity [52].
2.2.2. The Santa Clara Strength of Religious Faith Questionnaire
Religiosity was assessed using the Santa Clara Strength of Religious Faith Questionnaire (SCSORF) [53]. The SCSORF is a ten-item self-report instrument designed to assess the strength of religious faith independent of specific religious affiliation. Using a 4-point Likert scale ranging from “strongly disagree” to “strongly agree”, research participants respond to items such as “My religious faith is extremely important to me” and “I look to my faith as providing meaning and purpose in my life.” Total scores range from 10 to 40, with higher scores reflecting higher levels of religiosity. The SCSORF has demonstrated strong internal consistency and split-half reliability across multiple studies [53]-[56].
2.2.3. Knowledge about Psychology Questionnaire
The test of participant knowledge about psychological topics was adapted from a series of psychological myths discussed by Lilienfeld and colleagues (2010) [57]. The test of psychological knowledge included 48 statements, each related to one of 11 areas of psychology and reflecting an associated “fact” that, on the basis of research, is untrue. Examples included “Opposites attract: We are most romantically attracted to people who differ from us.” and “People with schizophrenia have multiple personalities.” While the original Lilienfeld et al. statements were all expressed as facts that are in reality “myths,” the wording of some items was changed so that the correct answer was true. For example, the statement “Most mentally ill people are violent” was modified to include not (“Most mentally ill people are not violent”), thus changing the correct answer to true. The 48 items were presented in randomized order, and respondents were instructed to indicate whether each statement was true or false. The final modified questionnaire and corresponding answer key can be provided upon request.
2.2.4. Knowledge about Neuroscience Questionnaire
Items for a test of participant knowledge about neuroscience topics were adapted from a series of neuroscience myths discussed by Jarrett (2014) [58] and Im et al. (2015, 2017) [1]. The resulting neuroscience questionnaire comprised 21 items, with six items adapted from Im et al. and 15 items adapted from Jarrett (2014) [58]. Respondents were instructed to indicate whether each statement was true or false. Example items from the former included “fMRI can measure the activity of a single neuron.” and “We use our brains 24 hours per day.” Items adapted from Jarrett (2014) [58] included “Brain scans can read your mind.” and “There’s a God spot in the brain.” As with the knowledge of psychology questionnaire, some statements were phrased so that the correct answer was true. Last, the 21 items were presented in randomized order. Knowledge was defined as the proportion of correct responses out of 21. The final modified questionnaire and corresponding answer key can be provided upon request.
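The scoring rule just described, the proportion of correct true/false judgments, can be sketched as follows. The three items and keys shown are illustrative placeholders, not items from the actual questionnaire:

```python
# Minimal sketch of the true/false scoring scheme described above.
# The answer key and responses below are hypothetical examples only;
# the actual 21-item questionnaire and key are available from the author.

def score_knowledge(responses, answer_key):
    """Return the proportion of correct true/false responses."""
    correct = sum(r == k for r, k in zip(responses, answer_key))
    return correct / len(answer_key)

answer_key = [False, True, False]  # e.g., item 1 is a myth, so the keyed answer is False
responses = [False, True, True]    # one participant's judgments (2 of 3 correct)
print(round(score_knowledge(responses, answer_key), 2))
```

For the full instrument, the same function would be applied to 21 responses, yielding the proportion-correct knowledge score used in the analyses.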
2.2.5. Superficial Details & the Nature of Perceptions about the Sciences
To explore the impact of superficial details on perceptions of the scientific nature of research, the terminology associated with the research topic and the equipment used were examined. That is, from the perspective of the participants, how “scientific” does the research appear to be? Following the methodology of Krull and Silvera (2013) [47], participants completed an online experiment in which they rated 20 brief scenarios presented in random order. The 20 items can be found in Appendix B of the Krull and Silvera paper [47]. Briefly, the scenarios reflected a 2 × 2 design, varying by research topic (natural science vs. behavioral science) and research equipment (natural science vs. behavioral science), with five scenarios representing each of the four conditions. For instance, a scenario with a natural science topic and natural science equipment stated, “Dr. Williams studies the brain. To do this research, Dr. Williams uses an MRI (magnetic resonance imaging)” (p. 1666). A scenario reflecting a behavioral science topic with behavioral science equipment read: “Dr. Rogers studies social interactions. To do this research, Dr. Rogers uses questionnaires.” (p. 1666). To facilitate data analysis, the 20 research scenarios were classified into one of four combinations: a natural science topic paired with natural science equipment, a behavioral science topic paired with behavioral science equipment, a natural science topic paired with behavioral science equipment, or a behavioral science topic paired with natural science equipment. After reading each scenario, participants rated the perceived scientific quality of the research on a 9-point scale ranging from Not at all scientific to Extremely scientific.
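The 2 × 2 classification and per-condition averaging described above amount to the following computation, shown here with a handful of fabricated placeholder ratings rather than study data:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical scenario metadata and one participant's 9-point ratings;
# the values are placeholders for illustration, not data from the study.
scenarios = [
    {"topic": "natural",    "equipment": "natural",    "rating": 8},
    {"topic": "natural",    "equipment": "natural",    "rating": 9},
    {"topic": "behavioral", "equipment": "behavioral", "rating": 5},
    {"topic": "natural",    "equipment": "behavioral", "rating": 6},
    {"topic": "behavioral", "equipment": "natural",    "rating": 7},
]

# Classify each scenario into one of the four topic/equipment cells,
# then average the ratings within each cell.
cells = defaultdict(list)
for s in scenarios:
    cells[(s["topic"], s["equipment"])].append(s["rating"])

cell_means = {cell: mean(ratings) for cell, ratings in cells.items()}
print(cell_means[("natural", "natural")])
```

In the study itself, each of the four cells contains five scenarios per participant, but the grouping logic is the same.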
2.2.6. Summary of Research Articles
The eight research scenarios used in the present study are those first employed by Im and colleagues (2017) [1] and based on published research reported in their article. Briefly, participants read a brief description of eight different peer-reviewed publications with subject matter in psychology, neuroscience, and education. Each of the eight articles corresponded to one of the four conditions created by crossing research Process (learning vs. development) and research Discipline (cognitive vs. affective), with two topics per condition. For example, one of the published articles dealt with learning (process) and cognition (discipline) and focused on the educational implications of psychological research on multitasking [59]. In conditions with a neuroscience content (or framing) factor, the article also included reference to voxel-based morphometry and gray matter density in the anterior cingulate cortex [60]. Collectively, four distinct versions of each article were developed that varied by the type of neuroscience information presented to the participant (i.e., content). The levels included: (1) a psychological result without any neuroscience result, (2) the inclusion of an extraneous neuroscience finding presented as text, (3) the additional inclusion of a graph, or (4) a brain image (see [1] for an example). All four versions shared a common introduction and conclusion and were limited to approximately 250 words.
Participant evaluation of the credibility of each article was determined by ratings for each of the five statements. The statements were designed to determine the participant’s perception of various aspects of the arguments highlighted in the article, including its quality, clarity, plausibility, and validity, as well as participants’ overall agreement with the argument (e.g., The scientific arguments in the article made sense; The article offered strong empirical evidence for its conclusions). For each statement, responses were rated on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). The composite credibility score was calculated as the mean of the participant’s responses across the five items, with higher scores reflecting greater perceived credibility.
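As a simple sketch, the composite credibility score described above is the mean of the five 7-point ratings; the ratings shown here are hypothetical:

```python
from statistics import mean

# Hypothetical ratings (1 = strongly disagree, 7 = strongly agree) on the
# five credibility statements for a single article; placeholder values only.
ratings = [6, 5, 7, 6, 5]

# Composite credibility is the mean across the five items;
# higher scores reflect greater perceived credibility.
composite_credibility = mean(ratings)
print(composite_credibility)
```

Each participant thus contributes one composite score per article version, and these composites serve as the dependent measure in the credibility analyses.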
3. Results
3.1. Strength of Religious Faith
In order to consider whether any group differences in strength of religious faith existed among the participants, three variables—Gender, Academic Major, and Academic Rank—were separately considered. Turning to Gender as an independent variable, with scores on the Santa Clara Strength of Religious Faith Questionnaire as the dependent variable, a gender difference was found, F(1, 405) = 13.36, p < 0.001, ηp² = 0.032. Female participants scored significantly higher than male participants (Ms = 35.25 vs. 32.16). When Academic Major was considered, a smaller but statistically significant effect was found, F(1, 405) = 4.96, p < 0.001, ηp² = 0.026. Interestingly, science majors (M = 35.32, SD = 8.59) scored higher than nonscience majors (M = 33.64, SD = 8.42). Last, the comparison of undergraduate and graduate participants was nonsignificant (p > 0.10).
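As a check on reporting, partial eta squared can be recovered from a reported F statistic and its degrees of freedom via ηp² = (F × df1) / (F × df1 + df2); a minimal sketch:

```python
# Recover partial eta squared from a reported F statistic and its
# degrees of freedom: eta_p^2 = (F * df1) / (F * df1 + df2).

def partial_eta_squared(f_value, df1, df2):
    return (f_value * df1) / (f_value * df1 + df2)

# The Gender effect reported above, F(1, 405) = 13.36, corresponds to
# eta_p^2 of approximately 0.032, matching the reported effect size.
print(round(partial_eta_squared(13.36, 1, 405), 3))
```

The same identity applies to the univariate effects reported in the following sections.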
3.2. Knowledge about Psychology and Neuroscience
Given the popularity of the psychology major, the second largest at the university, I expected the participants to score well on a test of knowledge of psychological concepts and information. Both biology and psychology courses are popular choices among the university’s general education offerings, and psychology and behavioral neuroscience are popular elective options for non-majors. With the exception of the graduate programs in business and ministry, students in the graduate programs have had significant coursework in both areas. Across the sample, a correlation between scores on the tests of psychological and neuroscience knowledge was found, r(405) = 0.41, p < 0.001.
The data from the two tests, one assessing knowledge about psychological topics and a comparable test of neuroscience knowledge, were explored. Specifically, the data were analyzed using a 2 (Academic Rank) × 4 (academic area of study) MANOVA, with knowledge of neuroscience and psychology as the dependent measures. Significant multivariate effects of Academic Rank, Wilks’ Λ = 0.968, F(2, 398) = 6.59, p = 0.002, ηp² = 0.032, and area of study, Wilks’ Λ = 0.968, F(6, 796) = 8.25, p < 0.001, ηp² = 0.059, were found. The Academic Rank × academic area of study interaction was significant as well, Wilks’ Λ = 0.969, F(6, 796) = 2.13, p = 0.048, ηp² = 0.016.
Subsequent consideration of the univariate analyses revealed a significant effect of Academic Rank on both the neuroscience, F(1, 399) = 12.00, p < 0.001, ηp² = 0.029, and psychology, F(1, 399) = 5.63, p = 0.018, ηp² = 0.014, dependent measures. Of interest, graduate students outperformed, albeit slightly, their undergraduate counterparts on knowledge of psychology (Ms = 61.89 vs. 63.02, undergraduate vs. graduate), with a larger mean difference for neuroscience (Ms = 59.48 vs. 68.10). Differences associated with Academic Major were found for knowledge of psychological information, F(3, 399) = 4.88, p = 0.002, ηp² = 0.035, and neuroscience information, F(3, 399) = 11.61, p < 0.001, ηp² = 0.080. On the test of neuroscience knowledge, individuals studying in areas of ministry (M = 63.09, SD = 15.14) or the liberal and fine arts (M = 57.84, SD = 10.99) scored significantly lower than their social science (M = 68.61, SD = 14.93) and life, health, or physical science (M = 72.89, SD = 18.56) counterparts. However, the latter two areas of study were not significantly different. Turning to the test of psychological knowledge, the means across the four grouped areas of study were generally similar, with a significant difference between the liberal and fine arts (M = 58.75, SD = 8.33) and both the social science (M = 63.45, SD = 9.91) and life, health, and physical science (M = 64.34, SD = 8.52) areas. As noted above, a significant multivariate Academic Rank by academic area of study interaction was found. Univariate consideration indicated a significant interaction only for the test of neuroscience knowledge, F(3, 399) = 3.03, p < 0.05, ηp² = 0.022. The relevant results are presented in Figure 1. Post hoc decomposition of the interaction revealed the following. Among undergraduates, the scores of liberal and fine arts majors were significantly lower than those of their social science, life/physical science, and ministry counterparts. When the performance of the graduate students was examined, a similar pattern emerged; however, ministry majors scored significantly lower than their science counterparts but not significantly higher than liberal and fine arts majors.

Figure 1. Comparison of student knowledge of psychology and neuroscience information by academic area of study (four areas) and academic rank (undergraduate and graduate). Letters a, b, c, and d represent a significant difference (p < 0.05) from a different area of academic study, within academic rank. a = majors in the social sciences. b = majors in the life, health, or physical sciences. c = majors in the liberal and fine arts. d = majors in areas of Christian ministry.
3.3. Gender by Major: Knowledge about Psychology and Neuroscience
As part of the analyses, I examined the role of gender. As noted before, because of the general pattern of results associated with the contribution of academic area of study, all areas of study were assigned to one of two categories—science and nonscience. Turning to performance on the tests of neuroscience and psychology knowledge, the data were analyzed using a 2 (Gender) × 2 (Academic Major) MANOVA. Significant multivariate effects of Gender, Wilks’ Λ = 0.950, F(2, 402) = 10.51, p < 0.001, ηp² = 0.050, and Academic Major, Wilks’ Λ = 0.897, F(2, 402) = 23.03, p < 0.001, ηp² = 0.103, were revealed. The Gender by Academic Major interaction was significant as well, F(2, 402) = 3.67, p = 0.026, ηp² = 0.018.
Follow-up inspection of the univariate analyses revealed a Gender effect for the level of neuroscience knowledge, F(1, 403) = 20.71, p < 0.001, ηp² = 0.049. Here, male participants scored higher than female participants (Ms = 68.91 vs. 64.29). Performance on the test of psychological knowledge did not differ as a function of Gender. This pattern was repeated for the analyses of Academic Major, where performance differed on the test of neuroscience knowledge, F(1, 403) = 37.21, p < 0.001, ηp² = 0.085. Knowledge among science majors (M = 70.12, SD = 13.78) was superior to that of nonscience majors (M = 61.14, SD = 14.26).
On the other hand, the interaction was significant only for the psychological knowledge dependent variable, F(1, 403) = 6.93, p = 0.009, ηp² = 0.017. Post hoc examination of this finding revealed that among male participants, science majors (M = 61.67, SD = 11.84) scored lower than nonscience majors (M = 66.28, SD = 6.84). This pattern was reversed for female participants, with higher scores found among science majors (M = 64.34, SD = 10.80) than nonscience majors (M = 60.41, SD = 9.44).
3.4. Perceived Credibility of Research by Scientific Category and Equipment—Role of Major and Education
The data were analyzed using a two-between (Academic Major, Academic Rank), one-within (Science & Equipment combination) mixed ANOVA. Examination of the results revealed the following. Significant main effects of Academic Major, F(1, 403) = 18.34, p < 0.001,
= 0.044, and Academic Rank, F(1, 403) = 12.63, p < 0.001,
= 0.030, were found as was the Academic Major × Academic Rank interaction, F(1, 403) = 13.58, p < 0.001,
= 0.033. Post hoc comparisons associated with this result revealed that among nonscience majors, ratings by academic rank were similar (Ms = 5.84 vs. 5.82, undergraduate & graduate, respectively). However, among science majors, scientific credibility ratings were higher for graduate students (M = 6.60, SD = 0.82) than for their undergraduate counterparts (M = 5.89, SD = 0.87).
In addition, a main effect of Research/Equipment category was found, F(3, 1209) = 213.83, p < 0.001, ηp² = 0.347. Further, Academic Major × Research/Equipment category and Academic Rank × Research/Equipment category interactions were detected, Fs(3, 1209) = 11.98 and 20.83, ps < 0.001. However, the results described above must be considered in the context of an Academic Major × Academic Rank × Research/Equipment category interaction, F(1, 403) = 10.81, p < 0.001, ηp² = 0.026. The relevant results are presented in Figure 2.
Figure 2. Comparison of student perceived scientific credibility as a function of the topic and equipment area (i.e., neuroscience, behavioral science). *Significant difference between undergraduate and graduate students (p < 0.05). The letters a, b, c, and d represent a significant difference (p < 0.05) from those of a different topic area/equipment area combination. a = neuroscience topic/neuroscience equipment. b = behavioral science topic/behavioral science equipment. c = behavioral science topic/neuroscience equipment. d = neuroscience topic/behavioral science equipment. See Method section for additional details.
Subsequent pairwise comparisons revealed the following. Among undergraduate nonscience majors, credibility scores for the NS/NS combination were higher than those of their graduate counterparts. Albeit with lower mean ratings, a similar result was found for the BS/NS combination. Conversely, graduate students rated the BS/BS and NS/BS combinations as more scientifically credible than undergraduate students did. When students majoring in the sciences were considered, graduate students rated all four combinations as more scientifically credible than undergraduate science majors did.
Turning to comparisons within academic rank in nonscience disciplines, undergraduates ranked the NS/NS combination as significantly more credible than the three remaining combinations. Among those three, the BS/BS combination was ranked more credible than the BS/NS and NS/BS combinations, which were comparable.
Among undergraduate students in the sciences, the NS/NS combination was rated more credible than the other three combinations. Similar to the ratings of the nonscience undergraduate majors, the BS/NS and NS/BS ratings were lower than the NS/NS and BS/BS categories. Among graduate students majoring in the sciences, a similar pattern was found, but as noted above, the ratings reflected significantly higher scientific credibility when compared to those of the undergraduate science majors.
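The same identity, ηp² = (df1 · F)/(df1 · F + df2), also reproduces the effect sizes for the multi-df and three-way terms in the mixed ANOVA above. A quick check (illustrative function name):

```python
def partial_eta_squared(f_value, df1, df2):
    """Partial eta squared from an F statistic and its degrees of freedom."""
    return (df1 * f_value) / (df1 * f_value + df2)

# Research/Equipment main effect: F(3, 1209) = 213.83
print(round(partial_eta_squared(213.83, 3, 1209), 3))  # 0.347
# Three-way interaction as reported: F(1, 403) = 10.81
print(round(partial_eta_squared(10.81, 1, 403), 3))  # 0.026
```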
3.5. SANE Effects by Research Vignette
To explore the potential influence of irrelevant neuroscience information on research credibility judgments, the overall credibility data for the eight scenarios were analyzed using a one-way MANOVA, followed by interpretation of the one-way ANOVAs and relevant post hoc tests. The MANOVA was significant, Wilks' Λ = 0.402, F(24, 1149.12) = 17.69, p < 0.001, ηp² = 0.262. The relevant results are presented in Table 2.
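The multivariate effect size accompanying the MANOVA can be reproduced from Wilks' Λ as multivariate ηp² = 1 − Λ^(1/s), where s = min(number of dependent variables, degrees of freedom of the effect); here, with eight dependent variables and four framing groups, s = min(8, 3) = 3 is assumed. A sketch (function name is illustrative):

```python
def multivariate_eta_squared(wilks_lambda, s):
    """Multivariate partial eta squared from Wilks' lambda."""
    return 1.0 - wilks_lambda ** (1.0 / s)

# Wilks' lambda = 0.402 with s = min(8 DVs, 3 effect df) = 3
print(round(multivariate_eta_squared(0.402, 3), 3))  # 0.262
```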
When the Spacing Effect research was considered, a significant effect of research evidence was found, F(3, 403) = 17.11, p < 0.001, ηp² = 0.113. Pairwise comparisons revealed significant differences among all four means: the psychology text only scenario received the lowest mean credibility rating, followed by the extraneous neuroscience text plus image scenario. This latter scenario was rated lower than the remaining two scenarios, with the extraneous neuroscience text plus graph scenario rated highest in credibility.
Next, the data associated with the Multitasking research scenario were analyzed. Here, the effect of framing was significant, F(3, 403) = 4.45, p = 0.004, ηp² = 0.032. The effect of extraneous neuroscience information was modest: only the scenario with irrelevant neuroscience information and an image was rated significantly higher than the remaining scenarios, which were rated similarly.
Turning to the Stereotype Threat research, the effect of the information contained in the scenario was significant, F(3, 403) = 12.32, p < 0.001, ηp² = 0.084. However, post hoc examination of the means revealed that the scenario consisting
Table 2. Summary of the rated credibility results for each of eight research scenarios by content framing.

| Research Scenario | Content Framing | M (SEM) | One-Way ANOVA |
| --- | --- | --- | --- |
| Spacing Effect | Psychology Text Only | 4.46 (0.119)b,c,d | F(3, 403) = 17.11, p < 0.001 |
| | Extraneous Neuroscience Text | 5.23 (0.962)a,d | |
| | Extraneous Neuroscience Text & Graph | 5.43 (1.02)a,d | |
| | Extraneous Neuroscience Text & Image | 4.87 (0.945)a,b,c | |
| Multitasking | Psychology Text Only | 5.56 (0.71)d | F(3, 403) = 4.45, p = 0.004 |
| | Extraneous Neuroscience Text | 5.63 (1.30)d | |
| | Extraneous Neuroscience Text & Graph | 5.51 (0.89)d | |
| | Extraneous Neuroscience Text & Image | 5.99 (1.14)a,b,c | |
| Stereotype Threat | Psychology Text Only | 4.17 (1.89)b,c,d | F(3, 403) = 12.32, p < 0.001 |
| | Extraneous Neuroscience Text | 4.98 (0.75)a | |
| | Extraneous Neuroscience Text & Graph | 5.18 (0.91)a | |
| | Extraneous Neuroscience Text & Image | 4.93 (1.17)a | |
| Curiosity | Psychology Text Only | 4.36 (0.164)b,c,d | F(3, 403) = 14.80, p < 0.001 |
| | Extraneous Neuroscience Text | 5.35 (0.101)a | |
| | Extraneous Neuroscience Text & Graph | 5.03 (0.163)a | |
| | Extraneous Neuroscience Text & Image | 5.25 (0.883)a | |
| Math Learning Disability | Psychology Text Only | 3.58 (0.185)b,c,d | F(3, 403) = 18.28, p < 0.001 |
| | Extraneous Neuroscience Text | 4.59 (0.120)a | |
| | Extraneous Neuroscience Text & Graph | 4.69 (0.121)a | |
| | Extraneous Neuroscience Text & Image | 4.75 (0.067)a | |
| Deliberate Practice | Psychology Text Only | 5.62 (0.116)c,d | F(3, 403) = 8.40, p < 0.001 |
| | Extraneous Neuroscience Text | 5.54 (0.118)c | |
| | Extraneous Neuroscience Text & Graph | 4.79 (0.153)a,b,d | |
| | Extraneous Neuroscience Text & Image | 5.19 (0.133)a,d | |
| Delayed Gratification | Psychology Text Only | 4.95 (1.26)b,d | F(3, 403) = 22.31, p < 0.001 |
| | Extraneous Neuroscience Text | 3.88 (1.91)a,c,d | |
| | Extraneous Neuroscience Text & Graph | 5.35 (1.13)b | |
| | Extraneous Neuroscience Text & Image | 5.42 (1.63)a,b | |
| Emotional Self-Regulation | Psychology Text Only | 5.12 (1.25)d | F(3, 403) = 4.85, p = 0.003 |
| | Extraneous Neuroscience Text | 5.00 (1.25)d | |
| | Extraneous Neuroscience Text & Graph | 5.25 (1.12)d | |
| | Extraneous Neuroscience Text & Image | 4.63 (1.14)a,b,c | |

Note. a = psychological text information only; b = extraneous neuroscience text added; c = extraneous neuroscience text & graph added; d = extraneous neuroscience text & image added. Where reported, the a, b, c, and d superscripts reflect significant comparisons (p < 0.05) with the labeled content framing conditions.
of only relevant psychology information was rated as less credible than the remaining three information scenarios, which produced similar ratings. Similar to the previous scenario, ratings of the research scenario on Curiosity differed as a function of the presence of extraneous neuroscience information, F(3, 403) = 14.08, p < 0.001, ηp² = 0.099. Post hoc comparison of the means revealed that, once again, the psychological information only scenario was rated as less credible than the three extraneous neuroscience scenarios, which all received similar credibility ratings.
Consideration of the Math Learning Disability research scenario revealed a similar pattern, with mean differences detected, F(3, 403) = 18.28, p < 0.001, ηp² = 0.120. Post hoc comparisons revealed that the scenario containing psychological text information only was rated as less credible than the remaining three information scenarios, which once again produced similar credibility ratings.
Credibility ratings differed as a function of the framing of the Deliberate Practice research scenario as well, F(3, 403) = 8.40, p < 0.001, ηp² = 0.059. Here, however, participants who reviewed the psychology-only scenario rated the credibility of the research significantly higher than did those who viewed the two neuroscience scenarios containing figures (i.e., a graph or image). Last, the credibility rating of the neuroscience text only condition was comparable to that of the psychological information only scenario but differed significantly from the neuroscience text plus graph scenario. Here, too, the neuroscience text plus graph condition was rated as less credible than the extraneous neuroscience text plus image condition.
Examination of the Delayed Gratification scenario revealed a significant effect of the information contained within each version, F(3, 403) = 22.31, p < 0.001, ηp² = 0.142. Post hoc examination of the data revealed lower credibility ratings for the extraneous neuroscience text condition, which differed from the ratings of the extraneous neuroscience information plus graph or image conditions, as well as the psychological text only control condition. In addition, credibility ratings for the extraneous neuroscience plus image condition were significantly higher than those of the psychology control group.
Last, when the Emotional Self-Regulation scenario was examined, the credibility ratings differed among the four information groups, F(3, 403) = 4.85, p = 0.003, ηp² = 0.035. Here, participants found the extraneous neuroscience text plus image condition significantly less credible than the three remaining conditions, all of which were comparable (see Table 2). This result is noteworthy, since the research scenario with only psychology information was rated as more credible than a scenario containing psychology and neuroscience information as well as a scientific, albeit irrelevant, image.
3.6. The Role of the SANE Effect as a Function of Process and Discipline
The present effects of extraneous neuroscience information were not as unequivocal as those reported by Im et al. (2017) [1]. Nonetheless, I followed their strategy, comparing psychological findings with reports that included the neuroscience text plus graph. In addition, the psychological process (Process: learning, development) and research discipline (Discipline: cognitive psychology, affective psychology) were included as within-subjects factors, resulting in a one-between, two-within mixed ANOVA.

Figure 3. Perceived credibility of the research described in the vignette as a function of psychological only vs. psychological plus extraneous neuroscience information. *Significant difference as a function of the framing of the information in the research vignettes (p < 0.05). Error bars represent the SEM. See text for more details.

The main effect of framing was significant, F(1, 203) = 13.58, p < 0.001, ηp² = 0.063, with the credibility ratings of reports that included neuroscience information significantly higher (M = 5.09, SD = 0.58) than when the reports contained psychological information only (M = 4.77, SD = 0.67). A significant main effect of Process was found, F(1, 203) = 13.88, p < 0.001, ηp² = 0.064, with credibility ratings for learning topics significantly higher than for development topics (Ms = 5.12 vs. 4.90, SDs = 0.78 and 0.91). Unlike Im and colleagues, a significant main effect of Discipline was found, F(1, 203) = 8.79, p = 0.003, ηp² = 0.041, with cognitive research topics (M = 5.09, SD = 0.73) rated as more credible than affective topics (M = 4.93, SD = 0.87). While the Vignette Framing × Discipline interaction was nonsignificant, a significant Vignette Framing × Process interaction was detected, F(1, 203) = 81.14, p < 0.001, ηp² = 0.286. However, these results must be considered in light of a significant Vignette Framing × Process × Discipline interaction, F(1, 203) = 50.29, p < 0.001, ηp² = 0.199. The relevant results are provided in Figure 3. Turning to panel A, when the Discipline was a cognitive area, credibility scores were significantly higher for descriptions containing neuroscience information than for descriptions containing only psychologically framed information. Conversely, when the vignette dealt with an affective topic, credibility ratings were considerably higher when the information was framed psychologically rather than neuroscientifically.
On the other hand, when the process of the scenarios concerned Development topics (Figure 3, panel B), those receiving scenarios that included neuroscience information rated the research significantly higher in credibility than those whose scenarios contained psychologically framed text. This was especially true when the framing was psychological information only and the discipline was an affective research area (p < 0.05).
3.7. Hierarchical Regression Analyses of Variables Predictive of the Rated Credibility of the Research
The final step in the analytic plan was to examine the specific contributions of Academic Rank (2 levels: undergraduate, graduate), Academic Major (2 levels: science, nonscience), familiarity with the research topics, and content framing to the perceived average credibility of the research scenarios. To isolate the apparent effect, if any, of the content framing variable, framing was limited to two levels: psychological text only, and psychological text plus the extraneous neuroscience text and an image. In addition, given the suggested role of individual differences in the SANE effect, I examined the specific contributions of individual differences in personality and gender alongside the effect of content framing on the perceived average credibility of the research scenarios.
3.7.1. Hierarchical Regression Analyses—Research Familiarity, Academic Major and Rank, and Research Vignette Framing
The results of the regression analyses are presented in Tables 3-6. Turning first to the academic variables in Table 3, familiarity with the research topics accounted for 7.6% of the variance in the composite ratings of vignette credibility for the Learning scenarios (Model 1). The addition of the Academic Rank and Academic Major variables in Model 2 accounted for an additional 11.9% of the variance in the dependent variable. Familiarity with research topics (β = 0.372) and Academic Rank (β = 0.319) contributed significantly to the equation, but Academic Major did not. Finally, the addition of the vignette framing manipulation led to a final model with an R² of 0.250. Familiarity with research topics contributed significantly to the final equation (β = 0.455), as did Academic Rank (β = 0.224), Academic Major (β = −0.175), and vignette framing (β = −0.278).
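The F test for each increment in these hierarchical models follows the standard formula ΔF = (ΔR²/m) / ((1 − R²_full)/(n − k − 1)), where m is the number of predictors added and k the total predictors in the larger model. A sketch checking the Model 3 step reported above (N = 205, one predictor added, final R² = 0.250, ΔR² = 0.055); the increment itself is tested on (1, 200) degrees of freedom:

```python
def delta_f(delta_r2, r2_full, n, k_full, m):
    """F statistic for the R^2 increment when m predictors are added,
    yielding a model with k_full predictors and squared multiple
    correlation r2_full, fit on n observations."""
    return (delta_r2 / m) / ((1.0 - r2_full) / (n - k_full - 1))

# Model 3 step (adding vignette framing): reported as 14.66
print(round(delta_f(0.055, 0.250, 205, 4, 1), 2))  # 14.67
```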
Table 3. Hierarchical regression analysis—learning research vignettes (N = 205).

| Model | R² | ΔR² | β | t |
| --- | --- | --- | --- | --- |
| Model 1: F(1, 203) = 16.78, p < 0.001 | 0.076 | | | |
| Familiarity with Research Topics | | | 0.276 | 4.10*** |
| Model 2: ΔF(3, 201) = 16.25, p < 0.001 | 0.195 | 0.119 | | |
| Familiarity with Research Topics | | | 0.372 | 5.56*** |
| Academic Rank | | | 0.319 | 4.67*** |
| Academic Major | | | −0.113 | −1.74 |
| Model 3: ΔF(4, 200) = 14.66, p < 0.001 | 0.250 | 0.055 | | |
| Familiarity with Research Topics | | | 0.455 | 6.66*** |
| Academic Rank | | | 0.224 | 3.19** |
| Academic Major | | | −0.175 | −2.70** |
| Framing: Psych vs. Neuro text, image | | | −0.278 | −3.83*** |

Note. Full model: F(4, 200) = 16.68, p < 0.001. Academic Major: 0 = nonscience major, 1 = science major. Academic Rank: 0 = undergraduate, 1 = graduate. Dependent variable = mean credibility of the reported research.
Table 4. Hierarchical regression analysis—developmental research vignettes (N = 205).

| Model | R² | ΔR² | β | t |
| --- | --- | --- | --- | --- |
| Model 1: F(1, 203) = 113.44, p < 0.001 | 0.358 | | | |
| Familiarity with Research Topics | | | 0.604 | 10.46*** |
| Model 2: ΔF(3, 201) = 7.75, p < 0.001 | 0.404 | 0.044 | | |
| Familiarity with Research Topics | | | 0.527 | 9.15*** |
| Academic Rank | | | −0.215 | −3.66*** |
| Academic Major | | | −0.122 | −2.20* |
| Model 3: ΔF(4, 200) = 7.06, p = 0.009 | 0.425 | 0.018 | | |
| Familiarity with Research Topics | | | 0.477 | 7.97*** |
| Academic Rank | | | −0.158 | −2.56** |
| Academic Major | | | −0.084 | −1.47 |
| Framing: Psych vs. Neuro text, image | | | 0.169 | 2.66** |

Note. Full model: F(4, 200) = 16.68, p < 0.001. Academic Major: 0 = nonscience major, 1 = science major. Academic Rank: 0 = undergraduate, 1 = graduate. Dependent variable = mean credibility of the reported research.
Turning to the Developmental research vignettes (see Table 4), familiarity with the research topics accounted for 35.8% of the variance in the composite credibility ratings. The addition of the Academic Rank and Major variables in Model 2 accounted for an additional 4.4% of the variance in the dependent variable. Here, familiarity with research topics remained significant (β = 0.527), with both Academic Rank (β = −0.215) and Academic Major (β = −0.122) contributing significantly to the equation. The addition of the vignette framing manipulation resulted in a final model that accounted for 42.5% of the variance in vignette credibility ratings, and the contribution of vignette framing was significant (β = 0.169). Familiarity with research topics contributed significantly to the final equation (β = 0.477), as did Academic Rank (β = −0.158), but Academic Major did not.
As seen in Table 5, for the prediction of research credibility in the context of the Cognitive research area, familiarity with the research topics accounted for 27.7% of the variance in the composite ratings of vignette credibility (Model 1). The addition of the Academic Rank and Major variables in Model 2 accounted for an additional 3.4% of the variance in the dependent variable. As before, familiarity with research topics was significant (β = 0.547); Academic Major (β = −0.151), but not Academic Rank, contributed significantly to the equation. Finally, the addition of the vignette framing manipulation resulted in a final model with an R² of 31.9%. Familiarity with research topics contributed significantly to the final equation (β = 0.515), as did Academic Major (β = −0.128). Notably, the contributions of vignette framing and Academic Rank were nonsignificant.
Table 5. Hierarchical regression analysis—cognitive research vignettes (N = 205).

| Model | R² | ΔR² | β | t |
| --- | --- | --- | --- | --- |
| Model 1: F(1, 203) = 77.80, p < 0.001 | 0.277 | | | |
| Familiarity with Research Topics | | | 0.526 | 8.82*** |
| Model 2: ΔF(3, 201) = 4.94, p < 0.001 | 0.311 | 0.034 | | |
| Familiarity with Research Topics | | | 0.547 | 8.82*** |
| Academic Rank | | | 0.082 | 1.30 |
| Academic Major | | | −0.151 | −2.52* |
| Model 3: ΔF(4, 200) = 2.33, N.S. | 0.319 | 0.008 | | |
| Familiarity with Research Topics | | | 0.515 | 7.91*** |
| Academic Rank | | | 0.118 | 1.75 |
| Academic Major | | | −0.128 | −2.06* |
| Framing: Psych vs. Neuro text, image | | | 0.106 | 1.53 |

Note. Full model: F(4, 200) = 16.68, p < 0.001. Academic Major: 0 = nonscience major, 1 = science major. Academic Rank: 0 = undergraduate, 1 = graduate. Dependent variable = mean credibility of the reported research.
The final credibility variable considered here was for research in the Affective area (see Table 6). Familiarity with the research topics accounted for 24.3% of the variance in the composite ratings of vignette credibility (Model 1). The addition of Academic Rank and Major to the equation in Model 2 accounted for a modest 1.3% of additional variance in the dependent variable, with familiarity with research topics as the only significant predictor in the equation (β = 0.466). Turning to the final model (Model 3), the addition of the vignette framing manipulation resulted in an R² of 26.5%. Familiarity with research topics remained a significant contributor to the final equation (β = 0.498), as did Academic Major (β = −0.133). Notably, the contributions of vignette framing and Academic Rank were nonsignificant.
Table 6. Hierarchical regression analysis—affective research vignettes (N = 205).

| Model | R² | ΔR² | β | t |
| --- | --- | --- | --- | --- |
| Model 1: F(1, 203) = 77.80, p < 0.001 | 0.243 | | | |
| Familiarity with Research Topics | | | 0.493 | 8.07*** |
| Model 2: ΔF(3, 201) = 1.82, N.S. | 0.256 | 0.013 | | |
| Familiarity with Research Topics | | | 0.466 | 7.23*** |
| Academic Rank | | | −0.075 | −1.14 |
| Academic Major | | | −0.108 | −1.74 |
| Model 3: ΔF(4, 200) = 2.35, N.S. | 0.265 | 0.009 | | |
| Familiarity with Research Topics | | | 0.498 | 7.37*** |
| Academic Rank | | | 0.112 | −1.61 |
| Academic Major | | | −0.133 | −2.07* |
| Framing: Psych vs. Neuro text, image | | | −0.110 | 1.53 |

Note. Full model: F(4, 200) = 16.68, p < 0.001. Academic Major: 0 = nonscience major, 1 = science major. Academic Rank: 0 = undergraduate, 1 = graduate. Dependent variable = mean credibility of the reported research.
3.7.2. Hierarchical Regression Analyses—Gender, Personality, and Research Vignette Framing
The results of this group of regression analyses are presented in Tables 7-10. Turning first to the Learning scenarios in Table 7, Gender accounted for less than 1% of the variance in the composite ratings of vignette credibility (Model 1). The addition of the personality variables in Model 2 produced only a modest increase in R². Last, in the full model, only the influence of vignette framing was significant (β = −0.180).
Table 7. Hierarchical regression analysis—learning research vignettes (N = 205).

| Model | R² | ΔR² | β | t |
| --- | --- | --- | --- | --- |
| Model 1: F(1, 203) = 0.015, N.S. | 0.001 | | | |
| Gender | | | −0.009 | −0.24 |
| Model 2: ΔF(5, 198) = 2.84, p = 0.028 | 0.067 | 0.067 | | |
| Gender | | | 0.018 | 0.25 |
| Extraversion | | | −0.042 | −0.59 |
| Agreeableness | | | 0.113 | 1.54 |
| Conscientiousness | | | 0.122 | 1.60 |
| Emotional Stability | | | 0.115 | 1.62 |
| Openness to New Experiences | | | 0.116 | 1.61 |
| Model 3: ΔF(1, 197) = 4.98, p = 0.027 | 0.090 | 0.023 | | |
| Gender | | | 0.111 | −1.36 |
| Extraversion | | | −0.027 | −0.38 |
| Agreeableness | | | 0.113 | 1.60 |
| Conscientiousness | | | 0.122 | 1.70 |
| Emotional Stability | | | 0.115 | 1.62 |
| Openness to New Experiences | | | 0.124 | 1.73 |
| Framing: Psych vs. Neuro text, image | | | −0.180 | −2.23* |

Note. Full model: F(7, 197) = 2.78, p = 0.009. Gender: 0 = female, 1 = male. Dependent variable = mean credibility of the reported research.
Table 8. Hierarchical regression analysis—developmental research vignettes (N = 205).

| Model | R² | ΔR² | β | t |
| --- | --- | --- | --- | --- |
| Model 1: F(1, 203) = 0.015, N.S. | 0.290 | | | |
| Gender | | | 0.538 | 9.11*** |
| Model 2: ΔF(5, 198) = 2.84, p = 0.028 | 0.313 | 0.023 | | |
| Gender | | | 0.542 | 8.94*** |
| Extraversion | | | −0.088 | −1.44 |
| Agreeableness | | | 0.130 | 2.06* |
| Conscientiousness | | | 0.080 | 1.60 |
| Emotional Stability | | | 0.082 | 1.64 |
| Openness to New Experiences | | | 0.009 | 0.14 |
| Model 3: ΔF(1, 197) = 4.98, p = 0.027 | 0.365 | 0.052 | | |
| Gender | | | 0.402 | 5.91*** |
| Extraversion | | | −0.111 | −1.87 |
| Agreeableness | | | 0.154 | 2.52* |
| Conscientiousness | | | 0.993 | 1.45 |
| Emotional Stability | | | 0.089 | 1.65 |
| Openness to New Experiences | | | −0.003 | −0.05 |
| Framing: Psych vs. Neuro text, image | | | 0.271 | 4.01*** |

Note. Full model: F(7, 197) = 16.16, p < 0.001. Gender: 0 = female, 1 = male. Dependent variable = mean credibility of the reported research.
Table 9. Hierarchical regression analysis—cognitive research vignettes (N = 205).

| Model | R² | ΔR² | β | t |
| --- | --- | --- | --- | --- |
| Model 1: F(1, 203) = 19.88, p < 0.001 | 0.299 | | | |
| Gender | | | 0.299 | 4.46*** |
| Model 2: ΔF(5, 198) = 2.91, p = 0.015 | 0.389 | 0.062 | | |
| Gender | | | 0.318 | 4.72*** |
| Extraversion | | | −0.118 | −1.74 |
| Agreeableness | | | 0.139 | 1.97* |
| Conscientiousness | | | 0.081 | 1.42 |
| Emotional Stability | | | 0.087 | 1.49 |
| Openness to New Experiences | | | 0.112 | 1.63 |
| Model 3: ΔF(1, 197) = 8.34, p = 0.004 | 0.431 | 0.035 | | |
| Gender | | | 0.204 | 2.64** |
| Extraversion | | | −0.136 | −2.04* |
| Agreeableness | | | 0.158 | 2.29* |
| Conscientiousness | | | 0.010 | 1.73 |
| Emotional Stability | | | 0.089 | 1.65 |
| Openness to New Experiences | | | 0.103 | 1.52 |
| Framing: Psych vs. Neuro text, image | | | 0.221 | 2.89** |

Note. Full model: F(7, 197) = 6.43, p < 0.001. Gender: 0 = female, 1 = male. Dependent variable = mean credibility of the reported research.
Consideration of the data associated with the Developmental research vignettes (Table 8) revealed that Gender accounted for 29% of the variance in the composite credibility ratings. The addition of the personality variables in Model 2 accounted for a modest 2.3% of additional variance in the dependent variable. Here, the contribution of Gender remained significant (β = 0.542), as did the Agreeableness personality variable (β = 0.130). In Model 3, the addition of the vignette framing manipulation resulted in a final R² of 36.5%. As before, Gender was significant (β = 0.402), as was Agreeableness (β = 0.154), and the contribution of vignette framing was significant as well (β = 0.271).
As seen in Table 9, examination of the predictor variables for the vignettes about Cognitive topics revealed the following. Gender was a significant factor, accounting for 29.9% of the variance in rated research credibility. The addition of the personality variables to the model resulted in an R² of 38.9%; aside from Gender (β = 0.318), only Agreeableness contributed significantly to the equation (β = 0.139). The inclusion of content framing added 3.5% to the model, yielding a final R² of 43.1%. Gender (β = 0.204) and Agreeableness (β = 0.158) both continued to make significant contributions to the equation; interestingly, Extraversion (β = −0.136) did as well. Last, vignette framing contributed significantly (β = 0.221), with the addition of the extraneous neuroscience material associated with higher credibility ratings.
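As a supplementary gauge not reported in the original analyses, the 3.5% increment for content framing can be expressed as Cohen's f² for an added block of predictors, f² = ΔR²/(1 − R²_full); values of 0.02, 0.15, and 0.35 are the conventional small, medium, and large benchmarks. A sketch using the cognitive-vignette values (ΔR² = 0.035, final R² = 0.431):

```python
def cohens_f2_increment(delta_r2, r2_full):
    """Cohen's f^2 effect size for an added block of predictors."""
    return delta_r2 / (1.0 - r2_full)

# Content framing step in the cognitive-vignette model: a small-to-medium effect
print(round(cohens_f2_increment(0.035, 0.431), 3))  # 0.062
```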
Table 10. Hierarchical regression analysis—affective research vignettes (N = 205).

| Model | R² | ΔR² | β | t |
| --- | --- | --- | --- | --- |
| Model 1: F(1, 203) = 35.84, p < 0.001 | 0.150 | | | |
| Gender | | | 0.387 | 5.99*** |
| Model 2: ΔF(5, 198) = 1.40, N.S. | 0.179 | 0.029 | | |
| Gender | | | 0.397 | 5.98*** |
| Extraversion | | | −0.038 | −0.57 |
| Agreeableness | | | 0.129 | 1.86 |
| Conscientiousness | | | 0.081 | 1.30 |
| Emotional Stability | | | 0.087 | 1.38 |
| Openness to New Experiences | | | 0.007 | 0.10 |
| Model 3: ΔF(1, 197) = 0.10, N.S. | 0.179 | 0.010 | | |
| Gender | | | 0.401 | 5.17** |
| Extraversion | | | −0.037 | −0.55 |
| Agreeableness | | | 0.128 | 2.29* |
| Conscientiousness | | | 0.001 | 0.05 |
| Emotional Stability | | | 0.019 | 1.99 |
| Openness to New Experiences | | | 0.007 | 0.10 |
| Framing: Psych vs. Neuro text, image | | | −0.008 | −0.10 |

Note. Full model: F(7, 197) = 6.14, p < 0.001. Gender: 0 = female, 1 = male. Dependent variable = mean credibility of the reported research.
Finally, when Affective research topics were considered (see Table 10), the following results were found. In Model 1, Gender accounted for 15% of the variance in vignette credibility ratings. The addition of the personality variables produced little change (ΔR2 = 2.9%), and only Gender contributed significantly to the equation (β = 0.397). The final full model was essentially unchanged from Model 2 (ΔR2 = 1%), with Gender (β =−0.401) and Agreeableness (β = −0.128) as the only significant predictor variables.
4. Discussion
In summary, the present results show that female participants scored significantly higher than male participants on a measure of strength of religious faith. Science majors scored higher than nonscience majors, whereas undergraduate and graduate students were comparable. When knowledge of psychological and neuroscience information was assessed, undergraduate students surpassed their graduate classmates on both tests. Performance also varied in other ways: male students outscored their female classmates on the neuroscience test, but no gender differences were found on the test of psychological knowledge. Compared to nonscience majors, science majors demonstrated higher neuroscience knowledge. However, on the test of psychological knowledge, male science majors scored lower than male nonscience majors, with the pattern reversed for female participants, among whom science majors scored higher.
Turning to the effect of scientific stereotypes associated with equipment and research topic, among nonscience majors, graduate students rated the BS/BS and NS/BS combinations as more scientifically credible than undergraduate students did. When students majoring in the sciences were considered, graduate students rated all four combinations as more scientifically credible than undergraduate science majors did.
Turning to the eight research vignettes, scientific credibility ratings generally differed as a function of the framing. Vignettes that included only the relevant psychological information were typically rated as less credible than conditions that contained extraneous neuroscience information, although in some scenarios the effect was restricted to descriptions that also included extraneous visual information.
When the research discipline was a Cognitive area, credibility scores were significantly higher for descriptions containing neuroscience information than for those containing only psychologically framed information. Conversely, when the scenario dealt with an affective topic, credibility ratings were considerably higher when the information was framed psychologically rather than neuroscientifically. On the other hand, when the process of the scenarios concerned Development topics, scenarios that included extraneous neuroscience information were rated as higher in credibility than those containing psychology-framed text. This was especially true when the framing was psychological information only and the discipline was an affective research area.
In the present study, the predictive impact of different variables on rated credibility was associated with gender in many but not all categories of research vignettes. The personality trait of Agreeableness was generally predictive, as was familiarity with the research topic. Although not consistently significant across the regression models, the presence of extraneous neuroscience information (text plus an image) contributed a minor effect to some equations, with academic major being the primary factor.
Research has supported the proposal that the SANE effect is grounded in dual-process theories of reasoning [61], which suggest that individuals often rely on intuitive reasoning unless prompted to engage more analytically. Neuroscience explanations may trigger a sense of epistemic authority [19], leading to reduced critical engagement with the content.
4.1. The Influence of Academic Background on Susceptibility to the SANE Effect
Weisberg et al. (2008) [2] demonstrated that even flawed explanations were rated as more satisfying when accompanied by irrelevant neuroscience jargon and that this held even among undergraduates trained in cognitive neuroscience. One might expect that students majoring in neuroscience or psychology as well as other science disciplines would be less susceptible to the SANE effect due to domain familiarity and critical training. However, the present research as well as reports by others indicates that this is not consistently the case. In the present study, majoring in the sciences was related to the perceived credibility of the research reported in the vignettes, although academic major was only significant when cognitive vignettes were considered. To reiterate, Weisberg et al. (2008) [2] found that undergraduates with substantial coursework in neuroscience were still vulnerable to the effect. Similarly, Im et al. (2017) [1] found that although psychology majors were less likely than non-psychology majors to endorse neuromyths, they still showed elevated ratings for neuroscience-laden explanations.
Rhodes et al. (2014) [19] examined the impact of neuroscience training on explanation evaluation. Their results showed that while advanced students in psychology and neuroscience were more skeptical of flawed reasoning in general, they were not immune to the allure of neuroscience framing. These findings suggest that mere exposure to neuroscience content is insufficient to inoculate students against superficial cues.
Several studies have contrasted STEM majors with those in the humanities and social sciences. Fernandez-Duque et al. (2015) [18] conducted a study in which participants from various academic backgrounds were asked to evaluate psychological explanations with and without the inclusion of irrelevant neuroscience. They found that students in STEM majors (e.g., biology, engineering) were generally more critical of flawed reasoning but were equally susceptible to the SANE effect when neuroscience content was added. The authors proposed that a general scientific mindset may not translate into discipline-specific critical skills without targeted training.
In contrast, students from nonscientific majors, including fields such as education, business, and the humanities, tend to be more vulnerable to the SANE effect. These students often have limited exposure to neuroscience and may lack training in scientific methodology and critical evaluation. As a result, they are more likely to rely on surface features, such as the inclusion of neuroscience terminology or brain images, when judging the quality of an explanation. For example, Macdonald et al. (2017) [62] found that education majors, in particular, were more likely to endorse neuromyths and more inclined to rate flawed explanations as credible when they were accompanied by neuroscience framing. In a related study, Dekker et al. (2012) [63] found high levels of neuromyth acceptance among teacher trainees, even among those who had completed introductory psychology courses. Rather than reducing misconceptions, brief exposure to neuroscience sometimes fostered unwarranted confidence in inaccurate beliefs. Fernandez-Duque et al. (2015) [18] also demonstrated that nonscience students were less capable of detecting flaws in reasoning and more influenced by neuroscientific embellishments than their science-trained peers.
Medical and allied health students occupy a unique position. While they receive scientific training, it often emphasizes applied rather than theoretical reasoning. Dekker et al. (2012) [63] demonstrated that even among teacher trainees and health sciences students, a belief in neuromyths was prevalent. Students who had taken courses in neuroscience or psychology were slightly better at identifying neuromyths but still endorsed explanations containing neuroscience at disproportionately high rates.
These results underscore that the seductive power of neuroscience transcends simple categorization by academic major and suggest a need for deeper critical reasoning instruction across fields. Last, Academic Rank was associated with the credibility measure, although the effect was small or nonsignificant. The results were mixed: for some but not all categories of research vignettes, the SANE effect was more pronounced among undergraduates, whereas in other categories a larger SANE effect was observed among graduate students than their undergraduate counterparts. However, the graduate programs included in the present research were generally in nonscience disciplines (e.g., counseling, ministry).
Mediators of the Relationship between Academic Major and Susceptibility to the SANE Effect
While an academic major can be a proxy for exposure to neuroscience, the actual content of instruction matters. Students who receive explicit instruction on research methods, the limitations of neuroimaging (e.g., fMRI), and the construction of scientific arguments may be better equipped to resist the SANE effect [64]. For example, a neuroscience major with a curriculum focused on statistical literacy and critical thinking may fare better than one primarily taught anatomical facts without interpretive context. However, in the present study, higher levels of familiarity with neuroscience and psychology research topics were associated with higher research credibility ratings, accounting for between 7.6% and 35.8% of the variance in models where familiarity was the sole predictor (see Tables 3-6).
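As a brief illustration of the variance-explained figures cited above: in a model with a single predictor, R-squared equals the squared Pearson correlation between predictor and outcome. The sketch below uses entirely hypothetical familiarity and credibility scores (not data from the present study) to show the computation.

```python
# Illustrative sketch with hypothetical data: R-squared for a one-predictor
# least-squares model equals the squared Pearson correlation between the
# predictor (here, "familiarity") and the outcome ("credibility").

def r_squared(x, y):
    """Squared Pearson correlation = R^2 of a single-predictor OLS model."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# Hypothetical ratings: familiarity (1-7 scale), credibility (1-10 scale)
familiarity = [2, 3, 3, 4, 5, 5, 6, 7]
credibility = [4, 5, 4, 6, 7, 6, 8, 8]

print(f"R^2 = {r_squared(familiarity, credibility):.3f}")
```

A value of, say, 0.076 or 0.358 would correspond to the 7.6% and 35.8% of variance reported in Tables 3-6.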
The relationship between academic major and SANE susceptibility is also mediated by individual cognitive traits. In one report, individuals who scored higher on the Cognitive Reflection Test (CRT) [65] were less likely to endorse flawed explanations, even when neuroscience was included [66]. Students in philosophy and mathematics, majors that typically emphasize formal reasoning, performed better on these tasks, suggesting that general analytical skill, rather than neuroscience familiarity alone, may offer protection against the SANE effect.
Last, epistemic trust may be an additional mediating variable of interest. For example, research has shown that neuroscience is perceived as more objective, rigorous, and credible compared to psychology [67]. Students in less quantitatively intensive majors may rely more on surface cues, interpreting the presence of brain images or neuroscience language as markers of scientific authority. Unwarranted trust can supersede logical analysis. Further, this effect can be particularly pronounced among individuals unfamiliar with the methodological nuances of neuroscience research.
4.2. Gender Differences and the SANE Effect
As noted throughout the present research report, the presence of a brain image can enhance perceived credibility [16] [18]. Part of the effect has been attributed to perceptions about various disciplines; thus, evidence that includes fMRI data in support of psychological processes is often viewed uncritically [17]. The perception of neuroscience as a discipline high in neuro-realism (i.e., highly objective and authoritative; Racine et al., 2005) [17] has been proposed as a construct that may underlie the SANE effect. Given this framework, the SANE effect is considered a phenomenon that is susceptible to a number of individual differences, including gender. In the present study, gender was associated with many but not all of the categories of research vignettes, with male participants generally more susceptible to SANE effects.
4.2.1. Gender Differences in Cognitive and Epistemic Style and the SANE Effect
One line of inquiry that supports the plausibility of gender differences in the SANE effect comes from research on cognitive styles. Studies suggest that, on average, men and women differ in their preferred modes of information processing. Females may show greater empathy and intuitive reasoning, while males are somewhat more likely to engage in analytic and systemizing thinking [68]-[71]. Since the SANE effect involves the heuristic acceptance of surface-level cues (e.g., neuroscience terminology), individuals who rely more on intuitive or affectively driven processing may be more susceptible to such an influence. Some studies [72] have shown that women tend to report greater faith in intuition, which may increase vulnerability to persuasive but logically flawed explanations.
Moreover, dual-process models of cognition [61] suggest that individuals vary in their reliance on Type 1 (intuitive) versus Type 2 (analytical) reasoning. If gender is associated with these cognitive preferences, then it stands to reason that gender may moderate susceptibility to the SANE effect. However, empirical findings in this area are mixed, and effect sizes for gender differences in reasoning style tend to be small [73]. Therefore, gender may interact with other individual differences, such as education or domain knowledge, in determining SANE susceptibility.
Although the theoretical rationale for gender differences in the SANE effect is strong, empirical research directly examining this issue remains sparse. Weisberg et al. (2008) [2] did not analyze gender differences in their original sample, nor did many of the follow-up studies. However, Rhodes et al. (2014) [19] noted in exploratory analyses that gender did not significantly moderate the effect of neuroscience explanations on judgments of argument quality. These null findings suggest that if gender differences in the SANE effect do exist, they are likely subtle and context-dependent.
Nevertheless, a small number of studies in adjacent fields hint at possible gender-based divergences. For instance, McCabe and Castel (2008) [16] found that participants rated scientific explanations that included brain images as more credible but did not analyze whether this effect differed by gender. Given known gender differences in visual-spatial reasoning and scientific imagery processing [74], it would be worthwhile to examine whether brain images produce differential effects in male versus female participants. In sum, the relationship between gender and the SANE effect is complex, and much work remains to fully characterize gender differences where they occur.
4.2.2. Gender, Trust in Science, Epistemic Authority, and Academic Background
Another potential pathway by which gender may influence the SANE effect is through differences in trust in science and epistemic authority. Neuroscience is often perceived as a highly authoritative field, and prior research has shown that men and women differ in their attitudes toward authority in scientific contexts [75]. For example, Allum et al. (2008) [76] found that women expressed slightly more trust in scientists than men but also showed greater concern about ethical issues in science. This nuanced pattern suggests that women may be more responsive to authoritative or moral cues in scientific communication, including those implicit in neuroscience conceptual framing.
Relatedly, a study by Im et al. (2017) [1] investigating individual differences in susceptibility to neuroscience explanations found preliminary evidence suggesting gender differences in specific subscales related to reasoning style. However, gender was not the central focus of their analysis. Additional research is needed to directly investigate whether trust in neuroscience as a perceived authority differs by gender and whether this translates into a differential endorsement of neuroscience-laden explanations.
Gender differences in science engagement, particularly in neuroscience and psychology, may also influence how males and females respond to explanations framed within the context of neuroscience as a discipline. Research suggests that women are overrepresented in psychology but underrepresented in neuroscience and the physical sciences [77]. As a result, female participants may view neuroscience framing in the context of psychological phenomena as more novel or impressive, potentially increasing their susceptibility to the SANE effect. Conversely, male participants may be more skeptical of interdisciplinary framing, although this likely depends on their academic orientation and disciplinary familiarity.
While beyond the scope of this discussion, studies of educational interventions aimed at improving scientific reasoning occasionally have reported gender differences. Lawson and Weser (1990) [78], for example, found that female students benefited more from structured training in hypothesis testing and argument evaluation. If women are more responsive to pedagogical efforts that promote analytical reasoning, this suggests that any baseline gender differences in the SANE effect might be mitigated by adjustments in pedagogy.
4.3. Personality Traits and Susceptibility to the SANE Effect
Researchers have generally concluded that neuroscience exerts a special kind of epistemic authority due to its association with objectivity, complexity, and biological determinism [29] [79]. However, less attention has been paid to why some individuals may be more susceptible than others to such superficial cues. Among these individual differences, personality traits have received some attention.
When the Big Five personality traits are considered in the context of the SANE effect, the trait of openness to experience has received attention. This trait encompasses intellectual curiosity, creativity, and receptivity to novel ideas [80]. A study by Rhodes et al. (2014) [19] found that individuals high in openness were more likely to accept weak scientific explanations that included neuroscience framing, even when the neuroscience information was irrelevant to the research topic. The authors suggest that those high in openness may be particularly drawn to complex-sounding explanations, which neuroscience jargon readily provides. However, this curiosity may come at a cost: when openness is not paired with critical thinking, it may predispose individuals to accept seductive but logically flawed arguments. Of interest, in the present study, openness to experience was not associated with the rated credibility of the reported research in any of the analyses.
Agreeableness and the SANE Effect
As a personality trait, Agreeableness reflects the predilection of individuals to act in a compassionate, cooperative, trusting, and empathetic manner toward others [80]. Individuals who score high on measures of Agreeableness tend to avoid conflict, value social harmony, and are generally more willing to defer to the views of others in order to maintain positive interpersonal relationships. Agreeable individuals are often described as warm, modest, compliant, and altruistic [81]. While Agreeableness is generally associated with prosocial behavior and better interpersonal functioning, it may also predispose individuals to certain cognitive biases, especially when those biases are socially or emotionally mediated. In the present study, Agreeableness was generally associated with the rated credibility of the reported research (see Tables 7-10). With one exception involving Extraversion, no other personality factors contributed significantly to any of the equations.
Although direct empirical research on Agreeableness and the SANE effect is limited, adjacent studies provide supportive evidence. For example, Im et al. (2017) [1] found that susceptibility to the SANE effect was inversely related to analytical thinking and positively related to cognitive styles associated with intuitive reasoning. Agreeableness has been linked to such intuitive and affective styles of reasoning [82], suggesting a potential pathway by which agreeable individuals may be less inclined to scrutinize the logical coherence of neuroscience-laden explanations.
Given some evidence linking Agreeableness with the SANE effect, it is worthwhile to consider how this personality construct impacts judgments. One theoretical link between Agreeableness and the SANE effect is the concept of epistemic deference. Epistemic deference refers to the tendency to rely on perceived experts or authorities when evaluating claims that extend beyond one’s own knowledge [79] [83]. Neuroscience, as a discipline, is often viewed by the public as highly technical and authoritative [29]. Agreeable individuals, who are more trusting and inclined to avoid interpersonal confrontation, may be especially likely to accept neuroscientific claims at face value, particularly when those claims are presented by experts or are embedded in persuasive rhetoric. This deference may be amplified by a desire to maintain group cohesion or to avoid appearing critical of prestigious sources.
In addition, Agreeableness is strongly associated with interpersonal trust [84]. In the context of science communication, this trust may generalize to the perceived trustworthiness of scientific content, especially when presented with the rhetorical markers of scientific authority, such as brain images or neuroscience terminology. It is plausible that individuals high in Agreeableness are more prone to such persuasive effects because they are more likely to trust the communicator and less likely to engage in critical evaluation that could be construed as confrontational.
While speculative, this possibility aligns with dual-process theories of reasoning, which differentiate between intuitive (Type 1) and analytical (Type 2) processing [61]. Agreeableness may promote intuitive acceptance of information when it is affectively or socially congruent, especially if skepticism would involve questioning a perceived authority. In short, the social dimension of Agreeableness may increase the likelihood of “going along” with persuasive yet shallow explanations, such as those driven by the SANE effect.
Another relevant aspect of Agreeableness is conformity, or the tendency to align with group norms. High-agreeableness individuals often demonstrate greater conformity in social settings [85]. When neuroscience is perceived as the dominant or respected framework within a group or institution, agreeable individuals may be more inclined to endorse neuroscience-framed explanations in order to align with the perceived group consensus. This has implications for how the SANE effect may operate in educational, clinical, or policy contexts where neuroscience is valorized.
A study by Rhodes et al. (2014) [19] found that neuroscience explanations were judged as more satisfying, particularly by participants who were more deferential to expert opinion. While the study did not directly assess Agreeableness, deference to authority is conceptually linked to the trust and compliance dimensions of Agreeableness [86]. Thus, it is plausible that agreeable individuals are more influenced by neuroscience framing, especially in socially structured contexts that reward conformity and deference.
Despite its theoretical plausibility, the relationship between Agreeableness and the SANE effect remains underexplored. Future studies should include validated measures of personality traits alongside standard SANE paradigms. Experimental manipulations that vary the social authority of the source (e.g., a neuroscientist versus a layperson) could help clarify whether agreeable individuals show differential susceptibility based on perceived expertise. Furthermore, longitudinal research could investigate whether susceptibility to the SANE effect changes over time with increased scientific training and whether such changes interact with stable personality traits like Agreeableness.
Due to their heightened trust, conformity, and deference to authority, individuals high in Agreeableness may be particularly vulnerable to the persuasive pull of neuroscience framing. Recognizing these individual differences is crucial for designing more effective science communication and promoting critical engagement with scientific information.
4.4. Cognitive and Psychological Variables and the SANE Effect
4.4.1. Need for Cognition and Confidence in Intuition
Turning to other variables, the need for cognition has been discussed within the context of the SANE effect [1]. Need for cognition reflects individual differences in the tendency to seek out, engage with, and enjoy effortful cognitive activities [87] [88]. Individuals high in need for cognition typically scrutinize information more carefully and are less influenced by superficial cues. In the context of the SANE effect, Im and colleagues (2017) [1] found that participants with a higher need for cognition were less persuaded by neuroscience-laden explanations than their lower-scoring counterparts. As discussed earlier, results such as those reported by Im et al. are consistent with dual-process theories of reasoning [61], which posit that individuals vary in their reliance on intuitive versus analytical thinking. A higher need for cognition is associated with greater engagement in Type 2 processing, which promotes the evaluation of argument quality and reduces vulnerability to irrelevant information [89] [90].
Faith in intuition is another individual-difference variable that appears relevant to the SANE effect. It is manifested in the degree to which individuals trust their gut feelings and immediate judgments [72]. People high in faith in intuition tend to rely on heuristics and are more susceptible to cognitive biases. A study by Fernandez-Duque et al. (2015) [18] found that participants who scored higher on faith in intuition were more likely to rate neuroscience explanations favorably, even when those explanations were logically flawed. The authors argue that this reflects a tendency to be influenced by the superficial appeal of neuroscience rather than the quality of the argument itself.
4.4.2. Psychological Mechanisms
Beyond the influence of visual imagery, the presence of neuroscientific language itself appears to modulate evaluative judgments. In a seminal study, Weisberg et al. (2008) [2] demonstrated that non-experts rated psychological explanations as more satisfying when they contained extraneous neuroscience terminology. These findings have been independently replicated using a larger and more diverse sample [39], and additional data further corroborate the effect [20]. Thus, considerable evidence has been brought to bear suggesting that SANE effects can drive an inclination to elevate the credibility of explanations that reference neuroscientific content, even when the content is irrelevant to the logic of the argument.
The psychological mechanisms underlying this phenomenon are multifaceted. One explanatory construct is the “vividness effect,” wherein emotionally salient or concrete exemplars disproportionately influence beliefs relative to abstract or statistical information [91] [92]. In applied domains such as medicine, for instance, clinical decisions are frequently influenced by anecdotal experiences despite contradictory evidence from randomized controlled trials [57] [93]. Another contributing factor is the tendency for individuals to defer to perceived epistemic authorities when assessing arguments outside their domain-specific expertise [79]. Such cognitive offloading, while efficient, may facilitate uncritical acceptance of information, particularly when the source is neuroscience, a field commonly perceived as technically sophisticated and scientifically rigorous.
Given reports such as these, it remains important to explore how, and in what settings and disciplines, superficial cues, such as visual imagery and the framing of arguments within the context of specific disciplines, bias judgments of scientific merit. Neuroscience continues to occupy a position of cultural and epistemic authority. Further research is needed to delineate the boundary conditions under which its rhetorical deployment enhances or impairs critical reasoning.
Most of the research reported on the SANE effect has examined the influence of text and/or imagery, including superfluous brain scans. However, the effect appears to extend to the presence of perceived neuroscience technology. For example, Ali and colleagues [94] sought to further elucidate the effect in an experiment using a bogus brain-scanning device.
The device was, in fact, assembled from a variety of computer and electronic components, included a repurposed salon hair dryer, and was of rather crude construction [94]. Participants were told that the apparatus could detect neural activity, process the data, and subsequently infer the content of complex mental states, creating the perception that the device could accurately reveal participants’ private thoughts. Notably, the majority of participants, including upper-level undergraduates majoring in psychology and neuroscience who had previously received instruction on the methodological and interpretive limits of neuroimaging, judged the technology to be scientifically credible.
These results provide clear evidence of the powerful effect of neuroscience framing on critical reasoning. Even when participants had relevant academic training, the visual authority of neuroscience “technology” as a means of providing brain-based explanations could override skepticism, leading to the uncritical acceptance of improbable or unsupported claims. Such results underscore the cultural prestige ascribed to neuroscience. Furthermore, the results of Ali and colleagues provide additional justification for training in epistemological literacy in scientific education. Finding effective methods to facilitate greater understanding in the public sphere is worth exploring.
4.5. Individual Differences in Attention and Working Memory as Contributors to the SANE Effect
The SANE effect suggests that explanations that include neuroscience information, even when logically irrelevant to the quality of the explanation, enhance judgments of the quality of the information. In addition to the discussion in the previous sections, additional cognitive factors are worthy of consideration and future research. Individual differences in working memory and attention control [95] may provide further understanding of the SANE effect, bolstered by current cognitive theory. Specifically, SANE-consistent judgments may reflect a failure to resist capture by salient but irrelevant neuroscience cues, compromising the ability to maintain evaluation goals and to integrate the explanation’s logical structure.
4.5.1. Individual Differences in Attention Control as Resistance to “Neuroscience Cue Capture”
Considered as an effect that varies across individuals, SANE effects represent a specific instance of distractor influence. Specifically, the evaluator’s objective is to assess explanatory coherence, yet a prominent neuroscience cue competes for attention and biases judgment. Unsworth and Miller [96] [95] define attention control as the set of abilities that facilitate focusing on task goals, resisting distraction, and resolving conflict when competing response tendencies arise.
For example, their evidence, consisting of measures of conflict dynamics in Stroop- and flanker-type tasks, suggested that individuals with high attention control exhibit reduced attraction to incorrect responses and resolve conflict more efficiently. In contrast, individuals with low attention control demonstrate greater susceptibility to distraction and slower correction [96]. Applied to SANE, neuroscience embellishments function as a “prepotent pull” toward a heuristic response (“this sounds scientific/credible”), which must be inhibited to allow evaluators to assess the logic of the explanation. Thus, individuals with stronger attention control are more likely to maintain the evaluation criterion (such as causal coherence or fit between claim and evidence) and discount irrelevant neuroscience details. This would result in a negligible SANE effect. Conversely, those with weaker attention control may allow the neuroscience cue to dominate early processing. In turn, this may lead to inflated satisfaction judgments even when the reasoning is flawed.
An attention-control framework also clarifies why the SANE effect is most pronounced for poor explanations. Strong explanations succeed on multiple dimensions, such as coherence and surface plausibility, while weak explanations require evaluators to identify and prioritize logical gaps. The process of detecting these gaps exemplifies the type of goal maintenance and conflict resolution in which individual differences in attention control are most influential [96].
4.5.2. Working Memory Contributions in Goal Maintenance and Integration of Explanatory Structure
While often considered within a cognitive framework of storage, working memory is also implicated in the capacity to actively maintain task goals and relevant representations while processing interference. When evaluating explanations, working memory plays a role in retaining the target phenomenon and the proposed mechanism, in tracking whether each sentence advances the chain of evidence, and in permitting comparisons of the explanation against alternative interpretations. If working memory resources are under strain, individuals may default to surface cues such as neuroscience jargon. Such cues may appear and feel diagnostic but are not logically informative. In other words, working memory limitations may increase reliance on heuristic “credibility” signals, creating fertile ground for SANE-like inflation.
Zhao and Vogel (2025) [97] provided evidence that individual differences in working memory and attentional control can exert stable predictive effects on memory performance even after extensive learning and repeated exposure. Across multiple experiments, Zhao and Vogel found that working memory ability continued to predict long-term memory accuracy across many repetitions of the same materials, supporting a “stable demands” account rather than the idea that practice eliminates the need for working memory/attention control [97].
When applied to phenomena such as the SANE effect, this suggests an implication of considerable importance. That is, even if people repeatedly encounter neuroscience-framed explanations (e.g., across a semester, in popular media, or in repeated classroom examples), practice alone may not remove the role of working memory/attention control in regulating how such information is processed. If the task demands (evaluating causal coherence while ignoring irrelevant “science-y” cues) remain stable, then individual differences may continue to predict susceptibility. That is, low working memory/attention-control individuals may not simply “learn out” of the bias through exposure. Instead, the cognitive control requirements remain, and so does the vulnerability [97].
4.5.3. A Combined Mechanism—Control-Dependent Weighting of Cue vs. Coherence
Taken together, the research discussed here and other lines of work support a coherent account of the mechanisms associated with the SANE effect. Simply put, SANE effects emerge when neuroscience cues receive disproportionate weight relative to explanatory coherence, and that weighting is regulated by attention control and working memory. Attention control governs resistance to initial capture by irrelevant neuroscience details and supports conflict resolution when a heuristic response competes with analytic evaluation [96]. Working memory supports maintaining evaluative goals and integrating multiple elements of an explanation long enough to judge whether it actually explains the phenomenon [97]. When either system is weaker, the evaluator is more likely to rely on salient “credibility” cues, such as brain images, neuroterminology, or mechanistic-sounding phrases, which lead to higher satisfaction ratings even when the explanation is logically deficient [1] [2].
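The control-dependent weighting account above can be expressed as a toy formalization. The sketch below is our illustration only, not a fitted or published model: it treats a credibility judgment as a weighted sum of an irrelevant neuroscience cue and the explanation's actual coherence, with the weight on the cue shrinking as attention control and working memory resources increase. All function names, parameters, and values are hypothetical.

```python
# Toy formalization (illustrative only, not a model from the literature):
# a credibility rating as a control-dependent weighted sum of a salient
# neuroscience cue and the explanation's logical coherence. Higher
# attention control / working memory shifts weight from cue to coherence.

def rated_credibility(coherence, neuro_cue, control):
    """coherence, neuro_cue, control all in [0, 1]; returns a rating in [0, 1].
    Weight given to the irrelevant cue decreases as cognitive control rises."""
    w_cue = 1.0 - control   # low control: judgment captured by the cue
    w_coh = control         # high control: judgment tracks coherence
    return w_coh * coherence + w_cue * neuro_cue

weak_explanation, flashy_cue = 0.2, 0.9   # logically weak but "science-y"

low_control = rated_credibility(weak_explanation, flashy_cue, control=0.2)
high_control = rated_credibility(weak_explanation, flashy_cue, control=0.9)

# The low-control evaluator inflates the weak explanation far more,
# reproducing the pattern that SANE effects are largest for poor explanations.
print(low_control, high_control)
```

On this sketch, the SANE effect for a given evaluator is simply the gap between the rating with and without the cue, which vanishes as `control` approaches 1.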
4.6. Applications and Implications
While interesting as a phenomenon in its own right, empirical examination of the SANE effect has significant implications for science communication. As noted earlier, there is clear evidence confirming the SANE effect, as well as the nuanced nature of the effect. Fernandez-Duque et al. (2015) [18] found that neuroscience images also enhanced the perceived credibility of scientific claims, suggesting that not just textual references but also visual elements associated with neuroscience can bias evaluative judgments. This has implications for science communicators who use brain scans or metaphorical brain analogies to popularize findings. Even well-intentioned communicators may inadvertently mislead audiences by increasing the perceived legitimacy of claims through superficial neuroscience-laden descriptions.
As previously stated, individual differences influence the susceptibility to the SANE effect. Individuals who score lower on measures of deliberate analytical thinking (i.e., cognitive reflection) are more likely to be influenced by neuroscience-enhanced explanations [1] [98]. Thus, when considered through the lens of dual-process theories of reasoning, intuitive thinkers may be more vulnerable to superficial markers of scientific legitimacy. Therefore, the SANE effect is not uniformly distributed across audiences and may be exacerbated by cognitive and motivational traits.
When scientific reasoning skills are considered as a moderating variable of the SANE effect, individuals with greater scientific literacy and epistemic vigilance are less susceptible to being misled by irrelevant neuroscience verbiage or imagery [99]. This highlights a critical educational implication: fostering scientific reasoning and critical thinking skills may inoculate individuals against the persuasive but fallacious influence of neuroscience language.
The implications for science communication are far-reaching. First, the SANE effect underscores the need for communicators to prioritize clarity and explanatory value over scientific “window dressing.” Neuroscience should not be used merely to impress or persuade but to clarify mechanisms or enrich understanding. When neuroscience is used rhetorically rather than substantively, it can erode public trust if audiences later discover that the neuroscience was irrelevant or misrepresented [100] [101]. Indeed, much can be learned from the loss of public trust during and after the COVID-19 pandemic [102]-[105], a period of significant stressors that strained systematic knowledge and exposed a severe lack of confidence in experts [106] [107].
Second, educators and science communicators must be aware of their audience’s cognitive tendencies and tailor messages to enhance comprehension without overloading them with jargon [108]. As Schwartz et al. (2016) [109] suggest, good science communication requires an understanding of audience psychology, including heuristics that influence judgments of credibility. Reducing the use of complex or unnecessary neuroscience terms can help avoid unintended SANE effects and promote genuine understanding.
Third, the SANE effect has policy implications in legal and medical contexts, where neuroscientific evidence can unduly influence jurors or patients. For example, McCabe and Castel (2008) [16] demonstrated that mock jurors found defendants less culpable when brain images were introduced, even if the images were tangential to the case. Gurley and Marcus (2008) [110] found that the inclusion of neuroimaging evidence significantly reduced the probability of a guilty verdict in a mock jury trial involving an insanity defense. Henry (2017) [111] found that brain images affected beliefs about the defendant, with a small effect of brain images on sentence recommendations detected. In addition, the SANE effect has been shown to increase parental interest in learning new parenting skills when instructional videos included a neuroscience explanation [112]. Similarly, in medicine, the presence of neuroscience explanations can increase patient compliance with treatments [94], regardless of whether the explanation improves actual understanding. Results such as these suggest that ethical considerations must guide the inclusion of neuroscience in contexts where authority and persuasion can have real-world consequences.
Last, while the SANE effect can be leveraged for positive ends, its negative aspects must also be recognized. Addressing them requires a multipronged approach. This may include educating the public in scientific reasoning, training communicators in ethical dissemination practices, and fostering a culture that values explanatory depth over superficial complexity. As such, the responsibility lies not just with neuroscientists but with all science communicators, including educators, journalists, and policy advocates, who must strive for integrity and transparency.
In conclusion, while the effect of intellectual interest can be leveraged to enhance outcomes, the SANE effect presents a cautionary tale in science communication. While neuroscience holds great promise, its language and imagery can be easily co-opted to enhance the persuasiveness of weak explanations. The challenge for science communicators is to present neuroscience in a way that is accurate, meaningful, and free from rhetorical misuse. Addressing this challenge requires greater awareness of the relevance of neuro-information, attention to the impact of cognitive biases, a commitment to audience education, and ethical restraint in the deployment of scientific authority.
4.7. Conclusion and Future Research
While some available evidence suggests that personality differences are associated with the SANE effect, it is important to recognize that personality traits do not operate in isolation. For example, an individual high in openness but low in need for cognition may be especially vulnerable to the SANE effect: open to novel information yet insufficiently critical of its logical coherence. Similarly, an individual who scores high on measures of faith in intuition yet low on measures of analytical thinking may rely on surface cues, such as the presence of brain images or neuroscientific terminology, rather than evaluate the argument’s internal validity. Therefore, future research should examine the interactions among key personality traits in order to elucidate the complex interplay of variables driving differences in susceptibility to the SANE effect.
One weakness of the present study was the use of a convenience sample [113]. While I was fortunate to have a good response rate, especially for our campus community, the shortcomings associated with convenience samples are acknowledged here. In addition, the data were collected at a single university. However, I was specifically interested in the SANE effect within a religious campus community, where rigorous scientific training and critical thinking are common throughout the curriculum.
The SANE effect underscores how irrelevant neuroscience information can unduly enhance the perceived quality of scientific explanations. While prior research has focused largely on cognitive and educational moderators, gender may also play a role in shaping susceptibility to this effect. Theoretical frameworks related to cognitive style, epistemic trust, and science engagement provide a basis for expecting gender-based differences. However, empirical findings to date are limited and mixed. Future research should explicitly examine gender as a moderator of the SANE effect, both to clarify theoretical mechanisms and to inform gender-sensitive strategies for promoting scientific literacy. It also remains important to examine how, in what contexts, and in which disciplines superficial cues, such as visual imagery and discipline-specific framing of arguments, bias judgments of scientific merit.
Acknowledgements
The author wishes to thank Julianna Davis for her early contributions to the project. The author also thanks the students who participated in this project and encouraged others to do so, as well as the editorial staff and the reviewers who make the journal possible.