TREFACE: A New Computerized Test of Emotional Stroop with Facial Expressions

Abstract

Using the conflict principle of the Stroop task, the interference effect in color naming, the present study proposes a computerized version of the emotional Stroop task, called TREFACE after its Portuguese acronym "Teste de Reconhecimento de Expressões Faciais com Conflito Emocional" (Facial Expression Recognition Test with Emotional Conflict). In this protocol, four fixed presentation styles were generated according to condition: Congruent Word Reading, Incongruent Word Reading, Congruent Recognition of Facial Expression, and Incongruent Recognition of Facial Expression, counterbalanced in terms of facial expression, word, and gender of the photographed model. Forty-two healthy volunteers completed the task. Results revealed better performance in word reading than in facial expression recognition, and an advantage in correct responses under the congruent condition. Recognition of facial expression proved most difficult when the image was not congruent with the word. Overall, the results suggest that emotional attributes can compromise the ability to recognize faces, engaging mechanisms such as cognitive control and emotion regulation. The TREFACE paradigm can thus be considered a sound tool for assessing the monitoring of emotional conflict, as well as a new Portuguese-language instrument for evaluating emotional working memory in healthy individuals and, eventually, in pathologies that affect the functioning of cortical areas related to executive functions.

Citation: Prada, E., Satler, C., Tavares, M., Garcia, A., Martinez, L., Alves, C., Lacerda, E. and Tomaz, C. (2022) TREFACE: A New Computerized Test of Emotional Stroop with Facial Expressions. Journal of Behavioral and Brain Science, 12, 342-358. doi: 10.4236/jbbs.2022.127020.

1. Introduction

Among the various behavioral manifestations of an emotion, facial expressions are of great relevance for externally signaling what an individual is feeling [1], conveying emotions and supporting social interaction, in addition to having adaptive value for the organism producing them [2] [3] [4].

Recognizing facial properties is not only an important mechanism for survival; it also reflects the capacity of the brain's biological system to establish and maintain social life [5] [6] [7]. The face transmits a large amount of information, processed on the order of milliseconds [3] [4] [7] [8]. These signals are recognized as emotional gestures and translate into a universal language: happiness, fear, disgust, sadness, surprise, and anger [9] [10] [11].

Several biological mechanisms are involved in the processing of emotional meaning. There is broad consensus in the literature that interactions between the amygdala and the orbitofrontal cortex (OFC) underlie this type of processing and the recognition of emotional facial expressions [2] [6] [12] [13] [14].

Neurophysiologically, the face is perceived as an image that follows a direct visual recognition pathway from the retina to the lateral geniculate nucleus of the thalamus. The information then continues to primary and secondary cortical areas located in the occipital cortex and in the medial sulcus of the temporal lobe. There is also an indirect route, which runs from the retina to the superior colliculus in the midbrain and from there to the amygdala, which generates signals for central and peripheral structures; this route also continues to the visual cortex and specialized areas such as the inferior temporal lobe and the superior temporal sulcus, where perceptual analysis begins [7] [15].

Haxby, Hoffman and Gobbini [16] proposed a cognitive model of facial processing and analysis with two mechanisms. The first is responsible for processing the invariant details of the face: the eyes, nose, mouth, and their organization, providing efficient resources for identity recognition. The second handles the aspects that change, such as the movement of the eyes and mouth in emotional expression [1]. This processing model comprises a core system and an extended system. The core system performs visual analysis and is composed of the inferior occipital cortex, involved in the perception of facial features; the lateral fusiform gyrus, which encodes the invariant aspects; and the superior temporal sulcus, which engages with the changeable details of the face.

The extended system, in turn, involves structures such as the amygdala, the insula, and components of the limbic system, which modulate the emotional attributes of facial expressions; the intraparietal sulcus, related to spatial attention; the auditory cortex, involved in speech processing; and the anterior temporal cortex, in charge of processing identity and biographical information [13] [14] [16] [17].

Cognitive processes such as memory, language, attention control and basic components of executive functions are strongly involved in the efficiency of emotional facial processing [2] [11] [15] [18] [19] [20].

At the same time, there are conditions and disorders that alter the efficiency of the emotional recognition system, as in cases of generalized anxiety [21] [22] [23] [24], depression [22] [25], states of sadness [26] [27], post-traumatic stress [28] [29], autism [30] [31], Alzheimer's disease [32], multiple sclerosis [33], and insomnia with nocturnal cerebral hyper-metabolism [34].

Other studies have shown a relationship between hormonal variables across the phases of the menstrual cycle and selectivity in the emotional recognition of faces [35]. Differences have also been identified by age, comparing young and older adults [32] [36].

There are different instruments for assessing the monitoring of emotional conflict. The classic Stroop color and word task uses the interference effect to assess inhibitory control, typically through the comparison between control and conflict tasks [37]. Versions of this test generally differ in dimensions such as the number of colors used, the type of stimuli used to present the ink colors on the page, the presentation of items in rows or columns, and the correction method [38].

The color and word Stroop is usually presented in three stages. In the first, called Word, participants are required to read the written words. In the second, known as Color, they are required to name the colors of the stimuli. In the last, called Color and Word, the ink colors in which the words are printed must be named as quickly as possible, disregarding the word itself [39] [40]. The conflicting presentation of the stages aims to generate interference and distracting stimuli, evaluating the individual's ability to inhibit an automatic response in favor of a less practiced one [40]. The effect captured by the test is one of the most robust cognitive phenomena available in neuropsychological assessment [27] [41].

Currently, different versions of the Stroop test are used to evaluate various aspects of executive functions, including constructs such as attention, interference, and inhibition [42] [43].

The emotional Stroop, a variant of the classic Stroop task, has been used for more than two decades. It is characterized by the presentation of words with affective emotional content, for example sadness, anger, or happiness, among others of positive and negative valence, along with neutral content. The words are printed in color, and the subject must name the color while ignoring the semantic content [44] [45]. The emotional Stroop paradigm has thus become a popular measure of attentional bias in anxious and depressed patients [27] [46].

This paradigm shows how emotional attributes prove to be of great importance, since they influence cognitive processes. Some studies, in addition to verifying significant delays in color naming in the classic Stroop, have shown that the speed of naming emotional words can indicate a subject's concerns or anxieties: words referring to emotional content, such as death or sadness, produce greater interference than neutral words (for example, table or tree) [12] [47] [48].

To provide a measure of emotional conflict monitoring that captures more detail, not only the identification but also the resolution of conflict, the literature offers a wide variety of experimental designs using the emotional paradigm to study the neural mechanisms that may underlie the efficiency of perceptual processing. In this direction, for example, studies have used photographs of faces accompanied by congruent and incongruent emotional words [12] [20] [49], facial expressions of one half of the face with a congruent or incongruent complement [19], emotional facial expressions paired with congruent or incongruent body expressions [8] [50], and emotional expressions presented upright and inverted [51].

Building on these experimental variations, the emotional-face Stroop task proposed by Etkin et al. [12] presupposes that the attentional system required for conflict control and resolution is disturbed by emotional interference, such as a word unrelated to the emotional expression of the face [12] [19] [20] [52]. The emotional-face Stroop model can thus assess the monitoring of emotional conflict and, in turn, be used to explore the active cognitive control mechanisms that shape the conflict, as well as those that provide the resources to identify and resolve it [27] [49].

It is worth pointing out that, although there are instruments for evaluating executive functions and their relationship to emotional components, no instrument for monitoring emotional face/word conflict is available in the Brazilian context. The present work therefore aims to develop a computerized instrument capable of reproducing, for the Brazilian population, the main attributes of the emotional Stroop paradigm as formulated in previous work by Etkin et al. [12]. The goal is to make available a new assessment tool, implemented in software and suited to the Portuguese language, of theoretical and practical relevance for future research in neuropsychology.

2. Methods

2.1. Ethical Considerations

The study was approved by the Human Subjects Ethics Committee (protocol 56466216.0.0000.5084). Informed consent was obtained from all participants, in accordance with the ethical guidelines for research with human subjects (Resolution 196/96 CNS/MS).

2.2. Participants

Forty-two participants aged 18 - 30 years (25 women, mean age 29.3 ± 2.4 years, and 17 men, mean age 26.5 ± 2.3 years) took part in our study. They were recruited on the Darcy Ribeiro Campus of the University of Brasília, DF, Brazil. All participants were native speakers of Brazilian Portuguese, reported no history of neurological disorder, and did not score above the cut-off on the State-Trait Anxiety Inventory (cut-off point 50) or the Beck Depression Inventory (BDI-II; cut-off point 20) (see Table 1). All had normal or corrected-to-normal visual acuity and had not consumed drugs or alcoholic beverages in the 24 hours before the study.

2.3. Development of the TREFACE Instrument

Stimuli Selection

First, the original photographs of the set by Ekman and Friesen [53] [54], commercially known as Pictures of Facial Affect (POFA)®, were purchased from the Paul Ekman Group under copyright for academic and research use.

The set of 110 POFA photographs (original stimuli) was formatted into a digital version and numbered according to the original classification by Ekman, Friesen and Hager [55]. The digital formatting used the following parameters: dimensions of 1411 × 2398 pixels, file size of 232 KB, resolution of 400 dpi, and 24-bit color depth.
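For illustration, this formatting step can be reproduced with a short script. The following is a minimal Python sketch (the paper does not specify the tooling actually used); it assumes the Pillow imaging library, and the folder and file names are hypothetical.

```python
from pathlib import Path
from PIL import Image

TARGET_SIZE = (1411, 2398)  # width x height, in pixels
TARGET_DPI = (400, 400)     # resolution recorded in the file metadata

def format_stimulus(src: Path, dst: Path) -> None:
    """Convert one POFA photograph to the digital-version parameters."""
    img = Image.open(src).convert("RGB")          # 24-bit color depth (3 x 8 bits)
    img = img.resize(TARGET_SIZE, Image.LANCZOS)  # scale to the target dimensions
    img.save(dst, dpi=TARGET_DPI)

for photo in sorted(Path("pofa_originals").glob("*.png")):
    format_stimulus(photo, Path("pofa_digital") / photo.name)
```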

Next, the stimuli were evaluated for the quality of their emotional expression by five Brazilian specialists (judges) in the study of emotional facial stimulus processing in humans.

The set of original POFA photographs was presented on a computer screen positioned at a 90-degree angle to the support surface (table), with the judge always seated 80 cm in front of the screen. None of the judges had visual problems. The judgment of the identified emotion (happiness, fear, sadness, anger, surprise, disgust, neutral, or "I cannot identify it") was recorded on a formatted paper sheet. The presentation was made with Microsoft Office PowerPoint at 100% zoom on a 20-inch screen.

Table 1. Demographic and clinical characteristics.

Note. STAI = State-Trait Anxiety Inventory, Trait/State Version; cut-off score 50. BDI-II = Beck Depression Inventory-II; cut-off score 20. SEM = standard error of the mean.

In the end, the judges approved a total of 97 stimuli from the original POFA set; only stimuli with a level of agreement between 80% and 100% among the judges were retained.
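As an illustration of this selection criterion, the sketch below computes inter-judge agreement as the proportion of judges giving the modal label; the judgment data shown are hypothetical.

```python
from collections import Counter

# Hypothetical judgments: one label per judge for each photograph.
judgments = {
    "POFA_001": ["happiness", "happiness", "happiness", "happiness", "happiness"],
    "POFA_002": ["fear", "surprise", "fear", "fear", "fear"],
    "POFA_003": ["anger", "disgust", "sadness", "anger", "I cannot identify it"],
}

def agreement(labels: list[str]) -> float:
    """Proportion of judges giving the modal (most frequent) label."""
    modal_count = Counter(labels).most_common(1)[0][1]
    return modal_count / len(labels)

# Retain only stimuli whose inter-judge agreement is at least 80%.
retained = {pid for pid, labels in judgments.items() if agreement(labels) >= 0.80}
print(retained)  # {'POFA_001', 'POFA_002'}
```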

2.4. Computerized Model Formulation

The structure of the task follows the sequence: 1) Guided Recognition (GR); 2) Word Reading (WR); and 3) Recognition of Facial Expression (RFE). For each stage, 70 stimuli were selected pseudo-randomly, ensuring photographs of all emotional categories (happiness, fear, sadness, anger, surprise, and disgust) and of both genders (male and female). In total, 210 stimuli were presented, 70 per stage.
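The pseudo-random selection balanced over emotion and gender can be sketched as follows. This is an illustrative Python reconstruction, not the original C# implementation, and the stimulus pool shown is hypothetical.

```python
import random

EMOTIONS = ["happiness", "fear", "sadness", "anger", "surprise", "disgust"]
GENDERS = ["male", "female"]

# Hypothetical pool built from the 97 approved POFA photographs:
# (photo_id, emotion, gender) tuples.
pool_rng = random.Random(42)
pool = [(f"POFA_{i:03d}", pool_rng.choice(EMOTIONS), pool_rng.choice(GENDERS))
        for i in range(1, 98)]

def select_stage_stimuli(pool, n_total=70, seed=0):
    """Pseudo-randomly draw n_total stimuli, first filling every
    emotion x gender cell evenly, then topping up at random."""
    rng = random.Random(seed)
    per_cell = n_total // (len(EMOTIONS) * len(GENDERS))  # 70 // 12 = 5
    selected = []
    for emotion in EMOTIONS:
        for gender in GENDERS:
            cell = [s for s in pool if s[1] == emotion and s[2] == gender]
            selected.extend(rng.sample(cell, min(len(cell), per_cell)))
    remaining = [s for s in pool if s not in selected]
    selected.extend(rng.sample(remaining, n_total - len(selected)))
    rng.shuffle(selected)
    return selected

stage_wr = select_stage_stimuli(pool)  # 70 trials for the WR stage
```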

In a complementary manner, a list of emotional words in Portuguese was created to accompany the stimuli where necessary: alegria, medo, tristeza, raiva, surpresa, and nojo (happiness, fear, sadness, anger, surprise, and disgust). Words were displayed in 26-point Arial, bold, in red. Previous work used red ink in its word protocols [12], and red superimposed on black, against white and gray backgrounds, is known to increase image contrast.

All working material (stimuli) was then loaded, configured, and run in the Stroop Test software, version 1.0.0.0.0. This software is a tool written in the C# language on the Microsoft Visual Studio IDE platform, compatible with Windows Vista or later.

For the GR stage, each face stimulus was presented paired with a word stimulus; all 70 trials were face-word matches. The delay between stimuli was 100 milliseconds. For the WR and RFE stages, the stimuli were accompanied by the word placed at the center of the face (on the midline between the eyes and the nose), without covering the central features of the character, an important aspect for recognizing the emotional properties of the face; similar examples can be found in previous work [12]. A red dot on a white background was inserted as a fixation point, supporting a stable pattern of attention or vigilance during test execution, following [12]. For these stages, the delay and presentation times were both standardized at 1000 milliseconds. In all presentations, the stimuli (photographs) measured 7.55 cm × 11.29 cm on a white background (see Figure 1).
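The trial timing can be summarized in a small skeleton. The sketch below is a schematic Python reconstruction, not the original C# implementation; the rendering and recording functions are hypothetical placeholders for the presentation software.

```python
import time
from dataclasses import dataclass

@dataclass
class Trial:
    photo_id: str   # POFA photograph shown on a white background
    word: str       # emotional word superimposed between the eyes and nose
    condition: str  # "congruent" or "incongruent"

def show_fixation_dot(): ...          # red dot on a white background
def show_stimulus(trial: Trial): ...  # face with the superimposed word
def record_verbal_response(trial: Trial): ...  # audio saved for later scoring

def run_stage(trials, isi_s=1.0, presentation_s=1.0):
    """Skeleton of the WR/RFE trial loop: fixation dot during the
    inter-stimulus delay, then the face+word stimulus."""
    for trial in trials:
        show_fixation_dot()
        time.sleep(isi_s)            # 1000 ms delay between stimuli
        show_stimulus(trial)
        time.sleep(presentation_s)   # 1000 ms presentation time
        record_verbal_response(trial)
```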

Considering details previously formulated in the literature [12], two conditions were created pseudo-randomly within each of the WR and RFE stages: Congruent (C) and Incongruent (I). The congruent condition indicates agreement or correspondence between the qualities of the elements (facial expression and word/emotion); the incongruent condition, conversely, reflects a lack of such correspondence. Thus, four fixed presentation styles were generated according to condition: C-WR (Congruent Word Reading), I-WR (Incongruent Word Reading), C-RFE (Congruent Recognition of Facial Expression), and I-RFE (Incongruent Recognition of Facial Expression), counterbalanced in terms of facial expression, word, and gender of the photographed model.

Figure 1. Representation of the TREFACE task in the Word Reading and Recognition of Facial Expression stages. (A) The model pre-established within each stage. Condition: C = Congruent, I = Incongruent. A total of 70 stimuli per stage and 7 for each set were considered. (B) Conditions within the stages: C-WR = Congruent Word Reading, I-WR = Incongruent Word Reading, C-RFE = Congruent Recognition of Facial Expression, I-RFE = Incongruent Recognition of Facial Expression.
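A minimal sketch of how the two conditions pair a face with a word, assuming the Portuguese word list above; the function name and data layout are illustrative, not taken from the original software.

```python
import random

EMOTIONS_PT = {"happiness": "alegria", "fear": "medo", "sadness": "tristeza",
               "anger": "raiva", "surprise": "surpresa", "disgust": "nojo"}

def assign_word(face_emotion: str, condition: str, rng: random.Random) -> str:
    """Pick the superimposed word: matching the face in the congruent (C)
    condition, any other emotion word in the incongruent (I) condition."""
    if condition == "C":
        return EMOTIONS_PT[face_emotion]
    other = [word for emo, word in EMOTIONS_PT.items() if emo != face_emotion]
    return rng.choice(other)

rng = random.Random(42)
print(assign_word("fear", "C", rng))  # medo
print(assign_word("fear", "I", rng))  # any emotion word except medo
```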

2.5. Procedure

Data were collected individually, in a single session, in a spacious, bright, noise-controlled room. First, each participant was screened against the inclusion criteria. The Stroop task was then administered in the sequence TREFACE 01 (GR), TREFACE 02 (WR), and TREFACE 03 (RFE).

During the TREFACE task, the participant responded verbally according to the objectives described for each stage of the test, following instructions presented on the monitor screen. The number of correct answers (scores), the number of errors, and the number of omissions (trials with no answer) were analyzed. To score the participants' responses, the audio extracted from the video recording, saved in WAV format, was used.
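Scoring the transcribed verbal responses reduces to a simple tally. The sketch below assumes each response has already been transcribed to an emotion word, with None marking an omission; the data shown are hypothetical.

```python
def score_responses(expected: list, given: list) -> dict:
    """Tally correct answers, errors, and omissions from transcribed
    verbal responses (None marks a trial with no response)."""
    correct = errors = omissions = 0
    for target, response in zip(expected, given):
        if response is None:
            omissions += 1
        elif response == target:
            correct += 1
        else:
            errors += 1
    return {"correct": correct, "errors": errors, "omissions": omissions}

print(score_responses(["medo", "raiva", "nojo"], ["medo", None, "alegria"]))
# {'correct': 1, 'errors': 1, 'omissions': 1}
```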

2.6. Data Analysis

To analyze the participants' correct answers, a Wilcoxon test for paired samples was used to compare the TREFACE stages, and a two-way repeated-measures ANOVA was used to compare the conditions within stages: factor one was stage, with two levels (WR and RFE); factor two was condition, with two levels (congruent and incongruent); the dependent variable was the number of correct answers. Data analysis was performed with the SigmaStat 3.5 statistical program. The significance level for all analyses was p < 0.05.
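The same analyses can be reproduced with open-source tools. The sketch below, assuming SciPy and statsmodels (the study itself used SigmaStat 3.5), runs the paired Wilcoxon test and the two-way repeated-measures ANOVA on hypothetical data.

```python
import numpy as np
import pandas as pd
from scipy.stats import wilcoxon
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
n = 42  # participants

# Hypothetical per-participant scores: one row per subject x stage x condition.
scores = pd.DataFrame({
    "subject": np.repeat(np.arange(n), 4),
    "stage": np.tile(["WR", "WR", "RFE", "RFE"], n),
    "condition": np.tile(["C", "I", "C", "I"], n),
    "correct": rng.uniform(20, 100, size=4 * n),
})

# Stage comparison: Wilcoxon test for paired samples on per-subject means.
wr = scores[scores.stage == "WR"].groupby("subject")["correct"].mean()
rfe = scores[scores.stage == "RFE"].groupby("subject")["correct"].mean()
print(wilcoxon(wr, rfe))

# Stage x condition: two-way repeated-measures ANOVA.
print(AnovaRM(scores, depvar="correct", subject="subject",
              within=["stage", "condition"]).fit())
```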

3. Results

3.1. Performance in the TREFACE Stages

When comparing scores in WR and RFE, a Wilcoxon test for paired samples identified a statistically significant difference between them (Z = −5.648, p < 0.001). The mean number of correct answers was greater for reading the emotional words written on the photographs (97.76 ± 1.44) than for recognizing the emotional expression of the face while ignoring the written word (43.98 ± 2.43) (see Figure 2).

3.2. Performance According to Conditions within TREFACE Stages

When comparing the performance achieved by the participants in the congruent and incongruent conditions within the TREFACE stages, a two-way repeated-measures ANOVA showed a statistically significant effect of the condition factor (F(1,41) = 69.923, p < 0.001) and of the stage factor (F(1,41) = 813.446, p < 0.001). The ANOVA also showed a statistically significant stage × condition interaction (F(1,41) = 58.785, p < 0.001).

Post hoc analysis for multiple comparisons (Bonferroni t-test) showed that, for the condition factor, the congruent condition differed from the incongruent one (t = 8.362, p < 0.001): the number of correct answers was greater in the congruent (81.29 ± 2.69) than in the incongruent condition (60.44 ± 4.28). For the stage factor, WR differed from RFE (t = 28.521, p < 0.001), with more correct answers for WR (97.76 ± 0.92) than for RFE (43.98 ± 3.16).

Figure 2. Correct answers of the participants in the TREFACE stages (Mean ± SEM). WR = Word Reading; RFE = Recognition of Facial Expression. *Statistically significant difference, WR > RFE; Wilcoxon test for paired samples (p < 0.05).

Moreover, the analysis identified that, within the congruent condition, performance in WR differed from performance in RFE (t = 10.925, p < 0.001), with more correct answers for WR (98.50 ± 0.97) than for RFE (64.08 ± 4.47). The same held in the incongruent condition (t = 23.211, p < 0.001), with means of 97.01 ± 1.94 for WR and 23.88 ± 2.98 for RFE.

Finally, for the condition factor within the RFE stage, the analysis showed a difference between congruent and incongruent (t = 11.331, p < 0.001): performance was better when the word was congruent with the image (64.08 ± 4.47) than when it was incongruent (23.88 ± 2.98). Within the WR stage, however, no statistically significant difference was observed (t = 0.422, p = 0.674) between the congruent (98.50 ± 0.97) and incongruent (97.01 ± 1.94) conditions (see Figure 3).

Figure 3. Correct answers of the participants according to the conditions within the TREFACE stages (Mean ± SEM). C-WR = Congruent Word Reading; I-WR = Incongruent Word Reading; C-RFE = Congruent Recognition of Facial Expression; I-RFE = Incongruent Recognition of Facial Expression. *Statistically significant difference, C-WR > C-RFE. **Statistically significant difference, I-WR > I-RFE. ***Statistically significant difference, C-RFE > I-RFE. Two-way repeated-measures ANOVA followed by Bonferroni t-test (p < 0.05).

4. Discussion

The aim of this study was to build a computerized instrument capable of reproducing the main attributes of the emotional Stroop paradigm for the Brazilian population. To this end, behavioral performance data from young university students were collected as a first step toward validating the instrument in the Brazilian context.

The results for overall performance in the TREFACE stages revealed that the rate of correct answers was significantly higher in the WR stage than in the RFE stage. Previous studies have reported that reading is a learned mechanism that becomes automatic, especially in assiduous readers [56] [57], whereas visual processing of faces involves a set of deeper brain structures, making this type of task harder to respond to [12] [20] [49] [58] [59] [60].

Functional magnetic resonance neuroimaging studies indicate patterns of functional specialization of the prefrontal cortex (PFC) related to two types of processes: sequential and monitoring [12] [20] [58] [59] [60]. Sequential-order processes in word reading would involve the left-hemisphere PFC, whereas more alternating or simultaneous monitoring processes would underlie the right-hemisphere PFC during recognition of emotional facial expressions [20] [52] [57] [59].

The analysis of the conditions revealed that the rate of correct answers was higher when the word coincided with the image, indicating that the congruent mode makes the task easier. These data agree with previous studies (e.g., [59]), which point out that the ability to solve tasks in a related sequence favors their resolution through synergy rather than competition.

Regarding the reading of words congruent with the image (C-WR), better performance was observed than in congruent recognition (C-RFE). This finding suggests that reading words in the congruent condition suffered no immediate impairment, whereas recognition did. It is possible that a cognitive conflict, even of low intensity, can compromise recognition; emotional recognition is a relevant aspect of human behavior, especially when contextual details can hinder processing [48] [52].

On the other hand, in WR, when the image did not match the word (I-WR), a higher rate of correct answers was observed than in RFE under the same condition (I-RFE). Here it can be argued that the word itself poses no conflict when the task is reading. In recognizing the emotional attributes of the face, however, the word generates an interference effect that hinders processing [57] [59], similar to the phenomenon described for C-WR and C-RFE.

Finally, comparisons between congruent and incongruent recognition indicate a higher rate of correct answers in the congruent condition (C-RFE) than in the incongruent condition (I-RFE). This phenomenon was not observed in the comparison between congruent and incongruent reading, where the maximum number of correct answers (a ceiling effect) was observed in both cases. This result may indicate the presence of an emotional conflict when participants had to judge emotional expressions in an incongruent situation. It is similar to results previously reported [12] [20] [49], which indicate an interference effect on the processing of visual facial stimuli in unrelated contexts, a phenomenon described by Etkin et al. [12] [23] in their model of emotional conflict.

Functionally, it is possible that the TREFACE task, specifically in the incongruent emotional recognition stage, compromises the performance of detection and control functions, increasing the difficulty of exerting inhibition (for example, a happy face with the word "fear") [12] [23] [52] [60] [61]. In this way, (verbal) inhibition capacity would be temporarily affected, which in turn makes it difficult to redirect new, appropriate responses to adapt quickly to the goal stated in the task instructions: "Speak the name of the emotion as quickly as possible, ignoring the word" [12] [23] [62].

Beyond the effects on cognitive control, it is important to highlight that the conflict also engaged the temporary maintenance of information in working memory. According to Baddeley [63] and Diamond [62], this type of memory has a very broad role within the model of executive function components: in addition to maintaining representations relevant to an ongoing activity over time, it allows this content to be processed in relation to a goal. Working memory also facilitates sustained attention, the monitoring of ongoing activity against its objectives, and flexibility in the use of these elements during task performance. These characteristics were observed in the participants' performance during TREFACE, suggesting this protocol as a way of evaluating emotional working memory.

Functional neuroimaging studies have revealed that emotional stimuli activate the amygdala [12] [23] [64]. However, when incongruent images are presented, amygdala activity is inhibited by specific activation of the anterior portion of the cingulate cortex (ACC). These data indicate that the ACC may exercise a type of (inhibitory) control over amygdala activity, improving efficiency in dealing with emotional conflicts [12] [23] [64]. In contrast, patients with post-traumatic stress disorder and treatment-resistant depression fail to adapt to the conflict [65] [66]. A reduction in ACC activity could compromise efficiency during emotional processing, leaving them unable to control emotional intrusions into their thoughts [59] [64] [67].

It is important to note that the participants in the present study had no neurological or psychiatric disorders and no signs of anxiety or depression. The present results can therefore be considered a reference standard for future research with clinical samples.

5. Conclusions

The results of the present study demonstrate that the TREFACE paradigm can be considered a good assessment tool for monitoring facial emotional conflict. It highlights the role that emotional aspects can play in the functioning of executive processes, which, in turn, require sustained attention, working memory, cognitive control, and mental flexibility, including motor-verbal control.

TREFACE can thus also be suggested as an experimental design that reproduces the emotional Stroop effect with faces for use in research with clinical samples; it is, moreover, a low-cost, easily accessible tool with the technical flexibility to fit into other assessment protocols.

In addition, our results are complementary to previous works carried out in our laboratories, where various properties of visual stimuli and brain substrates related to facial recognition were analyzed [32] [36] [68] [69] [70].

Acknowledgements

This work was supported in part by a FAPEMA grant to CT (FAPEMA no 01102/16). EP was the recipient of a PhD fellowship from the Pontifical Bolivarian University, Bucaramanga, Colombia, and LM was the recipient of a PhD fellowship from the Student Program—Graduate Studies Plan (PEC-PG), CAPES/CNPq, Brazil. MCHT was the recipient of a research fellowship from CNPq/Brazil (311582/2015-0).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Calder, A.J. and Young, A.W. (2005) Understanding the Recognition of Facial Identity and Facial Expression. Nature Reviews Neuroscience, 6, 641-651.
https://doi.org/10.1038/nrn1724
[2] Dolan, R.J. (2002) Neuroscience and Psychology: Emotion, Cognition, and Behavior. Science, 298, 1191-1194.
https://doi.org/10.1126/science.1076358
[3] Tovée, M.J. (1995) Face Recognition: What Are Faces for? Current Biology, 5, 480-482.
https://doi.org/10.1016/S0960-9822(95)00096-0
[4] Yadav, S.P. (2021) Emotion Recognition Model Based on Facial Expressions. Multimedia Tools and Applications, 80, 26357-26379.
https://doi.org/10.1007/s11042-021-10962-5
[5] Bliss-Moreau, E., Williams, L.A. and Karaskiewicz, C.L. (2021) Evolution of Emotion in Social Context. In: Shackelford, T.K. and Weekes-Shackelford, V.A., Eds., Encyclopedia of Evolutionary Psychological Science, Springer, Cham, 2487-2499.
https://doi.org/10.1007/978-3-319-19650-3_2459
[6] Fusar-Poli, P., Placentino, A., Carletti, F., Landi, P., Allen, P., Surguladze, S., Benedetti, F., Abbamonte, M., Gasparotti, R., Barale, F., Perez, J., McGuire, P. and Politi, P. (2009) Functional Atlas of Emotional Faces Processing: A Voxel-Based Meta-Analysis of 105 Functional Magnetic Resonance Imaging Studies. Journal of Psychiatry and Neuroscience, 34, 418-432.
[7] Haxby, J.V., Hoffman, E.A. and Gobbini, M.I. (2002) Human Neural Systems for Face Recognition and Social Communication. Biological Psychiatry, 51, 59-67.
https://doi.org/10.1016/S0006-3223(01)01330-0
[8] Kret, M.E., Stekelenburg, J.J., Roelofs, K. and de Gelder, B. (2013) Perception of Face and Body Expressions Using Electromyography, Pupillometry and Gaze Measures. Frontiers in Psychology, 4, Article No. 28.
https://doi.org/10.3389/fpsyg.2013.00028
[9] Ekman, P. and Friesen, W.V. (1971) Constants across Cultures in the Face and Emotion. Journal of Personality and Social Psychology, 17, 124-129.
http://psycnet.apa.org/record/1971-07999-001
https://doi.org/10.1037/h0030377
[10] Ekman, P. and Oster, H. (1979) Facial Expressions of Emotion. Annual Review of Psychology, 30, 527-554.
https://doi.org/10.1146/annurev.ps.30.020179.002523
[11] Holland, A.C., O’Connell, G. and Dziobek, I. (2021) Facial Mimicry, Empathy, and Emotion Recognition: A Meta-Analysis of Correlations. Cognition and Emotion, 35, 150-168.
https://doi.org/10.1080/02699931.2020.1815655
[12] Etkin, A., Egner, T., Peraza, D.M., Kandel, E.R. and Hirsch, J. (2006) Resolving Emotional Conflict: A Role for the Rostral Anterior Cingulate Cortex in Modulating Activity in the Amygdala. Neuron, 51, 871-882.
https://doi.org/10.1016/j.neuron.2006.07.029
[13] Jehna, M., Neuper, C., Ischebeck, A., Loitfelder, M., Ropele, S., Langkammer, C., Ebner, F., Fuchs, S., Schmidt, R., Fazekas, F. and Enzinger, C. (2011) The Functional Correlates of Face Perception and Recognition of Emotional Facial Expressions as Evidenced by fMRI. Brain Research, 1393, 73-83.
https://doi.org/10.1016/j.brainres.2011.04.007
[14] Šimić, G., Tkalčić, M., Vukić, V., Mulc, D., Španić, E., Šagud, M., Olucha-Bordonau, F.E., Vukšić, M. and Hof, P.R. (2021) Understanding Emotions: Origins and Roles of the Amygdala. Biomolecules, 11, Article No. 823.
https://doi.org/10.3390/biom11060823
[15] Fox, C.J., Iaria, G. and Barton, J.J.S. (2008) Disconnection in Prosopagnosia and Face Processing. Cortex, 44, 996-1009.
https://doi.org/10.1016/j.cortex.2008.04.003
[16] Haxby, J.V., Hoffman, E.A. and Gobbini, M.I. (2000) The Distributed Human Neural System for Face Perception. Trends in Cognitive Sciences, 4, 223-233.
https://doi.org/10.1016/S1364-6613(00)01482-0
[17] Gobbini, M.I. and Haxby, J.V. (2007) Neural Systems for Recognition of Familiar Faces. Neuropsychologia, 45, 32-41.
https://doi.org/10.1016/j.neuropsychologia.2006.04.015
[18] Adolphs, R., Damasio, H., Tranel, D. and Damasio, A.R. (1996) Cortical Systems for the Recognition of Emotion in Facial Expressions. Journal of Neuroscience, 16, 7678-7687.
https://doi.org/10.1523/JNEUROSCI.16-23-07678.1996
[19] Clayson, P.E. and Larson, M.J. (2013) Adaptation to Emotional Conflict: Evidence from a Novel Face Emotion Paradigm. PLOS ONE, 8, e75776.
https://doi.org/10.1371/journal.pone.0075776
[20] Reeck, C. and Egner, T. (2011) Affective Privilege: Asymmetric Interference by Emotional Distracters. Frontiers in Psychology, 2, Article No. 232.
https://doi.org/10.3389/fpsyg.2011.00232
[21] Avram, J., Balteş, F.R., Miclea, M. and Miu, A.C. (2010) Frontal EEG Activation Asymmetry Reflects Cognitive Biases in Anxiety: Evidence from an Emotional Face Stroop Task. Applied Psychophysiology Biofeedback, 35, 285-292.
https://doi.org/10.1007/s10484-010-9138-6
[22] Demenescu, L.R., Kortekaas, R., den Boer, J.A. and Aleman, A. (2010) Impaired Attribution of Emotion to Facial Expressions in Anxiety and Major Depression. PLOS ONE, 5, e15058.
https://doi.org/10.1371/journal.pone.0015058
[23] Etkin, A., Prater, K.E., Hoeft, F., Menon, V. and Schatzberg, A.F. (2010) Failure of Anterior Cingulate Activation and Connectivity with the Amygdala during Implicit Regulation of Emotional Processing in Generalized Anxiety Disorder. American Journal of Psychiatry, 167, 545-554.
https://doi.org/10.1176/appi.ajp.2009.09070931
[24] Kryza-Lacombe, M., Kiefer, C., Schwartz, K.T.G., Strickland, K. and Wiggins, J.L. (2020) Attention Shifting in the Context of Emotional Faces: Disentangling Neural Mechanisms of Irritability from Anxiety. Depression and Anxiety, 37, 645-656.
https://doi.org/10.1002/da.23010
[25] Deldin, P.J., Keller, J., Gergen, J.A. and Miller, G.A. (2000) Right-Posterior Face Processing Anomaly in Depression. Journal of Abnormal Psychology, 109, 116-121.
https://doi.org/10.1037//0021-843X.109.1.116
[26] Isaac, L., Vrijsen, J.N., Eling, P., Van Oostrom, I., Speckens, A. and Becker, E.S. (2012) Verbal and Facial-Emotional Stroop Tasks Reveal Specific Attentional Interferences in Sad Mood. Brain and Behavior, 2, 74-83.
https://doi.org/10.1002/brb3.38
[27] Smolker, H.R., Wang, K., Luciana, M., Bjork, J.M., Gonzalez, R., Barch, D.M., McGlade, E.C., Kaiser, R.H., Friedman, N.P., Hewitt, J.K. and Banich, M.T. (2022) The Emotional Word-Emotional Face Stroop Task in the ABCD Study: Psychometric Validation and Associations with Measures of Cognition and Psychopathology. Developmental Cognitive Neuroscience, 53, Article ID: 101054.
https://doi.org/10.1016/j.dcn.2021.101054
[28] Klimova, A., Bryant, R.A., Williams, L.M. and Felmingham, K.L. (2013) Dysregulation in Cortical Reactivity to Emotional Faces in PTSD Patients with High Dissociation Symptoms. European Journal of Psychotraumatology, 4, Article ID: 20430.
https://doi.org/10.3402/ejpt.v4i0.20430
[29] Kurtić, A. and Pranjić, N. (2011) Facial Expression Recognition Accuracy of Valence Emotion among High and Low Indicated PTSD. Primenjena Psichologija, 4, 5-11.
https://doi.org/10.19090/pp.2011.1.5-11
[30] Pelphrey, K., Sasson, N., Davis Goldman, B. and Piven, J. (2002) Visual Scanning of Faces in Autism. Journal of Autism and Development Disorders, 32, 249-261.
https://doi.org/10.1023/A:1016374617369
[31] Safar, K., Vandewouw, M.M. and Taylor, M.J. (2021) Atypical Development of Emotional Face Processing Networks in Autism Spectrum Disorder from Childhood through to Adulthood. Developmental Cognitive Neuroscience, 51, Article ID: 101003.
https://doi.org/10.1016/j.dcn.2021.101003
[32] Satler, C., Belham, F.S., Garcia, A., Tomaz, C. and Tavares, M.C.H. (2015) Computerized Spatial Delayed Recognition Span Task: A Specific Tool to Assess Visuospatial Working Memory. Frontiers in Aging Neuroscience, 7, Article No. 53.
https://doi.org/10.3389/fnagi.2015.00053
[33] Parada-Fernández, P., Oliva-Macías, M., Amayra, I., López-Paz, J.F., Lázaro, E., Martínez, ó., Jometón, A., Berrocoso, S., García de Salazar, H. and Pérez, M. (2015) Accuracy and Reaction Time in Recognition of Facial Emotions in People with Multiple Sclerosis. Revista de Neurologia, 61, 433-440.
https://doi.org/10.33588/rn.6110.2015225
[34] De Almondes, K.M., Holanda, F.W.N. and Alves, N.T. (2016) Sleep Deprivation and Implications for Recognition and Perception of Facial Emotions. Sleep and Biological Rhythms, 14, 13-22.
https://doi.org/10.1007/s41105-015-0029-3
[35] Gasbarri, A., Pompili, A., d’Onofrio, A., Cifariello, A., Tavares, M.C. and Tomaz, C. (2008) Working Memory for Emotional Facial Expressions: Role of the Estrogen in Young Women. Psychoneuroendocrinology, 33, 964-972.
https://doi.org/10.1016/j.psyneuen.2008.04.007
[36] Belham, F.S., Satler, C., Garcia, A., Tomaz, C., Gasbarri, A., Rego, A. and Tavares, M.C.H. (2013) Age-Related Differences in Cortical Activity during a Visuo-Spatial Working Memory Task with Facial Stimuli. PLOS ONE, 8, e75778.
https://doi.org/10.1371/journal.pone.0075778
[37] Stroop, J.R. (1935) Studies of Interference in Serial Verbal Reactions. Journal of Experimental Psychology, 18, 643-662.
https://doi.org/10.1037/h0054651
[38] Golden, C.J., Freshwater, S.M. and Zarabeth, G. (2003) Stroop Color and Word Test Children’s Version for Ages 5-14: A Manual for Clinical and Experimental Uses. Stoeling Press, Wood Dale.
https://doi.org/10.1037/t06065-000
[39] Lezak, M.D. (1995) Neuropsychological Assessment. 3rd Edition, Oxford University Press, New York.
[40] Strauss, E., Sherman, E. and Spreen, O. (2006) A Compendium of Neuropsychological Tests: Administration, Norms, and Commentary. Oxford University Press, New York.
[41] MacLeod, C.M. (1992) The Stroop Task: The “Gold Standard” of Attentional Measures. Journal of Experimental Psychology: General, 121, 12-14.
https://doi.org/10.1037/0096-3445.121.1.12
[42] Yiend, J. (2010) The Effects of Emotion on Attention: A Review of Attentional Processing of Emotional Information. Cognition and Emotion, 24, 3-47.
https://doi.org/10.1080/02699930903205698
[43] Sweet, J.J. (1999) Forensic Neuropsychology: Fundamentals and Practice. Psychology Press, New York.
[44] Acero, J.J. and Morales, A. (2003) La neurociencia cognitiva como ciencia de la interpretación: El paradigma del stroop emocional. Jornadas Hispano-Portuguesas de Filosofía Analítica, Santiago de Compostela, 273-286.
[45] Arana Martínez, J., Cabaco, A. and Sanfeliú Giner, M. (1997) La tarea de Interferencia Stroop: 110 años después del informe de Cattel de identificación de colores y palabras. Revista de Historia de la Psicologia, 18, 27-38.
[46] Williams, J.M.G., Mathews, A. and MacLeod, C. (1996) The Emotional Stroop Task and Psychopathology. Psychological Bulletin, 120, 3-24.
https://doi.org/10.1037/0033-2909.120.1.3
[47] Botvinick, M.M., Braver, T.S., Barch, D.M., Carter, C.S. and Cohen, J.D. (2001) Conflict Monitoring and Cognitive Control. Psychological Review, 108, 624-652.
https://doi.org/10.1037//0033-295x.108.3.624
[48] Kauschke, C., Bahn, D., Vesker, M. and Schwarzer, G. (2019) The Role of Emotional Valence for the Processing of Facial and Verbal Stimuli—Positivity or Negativity Bias. Frontiers in Psychology, 10, Article No. 1654.
https://doi.org/10.3389/fpsyg.2019.01654
[49] Zhu, X.R., Zhang, H.J., Wu, T.T., Luo, W.B. and Luo, Y.J. (2010) Emotional Conflict Occurs at an Early Stage: Evidence from the Emotional Face-Word Stroop Task. Neuroscience Letters, 478, 1-4.
https://doi.org/10.1016/j.neulet.2010.04.036
[50] Van de Riet, W.A.C. and de Gelder, B. (2008) Watch the Face and Look at the Body! Reciprocal Interaction between the Perception of Facial and Bodily Expressions. Netherlands Journal of Psychology, 64, 143-151.
https://doi.org/10.1007/BF03076417
[51] Bimler, D.L., Skwarek, S.J. and Paramei, G.V. (2013) Processing Facial Expressions of Emotion: Upright vs. Inverted Images. Frontiers in Psychology, 4, Article No. 54.
https://doi.org/10.3389/fpsyg.2013.00054
[52] Ovaysikia, S., Tahir, K.A., Chan, J.L. and DeSouza, J.F.X. (2011) Word Wins over Face: Emotional Stroop Effect Activates the Frontal Cortical Network. Frontiers in Human Neuroscience, 4, Article No. 234.
https://doi.org/10.3389/fnhum.2010.00234
[53] Ekman, P. and Friesen, W.V. (1976) Pictures of Facial Affect. Consulting Psychologists Press, Palo Alto, CA.
[54] Ekman, P. and Friesen, W.V. (1978) Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto, CA.
https://doi.org/10.1037/t27734-000
[55] Ekman, P., Friesen, W.V. and Hager, J.C. (2002) Facial Action Coding System. Manual and Investigator’s Guide. Research Nexus, Salt Lake City, UT.
[56] Stenberg, G., Wiking, S. and Dahl, M. (1998) Judging Words at Face Value: Interference in a Word Processing Task Reveals Automatic Processing of Affective Facial Expressions. Cognition and Emotion, 12, 755-782.
https://doi.org/10.1080/026999398379420
[57] Lupyan, G., Rahman, R.A., Boroditsky, L. and Clark, A. (2020) Effects of Language on Visual Perception. Trends in Cognitive Sciences, 24, 930-944.
https://doi.org/10.1016/j.tics.2020.08.005
[58] Anes, M.D. and Kruer, J.L. (2004) Investigating Hemispheric Specialization in a Novel Face-Word Stroop Task. Brain and Language, 89, 136-141.
https://doi.org/10.1016/S0093-934X(03)00311-0
[59] Egner, T., Etkin, A., Gale, S. and Hirsch, J. (2008) Dissociable Neural Systems Resolve Conflict from Emotional versus Nonemotional Distracters. Cerebral Cortex, 18, 1475-1484.
https://doi.org/10.1093/cercor/bhm179
[60] Skagerlund, K., Skagenholt, M., Hamilton, J.P., Slovic, P. and Västfjäll, D. (2021) Investigating the Neural Correlates of the Affect Heuristic Using Functional Magnetic Resonance Imaging. Journal of Cognitive Neuroscience, 33, 2265-2278.
https://doi.org/10.1162/jocn_a_01758
[61] Feng, C., Becker, B., Huang, W., Wenhao, W., Eickhoff, S.B. and Chen, T. (2018) Neural Substrates of the Emotion-Word and Emotional Counting Stroop Tasks in Healthy and Clinical Populations: A Meta-Analysis of Functional Brain Imaging Studies. NeuroImage, 173, 258-274.
https://doi.org/10.1016/j.neuroimage.2018.02.023
[62] Diamond, A. (2013) Executive Functions. Annual Review of Psychology, 64, 135-168.
https://doi.org/10.1146/annurev-psych-113011-143750
[63] Baddeley, A. (2000) The Episodic Buffer: A New Component of Working Memory? Trends in Cognitive Sciences, 4, 417-423.
https://doi.org/10.1016/S1364-6613(00)01538-2
[64] West, H.V., Burgess, G.C. and Dust, J. (2021) Amygdala Activation in Cognitive Task fMRI Varies with Individual Differences in Cognitive Traits. Cognitive, Affective, & Behavioral Neuroscience, 21, 254-264.
https://doi.org/10.3758/s13415-021-00863-3
[65] Tabibnia, G. (2020) An Affective Neuroscience Model of Boosting Resilience in Adults. Neuroscience & Biobehavioral Reviews, 115, 321-350.
https://doi.org/10.1016/j.neubiorev.2020.05.005
[66] Shalev, A., Liberzon, I. and Marmar, C. (2017) Post-Traumatic Stress Disorder. The New England Journal of Medicine, 376, 2459-2469.
https://doi.org/10.1056/NEJMra1612499
[67] Etkin, A. and Schatzberg, A.F. (2011) Common Abnormalities and Disorder-Specific Compensation during Implicit Regulation of Emotional Processing in Generalized Anxiety and Major Depressive Disorders. American Journal of Psychiatry, 168, 968-978.
https://doi.org/10.1176/appi.ajp.2011.10091290
[68] Sá Canabarro, S.L., Garcia, A., Satler, C. and Tavares, M.C.H. (2017) Interaction between Neural and Cardiac Systems during Execution of the Stroop Task by Young Adults: Electroencephalographic Activity and Heart Rate Variability. AIMS Neuroscience, 4, 28-51.
https://doi.org/10.3934/Neuroscience.2017.1.28
[69] Belham, F.S., Tavares, M.C.H., Satler, C., Garcia, A., Rodrigues, R.C., Canabarro, S.L.S. and Tomaz, C. (2017) Negative Facial Expressions—But Not Visual Scenes— Enhance Human Working Memory in Younger and Older Participants. Frontiers in Pharmacology, 8, Article No. 668.
https://doi.org/10.3389/fphar.2017.00668
[70] Uribe, C.E., Garcia, A. and Tomaz, C. (2011) Electroencephalographic Brain Dynamics of Memory Encoding in Emotionally Arousing Context. Frontiers in Behavioral Neuroscience, 5, Article No. 35.
https://doi.org/10.3389/fnbeh.2011.00035

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.