EEG Mapping of Cortical Activation Related to Emotional Stroop with Facial Expressions: A TREFACE Study

TREFACE (Test for Recognition of Facial Expressions with Emotional Conflict) is a computerized model, based on the Stroop paradigm, for investigating the emotional factor in executive functions through the recognition of emotional expressions in human faces. To investigate the influence of the emotional component at the cortical level, electroencephalographic (EEG) recording was used to measure the involvement of cortical areas during task execution. Thirty Brazilian native Portuguese-speaking graduate students were evaluated for their anxiety and depression levels and for their well-being at the time of the session. The EEG was recorded from 19 channels during execution of the TREFACE test in the 3 stages established by the model (guided training, reading, and recognition), both under the congruent condition, when the image corresponds to the word shown, and the incongruent condition, when there is no correspondence. The results showed better performance in the reading stage, while beta and gamma frequencies were more intensely and widely distributed in the recognition stage. This mapping of cortical activity can help to understand how words and images of faces are regulated in everyday life and in clinical contexts, suggesting an integrated model that includes the neural bases of the regulation strategy.


TREFACE (Test for Recognition of Facial Expressions with Emotional Conflict)
is a neuropsychological assessment model in Portuguese based on the emotional Stroop paradigm, which assesses components of executive functions, specifically working memory, inhibitory control, and cognitive flexibility. It is a computational tool composed of a collection of emotional facial expressions and words that may or may not be congruent with those expressions, reproducing characteristics of the emotional Stroop test. Previous results revealed that a task based on word reading allows better performance than one based on face recognition. It was also identified that when the word coincides with the image (congruent condition) there is an advantage in hit rate, whereas recognition is more difficult when the image is not congruent with the word. In general, the results suggest that the emotional attribute can compromise the ability to recognize faces, affecting the functioning of mechanisms such as cognitive control and emotion regulation.
In view of these behavioral results, and taking a step forward, the present study sought to investigate the neurobiological substrate of this phenomenon through electrophysiological analyses.
The electroencephalographic (EEG) record is a method for measuring electrical brain activity that allows correlations to be established between patterns of cortical activation and the behavior presented by the individual. The EEG records the postsynaptic activity of cortical neurons, corresponding to the cortical response to some event, and also reveals manifestations of the activity of subcortical structures [1]. This activity appears in the EEG acquisition as electrical waves, which are classified according to their frequency band. The dominance of each class of brainwaves can be interpreted in terms of the physiological and psychological state of the individual. The high temporal resolution of the technique favors assessing the cerebral manifestation of a phenomenon at the moment closest to its occurrence [2]. This important feature also enables the use of EEG both for studies of normal brain activity and in the presence of pathology [3].
The PFC also receives inputs from the visual and auditory processing areas of the occipital and temporal lobes and, in particular, the OFPFC is strongly interconnected with cognitive and emotional processing areas. This circuit originates in the inferior lateral and anterior ventral prefrontal cortex, projects to the ventromedial caudate nucleus, and receives information from other cortical areas. It has therefore been suggested to be involved in aspects of social behavior such as inhibitory control, empathy, and compliance with social rules [13]. According to Diamond [14], such processes have a supervisory role, letting us act against our instincts or intuition and giving preference to premeditated, controlled, and planned behaviors.
The visual processing of facial stimuli has been gaining interest in neuroscience. The facial recognition (FR) mechanism is closely linked to the functioning of EFs. An individual's emotional state is conveyed both by the behavioral context and by facial expressions [15] [16] [17]. Several biological mechanisms are involved in the processing of emotional meaning. There is broad consensus in the literature on the involvement of the amygdala and OFPFC [18] [19] [20] [21] [22]; these two structures are of great importance in recognizing the emotional expressions of the face. In addition, other, much more specific neural subsystems associated with the recognition of certain emotions have been identified, such as fear [23] [24], joy [25], anger and disgust [15]. Cognitive processes such as memory, language, attention control, and the basic components of EF are also strongly involved in the efficiency of emotional facial processing [26] [27] [28] [29].
By means of the Stroop paradigm, it is possible to recognize how emotional attributes influence cognitive processes. Some studies have shown that the speed of naming words that convey emotion could be an indicator of the subjects' concerns or anxieties; thus, words that refer to emotions produce greater interference than "neutral" words [19] [30] [31]. These studies demonstrate the importance of the emotional Stroop test as an effective tool for establishing a model of emotional conflict. Thus, the testing model proposed by TREFACE can probe the quality of the active cognitive control mechanisms that mediate conflict, as well as those that provide the resources to identify and solve the problem [32]. Furthermore, these findings support existing theories of emotional regulation, which involve the dorsolateral prefrontal cortex and the anterior cingulate cortex in situations of cognitive conflict. The detection site resides in medial prefrontal structures, including the dorsal anterior cingulate cortex, which extends dorsally into premotor regions. Once the system's conflict detection is activated, a flow of information goes to the cognitive control system, which initiates the resolution of emotional conflicts and modulates the processing of information, facilitating the processing of the appropriate response while hampering the processing of the incorrect response, with the dorsolateral region of the prefrontal cortex involved in the resolution [19].
Additionally, several EEG studies have revealed the involvement of activities in specific bands related to EF. Oscillations in the theta band, particularly in the frontal lobe, have been identified as an indicator of demands on working memory due to the active recruitment of cognitive resources for task resolution [7] [33] [34] [35] [36] [37].
Alpha band activity, in turn, is related to the inhibition of brain activities not involved in the mental task at hand, suggesting predictive markers of performance during working memory tasks [7] [36] [38] [39] [40]. Beta activity reflects a memory-promoting state, moderated by modality-independent attentional or inhibitory processes [7] [37] [41]. Finally, gamma oscillations support the maintenance of resource-specific information and reflect the formation of content representations in visual processing [7] [42].
The emotional component involved in perceptual and cognitive processing makes recognition more complex, requiring greater activation of brain networks and mental resources to reach an efficient solution. From this perspective, the present work aimed to substantiate the model proposed by TREFACE for monitoring emotional conflict with evidence of neurophysiological mechanisms specifically identified by mapping cortical activity via EEG recording. Thus, it is possible to provide details related to both behavioral performance after exposure to the task at hand and the cortical mechanisms involved in solving problems.

Participants
Thirty volunteers aged 18 to 25 participated in this study (15 women, 20.27 ± 0.60, and 15 men, 21.40 ± 0.69 years old). The participants were Brazilian Portuguese native speakers recruited through an advertisement within the Campus of the University of Brasília (UnB), Brasília, Brazil, and on social media (Facebook and WhatsApp), who agreed with the ethical terms for research with humans in Resolution 196/96 CNS/MS. All participants declared that they had no history of neurological or psychiatric disorders and had not consumed drugs, alcoholic beverages, or energy drinks in the 24 hours prior to the EEG recording. None of them had impaired hearing or visual or speech problems. In addition to not reporting poor sleep the night before the evaluation, they presented scores below 50 on the State and Trait Anxiety Inventory (STAI-E/T), below 20 on the Beck Depression Inventory (BDI-II), and above 60% on the Self-Perception of Quality of Life Questionnaire (WHOQOL-bref).
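The inclusion thresholds above amount to a simple screening rule. As a minimal sketch (the function name and interface are ours, not part of the study's software), the rule can be written as:

```python
def eligible(stai: float, bdi: float, whoqol_pct: float) -> bool:
    """Screening rule with the thresholds reported in the study:
    STAI-E/T below 50, BDI-II below 20, WHOQOL-bref above 60%."""
    return stai < 50 and bdi < 20 and whoqol_pct > 60
```

Any participant failing a single criterion is excluded; the three scales are treated as independent gates rather than combined into a composite score.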
Procedures
The TREFACE test presents images of emotional facial expressions, taken from Ekman and Friesen (1976), together with words in red that correspond to the basic emotions present in the images. During the task, the participant must respond verbally according to the instructions presented before each step. The first stage is called guided recognition (GR): an image is presented, followed by a word after a delay, and the subject is asked to silently observe the relationship between the facial expression and the written word. This step is intended to familiarize the participant with the task; no recording or correction is performed.
In the following steps, the word is presented in the center of the image, with which it may or may not be congruent, and a verbal response is expected according to the instructions given. In the second stage, total reading (LT), the participant is instructed to quickly read the written word; in the third stage, total recognition (RT), to verbalize the emotional expression indicated by the character's face, ignoring the written word. Each participant was exposed to the same stimuli and read the same content.
Each stage had 70 stimuli presented in a predefined sequence with an interval of 100 ms; the delay in the first stage was 1000 ms. The test platform records the responses dictated by the subject and also allows automatic marking of the EEG record at each stimulus presentation, a configuration useful for this study.

Electroencephalographic recording
Measurements of cortical activity were collected using the Neuron-Spectrum-4/EPM device (NeuroSoft®, Ivanovo, Russia). The EEG was recorded from a 19-channel cap placed over the scalp according to the international 10/20 system, with two sponges soaked in saline solution under points Fp1 and Fp2 to reduce the pressure of the cap on the skin. The impedance at the cap points was checked constantly and kept below 10 kΩ throughout the session by applying conductive gel to each of the cap's passive sensor electrodes with a syringe with a special tip. All data were sampled at 1024 Hz, with the amplification bandwidth set between 0.1 and 70.0 Hz.
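The 0.1 - 70 Hz acquisition bandwidth corresponds to a band-pass stage before digitization. As a minimal Python sketch of an equivalent offline filter (the study used the NeuroSoft hardware, not this code; filter order and type are our assumptions):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1024.0  # acquisition sampling rate (Hz), as in the study

def bandpass(data, low=0.1, high=70.0, fs=FS, order=4):
    """Zero-phase Butterworth band-pass mirroring the 0.1-70 Hz
    amplifier bandwidth used for the EEG acquisition."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, data, axis=-1)

# Synthetic check: a 10 Hz (in-band) plus 200 Hz (out-of-band) mixture.
t = np.arange(0, 4.0, 1.0 / FS)
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 200 * t)
y = bandpass(x)  # the 200 Hz component is strongly attenuated
```

Zero-phase filtering (`sosfiltfilt`) avoids shifting event latencies, which matters when stimulus markers on the trace are used to cut epochs.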
Data collection was performed at the Laboratory of Neuroscience and Behavior of the University of Brasília in a session divided into two segments: the screening and the experiment. When scheduling their participation, participants were instructed to be available for a one-hour period; to abstain from alcoholic beverages, drugs, relaxants, stimulants, and energy drinks for the 24 hours beforehand; to avoid strenuous physical exercise for at least three hours prior to the test; to try to get a good night's sleep the day before; to wash their hair well with neutral shampoo; and to bring prescription glasses if used frequently. After the screening, the participant was taken to the EEG recording room, which had lighting suitable for reading and noise control. The individuals were comfortably seated 40 cm from a 17" computer screen on which the stimuli would be presented, and the electrode cap was placed for simultaneous EEG recording. After preparation, a baseline corresponding to basal cortical activity was recorded with eyes closed for three minutes. The test then began, following the aforementioned steps (GR, LT, RT), each preceded by a baseline measurement; at the end of the last step, another baseline was recorded. Throughout the recording, the marking of each test stimulus, provided by the TestPlatform, was visually verified in the EEG tracing. At the end of the experimental session, the cap was removed.
Data analysis
For the behavioral analysis in TREFACE, the performance and audio files provided by the platform were used to calculate the number of hits, i.e., when the dictated answer coincided with the expected answer; the number of errors, i.e., when the answers did not coincide; and the number of omissions, when there was no response. Reaction time was also calculated with a voice-detection module developed in Matlab, which takes the stimulus presentation moment recorded in the performance file and finds the point from which the voice signal is detected in the audio file.
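The voice-detection step can be sketched as an energy-threshold onset detector. This is a minimal Python illustration of the idea, not the actual Matlab module; the window size, threshold factor, and baseline assumption (no speech in the first 100 ms after the stimulus) are ours:

```python
import numpy as np

def voice_onset_ms(audio, fs, win_ms=10, k=5.0, baseline_ms=100):
    """Estimate vocal reaction time from an audio trace aligned to
    stimulus onset: the first window whose RMS energy exceeds k times
    the noise floor estimated from the initial baseline_ms of silence.
    Assumes the response never starts within the baseline window."""
    win = int(fs * win_ms / 1000)
    n = len(audio) // win
    rms = np.sqrt(np.mean(audio[: n * win].reshape(n, win) ** 2, axis=1))
    n_base = max(1, baseline_ms // win_ms)
    floor = rms[:n_base].mean() + 1e-12   # noise-floor estimate
    above = np.nonzero(rms > k * floor)[0]
    if len(above) == 0 or above[0] < n_base:
        return None                        # omission, or speech too early
    return int(above[0] * win_ms)          # latency in ms from the stimulus
```

Returning `None` maps naturally onto the omission category used in the behavioral scoring.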
For behavioral data, the SigmaStat 3.5 statistical program was used; a Wilcoxon test for paired samples was performed to compare the reading and recognition steps. A two-way ANOVA for repeated measures was used to compare conditions within the TREFACE steps, in which factor 1 was the step, with two levels (reading emotional words and recognizing the emotional expression of the face), factor 2 was the condition, with two levels (congruent and incongruent), and the dependent variable was the number of hits or the reaction time. The significance level established for the analyses was p < 0.05.
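For readers who want to reproduce the paired comparisons outside SigmaStat, the Wilcoxon test and the paired t test (the latter is used below for the channel-wise EEG maps) are available in SciPy. This sketch runs them on hypothetical hit rates that only mimic the reported group means, not the real data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical per-participant hit rates (%) for the two steps,
# 30 paired observations mirroring the design (not the real data).
reading = np.full(30, 100.0)                      # reading was at ceiling
recognition = rng.normal(72.8, 9.4, size=30).clip(0, 99)

# Non-parametric paired comparison of the steps, as for the hit rates.
w_stat, p_wilcoxon = stats.wilcoxon(reading, recognition)

# Paired t test, as applied channel-wise to the topographic maps.
t_stat, p_ttest = stats.ttest_rel(reading, recognition)
```

The repeated-measures ANOVA itself is not in SciPy; packages such as statsmodels (`AnovaRM`) provide it.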
All EEG data were processed using codes programmed in Matlab, integrated into EEGLAB (version 9.0.4.5 [45]; http://sccn.ucsd.edu/eeglab). Initially, the records collected at 1024 Hz were resampled to 200 Hz to optimize data processing. The fragments corresponding to the TREFACE steps were extracted from the continuous records and, in each fragment, the record was digitally separated into non-overlapping epochs according to the markings made during the performance, later labeled according to the task condition. The fragments and epochs were submitted to the Infomax algorithm for decomposition into independent components (ICA) [46]. Components related to blinking or eye movement were removed from the original data; the record was then recalculated from the remaining components, filtered, and processed to extract measurements. Spectral power was computed from the precomputed EEG data and displayed for analysis in the frequency ranges theta (4 - 8 Hz), alpha (8 - 13 Hz), beta (13 - 30 Hz), and gamma (above 30 Hz). Topographic maps of cortical activity were built from the power recorded at each electrode, with smoothing around the channels to project the activation into the spaces between them. The time window length for analyzing the EEG signals was set at 2 seconds. Spectral power (in μV) was made available for statistical analysis (paired t test; p < 0.05) within EEGLAB itself to compare the topographic maps of each stage and condition of TREFACE.
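The core of this pipeline (downsampling to 200 Hz, then band-limited spectral power over 2 s windows) can be sketched in Python. This is an illustration with NumPy/SciPy, not the EEGLAB code; the ICA and smoothing steps are omitted, and the gamma upper edge (70 Hz) is assumed from the amplifier bandwidth:

```python
import numpy as np
from scipy.signal import resample_poly, welch

# Band limits as used in the analysis; the 70 Hz gamma ceiling is assumed.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 70)}

def band_powers(epoch, fs=200, win_s=2):
    """Mean spectral power per band for one epoch (channels x samples),
    estimated with Welch's method over 2 s windows, as in the analysis."""
    f, psd = welch(epoch, fs=fs, nperseg=int(win_s * fs))
    return {name: psd[:, (f >= lo) & (f < hi)].mean(axis=1)
            for name, (lo, hi) in BANDS.items()}

# Downsample a raw 1024 Hz record to 200 Hz (ratio 25/128), then score it.
rng = np.random.default_rng(1)
raw = rng.standard_normal((19, 1024 * 10))        # 19 channels, 10 s
downsampled = resample_poly(raw, up=25, down=128, axis=-1)
powers = band_powers(downsampled)                 # dict of 19-channel arrays
```

`resample_poly` applies an anti-aliasing filter internally, which is what makes the 1024-to-200 Hz reduction safe before spectral estimation.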

Results
The Wilcoxon test for paired samples compared performance between the reading and recognition steps. The post hoc analysis for multiple comparisons (Bonferroni's t test) of performance showed that, regarding the condition factor, the congruent condition differed from the incongruent condition (t = 11.095, p < 0.001; C > I), with a higher hit rate for C (91.25 ± 1.23) than for I (81.55 ± 2.62). Additionally, in the step factor, reading differed from recognition (t = 19.531, p < 0.001; LT > RT), with a higher number of correct answers for LT (100.00 ± 0.00) than for RT (72.80 ± 1.71).
The analysis also identified that, in the step factor within the congruent condition, performance in reading differed from performance in recognition (t = 10.641, p < 0.001; L-C > R-C), being higher for L-C (100.00 ± 0.00) than for R-C (82.50 ± 0.95). The same result was observed in the incongruent condition (t = 22.440, p < 0.001; L-I > R-I), higher for L-I (100.00 ± 0.00) than for R-I (63.10 ± 2.12). Finally, for the condition factor within the step, no significant difference was found in the reading step; in the recognition step, the conditions differed (t = 15.691, p < 0.001; R-C > R-I), with a higher hit rate in R-C (82.50 ± 0.95) than in R-I (63.10 ± 2.12).
When comparing the performance obtained by the participants from the average reaction time in the conditions within the TREFACE steps, a two-way ANOVA for repeated measures showed a statistically significant effect in the condition factor (F[1,29] = 10.241, p = 0.003) and in the step factor (F[1,29] = 6.229, p = 0.019). However, there was no statistically significant effect on the interaction between condition and step (F[1,29] = 0.568, p = 0.457).
A post hoc analysis for multiple comparisons (Bonferroni's t test) of the reaction time showed that, for the condition factor, the incongruent condition differed from the congruent one (t = 3.200, p = 0.003; I > C), with a longer reaction time for I (293.58 ± 13.41) than for C (267.81 ± 9.64). Additionally, in the step factor, reading differed from recognition (t = 2.496, p = 0.019; LT > RT), with a longer time for LT (307.86 ± 8.43) than for RT (254.82 ± 13.53). The analysis also identified that, in the step factor within the congruent condition, reading differed from recognition (t = 2.602, p = 0.013; L-C > R-C), longer for L-C (297.86 ± 9.76) than for R-C (238.73 ± 12.64). The same result was observed in the incongruent condition (t = 2.074, p = 0.045; L-I > R-I), longer for L-I (317.54 ± 11.47) than for R-I (270.39 ± 20.60). Finally, for the condition factor, there was no statistically significant difference within the reading stage, while within the recognition stage the analysis indicated a significant difference between the incongruent and congruent conditions (t = 2.804, p = 0.007; R-I > R-C), longer for R-I (270.39 ± 20.60) than for R-C (238.73 ± 12.64).
In the global analysis, comparing the TREFACE steps with a paired t test across the recording channels, there was greater activation in the recognition step (RT) than in the reading step (LT) (Figure 2). For the theta band, extended activation with a statistically significant difference in the level of electrical activity was found in the frontotemporal region, including the parietal and occipital regions of the left hemisphere (F7, F3, T3, C3, T5, P3, O1), in the midline between Fz and Cz, and across the entire frontotemporal line of the right hemisphere (F8, T4, T6), greater for RT. The alpha range showed greater activation for RT, with a significant difference in the left frontal (F3) and central (Fz) regions, in addition to the right frontotemporal region (F8, T4, T6). Electrical activity in the beta range was significantly higher for RT in the bilateral frontal region (F7, F3, F4), in the left temporal region (T3, T5), in the frontoparietal midline (Fz, Cz, Pz), and in the parietotemporal region of the right hemisphere (P4 and T6). A similar pattern, with a statistically significant difference, was found for the gamma frequencies in the bilateral frontal region (F7, F3, F4), in the left posterior temporal region (T5), in the frontoparietal midline (Fz, Cz, Pz), and in the parietotemporal region of the right hemisphere (P4 and T6), closing the right frontoparietal line with C4, again greater for RT.
In comparing the conditions, congruent (C) and incongruent (I), and the steps of reading (L) and recognition (R), no significant results were found for the conditions within the steps (L-C vs L-I and R-C vs R-I). The findings for comparisons of the steps between conditions (L-C vs R-C, L-I vs R-I, L-C vs R-I and L-I vs R-C) are described in Figure 3. For the theta band (θ), greater cortical activity was identified over the left frontotemporal and central parietal extension in the recognition stage across all conditions, with a significant difference in F7, F3, Fz, T3, C3, T5 (except L-C vs R-C), P3, Pz (except L-I vs R-I and L-I vs R-C), and O1; in the right hemisphere, frontotemporal activation was observed with significant differences in F8, F4, T4 and T6, including F4 for L-C vs R-I and C4 for L-C vs R-I and L-I vs R-I, also higher for the recognition stage.

The behavioral analyses of this study revealed that performance was better in the reading stage than in the recognition stage, as in the first TREFACE study [43]. Complementarily, reaction times were longer in the reading stage than in the recognition stage. It is thus possible to say that, when reading the word written over the face stimuli, participants needed more time to identify the letters, recognize the word as a whole, and access its meaning, taking longer overall to produce an efficient response [47]. On the other hand, studies have shown that the facial recognition mechanism analyzes visual information on the order of milliseconds, implying less time for the analysis, which, in this case, impaired the quality of recognition [48] [49].

Discussion
The same conformation of behavior in relation to the steps, i.e., better performance for reading than for recognition, was found when comparing the different conditions, congruent and incongruent, as in the previous study [43]. In general, a better performance was observed in the congruent condition, while the reaction time was longer in the incongruent condition, regardless of the step.
This finding may indicate that conflict, in addition to compromising task performance, tends to require more time for the response, impairing the attention mechanism's capacity to respond effectively [50]. Reading incongruent stimuli requires more time, while the competition between image and word impairs the quality of recognition processing, even with the slowing of responses [51].
The effort revealed in the incongruent condition, observed in the reaction time, suggests a temporal interference effect, which delays the arrival of the inhibitory activity required to suppress the irrelevant element in favor of the relevant one when deciding on the correct answer [52] [53].
With regard to cortical mapping, the EEG recorded during task execution presented diverse rhythms, predominant in the frontal areas, midline, and temporal lines in the reading (LT) and recognition (RT) stages. The stronger theta activity during the recognition (RT) stage in the left hemisphere demonstrates the intense recruitment of cognitive resources from the frontal region for visual information processing [35] [36]. Theta oscillations are commonly associated with memory processes, mental manipulation of information, and effort in audiovisual tasks [38] [54] [55]. They may indicate communication between the temporal region and the hippocampus, given the strong connections between these two structures [56] [57]. Other evidence suggests that during emotional states the amygdala produces differentiated theta activity [58], which has also been identified during functional inhibition [59]. In this case, theta oscillations are observed mainly in the frontal cortex, expanding to other brain structures as a means of greater sensory-perceptual integration to initiate inhibition mechanisms, as observed in the midline activation.
In general, active participation of the right hemisphere in the control and orientation of executive attention has been reported in executive functioning tasks [60]. In this study, a positive activation in the right frontotemporal line was observed, both for theta and for alpha frequencies. Oscillations in alpha frequency were found in the right hemisphere, with no difference between steps, coherent with prior evidence that top-down processing in a working memory task increases alpha power in the prefrontal area, reinforcing the idea of selective cortical recruitment that can be extended from sensory cortices to the frontoparietal attention network [33] [61]. Furthermore, alpha oscillations are modulated during visual sensory stimulation [62] [63], which is a characteristic of stimuli presentation in TREFACE. Prefrontal alpha activity is directly involved in the neural mechanisms responsible for the maintenance of working memory [64], in addition to the relationship with the cognitive processes of memory [65] and attention [66], required for efficient task resolution in TREFACE. Evidence of this activation in the left dorsolateral region may indicate emotional response assessment processing [67].
For the analysis of higher frequencies, beta and gamma, similar activation patterns were identified, with synchronization in the recognition stage. In both beta and gamma, the frontal and posterior temporal regions were registered with greater power in the right hemisphere. The frontal and lower left temporal regions were differentiated in their electrical activity. Interestingly, beta frequencies are also modulated in the performance of tasks that require sensorimotor interaction [68] [69].
Regarding gamma oscillations, these have been associated with functional inhibition [70], attention and information processing [71] [72], as well as the active maintenance of content, memory [73] and conscious awareness [74]. The intensity of the gamma rhythm is also associated with the demand for cognitive flexibility, verified in the task of this study, so that the findings on alpha, beta, and gamma oscillations can be correlated with previous studies proposing distinct roles for these bands in hierarchical perceptual inference and predictive coding [37] [75]. Beta and gamma waves are involved in attentional processes [73]. Gamma-band activity has received increasing consideration in recent years due to its role in different cognitive processes [37] [75]. Indeed, phase synchronization of gamma activity seems to be involved in attentional processes [37], and its measurement provides an important indicator of the relationship between executive functions and the prefrontal cortex [37]. Our results agree with these suggestions with respect to the right hemisphere.
The overall activation pattern observed in the TREFACE steps was also observed as to each of the conditions. A broader participation of the sensory cortex during the reading of congruent stimuli suggests an approximation motivated by aspects of autobiographical memory resulting in the best composition of the response [76]. More intensely, recognition in the incongruent condition signals a broader recruitment of right hemisphere resources, strengthening the hypothesis of an executive role in orienting attention to task resolution [37] [60]. As for beta and gamma waves, extensive synchrony was observed, encompassing all regions, possibly due to the need to modulate the effect of the emotional attribute, consistent with evidence in left dorsolateral frontal activation, and due to the cognitive effort required in the task, thus conferring greater integration of brain areas [77].
In general, this work made it possible to verify the influence of the emotional component induced during the execution of TREFACE, particularly the involvement of an extensive network of frontomedial and temporal cortical circuits, with participation of both cerebral hemispheres, in monitoring the conflict experimentally produced in a situation of facial emotion recognition in which the emotional word is unrelated to the face. The various skills tested by TREFACE in monitoring emotional conflict can functionally produce an overlap of brain rhythms of different amplitudes, durations, and frequency ranges, with the participation of both hemispheres. This model proposes that the attention system required for control and resolution is disturbed by emotional interference, i.e., a word unrelated to the emotional expression of the face; the cortical mapping demonstrated in this study corroborates previous studies [19] [27] [29] [48].
Likewise, the regulation of these simultaneous oscillatory patterns can be functionally associated with a large and complex neural network involving frontal areas and their extensive interaction with central parietal and temporal regions, translating into an efficient organization of behavior for solving the TREFACE task. Testing, in future work, the effect produced by TREFACE in conditions in which the involved neural circuits are clinically compromised (neurologically or psychiatrically) will broaden the understanding of the neurophysiological and neuroanatomical substrates associated with the functioning of the executive components. Thus, the results of this mapping of cortical activity can help to understand how words and images of faces are regulated in everyday life and in clinical contexts, suggesting an integrated model that includes the neural bases of the regulation strategy.
Additionally, and in a complementary way, this study substantiates TREFACE as a new neuropsychological assessment tool, computerized and suitable for the Portuguese language, which may be of theoretical and practical relevance for future research in Neurosciences.