Visual Attributes of Subliminal Priming Images Impact Conscious Perception of Facial Expressions

Abstract

We investigated, in young healthy participants, how the affective content of subliminally presented priming images and their specific visual attributes impacted conscious perception of facial expressions. The priming images were broadly categorized as aggressive, pleasant, or neutral and further subcategorized by the presence of a face and by the centricity (egocentric or allocentric vantage point) of the image content. Participants responded to the emotion portrayed in a pixelated target face by indicating via key-press whether the expression was angry or neutral. Response times (RTs) to the neutral target face were significantly slower when it was preceded by face primes than by non-face primes (p < 0.05, Bonferroni corrected). In contrast, faster RTs were observed when angry target faces were preceded by face rather than non-face primes. In addition, participants’ performance was worse when a priming image contained an egocentric face than when it contained either an allocentric face or an egocentric non-face. These results suggest a significant impact of the visual features of the priming image on conscious perception of facial expression.

Share and Cite:

Huang, M., Sun, H. and Vaina, L. (2019) Visual Attributes of Subliminal Priming Images Impact Conscious Perception of Facial Expressions. Journal of Behavioral and Brain Science, 9, 108-120. doi: 10.4236/jbbs.2019.93009.

1. Introduction

In 1872, Charles Darwin proposed that facial expressions are evolved behaviors that have a biologically adaptive function [1]. Nearly one hundred years later, Paul Ekman defined six universal facial expressions that are recognized and expressed cross-culturally: happiness, sadness, fear, disgust, surprise, and anger [2]. Facial expressions are indicators of underlying emotional states [3], and recognition of the emotion in facial expressions is critical to interpersonal communication, response to imminent threat, and social behavior [4]. Within a fraction of a second, humans can recognize emotion, sex, relative age, race, and identity, just by viewing a face [5] [6]. There is substantial evidence that the emotion of facial expressions can even be processed in the absence of conscious awareness [7]. Functional neuroimaging studies have indicated that non-consciously perceived emotional facial expressions can activate subcortical structures [8] [9]. De Gelder et al. reported on a blindsight patient who could discriminate among facial expressions presented in his blind field [10]. A study by Esteves et al. also demonstrated that, under conditions that prevent conscious awareness, facial expressions elicit skin conductance responses [11]. While most studies indicate that emotional facial expressions can induce neurophysiological changes and influence behavior without being consciously perceived, there is accumulating evidence that affective processing does not occur outside of awareness and that any unconscious reaction is dependent on prior semantic processing [12] [13] [14] [15]. These studies suggest that semantic categorization of a visual stimulus occurs faster than affective evaluation, and that semantic categorization of a stimulus may be required for affective processing to occur.

Major contributions to the evidence that affective information can be processed without awareness come from affective priming experiments. The affective primacy hypothesis [16] [17] proposes that affective information can be accessed automatically and precedes conscious cognition. Subliminal affective priming experiments have provided both neural and behavioral evidence that affective stimuli are unconsciously processed [9] [18] [19] [20] . Subliminally presented affective information has also been shown to influence the evaluation of emotionally ambiguous facial expressions [21] .

A major effect of priming is that response times are faster when the prime has the same emotional valence as the target stimulus and slower when it does not [22]; this is referred to as the congruency effect. Many subliminal priming experiments have shown that affectively congruent prime-target pairs facilitate either response time or evaluation of the target stimulus [23] [24] [25] [26]; however, these studies have all employed affective words as primes. Attempts to replicate the congruency effect using affective images as primes have yielded less conclusive results. Using prime images with fear- or disgust-inducing content, as well as fearful or disgusted facial expressions, Neumann et al. reported faster response times to prime-target pairs with congruent emotions [25]. On the other hand, Hermans et al. reported the opposite effect on response time (affectively congruent trials were slower than incongruent trials) using pleasant or unpleasant priming images, some of which contained faces [27], and Andrews et al. reported a lack of congruent affective priming in subliminal trials using happy or angry facial expressions [28].

The discrepancy between these studies may be due to the content and type of the prime and target stimuli used. A large number of affective subliminal priming studies employ facial expressions as stimuli [25] [28] - [33] because of their innate association with emotions. However, there are significant differences between affective stimuli that contain faces and those that do not. There is solid evidence that faces are processed separately from non-face stimuli [34] [35] [36]. Given these distinct differences, it is possible that the emotion in face and non-face stimuli is also processed differently. However, in most published studies, there is little distinction between emotional primes that contain faces and those that do not. Additionally, the relevance of the priming stimuli to the observer is frequently overlooked. The content of an affective priming image can be directed towards the observer, engaging him or her in the image, or can portray a scene that does not engage the observer at all. Priming images that are more directly relevant to the observer may capture attention more effectively than priming images that portray non-relevant scenes. We suggest that the discrepancies between the results of different affective image priming studies may be attributed to visual characteristics of the priming images that are often overlooked in the prime selection criteria.

In this paper, using the same experimental protocol as in our previous study [37], we investigated how the affective content of a priming image, the presence of a face in it, and the vantage point of its content (image centricity) impact the effect of affective subliminal priming on the identification of target-face expressions (neutral or angry) in young healthy participants.

2. Methods

The experimental design was adapted from that of our previously published paper [37] . Here our aim was to investigate the effects of different classes of priming images on perceiving facial expressions in young healthy participants.

2.1. Participants

Twenty-five high-school senior and first- to second-year undergraduate students (12 female, mean age = 18.17 years, SD = 1.49) volunteered for this study. All participants were right-handed, had normal or corrected-to-normal vision, and none had a history of neurologic, psychiatric, or developmental disease. All participants gave written consent before the start of the experimental sessions in accordance with Boston University’s Institutional Review Board Committee on research involving human participants. All data were collected at the Brain and Vision Research Laboratory, Boston University, Boston, Massachusetts.

2.2. Apparatus

All tests were written in, responses collected with, and stimuli controlled by the BraviShell toolbox, a MATLAB (MATLAB and Statistics Toolbox Release 2009a, The MathWorks, Inc., Natick, Massachusetts, United States) software package developed in the Brain and Vision Research Laboratory (Biomedical Engineering Department, Boston University, Boston, MA, 2005-2015) and based on the Psychophysics Toolbox extensions [38] [39] [40]. All tests were administered on a 13-inch Apple MacBook laptop with Retina display. Participants completed three tests in the following order: SAFFIMAP (Subliminal Affective Image Priming), followed by PV (Prime Validation), and then FEP (Face Emotion Processing), each described below.

2.3. Test Stimuli

Priming images

The stimuli in the Subliminal Affective Image Priming (SAFFIMAP) task consisted of priming images, masks, and target-face images. All priming images were selected from the International Affective Picture System (IAPS) database [41] or from public-domain images available on the internet. Indices of the images selected from the IAPS database are as follows: aggressive: 12, 101, 163, 257, 1653, 1790, 1865, 1886, 1999, 2214, 2217, 2227, 2244, 2248, 2258; pleasant: 7, 18, 57, 58, 64, 124, 224, 455, 516, 538, 541, 726, 730, 743, 765, 775, 796, 815, 887, 896, 956, 969, 991, 1190, 1318, 1320, 1527. We chose 120 images in total, 40 in each of three affective categories (aggressive, pleasant, and neutral). The priming stimuli were further characterized by two additional attributes: centricity and the presence or absence of a face. Centricity refers to whether the vantage point of the image was egocentric (the content of the image was directed towards the observer) or allocentric (the content of the image was directed away from the observer). Half of the priming images were egocentric, and half were allocentric. The face-content attribute refers to whether or not the priming image contained a face (60 images in each group). These subdivisions were spread evenly among the affective categories (10 images per affect/centricity/face combination). All images were initially categorized by their affective valence, centricity, and face content by three independent evaluators who were age- and education-matched to the participants. Only images with agreeing categorizations were used as priming stimuli. Public-domain images were selected from the internet where there were insufficient images in the IAPS database to fill our specific categories. Examples of each category of priming image are shown in Figure 1(a). The images were cropped and resized to 512 × 512 pixels and then normalized to a constant luminance and contrast using the SHINE toolbox [42].
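The final normalization step can be illustrated with a short sketch. The study itself used the MATLAB SHINE toolbox; the pure-Python version below only mimics the basic idea of matching each image's mean luminance and contrast to common targets, working on a flat list of grayscale values, and the target values are illustrative assumptions rather than the study's parameters:

```python
import statistics

def match_luminance(image, target_mean=128.0, target_std=40.0):
    """Linearly rescale a flat list of grayscale pixel values so the
    image ends up with a given mean luminance and contrast (standard
    deviation), clipping to the displayable 0-255 range. The target
    values are illustrative, not those used in the study."""
    mean = statistics.fmean(image)
    std = statistics.pstdev(image)
    scale = target_std / std if std > 0 else 0.0
    return [max(0.0, min(255.0, (p - mean) * scale + target_mean))
            for p in image]
```

Applying this to every prime with the same targets equates the images' first-order statistics, so differences between affective categories cannot be driven by overall brightness or contrast.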

A separate control group of five age- and education-matched participants reported conscious awareness of the priming image at prime durations longer than 16 ms. Therefore, in the test described here, the prime duration was set to 16 ms.

Mask

The mask following the priming image was generated by dividing the priming image into 16 × 16 pixel blocks (0.78 × 0.78 degrees) and randomly shuffling the blocks' horizontal and vertical positions. The result of this process was a scrambled version of the priming image, in which the local image information remained the same but the meaningful content of the image was destroyed. An example of a priming image mask is shown in Figure 1(a).
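The block-shuffling procedure can be sketched as follows. This is a pure-Python illustration (the study's implementation was in MATLAB), with the image represented as a list of rows of pixel values and the dimensions assumed to divide evenly by the block size:

```python
import random

def make_mask(image, block=16, rng=None):
    """Scramble an image (a list of rows of pixel values) by cutting it
    into block x block tiles and randomly shuffling the tile positions.
    Local statistics survive; the global content is destroyed."""
    rng = rng or random.Random()
    h, w = len(image), len(image[0])
    # Cut the image into tiles, each a list of `block` row slices.
    tiles = [[image[r + dr][c:c + block] for dr in range(block)]
             for r in range(0, h, block) for c in range(0, w, block)]
    rng.shuffle(tiles)
    # Reassemble the shuffled tiles into a new image of the same size.
    mask = [[0] * w for _ in range(h)]
    i = 0
    for r in range(0, h, block):
        for c in range(0, w, block):
            for dr in range(block):
                mask[r + dr][c:c + block] = tiles[i][dr]
            i += 1
    return mask
```

Because the mask is built from the prime's own tiles, its low-level statistics (luminance histogram, local contrast) match the prime exactly, which is what makes it an effective backward mask.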

Target-Face Images

The target-face stimuli were pictures of unfamiliar faces portraying either an angry or a neutral facial expression. We selected 28 pictures of each of three facial expressions (angry, happy, and neutral) from the Karolinska Directed Emotional Faces (KDEF) database [43] [44]. The angry and neutral faces were used as the target images for the SAFFIMAP test, and the happy faces were included in the FEP task. The face images were cropped and normalized similarly to the prime images. To make the facial expression discrimination task more challenging, we further divided the face pictures into 8 × 8 pixel blocks (23.4 arc minutes in height and width) and replaced 50% of the blocks with a color whose RGB (red, green, blue) components were sampled randomly from a normal distribution with mean and covariance equal to those of the RGB components of the full image. Examples of target-face images are shown in Figure 1(b).
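The pixelation of the target faces can be sketched in the same spirit. One simplification is hedged here: the study sampled the replacement color from a normal distribution with the mean and covariance of the image's RGB components, whereas this pure-Python illustration treats the three channels as independent Gaussians:

```python
import random
import statistics

def pixelate_face(image, block=8, fraction=0.5, rng=None):
    """Replace a random half of the block x block tiles of an RGB image
    (rows of (r, g, b) tuples) with a solid color drawn from Gaussians
    matched to each channel's mean and spread. Simplified sketch: the
    study used the full mean/covariance of the RGB components, while
    here the channels are treated as independent for brevity."""
    rng = rng or random.Random()
    h, w = len(image), len(image[0])
    pixels = [p for row in image for p in row]
    # Per-channel (mean, std) of the whole image.
    stats = [(statistics.fmean(ch), statistics.pstdev(ch))
             for ch in zip(*pixels)]
    coords = [(r, c) for r in range(0, h, block) for c in range(0, w, block)]
    out = [list(row) for row in image]
    for r, c in rng.sample(coords, int(len(coords) * fraction)):
        color = tuple(min(255, max(0, round(rng.gauss(m, s))))
                      for m, s in stats)
        for dr in range(block):
            for dc in range(block):
                out[r + dr][c + dc] = color
    return out
```

Each replaced tile becomes one flat color whose statistics match the face image, so the occluders blend in rather than acting as salient landmarks.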

Figure 1. Stimulus details. (a) Example of an aggressive, egocentric, face prime image with its corresponding mask, an example of a neutral, allocentric, face prime, and an example of a pleasant, egocentric, non-face prime. (b) Example of an angry (left) and neutral (right) target face.

2.4. Tests

Experimental Procedure

Participants were seated in a dimly lit room, 60 cm from the display screen. They were instructed to maintain fixation on a red square (0.3 deg diameter) displayed at the center of the computer screen throughout the duration of the stimulus presentation and to respond via keypresses on the computer keyboard. The participants were instructed to respond as quickly and as accurately as possible. In total, completion of all three tests took approximately 30 minutes. Participants were allowed a brief break between the completion of the SAFFIMAP test and the FEP/PV control tests.

Subliminal Affective Image Priming Task (SAFFIMAP)

The SAFFIMAP test assesses the effect of subliminally presented affective priming images on the perception of a neutral or angry target facial expression. A schematic view of an experimental trial is presented in Figure 2(a). Prior to the start of the experiment, participants were informed that each trial consisted of a brief flash of a jumbled-up image (the mask) followed by a picture of an angry or neutral face. Participants were told to focus on discriminating the emotion of the face presented at the end of the trial and to ignore the preceding stimulus. Participants were then asked to press any key on the computer keyboard to start a trial when prompted by an exclamation mark (6 degrees in height) displayed in the center of the computer screen. This was followed by a 500 ms blank screen at a neutral gray intensity (42 cd/m2) and the fixation mark, which was presented throughout the trial. Next, a priming image, randomly selected from the experimental database, was shown for 16 ms and immediately followed by a mask displayed for 100 ms. Both the prime and the mask subtended 12.5 × 12.5 degrees and were presented at the same spatial location at the center of the computer screen. The target-face was then displayed for 500 ms. Participants responded by pressing 1 on the keyboard if they perceived the expression of the target-face as neutral, and by pressing 2 if they perceived it as angry. The next trial began immediately after the participant’s response, or after 4000 ms if no response was entered. There were 120 trials in total. Each contained a unique prime image and its corresponding mask, while the target-faces were randomly selected from 56 face images (28 angry and 28 neutral). After completing the SAFFIMAP test, participants reported whether they had recognized any of the priming images preceding the face images.
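The trial structure described above amounts to building a 120-entry list in which every prime (and its scrambled mask) appears exactly once and the target is drawn at random from the 56 face images. A minimal sketch, with placeholder identifiers rather than the actual stimulus filenames:

```python
import random

def build_trials(rng=None):
    """Assemble the SAFFIMAP trial list: each of the 120 priming images
    appears exactly once, in random order, paired with its own mask and
    with a target face drawn at random from the 56 KDEF images
    (28 angry, 28 neutral). Identifiers are illustrative placeholders."""
    rng = rng or random.Random()
    primes = [f"prime_{i:03d}" for i in range(120)]
    targets = ([("angry", i) for i in range(28)]
               + [("neutral", i) for i in range(28)])
    rng.shuffle(primes)
    return [{"prime": p, "mask": p + "_mask", "target": rng.choice(targets)}
            for p in primes]
```

Sampling the target independently on each trial means the prime categories are not yoked to particular target expressions, which is what allows the Prime Face × Target Face interaction to be estimated.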

Face Emotion Processing (FEP) Control Test

In a three-alternative forced-choice (3AFC) task, participants were presented with human faces displaying angry, happy, or neutral expressions for 500 ms. The angry and neutral faces were the same as those used in the SAFFIMAP test. As the FEP test was identical to that used in our previous paper [37], happy face expressions were included in the trials. In total, there were 84 unique trials, with 28 trials for each of the three expressions. A schematic view of a single trial is shown in Figure 2(b). Each trial was prompted by an exclamation mark, followed by a 500 ms blank screen (at neutral gray, 42 cd/m2). After another 500 ms, the picture of the face expression to be evaluated was presented for 500 ms. Participants reported via keypress whether they perceived the face expression as angry, happy, or neutral (1 if angry, 2 if happy, and 3 if neutral). The next trial began after the participant pressed a key, or after 4000 ms if no response was entered.

Prime Validation (PV) Control Test

In the PV test, the same experimental design was used to validate the priming images. In place of the target faces, participants were presented with the priming images of aggressive, pleasant, or neutral affectivity. There were 120 unique trials, with 40 trials per affective category. Participants reported via keypress whether they perceived the image as aggressive, pleasant, or neutral (1 if aggressive, 2 if pleasant, 3 if neutral).

To confirm the validity of the categorization of the priming images used in the SAFFIMAP experiment and the participants’ ability to recognize facial expressions, all participants were administered PV and FEP as control tests.

Figure 2. Timelines of test trials. (a) Timeline of a single trial in the SAFFIMAP test. (b) Timeline of a single trial in the FEP test. The Prime Validation task follows an identical timeline, replacing the target-face with the priming images.

2.5. Statistical Analysis

The dependent measures of interest in the current study were participants’ reaction time (RT) and accuracy for target-face categorization. Note that only RTs from correctly answered trials were included in the analysis. A 2 (Prime Face: Absent, Present) × 3 (Prime Affect: Aggressive, Pleasant, Neutral) × 2 (Prime Centricity: Egocentric, Allocentric) × 2 (Target Face: Angry, Neutral) repeated-measures analysis of variance (ANOVA) was performed separately on the RT and accuracy data. When appropriate, post-hoc pairwise comparisons were performed using Bonferroni correction for multiple comparisons.
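The Bonferroni procedure used for the post-hoc comparisons is simple enough to state in a few lines: each raw p-value is multiplied by the number of comparisons (capped at 1.0) before being compared with the nominal alpha. A generic sketch, not tied to the study's specific contrasts:

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction for multiple comparisons: scale each raw
    p-value by the number of tests (capping at 1.0) and flag which
    adjusted values remain below the nominal alpha."""
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]
    return adjusted, [p_adj < alpha for p_adj in adjusted]
```

The correction controls the family-wise error rate at alpha across the whole set of pairwise tests, at the cost of reduced power when many comparisons are made.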

3. Results

After completing the SAFFIMAP test, all participants reported verbally that they were unaware of the priming images. No participant exceeded the 4000 ms response time limit. The mean percent correct (n = 29) for the PV and FEP control tests was 90.4% ± 11.1% and 93.6% ± 5.29%, respectively. Four participants (3 female) were removed from further analysis due to poor performance (less than 79.0% accuracy) on the PV control test.

3.1. RT Analysis

The ANOVA performed on the RT data showed a significant interaction between Prime Face and Target Face, F(1, 21) = 14.802, MSE = 0.01, p = 0.001, ηp² = 0.413. Paired-samples t-tests indicated slower RTs when neutral target faces were preceded by face primes than when they were preceded by non-face primes (p < 0.05, Bonferroni corrected). In contrast, faster RTs were observed when angry target faces were preceded by face compared to non-face primes (p < 0.05, Bonferroni corrected). No other significant main effects or interactions were found (all p-values > 0.1). The mean RTs for the experimental conditions, collapsed across Prime Affect and Prime Centricity, are shown in Figure 3.
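The RT preprocessing behind this analysis (keeping only correct trials and averaging within each Prime Face × Target Face cell) can be sketched as follows; the trial record format shown is an illustrative assumption, not the study's actual data layout:

```python
from collections import defaultdict

def mean_rts(trials):
    """Average RTs per (prime_face, target_face) cell, keeping only
    correctly answered trials, as in the RT analysis. Each trial is a
    dict with 'prime_face', 'target_face', 'correct', and 'rt' keys
    (an illustrative record format)."""
    sums = defaultdict(lambda: [0.0, 0])
    for t in trials:
        if t["correct"]:
            cell = sums[(t["prime_face"], t["target_face"])]
            cell[0] += t["rt"]
            cell[1] += 1
    return {key: total / n for key, (total, n) in sums.items()}
```

Running this per participant yields the condition means that enter the repeated-measures ANOVA.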

Figure 3. Mean RTs for the combinations of Prime Face and Target Face conditions. Error bars represent standard errors of the mean (SEM). *p < 0.05.

3.2. Accuracy Analysis

The overall average performance (n = 25) on the SAFFIMAP test was 76.1% ± 7.16%. The ANOVA performed on the accuracy data showed a significant main effect of Target Face, F(1, 21) = 5.631, MSE = 0.196, p = 0.027, ηp² = 0.211, with more accurate responses to neutral (M = 0.79, SEM = 0.04) than to angry (M = 0.70, SEM = 0.05) target faces. The interaction between Prime Face and Prime Centricity was also significant, F(1, 21) = 6.665, MSE = 0.039, p = 0.017, ηp² = 0.241. Paired-samples t-tests revealed that participants’ categorization performance was worse when a priming image contained an egocentric face than when it contained either an allocentric face or an egocentric non-face (both p-values < 0.05, Bonferroni corrected). No other significant main effects or interactions were observed (all p-values > 0.08). Figure 4 shows the accuracy for the different experimental conditions collapsed across Prime Affect and Target Face.

4. General Discussion

Our first notable finding is that subliminal priming with a face-containing image significantly quickens participants’ responses to an angry target-face, compared to priming with a non-face image. This effect did not occur when the target-face was neutral, nor did it depend on the affective content of the prime image itself. This suggests that the presence of a face in the priming image, independent of its affective content, heightens the observer’s attention to the emotional expression of the target face. Figure 3 shows that when the neutral target-face is primed with a face-containing image, the observer’s response is significantly slower than when the priming image does not contain a face; when the angry target-face is primed with a face-containing image, the response is significantly faster. The face prime may prepare the observer for an affective target image, which, when validated by an angry target face, results in a quickened response. In the case of a neutral target-face, we suggest that a face in the prime image likewise prepares the viewer for an affective percept, so when a neutral rather than an affective, angry target-face appears, the response is slowed.

Figure 4. Mean accuracy for the combinations of Prime Face and Prime Centricity conditions. Error bars represent SEM. *p < 0.05.

Furthermore, we showed a significant effect of the centricity of the priming image on the evaluation of the target-face expression, depending on whether the priming image contained a face. When the priming image contained a face and the content of the prime was directed towards the observer (egocentric), the participant’s performance in determining the facial expression of the target-face was significantly poorer than when the priming image did not contain a face or when the content of the prime was directed away from the observer (allocentric). This suggests that the egocentric face-containing prime engaged the observer and detracted from his or her perception of the target-face expression. There may be an inhibitory mechanism involved in the subconscious perception of a relevant stimulus that dampens the observer’s attention to the facial expression of a consciously perceived face. This finding is supported by studies demonstrating that egocentric faces capture attention preferentially over non-face stimuli [45] [46]. There is, however, little information on the ability of allocentric faces to grab attention, as most studies examining the ability of faces to draw attention employ forward-facing, egocentric face images as stimuli. Our findings suggest that allocentric faces are less effective at capturing attention than egocentric faces, and thus, when used in a priming image, do not detract from the perception of the target-face expression.

The affective content of the priming image had no significant effect on performance or RT; rather, the face content and centricity of the prime had the major impact. Since RTs to angry target faces were faster after face primes regardless of the affective valence of the prime, our results do not support an affective congruency effect. To investigate the question of affective congruency, future studies should be more selective with their priming image stimuli. Our results showed a significant effect of the face content of a priming image and of the image’s centricity on either response time or accuracy. This finding has important implications for future subliminal affective priming experiments that employ face- or scene-containing images as priming stimuli, as the specific visual properties and content of a prime image may produce unintended effects that could skew the results of the study. Further studies of what visual information can be processed subconsciously in a masked priming context are necessary to ensure the validity of future experiments.

Due to the breadth of the subliminal priming image categories evaluated in this study, the number of stimuli per specific affect/face/centricity category was limited in order to keep the duration of the SAFFIMAP test within a reasonable amount of time. This limitation reduced our ability to evaluate more priming and target image categories, which should be included in future studies. In particular, it would be interesting to examine whether more distinct visual characteristics of face-containing prime images, such as gender or race, subconsciously affect the conscious perception of the emotion displayed in the target face. Additional image features, such as color, saturation, or content complexity, which were not investigated in this report, may also have significant effects on subliminal affective priming. Our results may also extend beyond the pleasant and aggressive primes used in this study to other frequently employed affective priming stimuli, such as those evoking fear and disgust, which should be investigated in future studies alongside additional target-face expressions (e.g., happy, fearful, disgusted).

Acknowledgements

The authors thank the subjects for participating in this research.

Conflicts of Interest

No potential conflict of interest was reported by the authors.

References

[1] Darwin, C. (1872) The Expression of the Emotions in Man and Animals. John Murray, London.
[2] Ekman, P. (1970) Universal Facial Expressions of Emotion. California Mental Health Research Digest, 8, No. 4.
[3] Ekman, P. and Oster, H. (1979) Facial Expressions of Emotion. Annual Review of Psychology, 30, 527-554.
https://doi.org/10.1146/annurev.ps.30.020179.002523
[4] Frith, C. (2009) Role of Facial Expressions in Social Interactions. Philosophical Transactions of the Royal Society B: Biological Sciences, 364, 3453-3458.
https://doi.org/10.1098/rstb.2009.0142
[5] Leopold, D.A. and Rhodes, G. (2010) A Comparative View of Face Perception. Journal of Comparative Psychology, 124, 233-251.
https://doi.org/10.1037/a0019460
[6] Tsao, D.Y. and Livingstone, M.S. (2008) Mechanisms of Face Perception. Annual Review of Neuroscience, 31, 411-437.
https://doi.org/10.1146/annurev.neuro.30.051606.094238
[7] Tamietto, M. and de Gelder, B. (2010) Neural Bases of the Non-Conscious Perception of Emotional Signals. Nature Reviews Neuroscience, 11, 697-709.
https://doi.org/10.1038/nrn2889
[8] Morris, J., Öhman, A. and Dolan, R. (1999) A Subcortical Pathway to the Right Amygdala Mediating “Unseen” Fear. Proceedings of the National Academy of Sciences of the United States of America, 96, 1680-1685.
https://doi.org/10.1073/pnas.96.4.1680
[9] Whalen, P.J., Rauch, S.L., Etcoff, N.L., McInerney, S.C., Lee, M.B., Jenike, M.A. (1998) Masked Presentations of Emotional Facial Expressions Modulate Amygdala Activity without Explicit Knowledge. The Journal of Neuroscience, 18, 411-418.
https://doi.org/10.1523/JNEUROSCI.18-01-00411.1998
[10] de Gelder, B., Vroomen, J., Pourtois, G. and Weiskrantz, L. (1999) Non-Conscious Recognition of Affect in the Absence of Striate Cortex. Neuroreport, 10, 3759-3763.
https://doi.org/10.1097/00001756-199912160-00007
[11] Esteves, F., Dimberg, U. and Öhman, A. (1994) Automatically Elicited Fear: Conditioned Skin Conductance Responses to Masked Facial Expressions. Cognition & Emotion, 8, 393-413.
https://doi.org/10.1080/02699939408408949
[12] Lähteenmäki, M., Hyönä, J., Koivisto, M. and Nummenmaa, L. (2015) Affective Processing Requires Awareness. Journal of Experimental Psychology: General, 144, 339-365.
https://doi.org/10.1037/xge0000040
[13] Pessoa, L., Kastner, S. and Ungerleider, L.G. (2002) Attentional Control of the Processing of Neutral and Emotional Stimuli. Cognitive Brain Research, 15, 31-45.
https://doi.org/10.1016/S0926-6410(02)00214-8
[14] Storbeck, J., Robinson, M.D. and McCourt, M.E. (2006) Semantic Processing Precedes Affect Retrieval: The Neurological Case for Cognitive Primacy in Visual Processing. Review of General Psychology, 10, 41-55.
https://doi.org/10.1037/1089-2680.10.1.41
[15] Nummenmaa, L., Hyönä, J. and Calvo, M.G. (2010) Semantic Categorization Precedes Affective Evaluation of Visual Scenes. Journal of Experimental Psychology: General, 139, 222-246.
https://doi.org/10.1037/a0018858
[16] LeDoux, J.E. (1998) The Emotional Brain: The Mysterious Underpinnings of Emotional Life. Weidenfeld & Nicolson, London.
[17] Zajonc, R.B. (1980) Feeling and Thinking: Preferences Need No Inferences. American Psychologist, 35, 151-175.
https://doi.org/10.1037/0003-066X.35.2.151
[18] Dimberg, U., Thunberg, M. and Elmehed, K. (2000) Unconscious Facial Reactions to Emotional Facial Expressions. Psychological Science, 11, 86-89.
https://doi.org/10.1111/1467-9280.00221
[19] Li, W., Zinbarg, R.E., Boehm, S.G. and Paller, K.A. (2008) Neural and Behavioral Evidence for Affective Priming from Unconsciously Perceived Emotional Facial Expressions and the Influence of Trait Anxiety. Journal of Cognitive Neuroscience, 20, 95-107.
https://doi.org/10.1162/jocn.2008.20006
[20] Murphy, S.T. and Zajonc, R.B. (1993) Affect, Cognition, and Awareness: Affective Priming with Optimal and Suboptimal Stimulus Exposures. Journal of Personality and Social Psychology, 64, 723-739.
https://doi.org/10.1037/0022-3514.64.5.723
[21] Lee, S.Y., Kang, J.I., Lee, E., Namkoong, K. and An, S.K. (2011) Differential Priming Effect for Subliminal Fear and Disgust Facial Expressions. Attention, Perception, & Psychophysics, 73, 473-481.
https://doi.org/10.3758/s13414-010-0032-3
[22] Kiesel, A., Kunde, W. and Hoffmann, J. (2008) Mechanisms of Subliminal Response Priming. Advances in Cognitive Psychology, 3, 307-315.
[23] Greenwald, A.G., Klinger, M.R. and Liu, T.J. (1989) Unconscious Processing of Dichoptically Masked Words. Memory & Cognition, 17, 35-47.
https://doi.org/10.3758/BF03199555
[24] Klinger, M.R., Burton, P.C. and Pitts, G.S. (2000) Mechanisms of Unconscious Priming: I. Response Competition, Not Spreading Activation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 441-455.
https://doi.org/10.1037/0278-7393.26.2.441
[25] Neumann, R. and Lozo, L. (2012) Priming the Activation of Fear and Disgust: Evidence for Semantic Processing. Emotion, 12, 223-228.
https://doi.org/10.1037/a0026500
[26] Otten, S. and Wentura, D. (1999) About the Impact of Automaticity in the Minimal Group Paradigm: Evidence from Affective Priming Tasks. European Journal of Social Psychology, 29, 1049-1071.
https://doi.org/10.1002/(SICI)1099-0992(199912)29:8<1049::AID-EJSP985>3.0.CO;2-Q
[27] Hermans, D., Spruyt, A., De Houwer, J. and Eelen, P. (2003) Affective Priming with Subliminally Presented Pictures. Canadian Journal of Experimental Psychology/Revue Canadienne de Psychologie Expérimentale, 57, 97-114.
[28] Andrews, V., Lipp, O., Mallan, K. and König, S. (2011) No Evidence for Subliminal Affective Priming with Emotional Facial Expression Primes. Motivation & Emotion, 35, 33-43.
https://doi.org/10.1007/s11031-010-9196-3
[29] Aguado, L., Garcia-Gutierrez, A., Castañeda, E. and Saugar, C. (2007) Effects of Prime Task on Affective Priming by Facial Expressions of Emotion. The Spanish Journal of Psychology, 10, 209-217.
https://doi.org/10.1017/S1138741600006478
[30] Axelrod, V., Bar, M. and Rees, G. (2015) Exploring the Unconscious Using Faces. Trends in Cognitive Sciences, 19, 35-45.
https://doi.org/10.1016/j.tics.2014.11.003
[31] Banse, R. (2001) Affective Priming with Liked and Disliked Persons: Prime Visibility Determines Congruency and Incongruency Effects. Cognition & Emotion, 15, 501-520.
https://doi.org/10.1080/02699930126251
[32] Haneda, K., Nomura, M., Iidaka, T. and Ohira, H. (2003) Interaction of Prime and Target in the Subliminal Affective Priming Effect. Perceptual and Motor Skills, 96, 695-702.
https://doi.org/10.2466/pms.2003.96.2.695
[33] Wagenbreth, C., Rieger, J., Heinze, H.-J. and Zaehle, T. (2014) Seeing Emotions in the Eyes—Inverse Priming Effects Induced by Eyes Expressing Mental States. Frontiers in Psychology, 5, 1039.
https://doi.org/10.3389/fpsyg.2014.01039
[34] Grill-Spector, K., Weiner, K.S., Kay, K. and Gomez, J. (2017) The Functional Neuroanatomy of Human Face Perception. Annual Review of Vision Science, 3, 167-196.
https://doi.org/10.1146/annurev-vision-102016-061214
[35] Kanwisher, N., McDermott, J. and Chun, M.M. (1997) The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face Perception. The Journal of Neuroscience, 17, 4302-4311.
https://doi.org/10.1523/JNEUROSCI.17-11-04302.1997
[36] McKone, E., Kanwisher, N. and Duchaine, B.C. (2007) Can Generic Expertise Explain Special Processing for Faces? Trends in Cognitive Sciences, 11, 8-15.
https://doi.org/10.1016/j.tics.2006.11.002
[37] Vaina, L.M., Rana, K.D., Cotos, I., Chen, L.-Y., Huang, M.A. and Podea, D. (2014) When Does Subliminal Affective Image Priming Influence the Ability of Schizophrenic Patients to Perceive Emotions? Medical Science Monitor, 20, 2788-2798.
https://doi.org/10.12659/MSM.893118
[38] Brainard, D.H. (1997) The Psychophysics Toolbox. Spatial Vision, 10, 433-436.
https://doi.org/10.1163/156856897X00357
[39] Kleiner, M., Brainard, D., Pelli, D., Ingling, A., Murray, R. and Broussard, C. (2007) What’s New in Psychtoolbox-3? Perception, 36, 1-16.
[40] Pelli, D.G. (1997) The VideoToolbox Software for Visual Psychophysics: Transforming Numbers into Movies. Spatial Vision, 10, 437-442.
https://doi.org/10.1163/156856897X00366
[41] Lang, P.J., Bradley, M.M. and Cuthbert, B.N. (1998) International Affective Picture System (IAPS): Technical Manual and Affective Ratings. The Center for Research in Psychophysiology, University of Florida, Gainesville, FL.
[42] Willenbockel, V., Sadr, J., Fiset, D., Horne, G., Gosselin, F. and Tanaka, J. (2010) Controlling Low-Level Image Properties: The SHINE Toolbox. Behavior Research Methods, 42, 671-684.
https://doi.org/10.3758/BRM.42.3.671
[43] Goeleven, E., De Raedt, R., Leyman, L. and Verschuere, B. (2008) The Karolinska Directed Emotional Faces: A Validation Study. Cognition and Emotion, 22, 1094-1118.
https://doi.org/10.1080/02699930701626582
[44] Lundqvist, D., Flykt, A. and Öhman, A. (1998) The Karolinska Directed Emotional Faces (KDEF). Department of Neurosciences, Karolinska Hospital, Stockholm.
[45] Langton, S.R., Law, A.S., Burton, A.M. and Schweinberger, S.R. (2008) Attention Capture by Faces. Cognition, 107, 330-342.
https://doi.org/10.1016/j.cognition.2007.07.012
[46] Theeuwes, J. and Van der Stigchel, S. (2006) Faces Capture Attention: Evidence from Inhibition of Return. Visual Cognition, 13, 657-665.
https://doi.org/10.1080/13506280500410949

Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.