Rational and Continuous Measurement of the Emotional Decision Making in Visual Recognition of Facial Emotional Expressions with M.A.R.I.E.: First Half

Abstract

Context: The advent of Artificial Intelligence (AI) requires modeling of most human skills prior to their implementation in algorithms. This observation requires a detailed and precise understanding of the interfaces of verbal and emotional communication. The progress of AI is significant on the verbal level but modest in the recognition of facial emotions, even though this ability is one of the oldest in humans and is omnipresent in our daily lives. Dysfunction in the ability to recognize facial emotional expressions is present in many brain pathologies encountered by psychiatrists, neurologists, psychotherapists, and mental health professionals including social workers. It cannot be objectively verified and measured due to a lack of reliable tools that are valid and consistently sensitive. Indeed, the articles in the scientific literature dealing with Visual-Facial-Emotions-Recognition (ViFaEmRe) suffer from the absence of 1) consensual and rational tools for continuous quantified measurement, and 2) operational concepts. We have developed software that uses computer morphing in an attempt to overcome these two obstacles. It is identified as the Method of Analysis and Research of the Integration of Emotions (M.A.R.I.E.). Our primary goal is to use M.A.R.I.E. to understand the physiology of ViFaEmRe in normal healthy subjects by standardizing the measurements. It will then allow us to focus on subjects manifesting abnormalities in this ability. Our second goal is to contribute to the progress of AI, hoping to add the dimension of recognition of facial emotional expressions. Objective: To study 1) categorical vs dimensional aspects of ViFaEmRe, 2) universality vs idiosyncrasy, 3) immediate vs ambivalent Emotional-Decision-Making, 4) the Emotional-Fingerprint of a face, and 5) the creation of population reference data. Methods: M.A.R.I.E. enables a rational quantified measurement of Emotional-Visual-Acuity (EVA) 1) of a) an individual observer and b) a population aged 20 to 70 years; 2) of the range and intensity of emotions expressed by 3 Face-Tests; 3) of the performance of a sample of 204 observers with hyper normal measures of cognition, “thymia” (defined elsewhere), and low levels of anxiety; with 4) analysis of the 6 primary emotions. Results: We have individualized the following continuous parameters: 1) “Emotional-Visual-Acuity”, 2) “Visual-Emotional-Feeling”, 3) “Emotional-Quotient”, 4) “Emotional-Decision-Making”, 5) “Emotional-Decision-Making Graph” or “Individual-Gun-Trigger”, 6) “Emotional-Fingerprint” or “Key-Graph”, 7) “Emotional-Fingerprint-Graph”, 8) detection of “misunderstanding”, and 9) detection of “error”. This allowed us a taxonomy with coding of the face-emotion pair. Each face has specific measurements and graphics. The EVA improves from ages 20 to 55 years, then decreases. It does not depend on the sex of the observer, nor on the face studied. In addition, 1% of people endowed with normal intelligence do not recognize emotions. The categorical dimension is variable for everyone. The range and intensity of ViFaEmRe is idiosyncratic and not universally uniform. The recognition of emotions is purely categorical for a single individual; it is dimensional for a population sample. Conclusions: Firstly, M.A.R.I.E. has made it possible to bring out new concepts and new continuous measurement variables.
The comparison between healthy and abnormal individuals makes it possible to appreciate the significance of this line of study. From now on, these new functional parameters will allow us to identify and name “emotional” disorders* or illnesses, which can add a dimension to behavioral disorders in all pathologies that affect the brain. Secondly, ViFaEmRe is idiosyncratic, categorical, and a function of the identity of the observer and of the observed face. These findings weigh against an Artificial Intelligence built on a globalist or regionalist algorithm programmed into a robot; nor can AI compete with human abilities and judgment in this domain. *Here “emotional disorders” refers to disorders of emotional expression and recognition.

1. Introduction

The development and supremacy of the human species in the animal kingdom has been facilitated by its ability to live in groups. This has enabled increasingly complex, precise, rapid, and efficient social functioning. This societal organization was only possible with the appearance of facial emotional communication. It has been associated with the expression and recognition of facial emotions for many millennia. It is noteworthy that the appearance of verbal language followed nonverbal facial communication without replacing it. Communication with facial expressions made it possible to potentiate and improve the social and organizational life of increasingly large human groups. Facial emotional communication 1) conveys meaningful information to members of groups, 2) facilitates mass communication, and 3) adds to its complexity. This attests to the importance of facial emotional expressions and to the seriousness of the consequences in the event of dysfunction in either the production of a facial emotion or its recognition. These are two related basic ego-functions needed for social development. This is stated without forgetting that emotional feeling is the origin of the expression and recognition of emotion; Sigmund Freud and Anna Freud acknowledged that the interoceptive sensations derived from the body (viscera) are essential for emotions and, furthermore, that the basic human drives of Eros and Thanatos, or the sexual and aggressive drives, originate in the body even in early infancy. Bowlby further elaborated on the attachment system between mother and infant, in which facial emotional communication plays an important role. AI and robots are not yet equipped with any of these attributes. AI can be cognitively competent but not in combining emotions with their cognitive appreciation or social implications, even though AI (Large Language Models) can be programmed to use conventional language to express emotions.

The combination of the binomial (cognition-emotion) has contributed to the expansion of human groups into increasingly large and complex societies [1] [2] . Facial emotional expressiveness, as well as its good recognition and interpretation, enhanced 1) behaviors ensuring the cohesion of human groups [3] and 2) social cognition [1] [4] . This has resulted in good quality societal functioning. Otherwise, we observe social disorganization with inter-individual behavioral disorders and violence [5] . Social networks facilitated by modern technology vastly increase the use of visual face-to-face interactions at a distance. From now on, day-to-day mundane communication will rely more on images and less on written expression. Distant face-to-face computer platforms designed for audiovisual communication (FaceTime, Zoom, etc.) will increase emotional communication within human groups.

This seemingly preferred mode of communication in current civilizations explains the interest of neuroscience, neurology, and psychiatry in emotions and their regulation. Unfortunately, the promulgated conclusions of scientific studies are rarely consensually confirmed. Some are based on the theory of the six discrete expressions of “basic” emotions identified by Paul Ekman: “anger”, “disgust”, “joy”, “fear”, “surprise”, and “sadness”. More importantly, scientific controversies still exist. They concern 1) the categorical or proportional aspect of Visual-Facial-Emotions-Recognition, 2) the universal or idiosyncratic aspect, 3) the continuous measurement of ViFaEmRe, and 4) the link between the two intracerebral black boxes (transmitter and receiver-synthesizer) that are the “expression” and the “recognition” of facial emotions. As such, only “joy” is recognized by most authors as having a real congruence between the “Visual-Facial-Emotion-transmitter-Feeling” and the “Visual-Facial-Emotional-Receptive-Feeling”. Additionally, the lack of a consensually accepted continuous measurement tool prevents 1) the reproducibility of experiments, 2) the comparison of results, and 3) the emergence of operational concepts, due to the inability to compare results and foster discussion. For Mauss and Robinson in 2009 [6] , a project aimed at these goals was not realistic. Of course, this was true when the technology had its limitations.

However, the availability of computational devices with morphing of facial emotions has allowed for progress in the understanding of Visual-Facial-Emotions-Recognition and its cerebral integration [7] [8] . The arrival of Artificial Intelligence will further improve the “human-machine” interface through two-way communication that will involve voice and facial emotions, as well as nonverbal body language. For it is the duo of “transmitter-man” and “machine-receiver” which is topical for our purpose. However, before long, it is the duo of “transmitter-machine” and “receiver-man” which will complete [9] [10] [11] [12] and complement the previous one. This presupposes an understanding of the intra-human physiology and pathophysiology, which will inform the duo of “transmitter-man” and “receiver-man”. Consequently, the writing of programming code will necessarily be based on concepts borrowed from the physiological reality entailed in that complex ego-function. AI would then want to mimic such interactions, for which we are at a loss at present. No question, such technology will arrive soon if it is not already here.

For our part, studies on visual recognition of emotions can be simplified by considering the following reflexive loop: 1) “EXPRESSION” of an emotion by the face, 2) a “DOUBLE BLACK BOX” that we will call the “feeling underlying facial emotional expression” and the “feeling of visual receptivity to expressed facial emotions”, and 3) “RECEPTION” of the emotion through the eyes. Our work will focus only on the black box of the “feeling of visual receptivity of facial emotions”. We will not study the cognitive labeling aspect of the perceived emotion, which is a function of the left cerebral hemisphere.

Our research questions are as follows.

Perfecting M.A.R.I.E. requires a detailed study of the 6 basic emotions in the sense of Ekman’s parlance [13] [14] , through emotional gradations progressively added to “neutrality”. This procedure is needed for quantifying the intensity of any perceived emotion. M.A.R.I.E. allows continuous and rational measurement of the Visual-Facial-Emotions-Recognition (ViFaEmRe) of an observer and of a sample population. Are there interactions between ViFaEmRe and demographic variables? We shall attempt to identify continuous variables that partly explain the functioning of the “facial emotional receptive feeling”. What are the parameters that influence decision making in the identification of an emotion? Is ViFaEmRe categorical or proportional? Is ViFaEmRe universal or idiosyncratic, i.e., individual? Does ViFaEmRe change with age?

2. Methodology

2.1. Motivation of the Dichotomy between Unipolar and Bipolar Emotional Series

Statistically, all previously published work by other investigators merged the Emotional-Series: 1) unipolar (“neutral-anger”, “neutral-disgust”, “neutral-joy”, “neutral-fear”, “neutral-surprise”, “neutral-sadness”) and 2) bipolar (“anger-fear”, “joy-sadness”, “anger-sadness”).

Upon reconsideration, it seems to us that this methodology was imperfect. However, it has a certain coherence, which will find its explanation in the continuation of this work. It is obvious that on an experimental level only the unipolar series must be considered. They seem more physiological and less artificial than bipolar Emotional-Series. That said, we do not exclude the possibility that in real life an individual goes directly from an “Alpha” emotion to a “Beta” emotion without going through emotional neutrality. This reflection makes it possible to understand that in our daily life the emotional language has many permutations, combinations, nuances, and contrasts. All of these emotional expressions provide rich social cues in facial communication and for social referencing normally present among two or more individuals. Even neutral facial expressions may have rich meaning in certain social contexts.

Indeed, if for intellectual convenience and for experimental design we accept the idea that the “neutrality” is an emotion, then we would have a combination of 2 emotions among 7, i.e., 21 different bipolar series. Our work concerns only 6 bipolar series. A study of these 21 emotional-series each composed of 19 stimulus-images would represent (21 × 19) = 399 presentations of ImSt for a single test face. This number should be multiplied by the number of Faces-Tests to be studied.
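As a quick check of this combinatorics, here is a minimal Python sketch (illustrative only; the seven labels simply follow Ekman's six emotions plus "neutrality" as counted above):

```python
from itertools import combinations

# Seven "emotions" when neutrality is counted as one of them.
emotions = ["neutral", "anger", "disgust", "joy", "fear", "surprise", "sadness"]

bipolar_series = list(combinations(emotions, 2))   # all unordered pairs of emotions
print(len(bipolar_series))                         # 21 possible bipolar series
print(len(bipolar_series) * 19)                    # 399 image-stimuli for a single Face-Test
```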

This methodology could allow us, with close approximation, to measure the perceptual reality of our emotional daily lives. The results would be more robust if 1) the number of Face-Tests were large and 2) the population samples were numerous and very diverse. This reflection could also be useful for a future extension of this preliminary work. If a cybernetic implementation were contemplated, then all 21 combinations, i.e., the 15 beyond the 6 combinations of this work, should be studied for ease of designing workable feedback loops.

2.2. Participants

2.2.1. Period

Between April 2000 and April 2005, we enrolled 204 healthy, native French-speaking volunteers between the ages of 20 and 70 years, right-handed according to the 10 items of the Edinburgh Handedness Inventory [15] [16] , at the “Clinical Investigation Centre” of the Lille University and Regional Hospital complex.

This research work was the subject of a study under the auspices of the Advisory “Committee for the Protection of Persons in Biomedical Research” (Comité Consultatif de Protection des Personnes dans la Recherche Biomédicale, C.C.P.P.R.B.) of the Faculty of Medicine of Lille in 1998. This committee gave its approval.

The C.C.P.P.R.B. was formed under French law (Public Health Code articles R2009 to R2020), which is inspired by the “Helsinki Declaration”. This medical research on human subjects was justified to the extent that the results would be expected to contribute to the scientific knowledge base, enabling the earliest possible diagnoses in the context of certain psychiatric and neurological illnesses. The methodology used presented no potential harm to the human subjects.

2.2.2. Subjects, Groups, and Ethics

Our work aims to measure the ability to recognize a facial emotion by sight over the course of life. However, the experiment does not follow the aging of the same observer from age 20 to 70 years old. This type of long-term study is impracticable if not impossible. For this reason, the experiment is aimed at age groups spanning 5 or 10 years, ranging from 20 to 70 years old. This methodology is certainly open to criticism. Nevertheless, it allows a scientific approach to the study of the ability to visually recognize facial emotions during aging.

We formed seven groups by age bracket: 3 ten-year brackets for subjects between the ages of 20 and 50 years, and 4 five-year brackets for subjects between the ages of 51 and 70 years. Thirty subjects were recruited in each age group (except for the 66 - 70 group, n = 24) to allow evaluation of the effects of sex, age, and level of education.

Each subject provided informed written consent in accordance with the requirements of the Ethics committee overseeing research with human subjects. This mono-centric study (conducted at a single site, as opposed to a multi-center study) was a controlled, randomized test, carried out as a single-blind study of parallel groups and without any direct benefit to any individual subject. “Randomized” applies to the random order of the emotional series; subjects were not randomly assigned to experimental and control groups. Subjects were blinded to the goals and purposes of the study and were given no knowledge of what responses were expected, measured, or evaluated.

2.2.3. Evaluation Set

Eyesight and hearing, with or without aid, were optimal. Participants were given a medical consultation to determine history and current medications and to check for neurological disease, diabetes, hypertension, and psychiatric problems, amongst others. This consultation consisted of 1) a structured interview: the Mini International Neuropsychiatric Interview (MINI) [17] [18] ; 2) the Mini Mental Status Examination (MMSE) [19] ; 3) the Hamilton scale for anxiety (HAMA) [20] ; 4) the Hamilton scale for depression (HDRS) [21] ; and 5) a blood and urine drug test. Subjects over the age of 50 were given a cognitive assessment consisting of 1) the Mattis Dementia Rating Scale [22] and 2) an episodic memory test, the Grober and Buschke scale [23] [24] (Table 1).

2.2.4. Inclusion Criteria

Inclusion criteria for participating subjects were: 1) men and women between 20 and 70 years of age; 2) women not in their menstrual period at the time of testing; 3) a score on the HDRS scale below 10; 4) a score on the HAMA scale below 14; 5) visual acuity (corrected or uncorrected) of 20/20; 6) blood and urine drug screening negative for drugs and alcohol; 7) no ongoing medical treatment; 8) ability to understand the protocol and sign the consent form; 9) comprehension of the methodology; and 10) carrying health insurance.

The aim was to control mood, anxiety, and cognitive variables. The observers had “no mood or anxiety disorders” (simplified by using the term “thymia” meaning normal mood) with optimum cognitive functions for age 20 to 70 years. This explains the qualifier “supra-normal” or “hyper normal” used in describing this group of subjects.

Table 1. Subjects’ characteristics: “n”: number; “M/F”: sex ratio; “Age” (mean and standard deviation); “Level of Education”: (1) less than 12 years of schooling, (2) between 12 and 15 years of schooling, (3) more than 15 years of schooling; “HAMA”: Hamilton scale of anxiety (mean and standard deviation); “HDRS”: Hamilton scale of depression (mean and standard deviation); “MMSE” (mean and standard deviation); “MATTIS scale” (mean and standard deviation); “GROBER and BUSCHKE scale”: immediate recall; total recall 1; total recall 2; total recall 3; deferred total recall (mean and standard deviation).

2.2.5. Exclusion Criteria

Exclusion criteria were: 1) presence of neurological or psychological disorders; 2) belonging to the same family as another participating subject; 3) use of any psychoactive drugs for a prolonged period before their visit; 4) and pregnancy. Any subject who did not meet the inclusion criteria was excluded from the study. Any subject that met any exclusion criterion was not included in the study.

2.3. Material

The tests were set up using software developed for this study, the “Method of Analysis and Research of the Integration of Emotions” (M.A.R.I.E.). It was inspired by a previously used method [25] - [30] . M.A.R.I.E. examines the decision making of the subject when shown a photograph, or “image-stimulus” (ImSt), of a face, or “Face-Test” (FaTe), that expresses a “basic” or “intermediate” emotion.

For Ekman in 1992 [14] , the basic emotions are “anger”, “disgust”, “joy”, “fear”, “surprise”, and “sadness”, all of which are expressed on the Face-Tests of 1) a “Blond woman” (FaTe-Blond), 2) a “Brunette woman” (FaTe-Brunette), and 3) a “man” (FaTe-Man) (Figure 1). In this work “neutral” is considered both an “emotion” and a “non-emotion” due to the controversy among researchers [31] [32] .

We have created 6 Emotional-Series (Em-Se) “A-B” of intermediate emotions: 1) “neutrality-anger”; 2) “neutrality-disgust”; 3) “neutrality-joy”; 4) “neutrality- fear”; 5) “neutrality-surprise”; and 6) “neutrality-sadness” (Figure 2).

Each Em-Se was made up of the two basic images, numbers 1 and 19, and 17 intermediate ImSt created by blending or merging the two basic images using the computer technology of “morphing”. Photographs of the “Blonde woman”, “Brunette woman”, and “man” were lent by Paul EKMAN (Figure 1). The random order of the images-stimuli was the same for each subject: 6 Em-Se × 19 ImSt = 114 ImSt for each Face-Test. The basic images of each series were labelled and were the two last Image-Stimuli at the two ends of each Em-Se.

Figure 1. The 3 Face-Tests used in this research work: “FaTe-Blond”, “FaTe-Brunette”, and “FaTe-Man”.

Figure 2. Emotional-Serie “Neutral-Joy” created using the intermediate Image-Stimuli merging method from the basic images 1 and 19. The total of 19 Image-Stimuli comprises one Emotional-Serie.

Each observer was seated in the same quiet room, facing a laptop screen while each of the Image-Stimulus (10 cm × 18 cm) was shown. The Image-Stimulus was shown in the center of the upper half of the screen. On the lower half of the screen: 1) labeled underneath the basic image, to the left of the stimulus, was the name of the basic emotion “A” (with 5˚ angle), and 2) labeled underneath the basic image, to the right of the stimulus, was the name of the basic emotion “B” (with a 5˚ angle) (Figure 3). This was only for Face-Test Blond woman (FaTe-Blond) and Face-Test Brunette woman (FaTe-Brunette) and not for the Face-Test man (FaTe-Man). Image-stimuli of the Face-Test Blond were presented first, followed by images-stimuli of the FaTe-Brunette, and finally images-stimuli of the FaTe-Man.
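For illustration only, the fixed pseudo-random presentation order of the 6 × 19 = 114 image-stimuli per Face-Test could be generated once and reused for every subject, as in the following sketch; the seed value and data structures are our assumptions, not the original implementation (the series themselves were presented in a fixed order, see the Procedure section below):

```python
import random

SERIES = ["neutral-anger", "neutral-disgust", "neutral-joy",
          "neutral-fear", "neutral-surprise", "neutral-sadness"]

rng = random.Random(2000)   # hypothetical fixed seed: the same "random" order for every subject

presentation_order = []
for series in SERIES:                      # series order itself is fixed (see Procedure)
    stimuli = list(range(1, 20))           # image-stimuli Nos. 1..19 of this series
    rng.shuffle(stimuli)                   # shuffled once, then reused for all subjects
    presentation_order += [(series, n) for n in stimuli]

assert len(presentation_order) == 114      # 6 Em-Se x 19 ImSt per Face-Test
```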

2.4. Procedure

The subjects undertook a forced-choice binary test by pressing the left or right button of the computer mouse with their right hand, using the index finger for the left button and the middle finger for the right button. The ImSt remained on the display until the subject responded. The order in which the Em-Ses were displayed was: 1) “neutrality-anger”; 2) “neutrality-disgust”; 3) “neutrality-joy”; 4) “neutrality-fear”; 5) “neutrality-surprise”; and 6) “neutrality-sadness”. A pause of one minute was applied after each Em-Se, similar to the interval estimated for repolarization of the brain after an electrical stimulus. The duration of each Em-Se was 2 minutes on average.

2.5. Reproducibility

In order to verify the reliability of our procedure in which each observer only responded once to each stimulus, we carried out a “control test - retest” on a sample of 13 healthy subjects (6 women and 7 men) between the ages of 29 and 41 years (35 ± 5 years). These subjects took the test five times at 5-minute intervals. Inclusion and exclusion criteria were identical to those of the main sample.

Figure 3. Experimental situation: The subject must choose the image on the right or on the left according to which they identify as the central image. (Subject chooses to click the right or left side button of the mouse in making a choice.)

The individual standard for the consistency of the answers throughout the repetitions was set at a minimum of 4 identical answers out of 5 (that is, a concordance of at least 80%). The average level of concordance was 93.1%: 1) FaTe-Blond = 93%; 2) FaTe-Brunette = 92.5%; 3) FaTe-Man = 93.9%. The benchmark of 80% concordance was passed in 481/513 cases (it was 100% in 108 cases).

The 32 remaining cases were tested against the value of 80% by means of a one-tailed Student’s t-test: no result was significantly lower than this value. Therefore, it was confirmed that presenting each Emotional-Serie once would be of sufficient validity and reliability. The ratio of discordant responses was not significantly influenced by sex (F(1, 26) = 1.53; p = 0.218), age (F(6, 1,8) = 1; p = 0.405), or level of education (F(1, 2, 8) = 1.64; p = 0.2).
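A minimal sketch of how the per-stimulus concordance criterion (at least 4 identical answers out of 5, i.e. ≥ 80%) could be computed from the five repetitions; the function name and toy data are ours:

```python
from collections import Counter

def concordance(responses):
    """Proportion of repetitions agreeing with the modal answer for one stimulus."""
    most_common_count = Counter(responses).most_common(1)[0][1]
    return most_common_count / len(responses)

# Five binary answers ("0" = emotion A, "1" = emotion B) from one observer to one stimulus.
print(concordance([1, 1, 1, 0, 1]))   # 0.8 -> meets the 80% benchmark
print(concordance([1, 0, 1, 0, 1]))   # 0.6 -> discordant
```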

2.6. Pretest

All the subjects first completed a trial on a monitored task. They used the same decision-making skill in a test requiring a choice between two possible answers, but applied to a series of intermediate geometric images forming a continuum between square and circular outlines. The monitored task did not involve the perception of emotion and served to familiarize subjects with the procedure, thus limiting any “practice effect” on the emotional measurements.

3. Statistics

3.1. Data Analysis

We used the software SPSS v.20 (SYSTAT Software, Inc., SPSS.com). The dependent variable was the “Emotional-Visual-Acuity” (EmViAc), in other words the “percentage of responses “B”” (%RespB). The results were analyzed using analyses of variance (ANOVAs and MANOVAs), for which 1) the “inter-subjects” factor was the age group (n = 7) and 2) the “intra-subjects” factors were Face-Test (n = 3), emotional-series (n = 6), and packets (n = 9). The alpha risk was fixed at 5%. The multivariate test Wilks’ lambda was used, and subsequent analyses were performed with Bonferroni correction at a level of significance of 0.001. The hypothesis of a linear relationship between 2 variables was assessed with Pearson’s R.
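The analyses were run in SPSS, but the core computations can be sketched in Python with SciPy for readers who wish to reproduce the logic (a simplified one-way ANOVA on age group and a Pearson correlation on example data; this is not the authors' SPSS syntax):

```python
import numpy as np
from scipy import stats

# eva[g] = Emotional-Visual-Acuity (%RespB) of the observers in age group g (toy data).
eva = {g: np.random.default_rng(g).normal(60, 5, 30) for g in range(7)}

# One-way ANOVA with age group as the inter-subjects factor.
f_stat, p_value = stats.f_oneway(*eva.values())
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")

# Linear relationship between %PixB and %RespB assessed with Pearson's r.
pix_b  = np.array([0, 10, 28.3, 41, 50, 59, 71.7, 90, 100])   # quantum per packet
resp_b = np.array([0, 0, 7, 46, 75, 93, 97, 98, 99])          # population EVA from the worked example below
r, p = stats.pearsonr(pix_b, resp_b)
print(f"r = {r:.3f}, p = {p:.4f}")
```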

3.2. Definition of Variables

3.2.1. The “Face-Test” (FaTe)

This is the generic term for the photograph of a face that bears a civil identity. It will be called: 1) “FaTe-Blond” for the Blonde woman’s test face; 2) “FaTe-Brunette” for the face-test of the Brunette woman and 3) “FaTe-Man” for the test face of the man.

3.2.2. “Basic Emotions”

Our article gives credit to the previous work of Paul EKMAN. He identified 6 fundamental discrete emotions called “basic emotions”, which are “anger”, “disgust”, “joy”, “fear”, “surprise”, and “sadness” [14] .

3.2.3. “Image-Stimulus” (ImSt)

We define as “Image-Stimulus”: 1) the image of the basic emotion “A”; 2) the image of the basic emotion “B”; and 3) the intermediate images resulting from a mixture of percentages of the pixels constituting emotions “A” (%PixA) and “B” (%PixB). The latter is a computed image containing inversely proportional percentages of pixels from the canonical images of the “A” and “B” emotions.

3.2.4. The “Emotional-Series” (EmSe)

Each “Emotional-Series” was specified by 1) the increase in the gradation of emotion “B”, %PixB: 0%, 10%, 20%, 30%, 35%, 38%, 41%, 44%, 47%, 50%, 53%, 56%, 59%, 62%, 65%, 70%, 80%, 90%, and 100%, and 2) the corresponding decrease in %PixA: 100%, 90%, 80%, 70%, 65%, 62%, 59%, 56%, 53%, 50%, 47%, 44%, 41%, 38%, 35%, 30%, 20%, 10%, and 0%. These values correspond, respectively, to image-stimuli Nos. 1 through 19.

3.2.5. The “Packets”

We define a “Packet” as the association of one or more image-stimuli. The nine following packets were considered. Packets # 1, 2, 8, and 9 corresponded to the measures for image-stimuli Nos. 1, 2, 18, and 19, respectively. Packets # 3, 4, 5, 6, and 7 corresponded to the average of the measures for image-stimuli Nos. 3, 4, and 5; Nos. 6, 7, and 8; Nos. 9, 10, and 11; Nos. 12, 13, and 14; and Nos. 15, 16, and 17, respectively. The grouping of image-stimuli into packets is explained by the need to create statistical variance in the binary measurements (“0” and “1”) and to allow the use of parametric tests.

3.2.6. The “Quantum”

We define the “quantum” as the percentage of pixels from the basic image “B” (%PixB) present in each image-stimulus or in each packet. From the observer’s left to right, the saturation in quantum for the packets was 0%, 10%, 28.3%, 41%, 50%, 59%, 71.7%, 90%, and 100%, respectively, for packets # 1 through 9. For the individual image-stimuli, the saturation in quantum was 0%; 10%; 20%; 30%; 35%; 38%; 41%; 44%; 47%; 50%; 53%; 56%; 59%; 62%; 65%; 70%; 80%; 90%; and 100%, respectively, for image-stimuli Nos. 1 through 19.
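The packet grouping of the previous subsection and the quantum values above can be written down explicitly; a minimal sketch (the %PixB gradation is the one given in the Emotional-Series definition):

```python
# %PixB of image-stimuli Nos. 1..19 (see "The Emotional-Series").
PIX_B = [0, 10, 20, 30, 35, 38, 41, 44, 47, 50, 53, 56, 59, 62, 65, 70, 80, 90, 100]

# Packets #1..#9 as lists of image-stimulus numbers (1-based).
PACKETS = {1: [1], 2: [2], 3: [3, 4, 5], 4: [6, 7, 8], 5: [9, 10, 11],
           6: [12, 13, 14], 7: [15, 16, 17], 8: [18], 9: [19]}

# Quantum of a packet = mean %PixB of its image-stimuli.
quantum = {k: sum(PIX_B[n - 1] for n in nums) / len(nums) for k, nums in PACKETS.items()}
print(quantum)   # {1: 0.0, 2: 10.0, 3: 28.33.., 4: 41.0, 5: 50.0, 6: 59.0, 7: 71.66.., 8: 90.0, 9: 100.0}
```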

3.2.7. The “Visual-Facial-Emotions-Recognition” (ViFaEmRe)

We define “Visual-Facial-Emotions-Recognition” as the visual “perception” or “non-perception” (failure to perceive) of a facial emotion. This is a binary measurement: “no” or “yes”, coded “0” or “1”. Implied in this, as one can see, is the cognitive function of identifying the emotion and recognizing its presence at various intensities of expression.

3.2.8. The “Emotional-Visual-Acuity” (EmViAc)

The “Emotional-Visual-Acuity” is a percentage measure of the response “1” for the emotion “B” given by an individual or by a sample of a population (%RespB). It is the ability to identify an emotion to varying degrees: from 0% to 100% for a sample of individuals, and from 0 to 1 for a single individual. In both cases it is a question of a “human-transmitter” and a “human-receiver”.

This ability to recognize emotions is present (except for the 1% discovered by us, mentioned above) in every individual, regardless of 1) the %PixB present in the stimulus-image, 2) the Face-Test, and 3) the emotion it expresses. EmViAc is measurable regardless of where the stimulus of the human face comes from, whether a real live human being or an image from 1) a cinematographic film, 2) a morphing-type computer construction, 3) an avatar presented in a virtual universe [33] [34] [35] , or 4) the face of a robot in our present technological reality [9] , which opens the question of the human-machine interface [36] . Therefore, when a human who is present, with a beating heart, perceives the emotions of a human who is not present or has no “beating heart”, we will talk about “Emotional-Visual-Acuity”. It goes without saying that this model is an experimental approximation of the ego-function in living interactions between humans with beating hearts. However, let us keep in mind that the interactions 1) “non-human transmitter” vs “human-receiver” [37] and 2) “human-transmitter” vs “non-human receiver” will grow exponentially in the near future [38] [39] .

“Deep-Fake” technology is already here, wherein the imposter image of a real person can imitate the original almost 100% accurately, until the robotic mimicry is exposed when the imposter image is unable to answer the most secret security questions, exposing the fake image as an imitation and not real. This rogue technology is now being used by scammers to exploit many unsuspecting elderly citizens and get them to part with their money.

3.2.9. The “Visual-Emotional-Feeling” (ViEmFe)

“Visual-Emotional-Feeling” (ViEmFe) is defined as the dynamic and interactive ability of two or more real persons with a “beating heart” in face-to-face communication, when they both visually recognize each other’s emotional expressions and share their “feeling” [40] , as “human-transmitter” vs “human-receiver” [41] . This face-to-face physical presence and proximity allows for a mirrored communication through “emotional mimicry”. Facial emotional mimicry has been conceptualized as an automatic, reflex-like process: the observer’s facial expression matches the observed facial expression. In real-life situations this is possible because both interacting humans have mirror neurons in the premotor cortex as well as access to their own visceral sensations. We will identify this concept under the term “feeling”. We will use “emotionality” and “feeling” interchangeably [42] [43] . For most authors, “emotionality” as a variable connotes several meanings: 1) emotional intelligence; 2) well-being; 3) self-control; 4) emotionality; and 5) sociability. These components are recognized as useful assets in the professional world [44] .

By analogy with hearing, when speech is heard through a loudspeaker in the absence of the interlocutor, the listener does not feel the Auditory Verbal Emotionality or the Auditory Instrumental Emotionality [45] [46] . The prosody of the voice is processed in the human black box in the “machine-transmitter” vs “human-receiver” configuration, as evidenced by the work of Kexin et al. in 2019 [47] . We could also speak of 1) “Auditory Verbal Emotionality”, 2) “Auditory Instrumental Emotionality”, and 3) “Auditory Musical Emotionality”. Likewise, in our work, “Emotional-Visual-Acuity” allows an indirect and partial measurement of the “Visual-Emotional-Feeling”. This is comparable to decibels in the prosody of a voice, its emotional content, and the sensitivity of the listener [48] [49] [50] .

3.2.10. “Visual-Emotional-Feeling-Sensitivity” (ViEmFeSe)

Since in our work the “facial emotional transmitter” is an image and the “facial emotional receptor” a human with a beating heart, we will talk about “Emotional-Visual-Acuity”. On the other hand, if our work concerned a real human “facial emotional transmitter” and a real human “facial emotional receptor” face to face, both with beating hearts, we would have spoken of “Visual-Emotional-Feeling-Sensitivity” (ViEmFeSe). This topic can be studied in the future by exploring the developmental line of these two ego-functions (à la Anna Freud) when the methodology becomes available. Hopefully, it will be possible to measure the emotional attunement between two individuals, which is the basic requisite for a good therapeutic alliance in conducting psychotherapy and also in other meaningful relationships.

3.2.11. “Emotional-Quotient”

The emotional quotient is the result obtained by dividing one quantity by another. This measurement is a comparison between an individual or a sample of individuals and a larger reference sample. Each of these measurements is represented by the area under the curve of 1) the individual, 2) the individual sample or 3) the reference sample. It is the ratio between these measures that defines the emotional quotient.

An accurate measurement of the emotional quotient requires measuring the area under the Emotional-Fingerprint curve of each observer. The measurement of the area under the curve is extremely difficult due to the great variability of the curves. Calculating this measure would take too long, and the mathematical model would be imprecise due to the interindividual variability that we call idiosyncrasy. This is why we propose a quick approximation of the area under the curve. For this, the symbolic formula will be:

S = Σ_{k=1..9} (100 − q_k) × EVA_k,

where q_k is the quantum (%PixB) of packet # k and EVA_k is the Emotional-Visual-Acuity (%RespB) measured for packet # k. This weighted sum over the 9 packets will constitute the measurement used for the emotional quotient.

We measured the emotional quotient between the individual represented by the red graph and the reference population in green in Figure 4.

Figure 4. Comparison of the Individual-Gun-Trigger-Graph of an observer with the Populational-GunTrGr of the reference population. The observer (red line) presents an Individual-Gun-Trigger-Graph with a straight line, while the Populational-GunTrGr (green line) presents a sigmoid curve whose right plateau does not reach 100%, unlike the observer’s.

For the individual observer (red curve):

S_red = (100 − 0)×0 + (100 − 10)×0 + (100 − 28.3)×33 + (100 − 41)×67 + (100 − 50)×100 + (100 − 59)×100 + (100 − 71.7)×100 + (100 − 90)×100 + (100 − 100)×100

= 100×0 + 90×0 + 71.7×33 + 59×67 + 50×100 + 41×100 + 28.3×100 + 10×100 + 0×100

= 19249.

For the reference population (green curve):

S_green = (100 − 0)×0 + (100 − 10)×0 + (100 − 28.3)×7 + (100 − 41)×46 + (100 − 50)×75 + (100 − 59)×93 + (100 − 71.7)×97 + (100 − 90)×98 + (100 − 100)×99

= 100×0 + 90×0 + 71.7×7 + 59×46 + 50×75 + 41×93 + 28.3×97 + 10×98 + 0×99

= 14503.

The ratio and therefore the emotional quotient of the individual (red graph) is 19249/14503 = 1.32, compared to the reference population. Then, if we multiply this ratio by 100 we obtain a figure which is more telling, that is 132%. This individual is 32% more efficient than the reference population. This emotional quotient depends on the reference population. A comparison of the emotional quotient for the 2 individuals in Figure 5 is impossible insofar as we have no reference population.
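The worked example above can be reproduced directly from the packet data; a minimal sketch (the EVA values are those read from the red and green curves of Figure 4, as used in the text):

```python
QUANTUM   = [0, 10, 28.3, 41, 50, 59, 71.7, 90, 100]   # %PixB of packets #1..#9
EVA_RED   = [0, 0, 33, 67, 100, 100, 100, 100, 100]    # individual observer (red curve)
EVA_GREEN = [0, 0, 7, 46, 75, 93, 97, 98, 99]          # reference population (green curve)

def weighted_sum(eva):
    """Quick approximation of the area under the curve: sum of (100 - quantum) * EVA over the 9 packets."""
    return sum((100 - q) * e for q, e in zip(QUANTUM, eva))

s_red, s_green = weighted_sum(EVA_RED), weighted_sum(EVA_GREEN)
print(s_red, s_green)              # 19249.1 and 14504.0 (the text reports 19249 and 14503)
print(f"{s_red / s_green:.2%}")    # about 132%: the individual's emotional quotient vs the reference
```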

We think we have approximated the measure of the emotional quotient, which connects a numerator and a denominator. However, these two measures are linked by the combination [Ethnic-Sample-Population × identity of the Face-Test × Ethnic-Face-Test × Emotions × environment × other]. This reflection leads us to consider the existence of determinism in the visual recognition of facial emotions.

Figure 5. Comparison of the Individual-Gun-Trigger-Graphs of 2 observers. Observer “A” presents an Individual-Gun-Trigger-Graph with a straight line and its associated slope. Observer “B” shows an Individual-Gun-Trigger-Graph with a stepped-pyramid type broken line.

Nevertheless, by taking into consideration these influencing variables, we can now measure this emotional dimension, which allows us to appraise this as one of the limbic brain functions. Quantifiable and objective parameters seem possible on the physiological and pathological level.

3.3. Definition of Graphs

The general graphic architecture is as follows: 1) the Emotional-Visual-Acuity measurements, i.e., the percentages of responses “B” (%RespB), will always be on the ordinate; 2) on the abscissa we will have either a) the measurements in %PixB of the Images-Stimuli or b) the average %PixB present in each of the 9 packets.

3.3.1. Explanatory Mental Image of the Concepts of “Emotional-Decision-Making” and “Emotional-Fingerprint”

Given the complexity of the concepts of 1) “Emotional-Decision-Making” and the graph attached to it, the “Emotional-Decision-Making Graph” (the notion of “decision-making” is the subject of increasingly intensive transdisciplinary work in neuroscience, including at the visual level [51] [52] [53] ), and 2) the “Emotional-Fingerprint” (EmFi) and the related “Emotional-Fingerprint-Graph” (EmFiGr), it seems appropriate to take inspiration from the audiogram to explain the similarities between the paradigms.

These two types of emotional graphs are comparable to a graphical representation of hearing through the perception of a sound frequency (on the abscissa) at a given intensity in decibels (on the ordinate). The subject is in a soundproof room. A sound characterized by a given frequency and a given intensity is administered to each ear. For a given frequency, the audiologist varies the intensity of the sound from 0 decibels until the patient indicates that he hears the sound signal by raising his hand. The audiologist reports, on an orthonormal coordinate system, the sound frequencies on the abscissa and the respective intensities heard on the ordinate. For a given frequency (x) and the respective intensity heard (y), this results in a point with coordinates (x, y). Joining all the points produces a broken line called an audiogram.

As far as we are concerned, the intensity in decibels is replaced by an intensity measured in %PixB. It ranges from 0% to 100% of pixels “B” coming from image “B” (ImSt No. 19 = Packet # 9). Remember that for each Image-Stimulus (ImSt), this percentage of pixels “B” (%PixB) is “mixed” with the percentage of pixels of image “A” (%PixA), which displays emotional neutrality. The %PixA and %PixB change inversely: the %PixB increases from 0% to 100%, while concomitantly the %PixA decreases from 100% to 0%. For each Image-Stimulus the sum of %PixA and %PixB always equals 100%. The progression of the sound signal, which runs from 0 to 120 decibels, is analogous to the progression of “%RespB”, which ranges from 0% to 100% in the Emotional-Serie presented to the observer, who indicates the Image-Stimuli on which he identifies the emotion “B”. The coordinates (x, y) associate the percentage of pixels “B” for each of the 9 packets in “x” with the average percentage of observers who recognize the emotion “B” (%RespB) for each packet in “y”. These points are transcribed on an orthonormal coordinate system devoted to a single Emotional-Serie. The connection of the points constitutes a broken line which we will call the “Emotional-Decision-Making-Graph” (EmDeMaGr) of a given Emotional-Serie. This is analogous to the audiogram. In addition, for the 6 emotional-series, we will have respectively 6 Emotional-Decision-Making-Graphs (EmDeMaGr), (see Figures 6-8). We could put on the abscissa the percentage of each of the 19 Images-Stimuli (%PixB) and on the ordinate the percentages of observers who recognize this emotion “B” (%RespB). In this case, however, we lose the statistical variance allowed by grouping the 19 Images-Stimuli (ImSt) into 9 packets.
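By way of illustration, the coordinates of one Emotional-Decision-Making-Graph can be plotted directly from the packet data (a sketch with matplotlib; the %RespB values are the reference-population figures used in the worked example of Section 3.2.11, not new data):

```python
import matplotlib.pyplot as plt

quantum = [0, 10, 28.3, 41, 50, 59, 71.7, 90, 100]   # x: %PixB (quantum) per packet
resp_b  = [0, 0, 7, 46, 75, 93, 97, 98, 99]          # y: %RespB per packet (example population)

plt.plot(quantum, resp_b, marker="o", label="Emotional-Decision-Making-Graph")
plt.xlabel("%PixB (quantum of the packet)")
plt.ylabel("%RespB (Emotional-Visual-Acuity)")
plt.legend()
plt.show()
```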

To be more precise in our argument, imagine that during an audiogram the sound frequencies are not chosen at random, but correspond to the “musical scale”: “do”, “re”, “mi”, “fa”, “so”, “la”, “ti”, and “do”. Our paradigm in M.A.R.I.E. replaces the notes of the musical scale with the “emotional range” of the 6 basic emotions in the sense of Ekman: “anger”, “disgust”, “joy”, “fear”, “surprise”, and “sadness”. The added “neutral” expression may be viewed as an emotion (or the lack of one) itself, making it a seventh basic emotion, almost like an “octave”, though without the graded mathematical proportionality of frequencies seen in the musical octave.

Figure 6. Populational-Gun-Trigger-Graph and %PixB: comparison between emotional-decision-making %RespB and the 9 packets, for all observers combined, all emotional-series combined, and all age groups combined.

Figure 7. The 3 Emotional-Decision-Making-Graphs or Gun-Trigger-Emotional-Graphs of FaTe-Blond, FaTe-Brunette, and FaTe-Man are statistically different.

Figure 8. Emotional-Decision-Making-Graph or Gun-Trigger-Graph for all 3 FaTes, for each of the 6 Emotional Series, for all Age Groups: Po-GunTrGr#∑*6/6*∑*∑.

For each of the 6 Emotional-Decision-Making-Graphs (EmDeMaGr), we will calculate the average of the Emotional-Visual-Acuity (EmViAc) measurements of all 9 packets of that EmDeMaGr. Each of these 6 averages will first be transcribed on an orthonormal graph, and then all the points will be connected. This will result in a broken line that measures and visualizes the Emotional-Visual-Acuity for a single observer or a population sample. It will be identified under the term “Emotional-Fingerprint” (EmFi), and the attached graph under “Emotional-Fingerprint-Graph” (EmFiGr). We will have 6 averages and therefore a UNIQUE Emotional-Fingerprint for each 1) Face-Test, 2) individual, 3) age group, and 4) population sample (see Figure 9).
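A minimal sketch of how the Emotional-Fingerprint is derived from the six Emotional-Decision-Making-Graphs; the %RespB values below are illustrative placeholders, not the study's data:

```python
# %RespB of the 9 packets for each of the 6 Emotional-Series of one Face-Test (illustrative values).
em_de_ma = {
    "neutral-anger":    [0, 0, 5, 30, 55, 80, 92, 97, 99],
    "neutral-disgust":  [0, 0, 6, 38, 66, 88, 95, 98, 99],
    "neutral-joy":      [0, 1, 15, 60, 85, 96, 99, 100, 100],
    "neutral-fear":     [0, 0, 10, 50, 78, 93, 97, 99, 99],
    "neutral-surprise": [0, 0, 7, 40, 70, 90, 96, 98, 99],
    "neutral-sadness":  [0, 0, 9, 45, 74, 92, 97, 98, 99],
}

# Emotional-Fingerprint: one average EVA per Emotional-Series (6 values -> the "Key-Graph" ordinates).
fingerprint = {series: sum(packets) / len(packets) for series, packets in em_de_ma.items()}
for series, mean_eva in fingerprint.items():
    print(f"{series}: {mean_eva:.1f}")
```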

Just as fingerprints and snowflakes are infinitely unique, so is the Emotional-Fingerprint of a Face-Test, which can take an infinite number of different forms in nature (across observers or population samples).

These results will constitute the “y” ordinates of the upper graph (%RespB). The same applies to the “B” pixels constituting the stimulus images, which will also be averaged for each packet. They constitute a series of 9 numbers placed on the abscissa axis “x” (%PixB). In this way, we obtain an orthonormal frame with 6 sigmoid curves corresponding to the 6 emotional series. This is the Emotional-Decision-Making-Graph. The average of the 9 “y” ordinate values (%RespB) of each of the 6 sigmoid curves of the Emotional-Decision-Making-Graph is transcribed on the ordinate axis of the lower graph (%PixB on the abscissa). The color code between each sigmoid and the respective point is preserved. The same operation for the 6 emotional series will constitute the “Emotional-Fingerprint” of the population sample for the “Blonde” Face-Test and the “neutral-joy” emotional series. An identical protocol will be used for all the Emotional-Series and for the 3 Face-Tests.

Figure 9. For the Face-Test Blond we choose “neutral”, which we associate with “joy”. The result is an emotional-series of 19 stimulus images. The binary responses “0” or “1” from each observer are grouped by packet according to the color code. The absolute sum of responses per packet is transformed into a relative number by dividing it by the total number of observers.

3.3.2. “Gun-Trigger-Graph”

Research on “decision-making” is the subject of particular interest for authors such as Bechara A., Damasio H., and Damasio A.R. [54] . We define the Emotional-Decision-Making-Graph as the transcription, on an orthonormal graph, of the 9 measures of Emotional-Visual-Acuity relating to the 9 packets of an Emotional-Serie. We will have one Emotional-Decision-Making Graph for each of the 6 basic emotions expressed by a FaTe. Further identification of the origin of an Emotional-Decision-Making-Graph is provided by 4 consecutive fields: 1) the Face-Test (FaTe-Blond, FaTe-Brunette, FaTe-Man), 2) the emotional-series, 3) the Age-Group, given by the two numbers bounding it, and 4) the number of observers considered.

The graphic is comparable to a gun trigger. For ease of reading and understanding of this work, the Emotional-Decision-Making-Graph can therefore be likened to a “Gun-Trigger” and called the “Gun-Trigger-Graph”. The sensitivity of this trigger is greater 1) the closer the sigmoid is to the “y” axis (and vice versa) and 2) the more vertical the axis of the sigmoid.

Therefore, the symbolic expression of this concept will be “Gun-Trigger-Graph” or “GunTrGr#Face-Test*Emotional-Series*Age-Group*Number-of-observers”.

The measurement on a single individual will be noted by the prefix “Id” for “individual”. The measurement on a sample of 30 individuals or more, which statistically reflects a population, will be denoted by the prefix “Po” for “population”. However, the total number of individuals and the age range of the sample must be known. A measurement on a sample of between 2 and 29 individuals will carry no prefix. All combinations are possible (a small label-building sketch follows the examples below). As examples, we will have:

1) “Po-GunTrGr#Blond*NJ*20-30*62” for 62 observers aged between 20 and 30 who identify the emotional-series “neutral-joy” on the Blonde’s face.

2) “GunTrGr#Brunette*NS*45-50*5” for 5 observers aged between 45 and 50 who identify the emotional-series “neutral-sadness” on the face of the Brunette.

3) “Po-GunTrGr#Blond*∑*50-55*∑” for all the observers aged between 50 and 55 who identify all the emotional series on the Blonde’s face.

4) “Po-GunTrGr#∑*∑*∑*∑” for the entire population sample identifying all the emotional-series on all FaTes (see Figure 6).

5) “Po-GunTrGr#3/3*∑*∑*∑” for the entire population sample identifying all the emotional-series on each of the 3 FaTes; in this case the curves of each FaTe are individualized on the same graph (see Figure 7).

6) “Po-GunTrGr#∑*6/6*∑*∑” for the entire population sample identifying each of the 6 emotional series on all the FaTes; in this case the 6 curves are individualized on the same graph (see Figure 8).

7) “Po-GunTrGr#Blond*6/6*∑*∑” for the entire population sample identifying each of the 6 emotional series on the FaTe-Blond; in this case the 6 curves are individualized on the same graph (see Figure 10(a); idem for the FaTe-Brunette, Figure 10(b), and the FaTe-Man, Figure 10(c)).

8) Etc.
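The symbolic notation lends itself to the small label-building helper referenced above; a minimal sketch in which the function name, defaults, and prefix rule for "∑" are our assumptions:

```python
def gun_trigger_label(face="∑", series="∑", age_group="∑", n_observers="∑", prefix=None):
    """Build a 'Gun-Trigger-Graph' label such as 'Po-GunTrGr#Blond*NJ*20-30*62'."""
    if prefix is None:
        # "Id" for one observer, "Po" for 30 or more, no prefix for 2-29 observers;
        # "∑" (whole sample) is treated as a population here (an assumption).
        if n_observers == 1:
            prefix = "Id-"
        elif (isinstance(n_observers, int) and n_observers >= 30) or n_observers == "∑":
            prefix = "Po-"
        else:
            prefix = ""
    return f"{prefix}GunTrGr#{face}*{series}*{age_group}*{n_observers}"

print(gun_trigger_label("Blond", "NJ", "20-30", 62))    # Po-GunTrGr#Blond*NJ*20-30*62
print(gun_trigger_label("Brunette", "NS", "45-50", 5))  # GunTrGr#Brunette*NS*45-50*5
```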


Figure 10. (a) Gun-Trigger-Graph for the FaTe-Blond, for each of the 6 Emotional Series and all Age Groups: Po-GunTrGr#Blond*6/6*∑*∑; (b) Gun-Trigger-Graph for the FaTe-Brunette, for each of the 6 Emotional Series and all Age Groups: Po-GunTrGr#Brunette*6/6*∑*∑; (c) “Emotional-Decision-Making Graph” (EmDeMaGr) or “Gun-Trigger-Graph” (GunTrGr) of each of the 6 emotional series, for the entire population sample and the Man Face-Test: Po-GunTrGr#Man*6/6*∑*∑.

3.3.3. “Key-Graph” for Emotional-Fingerprint-Graph

We define the “Emotional-Fingerprint-Graph” as the distribution, on the same graph, of the single average of the 9 measures of Emotional-Visual-Acuity relating to the 9 packets of an Emotional-Serie, for each of the 6 basic emotions of one FaTe as seen on the Emotional-Decision-Making-Graph. For the sake of simplification, we will speak of the “Emotional-Fingerprint” and, for its graphic representation, of the “Emotional-Fingerprint-Graph”. In this way, we take up the concept of the emotional fingerprint of an individual but with a numerical and graphic representation [48] . Therefore, we identify one Emotional-Fingerprint-Graph for each of the 3 FaTes, for each observer, for each age group, and for each population sample. For ease of communication and to mitigate the complexity of this article, we will most often use the simpler term “Key-Graph”. Indeed, the graphic is comparable to the teeth of a mechanical lock key. We will therefore use “Key-Graph” and “Emotional-Fingerprint-Graph” interchangeably. The symbolic expression of this concept will be “KeyGr#Face-Test*Emotional-Series*Age-Group*Number-of-observers”.

The measurement on a single individual will be noted by the prefix “Id”. The measurement for a sample of 30 individuals or more, which statistically reflects a population, will be denoted by the prefix “Po”. However, the total number of individuals and the age range of the sample must be known. A measurement on a sample of between 2 and 29 individuals will carry no prefix. All combinations are possible. As examples, we will have:

1) “Po-KeyGr#Blond*NJ*20-30*62” for the Emotional-Fingerprint of 62 observers aged between 20 and 30 who identify the emotional-series “neutral-joy” on the Blonde’s face.

2) “KeyGr#Man*NS*45-50*5” for Emotional Fingerprints of 5 observers aged between 45 and 50 who identify the emotional-series “neutral-sadness” on the face of the man.

3) Etc.

3.3.4. Syn-Diachronic Graphs of the Evolution of Emotional-Visual-Acuity with advancing Age

In our project, we ideally would have followed the same 30 observers from the age of 20 to 70 years old, which would have yielded “DIACHRONIC” measures. The duration and the cost would make such a project impossible to realize. For this reason, we carried out simultaneous, or “SYNCHRONIC”, measurements on 30 observers in each of 7 age groups. However, the compilation of the measurements and their synthesis on a chronological plane does not allow us to speak of either synchronic or diachronic measures; these terms are contradictory. Consequently, our study is not diachronic, but a succession of synchronic measurements which indirectly reflect diachronic measurements. We propose to call it “SYN-DIACHRONIC”: in other words, the “syn-diachronic” Emotional-Visual-Acuity from age 20 to 70 years of a population sample from the North of France. For the sake of brevity and clarity, we will summarize this concept with the terms “during-life” or “lifespan”.

4. Results of Statistical Analysis

4.1. Demographic Variables

4.1.1. Sex

The average age of the population sample is 49 +/− 15 years. The measurements of Emotional-Visual-Acuity of men and women observers are identical (F(1, 202) = 0.638; p = 0.425). For all 3 FaTes: 1) women 62.37% +/− 4%, 2) men 61.89% +/− 4.4%. For each of the 3 FaTes taken separately, men and women observers recognize facial emotions with the same Emotional-Visual-Acuity: 1) for the FaTe-Blond (F(1, 202) = 1.559; p = 0.213), 2) for the FaTe-Brunette (F(1, 202) = 0.236; p = 0.628), and 3) for the FaTe-Man (F(1, 202) = 1.146; p = 0.286). Most authors have similar results [55] [56] . This is not the case for Beck et al. in 2022 [57] . Our results are in all respects identical to those reported in the literature review carried out by Forni-Santos and Osorio in 2015 [58] .

4.1.2. Face Test

The sample population does not recognize the 3 FaTes in the same way (F(2, 609) = 0.85; p = 0.000): 1) FaTe-Blond 64.8% +/− 5%, 2) FaTe-Brunette 58.67% +/− 5%, and 3) FaTe-Man 62.90% +/− 4.6%. For Connolly et al., supramodal emotion recognition capacity and facial identity recognition are two related but independent capacities [59] .

4.1.3. Level of Education

For the entire sample and for the 3 FaTes, the educational level does not influence the Emotional-Visual-Acuity (EmViAc) (F(1, 202) = 6.327; p = 0.555). Similarly, the educational level has no impact on the Emotional-Visual-Acuity (EmViAc) measure for each of the 3 FaTes: 1) the FaTe-Blond (F(1, 202) = 0.319; p = 0.573); 2) the FaTe-Brunette (F(1, 202) = 0.125; p = 0.724); and 3) the FaTe-Man (F(1, 102) = 1.577; p = 0.211). Connolly et al. confirmed these observations in 2019 [60] .

4.1.4. Age Groups

The Emotional-Visual-Acuity is not perceived with the same intensity by the 7 age groups (F(6, 197) = 2.154; p = 0.049) (see Figure 11).

The Emotional-Visual-Acuity gradually improves from age group No. 1 (20 to 30 years old), 59.9%, to age group No. 5 (55 to 60 years old), 63%. The growth is 3.3%. It then decreases slightly from age group No. 4 (51 to 55 years old), 63.2%, to the last age group (65 to 70 years old), 62.3%. The decrease is 1.1%. Given experimental error, this decrease may not be meaningful. This suggests a plateau effect of Emotional-Visual-Acuity from the age of 51 to 55. We performed a linear regression to test this hypothesis from age group No. 1 to No. 4 only (see Figure 12(a)); the slope (leading coefficient) of this regression line is positive, 0.977, and the R² is 0.958. We then performed a linear regression from age group No. 4 to No. 7 only (see Figure 12(b)); the slope of this regression line is negative, −0.967, and the R² is 0.9255. (A minimal regression sketch is given at the end of this subsection.) Most authors have similar results [61] [62] [63] [64] . These results of ours confirm the conclusions of certain authors such as Connolly et al. in 2021, who believe that the faculty for facial expression and the ability to recognize facial identity improve with age [65] .

Figure 11. Measurement of the Emotional-Visual-Acuity for each age group from 20 years to 70 years. The means of each age group are indicated on the Y-axis. The standard deviations for each mean are presented.

Figure 12. (a) Considering only the first 5 age groups shows a significant improvement in the Emotional-Visual-Acuity with increasing age. (b) Considering only the last 3 age groups shows a significant decrease in the Emotional-Visual-Acuity with increasing age.

However, the decrease in Emotional-Visual-Acuity of 1.1% between age groups No. 4 and No. 7 should be considered, insofar as it could signify the beginning of a neuropsychological or neurocognitive decline due to cerebral aging. Remember that this population is considered supernormal on the anxiety, thymic, and cognitive levels. This 1.1% loss could therefore be greater if the population were merely “normal” on these three dimensions, and perhaps even greater in the case of early-onset Alzheimer’s disease or frontotemporal dementia [66] .

There seems to be a consistency in that the first 4 age groups all increase and the last 3 decrease. It is possible to think of the maturation of a neuropsychological functionality such as the visual recognition of facial emotions, then of its stability (becoming static or attaining a plateau), and later of the onset of its dulling from the age of 53, which is the median of the 51 - 55 age group. In general, our results confirm those of Phillips et al. in 2002 [67] . The measured 3.3% increase could reflect an organic and/or functional construction process attributable to neuroplasticity. In addition, the measured 1.1% decline could reflect an organic and/or functional deconstruction or neurodegenerative process. This hypothesis deserves further work.
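For the piecewise linear regressions referenced above, a minimal SciPy sketch; only the means of age groups No. 1 and No. 4 are taken from the text, while those of groups No. 2 and No. 3 are illustrative placeholders:

```python
from scipy import stats

groups   = [1, 2, 3, 4]                  # age groups No. 1 to No. 4
mean_eva = [59.9, 61.0, 62.2, 63.2]      # groups 1 and 4 from the text; 2 and 3 are placeholders

slope, intercept, r, p, stderr = stats.linregress(groups, mean_eva)
print(f"slope = {slope:.3f}, R^2 = {r**2:.3f}")   # positive slope: EVA improves up to group No. 4
```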

4.2. At the Scale of the Population Sample Gun-Trigger-Emotional or Emotional-Decision-Making

4.2.1. At the Scale of the Population Gun-Trigger-Emotional-Graph for All Face-Tests, all Emotional Series, all Age Groups, and All Observers: Po-GunTrGr #∑*∑*∑*∑ and %PixB

For all 204 observers, 6 Emotional-Series and 3 Face-Tests, the Emotional- Visual-Acuity measurements associated with the 9 packets are significantly different (F (8,153) = 238.693; p = 0.000). The Emotional-Visual-Acuity is correlated with increased %PixB in the image-stimuli for each packet. The correlation is strong with a Pearson’s r of 0.887 (p = 0.000). Our results confirm the work of Ferreira et al. in 2021 [68] .

The general appearance of Po-GunTrGr#∑*∑*∑*∑ is a sigmoid curve for the Emotional-Visual-Acuity measurements (%RespB) and, for %PixB, a broken line whose general appearance is a diagonal (see Figure 6). The sigmoid curve and the diagonal intersect once at the lower left. We will name this point of intersection the “lower intercept”. We do not observe any “upper intercept”. Indeed, we expected 100% of the sample population to recognize Image-Stimulus No. 19 of packet # 9; that was not the case. Moreover, this population sample presents 99% Emotional-Visual-Acuity for packets # 7 and # 8. This is also the case for packet # 9, which represents only Image-Stimulus No. 19, the basic emotion at the origin of the emotional-series, made up of 100 %PixB; it is a basic emotion of our reality. This Image-Stimulus No. 19 appears in the last position of each of the 6 Emotional-Series, which should produce a learning phenomenon promoting its recognition in each Emotional-Serie. That does not seem to be the case. With this finding, Image-Stimulus No. 19 of packet # 9 is associated with an Emotional-Visual-Acuity equal to 99%, as are packets # 7 and # 8. This suggests the existence of an “emotional paradox” or a methodological bias.

4.2.2. At the Scale of the Population Gun-Trigger-Emotional-Graph of Each of the 3 Face-Tests for All the Emotional Series, All Age Groups and All Observers: Po-GunTrGr#3/3*∑*∑*∑

Emotional-Decision-Making-Graph or Gun-Trigger-Graph of each of the 3 Face-Tests, for all Emotional-Series and the entire sample population.

The measurement of the Emotional-Visual-Acuity of each of the 9 packets (#1 to #9), for each of the three Face-Tests, reveals 3 sigmoid curves (see Figure 4). The sigmoid curves of FaTe-Blond, FaTe-Brunette and FaTe-Man overlap for packets #1, #2, #7, #8 and #9. FaTe-Blond’s sigmoid overlaps almost entirely with that of FaTe-Man. Statistically, there is a significant difference between the 3 sigmoid curves (Wilks’ Λ = 0.026, D(24, 125) = 13.225, p < 0.001, partial η2 = 0.705), which is explained by FaTe-Brunette. A separate ANOVA was conducted for each dependent variable, each evaluated at an alpha level of 0.001. There is no statistically significant difference (F(1, 16) = 0.060; p = 0.810) between FaTe-Blond and FaTe-Man.

At first glance, the identity of the face seems to intervene in decision-making, with a shift to the right of the FaTe-Brunette sigmoid; in other words, the population sample is less sensitive to FaTe-Brunette. However, we must not forget the ethnic origin (African-American) of this Face-Test and the possible existence of other, not yet identified, reasons. On the other hand, for Image-Stimulus No. 19 (packet #9), composed of 100 %PixB, the measurement is 100 %RespB for FaTe-Blond, 99 %RespB for FaTe-Brunette and 99 %RespB for FaTe-Man.

Recognition of the “neutral” image-stimulus is 0 %RespB, in other words 100 %RespA, for each of the 3 Face-Tests. This finding suggests that the “neutral” image-stimulus could be an “emotional invariant” universally recognized. However, in our paradigm the “neutral” image-stimulus does not seem to activate the organic and functional circuits of emotions [69]. Image-Stimulus No. 1 has an Emotional-Visual-Acuity equal to zero independently of the FaTe and the Emotional-Serie. More simply, we can say that neutrality is not an emotion. Therefore, the perception of this image of neutrality of a human face most probably does not activate any neural circuit that processes recognition of emotions. Intriguingly, it does trigger severe anxiety in infants when the mother’s face abruptly presents sustained neutrality.

4.2.3. At the Scale of the Population Gun-Trigger-Emotional-Graph for All the 3 FaTe, for Each of the 6 Emotional Series for all Age Groups and All Observers: Po-GunTrGr#∑*6/6*∑*∑

There was a significant difference between the 1) Emotional-Serie “neutral-anger”, 2) EmSe “neutral-disgust”, 3) EmSe “neutral-joy”, 4) EmSe “neutral-fear”, 5) EmSe “neutral-surprise” and 6) EmSe “neutral-sadness” when the variables packet #1 to #9 were considered jointly (Wilks’ Λ = 0.456, D(45, 5415) = 23.127, p < 0.001) (see Figure 5). A separate ANOVA was conducted for each dependent variable, each evaluated at an alpha level of 0.001. There was a significant difference between the 6 Emotional-Series for packets #2, #3, #4, #5 and #6.

This sample population has the worst Emotional-Visual-Acuity measure for “anger” compared with the other Emotional-Series: the image-stimulus must contain more %PixB for the “B” emotion to be perceived with the same percentages as the image-stimuli of the other 5 Emotional-Series, particularly “joy”. In other words, this population sample is relatively insensitive to “anger” and sensitive to “joy”, which is the best-recognized emotion and the one requiring the fewest %PixB. It is followed by “fear”, “sadness”, “disgust” and “surprise”, in that order; “anger” is the least recognized emotion. We observe a set of interstices between each pair of successive curves. We will name these successive interstices “Emotional-Channels” between an Emotional-Serie “alpha” and an Emotional-Serie “beta”. The large interstice between the best-recognized and the least-recognized Emotional-Series will be called the “Emotional-Focus”.

The central part of the “anger” sigmoid tends towards “horizontalization”, unlike the other emotions, for which the trend is towards “verticalization”. Therefore, it is not absurd to investigate the existence of a link between 1) the angle formed by the abscissa axis and the axis of the median part of the sigmoid, which would be a) categorical if the angle is close to 90˚, b) continuous if it is close to 45˚ and c) ambivalent, owing to non-recognition of an emotion, if it is close to 0˚, and 2) the greater or lesser sensitivity of the population sample to the emotion considered (the position of the projection of the “lower intercept” on the abscissa axis). Sensitivity increases as the lower intercept approaches zero on the abscissa axis and decreases as it approaches 100.
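A rough way to quantify this angle is sketched below on hypothetical data: the median part of the curve (here taken as the 25% - 75% RespB band, an arbitrary choice) is fitted with a straight line and its slope converted to degrees, assuming both axes share the same 0 - 100 scale; the classification cut-offs are purely illustrative.

```python
# Minimal sketch (hypothetical data, assumed axis scaling): estimate the angle
# between the abscissa and the median part of a Gun-Trigger sigmoid, as a rough
# index of categorical (≈90°), continuous (≈45°) or ambivalent (≈0°) recognition.
import numpy as np

pix_b  = np.array([0, 12.5, 25, 37.5, 50, 62.5, 75, 87.5, 100.0])  # hypothetical %PixB per packet
resp_b = np.array([1, 5, 20, 35, 50, 65, 85, 96, 99.0])            # hypothetical %RespB per packet

# Keep only the median part of the curve (between 25% and 75% RespB),
# where the slope reflects the speed of the emotional decision.
mid = (resp_b >= 25) & (resp_b <= 75)
slope, intercept = np.polyfit(pix_b[mid], resp_b[mid], 1)  # least-squares line on the middle segment

angle_deg = np.degrees(np.arctan(slope))  # meaningful only if both axes use the same 0-100 scale
# Cut-offs below are illustrative, not taken from the article.
label = "categorical" if angle_deg > 70 else ("continuous" if angle_deg > 20 else "ambivalent")
print(f"middle-segment slope = {slope:.2f}, angle ≈ {angle_deg:.0f}° -> {label}")
```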

4.2.4. At the Scale of the Population Gun-Trigger-Graph for the FaTe-Blond of Each of the 6 Emotional Series for All Age Groups and All Observers: Po-GunTrGr#Blond*6/6*∑*∑

There was a significant difference between the 1) FaTe-Blond “neutral-anger”, 2) FaTe-Blond “neutral-disgust”, 3) FaTe-Blond “neutral-joy”, 4) FaTe-Blond “neutral-fear”, 5) FaTe-Blond “neutral-surprise” and 6) FaTe-Blond “neutral-sadness”, when the variables packet #1 to #9 were considered jointly (Wilks’ Λ = 0.661, D(40, 5281) = 13.131, p < 0.001) (see Figure 10(a)). A separate ANOVA was conducted for each dependent variable, each evaluated at an alpha level of 0.001. There was a significant difference between the 6 Emotional-Series for packets #2, #3, #4, #5 and #6. “Joy” is the emotion identified with 1) the greatest sensitivity and 2) the most categorical pattern, in contrast to “anger”.

4.2.5. At the Scale of the Population Gun-Trigger-Graph for the FaTe-Brunette for Each of the 6 Emotional Series for All Age Groups and All Observers: Po-GunTrGr#Brunette*6/6*∑*∑

There was a significant difference between the 1) FaTe-Brunette “neutral-anger”, 2) FaTe-Brunette “neutral-disgust”, 3) FaTe-Brunette “neutral-joy”, 4) FaTe-Brunette “neutral-fear”, 5) FaTe-Brunette “neutral-surprise” and 6) FaTe-Brunette “neutral-sadness”, when the variables packet #1 to #9 were considered jointly (Wilks’ Λ = 0.0340, D(45, 5415) = 32.827, p < 0.001) (see Figure 10(b)). A separate ANOVA was conducted for each dependent variable, each evaluated at an alpha level of 0.001. There was a significant difference between the 6 Emotional-Series for packets #2, #3, #4, #5 and #6.

“Sadness” is the emotion identified with the greatest sensitivity and the most categorical effect, in contrast to “anger” and, to some extent, “surprise”. The sigmoid curves of “anger” and “surprise” overlap. The ordering of the sigmoids differs from that of FaTe-Blond and FaTe-Man, and the Emotional-Channels between the different emotions are wider than for the other FaTes.

4.2.6. At the Scale of the Population Sample Gun-Trigger-Graph for the FaTe-Man of Each of the 6 Emotional Series for All Age Groups and All Observers: Po-GunTrGr#Man*6/6*∑*∑

There was a significant difference between the 1) FaTe-Man “neutral-anger”, 2) FaTe-Man “neutral-disgust”, 3) FaTe-Man “neutral-joy”, 4) FaTe-Man “neutral-fear”, 5) FaTe-Man “neutral-surprise” and 6) FaTe-Man “neutral-sadness”, when the variables packet #1 to #9 were considered jointly (Wilks’ Λ = 0.345, D(45, 5415) = 32.313, p < 0.001) (see Figure 10(c)). A separate ANOVA was conducted for each dependent variable, each evaluated at an alpha level of 0.001. There was a significant difference between the 6 Emotional-Series for packets #2, #3, #4, #5 and #6. “Joy” is the emotion identified with the greatest sensitivity and the most categorical effect, in contrast to “anger”, which is difficult to identify. The “anger” of FaTe-Man is the least well-identified emotion among the 3 × 6, i.e. 18, Emotional-Series of this experiment. The Emotional-Channel between “anger” and “disgust” is the widest of all the Emotional-Channels of the 3 Gun-Trigger-Emotional-Graphs.

4.3. At the Scale of a Single Observer Emotional-Decision-Making or Gun-Trigger-Emotional

4.3.1. Individual-Gun-Trigger-Graph, (Id-GunTrGr)

It is possible to plot an “Individual-Graph-of-Emotional-Decision-Making” or “Individual-Gun-Trigger-Graph” for a SINGLE observer. The ordinate carries the Emotional-Visual-Acuity measurement and the abscissa the 9 packets with their 9 quanta of %PixB. At the individual scale, the morphology of each Individual-Gun-Trigger-Graph oscillates between a “diagonal” and a “broken line of the step-pyramid type”. These middle parts are framed by a lower left plateau at ordinate “0” and an upper right plateau at ordinate “100” (see Figure 4 and Figure 5).
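A minimal sketch of how such a graph could be assembled from one observer’s raw answers is given below. The 19 binary answers are invented, and the grouping of image-stimuli into packets is an assumption made for illustration (the text only states that packet #9 contains Image-Stimulus No. 19 alone).

```python
# Minimal sketch of an Individual-Gun-Trigger-Graph for a SINGLE observer,
# one FaTe and one Emotional-Serie. The 19 binary answers (0 = emotion "A",
# 1 = emotion "B") are invented, and the packet grouping is an illustrative
# assumption, not the study's exact assignment.
import numpy as np

answers = np.array([0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])  # images No. 1..19

# Assumed grouping: images 1-18 split into packets #1-#8, image 19 alone in packet #9.
packets = [[int(i) for i in g] for g in np.array_split(np.arange(1, 19), 8)] + [[19]]

# Emotional-Visual-Acuity per packet = percentage of "B" answers among its images.
eva = [100.0 * answers[[i - 1 for i in pkt]].mean() for pkt in packets]
for n, (pkt, v) in enumerate(zip(packets, eva), start=1):
    print(f"packet #{n} (images {pkt}): EVA = {v:.0f}%")
```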

Therefore, for a single observer, one FaTe and one Emotional-Serie, the Individual-Gun-Trigger-Graph makes it possible to carry out 1) an individual self-comparison “before” and “after” a) a disease or b) a treatment, and 2) inter-individual comparisons within the same or a different sample population (see Figure 4). In addition, the Individual-Gun-Trigger-Graph of this observer can be compared with the Populational-GunTrGr#∑*∑*∑*∑ of the original population sample or of another sample population, for the same age group or for any other comparison (see Figure 5).

In its middle part, each Individual-Gun-Trigger-Graph takes the form either of a straight line (y = ax + b) or of a piecewise linear function. These characteristics are specific to the observer and reflect inter-individual variation.

The set of 204 (observers) × 3 (Face-Tests) × 6 (Emotional-Series), i.e. 3672 Id-GunTrGr, can be categorized into 2 subsets. The first is distinguished by a diagonal-type morphology: the lower left plateau joins the upper right plateau by a straight line with a steep slope, and the recognition of the “B” emotion (%RespB) is gradual, with a “proportionality” between the Emotional-Visual-Acuity measurement and the %PixB present in the packet (see Figure 4). The second subset concerns Individual-Gun-Trigger-Graphs where the plateaus are joined by a broken line of the step-pyramid type (see Figure 5). In this subset, Emotional-Decision-Making is not instantaneous but proceeds in stages, which testifies to an ambivalence of the observer. The standard deviation of the Emotional-Visual-Acuity for each of the 9 packets of each Emotional-Serie, for each FaTe, could be a good reflection of this ambivalence.

For some Individual-Gun-Trigger-Graphs, the Emotional-Visual-Acuity reaches 100% from packet #5 and remains there up to packet #9, whereas the Emotional-Visual-Acuity of the reference Populational-Gun-Trigger-Graph #∑*∑*∑*∑ never systematically reaches 100% (see Figure 6 in green), nor does that of the Populational-Gun-Trigger-Graph #3/3*∑*∑*∑ (see Figure 7). The comparison between an Id-GunTrGr and its reference Po-GunTrGr#3/3*6/6*∑*∑ makes it possible to see immediately whether the Emotional-Visual-Acuity of the individual presents significant anomalies relative to the Emotional-Serie, the age group, the reference population, etc.

4.3.2. Fundamental-Individual-Gun-Trigger-Graph (Fu-Id-GunTrGr)

The best way to understand the mechanisms of Emotional-Decision-Making that lead to a binary choice is to focus on a single observer confronted with each of the 19 image-stimuli rather than with the 9 packets. The resulting graph will depend, at the very least, on 1) the mood and anxiety state of the observer, 2) the observer’s ethnicity, 3) the civil identity of the Face-Test, 4) the Face-Test’s ethnicity, 5) the Emotional-Serie and 6) each of the 19 image-stimuli, among other factors. This allows us to construct a “Fundamental-Individual-Graph-of-Emotional-Decision-Making”, also called the “Fundamental-Individual-Gun-Trigger-Graph” (Fu-Id-GunTrGr) [70] [71] [72]. It seems to us that this is an important contribution of our research work. The “y” axis has 2 values, “0” and “1” (threshold function): if the observer identifies the image-stimulus as emotion “B”, the value “1” is assigned to this image-stimulus; if the observer identifies emotion “A”, the value “0” is assigned. On the “x” axis, we consider the %PixB contained in each image-stimulus. In addition, we will not speak of “error” but rather of “misunderstanding” or “misrepresentation”: there is no consensual frame of reference that delimits a “right answer” or a “wrong answer”. This reflection will be discussed again later, depending on the image-stimulus identified.
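A minimal sketch of this graph is given below, assuming equal %PixB spacing of the 19 image-stimuli and an invented set of answers; it simply plots the 0/1 decisions as a step function of %PixB.

```python
# Minimal sketch of a Fundamental-Individual-Gun-Trigger-Graph: one observer,
# one FaTe, one Emotional-Serie, each of the 19 image-stimuli answered
# "A" (0) or "B" (1). The answers and the equal %PixB spacing are illustrative
# assumptions, not the study's data.
import numpy as np
import matplotlib.pyplot as plt

pix_b = np.linspace(0, 100, 19)            # assumed %PixB of images No. 1..19
answers = (pix_b > 42).astype(int)         # a purely "I-effect" observer (single flip)

plt.step(pix_b, answers, where="post")
plt.yticks([0, 1], ['emotion "A"', 'emotion "B"'])
plt.xlabel("%PixB in the image-stimulus")
plt.title("Fundamental-Individual-Gun-Trigger-Graph (illustrative)")
plt.show()
```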

We studied the 204 Fundamental-Individual-Gun-Trigger-Graphs corresponding to the 204 observers, for a single Emotional-Serie and a single FaTe. They can be categorized into 5 sets.

1) The first set or “I-effect”

It combines Heaviside-type threshold functions, distinguished from one another by the image-stimulus at which the step is positioned (see Figure 13(a)). We will name it the “I-effect”. The image-stimulus that causes the “jump” will be named the “Image-Stimulus-of-Flip”. The I-effect reflects an instantaneous and unambiguous Emotional-Decision-Making with a categorical choice, like the neural networks used in Artificial Intelligence (AI). We estimate its frequency at 27% for the Emotional-Serie studied and this sample population.

2) The second set or “A-effect”

It corresponds to incongruous misunderstandings, even real errors, located within a succession of identical and coherent answers to the left of the Image-Stimulus-of-Flip (see Figure 13(b)). We will name it the “A-effect”. We estimate its frequency at 20% for the Emotional-Serie studied and this sample population.

3) The third set or “V-effect”

It also corresponds to incongruous misunderstandings, even real errors, located within a succession of identical and coherent answers to the right of the “Image-Stimulus-of-Flip” (ImStFl) (see Figure 13(c)). We will name it the “V-effect”. We estimate its frequency at 13% for the Emotional-Serie studied and this sample population.

4) The fourth set or “N-effect”

It is distinguished by misunderstandings in the answers insofar as they lie between the lower left plateau and the upper right plateau (see Figure 13(d)). This zone of misunderstanding is explained by a hesitation between two possible choices, in other words an “ambivalence”, which can be defined as the coexistence of two opposite desires at the same time in the same person, with difficulty in making a choice. This area can also be defined as an “Ambivalence-Beach” of small extent, with two misunderstandings. We will name it the “N-effect”. In the N-effect there is no Image-Stimulus-of-Flip but an area. We estimate its frequency at 27% for the Emotional-Serie studied and this sample population.

5) The fifth set or “M-effect”

It contains a succession of 4 misunderstandings which follow one another,


Figure 13. (a) Fundamental-Individual-Gun-Trigger-Graph. The “I-effect” reflects instantaneous and unambiguous decision-making with a categorical choice, like neural networks. The image-stimulus that causes the “jump” is named the “Image-Stimulus-of-Flip”; (b) Fundamental-Individual-Gun-Trigger-Graph. An “error” is a non-consensual answer located more than one image-stimulus before the “Image-Stimulus-of-Flip”; a “misunderstanding” is a non-consensual response located just before it. These two scenarios are summarized as the “A-effect”; (c) Fundamental-Individual-Gun-Trigger-Graph. An “error” is a non-consensual answer located more than one image-stimulus after the “Image-Stimulus-of-Flip”; a “misunderstanding” is a non-consensual response located just after it. These two scenarios are summarized as the “V-effect”; (d) Fundamental-Individual-Gun-Trigger-Graph. The observer is unable to make a choice. This zone is defined as an “Ambivalence-Beach” of small extent, with 2 misunderstandings (2 peaks). We name this set the N-effect; (e) Fundamental-Individual-Gun-Trigger-Graph. The observer shows a greater incapacity to make a choice. This is the largest Ambivalence-Beach, with 4 misunderstandings (4 spikes). We name this set the M-effect.

and which lie between the lower left plateau and the upper right plateau (see Figure 13(e)). We will call it the “M-effect”. The Ambivalence-Beach has a greater extent than in the N-effect, and here too there is no Image-Stimulus-of-Flip but an area. This area can also be defined as an “Ambivalence-Beach”, of larger extent, with 4 misunderstandings. We estimate its frequency at 13% for the Emotional-Serie studied and this sample population.

6) Misunderstandings or Errors

The “A”, “V”, “N” and “M” effects are misunderstandings, even real errors. They could constitute statistical biases and would explain the “rounding” of the junction between the left and right plateaus and the diagonal of the middle part, which is visible in all the graphs dealing with the population sample (sigmoid curves).

The “A”, “V”, “N” and “M” effects reflect the reality of the experiment and as such they must be taken into consideration. They could suggest the presence of an attention deficit disorder or simply inattention. The measurement of these errors could therefore constitute a measure of Visual-Attention.

Pursuing this reflection, we could speak of 1) “misunderstanding” when the non-consensual answer is adjacent to the Image-Stimulus-of-Flip and 2) “error” when the non-consensual answer is more than one image-stimulus away from the Image-Stimulus-of-Flip. These are the “A” and “V” effects. In other words, the non-consensual answer is qualified as an “error” when it approaches Image-Stimulus No. 1 or No. 19, and as a “misunderstanding” when it approaches the Image-Stimulus-of-Flip. The “N” and “M” effects would be “misunderstandings”. The measurement of the extent of the Ambivalence-Beach would be a way of measuring the existence of an “attention disorder”. The “I” effect, signified by the “Image-Stimulus-of-Flip”, would be the boundary that delimits the “A”, “V”, “N” and “M” effects.
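The following sketch is one possible operationalization of this taxonomy, not the authors’ algorithm: the best-fitting Heaviside step gives a candidate Image-Stimulus-of-Flip, the remaining non-consensual answers are labeled “misunderstanding” or “error” according to their distance from it, and the overall profile is assigned to the “I”, “A”, “V”, “N” or “M” effect using illustrative rules.

```python
# Illustrative sketch (one possible operationalization, not the authors' algorithm):
# classify a Fundamental-Individual-Gun-Trigger profile into the "I", "A", "V",
# "N" or "M" effect and label each non-consensual answer as a "misunderstanding"
# (adjacent to the Image-Stimulus-of-Flip) or an "error" (farther away).
import numpy as np

def classify(answers):
    answers = np.asarray(answers)
    n = len(answers)
    # Best-fitting Heaviside step: choose the flip position with fewest mismatches.
    candidates = [np.r_[np.zeros(k, int), np.ones(n - k, int)] for k in range(n + 1)]
    costs = [int(np.sum(answers != c)) for c in candidates]
    flip = int(np.argmin(costs))                      # 0-based index of the Image-Stimulus-of-Flip
    mism = np.where(answers != candidates[flip])[0]   # non-consensual answers

    if mism.size == 0:
        return "I-effect", flip, []
    labels = ["misunderstanding" if abs(i - flip) <= 1 else "error" for i in mism]
    if np.all(mism < flip):
        effect = "A-effect"
    elif np.all(mism >= flip):
        effect = "V-effect"
    else:
        # Mismatches on both sides of the flip: an Ambivalence-Beach;
        # the size threshold separating N from M is illustrative.
        effect = "N-effect" if mism.size <= 2 else "M-effect"
    return effect, flip, list(zip(mism.tolist(), labels))

# Example: a single stray "B" answer well to the left of the flip -> A-effect, "error".
print(classify([0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]))
```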

4.3.3. Individual Categorical Effect versus Proportional Population Effect

At the individual and fundamental scale, the Fundamental-Individual-Gun-Trigger-Graph (19 image-stimuli) identifies a strictly categorical effect. Taking all the “I”, “A”, “V”, “N” and “M” effects into account leads to the Individual-Gun-Trigger-Graph (9 packets), which shows a diagonal, or a diagonal with steps in its middle part, framed by a low plateau on the left and an upper plateau on the right (see Figure 4). The summation of the Individual-Gun-Trigger-Graphs of an age group transforms this diagonal into a true sigmoid (see Figure 4, green line). Summing all Individual-Gun-Trigger-Graphs for each of the 3 FaTes results in increasingly “rounder” sigmoids (see Figure 7), and even more so if we consider all 3 FaTes, the 6 Emotional-Series and the 204 observers (see Figure 6). At the scale of a population sample, we therefore have a true sigmoid curve, which results from the accumulation of the “I”, “A”, “V”, “N” and “M” effects: when these are 1) as close as possible to the Image-Stimulus-of-Flip, the result is an inclination of the middle part, and 2) farthest from the Image-Stimulus-of-Flip, the result is a “rounding” of the junctions between the middle part and the left and right plateaus.
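This emergence of a population sigmoid from purely categorical individual profiles can be illustrated by a small simulation, sketched below with invented parameters: each simulated observer is a Heaviside step whose flip point varies across observers, and averaging the 204 step functions produces a smooth sigmoid.

```python
# Minimal simulation sketch: averaging many purely categorical individual profiles
# (Heaviside steps with flip points spread across observers) yields a population
# curve with the sigmoid shape described above. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
pix_b = np.linspace(0, 100, 19)                     # assumed %PixB of images No. 1..19
n_observers = 204

# Each simulated observer flips from "A" to "B" at an individual %PixB threshold.
thresholds = rng.normal(loc=45, scale=15, size=n_observers).clip(5, 95)
responses = (pix_b[None, :] > thresholds[:, None]).astype(int)   # 204 x 19 matrix of 0/1

population_curve = 100.0 * responses.mean(axis=0)   # %RespB per image-stimulus
print(np.round(population_curve, 1))                # rises smoothly from ~0% to ~100%
```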

One could compile many observers presenting only an “I” effect at the same Image-Stimulus-of-Flip for the same Emotional-Serie. In this hypothesis, we would have a truly categorical curve of the Heaviside type. Performing morphological and functional brain imaging would then allow a more precise localization of emotional signal processing in a “purely” categorical framework. This should be possible with functional MRI.

The continuous or proportional aspect would be the result of the statistical summation of the Ambivalence-Beaches, which spread over the 19 image-stimuli across the observers of the population sample. The accumulation of misunderstanding-errors spread over the 19 image-stimuli would explain the fact that the right plateau (V-effect) remains below 100% Emotional-Visual-Acuity. The misunderstanding-errors on the left plateau (A-effect) are less numerous, or even absent, compared with those on the right plateau (V-effect). This suggests that neutrality would not be an emotion and that certain basic emotions may not be recognized in the context of this experiment. This was the subject of a debate between Ekman and Russell about the contradiction between idiosyncrasy and universality in the recognition of facial emotions [73] [74].

This new theoretical approach would help to separate the categorical aspect [75] [76] [77] from the proportional (or continuous, or dimensional) aspect of Visual-Facial-Emotions-Recognition. Our results can complement the discussion of Feldman Barrett in 1998 [78], who proposes that there are 2 types of individuals: 1) those with a high valence focus and a low arousal focus, who follow a proportional model, and 2) those with a lower valence focus and a higher arousal focus, who follow a discrete and therefore categorical model.

Owing to the length of this article, the remainder of the discussion is presented in the article that follows.

5. Limitation of This Study

We recognize that this study focuses on a small sample of residents of Lille, in northern France, made up of culturally homogeneous French-speaking white Caucasians. These results may therefore not be broadly applicable at a cross-cultural level. Moreover, the variables concerning memory, cognition, anxiety and depression were controlled and optimal, hence our use of the qualifier “Supra-Normal”.

The disadvantage of this methodology is that we do not have the numerical and graphical characteristics of a “normal” population for comparison. Nor does this sample follow the same observers longitudinally from age 20 to age 70. In addition, to study and understand these functions better in humans, this work should ideally have presented the 3 Face-Tests to samples of more than 30 observers in each of 6 groups: 1) men, 2) women, 3) children, 4) adolescents, 5) recently settled immigrants of an ethnic group different from the hosts and 6) long-established immigrants of an ethnic group different from the hosts.

Finally, the population sample should have been extended to age groups from 1) 0 to 2.5 years, 2) 2.6 to 5 years, 3) 5.1 to 10 years, 4) 10.1 to 15 years, 5) 15.1 to 20 years, 6) 21 to 50 years in age groups of 10 years and 7) 71 to 100 years in age groups of 5 years, each age group having more than 30 observers.

Acknowledgements

This work was supported by grant 1998/1954 of the Programme Hospitalier de Recherche Clinique (P.H.R.C.) of the French government. I thank the colleagues and researchers of the commissions appointed by the French government who, without knowing the principal author’s full credentials and limited experience as a starting investigator, believed in the idea of a young, totally unrecognized scientist. By their choice, they offered a grant which enabled not only the detailed study of the original ideas and concepts of these authors but also the establishment of the standards. This is the reason that compels us to thank the French government, while recognizing the role that chance and destiny may have played. Thanks are due to Paul Ekman, who gave permission to use photographs from “Unmasking the Face” (Ekman & Friesen, 1975), and to Olivier Lecherf, who designed the computer program for processing and displaying the pictures. This study was made possible thanks to Professor Christian Libersa of the Centre d’Investigation Clinique (CIC-CHU/INSERM, Lille). I would like to thank Professor Olivier Godefroy, neurologist at the Amiens University Hospital Center South St Vincent de Paul, 80054 Amiens Cedex, for his help and unconditional support for more than 24 years. Without him, this fundamental research work would not have seen the light of day. My thanks and appreciation go to Mr. Jay Vinekar, who allowed a critical and constructive reading of this work. Finally, we thank Professor of Neurology Patrick Vermersch of the Hôpital Salengro, Centre Hospitalier Régional de Lille (France), who supported Dr. Granato, the principal author, in the pursuit of this research. This article represents the capstone of 25 years of professional research work by the authors. It would not have been possible without the unconditional psychological and emotional support of their wives. Life partners must no longer be forgotten by doctors and researchers in the world of science who sacrifice personal comfort and well-being, and that of their wives, for psychiatry, science, and the progress of humanity. We would like to pay tribute and respect to Mrs. Shyamala Hemchandra Sashittal, the wife of Professor Shreekumar Vinekar, and to Mrs. Martine Walencki, the wife of Dr. Philippe Granato.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Eslinger, P.J., Anders, S., Ballarini, T., Boutros, S., Krach, S., Mayer, A.V., Moll, J., Newton, T.L., Schroeter, ML., De Oliveira-Souza, R., Raber, J., Sullivan, G.B., Swain, J.E., Lowe, L. and Zahn, R. (2021) The Neuroscience of Social Feelings: Mechanisms of Adaptive Social Functioning. Neuroscience & Biobehavioral Reviews, 128, 592-620.
https://doi.org/10.1016/j.neubiorev.2021.05.028
[2] Zhang, D., Wang, X. and Zhang, S. (2023) Shared Leadership and Improvisation: Dual Perspective of Cognition-Affection. Behavioral Sciences, 13, Article 265.
https://doi.org/10.3390/bs13030265
[3] Lange, J., Heerdink, M.W. and Van Kleef, G.A. (2022) Reading Emotions, Reading People: Emotion Perception and Inferences Drawn from Perceived Emotions. Current Opinion in Psychology, 43, 85-90.
https://doi.org/10.1016/j.copsyc.2021.06.008
[4] De Panfilis, C., Antonucci, C., Meehan, K.B., Cain, N.M., Soliani, A., Marchesi, C. and Sambataro, F. (2019) Facial Emotion Recognition and Social-Cognitive Correlates of Narcissistic Features. Journal of Personality Disorders, 33, 433-449.
https://doi.org/10.1521/pedi_2018_32_350
[5] Demirel, H., Yesilbas, D., Ozver, I., Yuksek, E., Sahin, F., Aliustaoglu, S. and Emul, M. (2014) Psychopathy, and Facial Emotion Recognition Ability in Patients with Bipolar Affective Disorder with or without Delinquent Behaviors. Comprehensive Psychiatry, 55, 542-546.
https://doi.org/10.1016/j.comppsych.2013.11.022
[6] Mauss, I.B. and Robinson, M.D. (2009) Measures of Emotion: A Review. Cognition and Emotion, 23, 209-237.
https://doi.org/10.1080/02699930802204677
[7] Muukkonen, I. and Salmela, V.R. (2022) Representational Structure of FMRI/EEG Responses to Dynamic Facial Expressions. NeuroImage, 263, Article ID: 119631.
https://www.sciencedirect.com/science/article/pii/s1053811922007467
https://doi.org/10.1016/j.neuroimage.2022.119631
[8] Kim, J., Schultz, J., Rohe, T., Wallraven, C., Lee, S.W. and Bülthoff, H.H. (2015) Abstract Representations of Associated Emotions in the Human Brain. Journal of Neuroscience, 35, 5655-5663.
https://doi.org/10.1523/JNEUROSCI.4059-14.2015
[9] Abdollahi, H.M., Mahoor, R., Zandie, J., Sewierski, J. and Qualls, S. (2022) Artificial Emotional Intelligence in Socially Assistive Robots for Older Adults: A Pilot Study. IEEE Transactions on Affective Computing, 14, 2020-2032.
https://doi.org/10.1109/TAFFC.2022.3143803
[10] Abdel-Hamid, L. (2023) An Efficient Machine Learning-Based Emotional Valence Recognition Approach towards Wearable EEG. Sensors, 23, Article 1255.
https://doi.org/10.3390/s23031255
[11] Andreu-Perez, A.R., Kiani, M., Andreu-Perez, J., Reddy, P., Andreu-Abela, J., Pinto, M. and Izzetoglu, K. (2021) Single-Trial Recognition of Video Gamer’s Expertise from Brain Haemodynamic and Facial Emotion Responses. Brain Sciences, 11, Article 106.
https://doi.org/10.3390/brainsci11010106
[12] Gao, X., Weng, L., Zhou, Y. and Yu, H. (2017) The Influence of Empathy and Morality of Violent Video Game Characters on Gamers’ Aggression. Frontiers in Psychology, 8, Article 1863.
https://doi.org/10.3389/fpsyg.2017.01863
[13] Ekman, P. (1976) Pictures of Facial Affect. Consulting Psychologists Press, Washington DC.
[14] Ekman, P. (1992) An Argument for Basic Emotions. Cognition and Emotion, 6, 169-200.
https://doi.org/10.1080/02699939208411068
[15] Oldfield, R.C. (1971) The Assessment and Analysis of Handedness: The Edinburgh Inventory. Neuropsychologia, 9, 97-113.
https://doi.org/10.1016/0028-3932(71)90067-4
[16] Edlin, J.M., Leppanen, M.L., Fain R.J., Hackländer, R.P., Hanaver-Torrez, S.D. and Lyle, K.B. (2015) On the Use (and Misuse?) of the Edinburgh Handedness Inventory. Brain and Cognition, 94, 44-51.
https://doi.org/10.1016/j.bandc.2015.01.003
[17] Sheehan, D.V., Lecrubier, Y., Sheehan, K.H., Amorim, P., Janavs, J., Weiller, E., Hergueta, T., Baker, R. and Dunbar, G.C. (1998) The Mini-International Neuropsychiatric Interview (M.I.N.I.): The Development and Validation of a Structured Diagnostic Psychiatric Interview for DSM-IV and ICD-10. Journal of Clinical Psychiatry, 59(Suppl. 20), 22-33; quiz 34-57.
[18] Hergueta, T., Baker, R. and Dunbar, G.C. (1998) The Mini-International Neuropsychiatric Interview (MINI): The Development and Validation of a Structured Diagnostic Psychiatric Interview for DSM-IV and ICD-10. Journal of Clinical Psychiatry, 20, 22-33.
[19] Folstein, M.F., Folstein, S.E. and McHugh, P.R. (1975) Mini-Mental State: A Practical Method for Grading the Cognitive State of Patients for the Clinician. Journal of Psychiatric Research, 12, 189-198.
https://doi.org/10.1016/0022-3956(75)90026-6
[20] Thompson, E. (2015) Hamilton Rating Scale for Anxiety (HAM-A). Occupational Medicine, 65, 601.
https://doi.org/10.1093/occmed/kqv054
[21] Hamilton, M. (1960) A Rating Scale for Depression. Journal of Neurology, Neurosurgery and Psychiatry, 23, 56-62.
https://doi.org/10.1136/jnnp.23.1.56
[22] Lucas, J.A., Ivnik, R.J., Smith, G.E., Bohac, D.L., Tangalos, E.G., et al. (1998) Normative Data for the Mattis Dementia Rating Scale. Journal of Clinical and Experimental Neuropsychology, 20, 536-547.
https://doi.org/10.1076/jcen.20.4.536.1469
[23] Buschke, H. (1984) Cued Recall in Amnesia. Journal of Clinical Neuropsychology, 6, 433-440.
https://doi.org/10.1080/01688638408401233
[24] Grober, E., Buschke, H., Crystal, H., Bang, S. and Dresner, R. (1988) Screening for Dementia by Memory Testing. Neurology, 38, 900-903.
https://doi.org/10.1212/WNL.38.6.900
[25] Rösler, A., Lanquillon, S., Dippel, O. and Braune, H.J. (1997) Impairment of Facial Recognition in Patients with Right Cerebral Infarcts Quantified by Computer Aided “Morphing”. Journal of Neurology, Neurosurgery & Psychiatry, 62, 261-264.
https://doi.org/10.1136/jnnp.62.3.261
[26] Young, A.W., Rowland, D., Calder, A.J., Etcoff, N.L., Seth, A. and Perrett, D.I. (1997) Facial Expression Megamix: Tests of Dimensional and Category Accounts of Emotion Recognition. Cognition, 63, 271-313.
https://doi.org/10.1016/S0010-0277(97)00003-6
[27] Granato, P. and Bruyer, R. (2002) Measurement of the Perception of Facially Expressed Emotions by a Computerized Device: Method of Analysis and Research for the Integration of Emotions (MARIE). European Psychiatry, 17, 339-348.
https://doi.org/10.1016/S0924-9338(02)00684-3
[28] Granato, P., Vinekar, S., Godefroy, O., Van Gansberghe, J.P. and Bruyer, R. (2012) Evidence of Impaired Facial Emotion Recognition in Mild Alzheimer’s Disease: A Mathematical Approach and Application. Open Journal of Psychiatry, 2, 171-186.
[29] Granato, P., Vinekar, S., Godefroy, O., Van Gansberghe, J.P. and Bruyer, R. (2014) A Study of Visual Recognition of Facial Emotional Expressions in a Normal Aging Population in the Absence of Cognitive Disorders. Open Journal of Psychiatry, 4, 251-260.
[30] Granato, P., Vinekar, S., Godefroy, O., Van Gansberghe, J.P. and Bruyer, R. (2018) Measurement of the Ability to Recognize Facial Emotions over the Adult Lifetime in a Supra-Normal Sample. Clinical Psychiatry, 4, 55.
https://doi.org/10.21767/2471-9854.100055
[31] Furlong, L.S., Rossell, S.L., Karantonis, J.A., Cropley, V.L., Hughes, M. and Van Rheenen, T.E. (2022) Characterization of Facial Emotion Recognition in Bipolar Disorder: Focus on Emotion Mislabelling and Neutral Expressions. Journal of Neuropsychology, 16, 353-372.
https://doi.org/10.1111/jnp.12267
[32] Watts, S., Buratto, L.G., Brotherhood, E.V., Barnacle, G.E. and Schaefer, A. (2014) The Neural Fate of Neutral Information in Emotion-Enhanced Memory. Psychophysiology, 7, 673-684.
https://doi.org/10.1111/psyp.12211
[33] Rizzo, A.A., Neumann, U.R., Enciso, D.F. and Noh, J.Y. (2004) Performance-Driven Facial Animation: Basic Research on Human Judgments of Emotional State in Facial Avatars. CyberPsychology & Behavior, 4, 471-487.
https://doi.org/10.1089/109493101750527033
[34] Geraets, C.N.W., Klein, T.S., Lestestuiver, B.P., VanBeilen, M., Nijman, S.A., Marsman, J.B.C. and Veling, W. (2021) Virtual Reality Facial Emotion Recognition in Social Environments: An Eye-Tracking Study. Internet Interventions, 25, Article ID: 100432.
https://doi.org/10.1016/j.invent.2021.100432
[35] Del Aguila, J., González-Gualda, L.M., Játiva, M.A., Fernández-Sotos, P., Fernández-Caballero, A. and García, A.S. (2021) How Interpersonal Distance between Avatar and Human Influences Facial Affect Recognition in Immersive Virtual Reality. Frontiers in Psychology, 12, Article 675515.
https://doi.org/10.3389/fpsyg.2021.675515
[36] Liu, S.S., Tian, Y.T. and Dong, L. (2009) New Advances in Facial Expression Recognition Research. 2009 International Conference on Machine Learning and Cybernetics, Hebei, 12-15 July 2009, 1150-1155.
https://doi.org/10.1109/ICMLC.2009.5212409
[37] Essa, I.A. and Pentland, A.P. (1995) Facial Expression Recognition Using a Dynamic Model and Motion Energy. Proceedings of IEEE International Conference on Computer Vision, Cambridge, 20-23 June 1995, 360-367.
https://doi.org/10.1109/ICCV.1995.466916
[38] Lien, J.J., Kanade, T., Cohn, J.F. and Li, C.C. (1998) Automated Facial Expression Recognition Based on FACS Action Units. Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition, Nara, 14-16 April 1998, 390-395.
https://doi.org/10.1109/AFGR.1998.670980
[39] Valstar, M., Mehu, M., Jiang, B., Pantic, M. and Scherer, K. (2012) Meta-Analysis of the First Facial Expression Recognition Challenge. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 42, 966-979.
https://doi.org/10.1109/TSMCB.2012.2200675
[40] Rajah, R., Song, Z. and Arvey, R.D. (2011) Emotionality and Leadership: Taking Stock of the Past Decade of Research. The Leadership Quarterly, 22, 1107-1119.
https://doi.org/10.1016/j.leaqua.2011.09.006
[41] Gil, S. and Droit-Volet, G. (2012) How Do Emotional Facial Expressions Influence Our Perception of Time? In: Masmoudi, S., Yun Dai, D. and Naceur, A., Eds., Attention, Representation, and Human Performance: Integration of Cognition, Emotion, and Motivation, Psychology Press, London.
[42] Lipps, T. (1907) Bewusstsein Und Gegenstände. (Vol. 1). W. Engelmann, Lemgo.
[43] Hess, U. and Blairy, S. (2001) Facial Mimicry and Emotional Contagion to Dynamic Emotional Facial Expressions and Their Influence on Decoding Accuracy. International Journal of Psychophysiology, 40, 129-141.
https://doi.org/10.1016/S0167-8760(00)00161-6
[44] Mvududu, M.J. (2020) Can Trait Emotional Intelligence Variables of Well-Being, Self-Control, Emotionality, and Sociability Individually or Collectively Predict a Software Development Engineer’s Creativity?
https://digitalcommons.georgefox.edu/dbadmin/29
[45] Ryan, M., Murray, J. and Ruffman, T. (2010) Aging and the Perception of Emotion: Processing Vocal Expressions Alone and with Faces. Experimental Aging Research, 36, 1-22.
[46] Dor, Y.I., Algom, D.S., Vered, B.D. and Boaz, B.D. (2022) Detecting Emotion in Speech: Validating a Remote Assessment Tool. Auditory Perception & Cognition, 5, 238-258.
https://doi.org/10.1080/25742442.2022.2101841
[47] Kexin, T., Yongming, H., Guobao, Z. and Lin, Z. (2019) Research on Emergency Parking Instruction Recognition Based on Speech Recognition and Speech Emotion Recognition. 2019 Chinese Automation Congress (CAC), Hangzhou, 22-24 November 2019, 2933-2937.
https://doi.org/10.1109/CAC48633.2019.8997077
[48] Busso, C. and Narayanan, S.S. (2007) Joint Analysis of the Emotional Fingerprint in the Face and Speech: A Single Subject Study. 2007 IEEE 9th Workshop on Multimedia Signal Processing, Chania, 1-3 October 2007, 43-47.
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4412814&isnumber=4412796
https://doi.org/10.1109/MMSP.2007.4412814
[49] Ferrández, J.M., Andina, D. and Fernández, E. (2019) Design of Reliable Virtual Human Facial Expressions and Validation by Healthy People. Integrated Computer-Aided Engineering, 27, 287-299.
https://doi.org/10.3233/ICA-200623
[50] Misaki, S. and Tetsuo, K. (2021) Emotion Recognition Combining Acoustic and Linguistic Features Based on Speech Recognition Results. 2021 IEEE 10th Global Conference on Consumer Electronics (GCCE), Kyoto, Japan, 12-15 October 2021, 824-827.
[51] Padilla, L.M., Creem-Regehr, S.H., Hegarty, M. and Stefanucci, J.K. (2018) Decision Making with Visualizations: A Cognitive Framework Across Disciplines. Cognitive Research: Principles and Implications, 3, 29.
https://doi.org/10.1186/s41235-018-0120-9
[52] Benoit, I.D., Miller, E.G., Mirabito, A.M. and Catlin, J.R. (2023) Medical Decision-Making with Tables and Graphs: The Role of Cognition, Emotions, and Analytic Thinking. Health Marketing Quarterly, 40, 59-81.
https://doi.org/10.1080/07359683.2022.2094101
[53] Marcum, J.A. (2013) The Role of Emotions in Clinical Reasoning and Decision Making. The Journal of Medicine and Philosophy: A Forum for Bioethics and Philosophy of Medicine, 38, 501-519.
https://doi.org/10.1093/jmp/jht040
[54] Bechara, A., Damasio, H. and Damasio, A.R. (2003) Role of the Amygdala in Decision-Making. Annals of the New York Academy of Sciences, 985, 356-369.
https://doi.org/10.1111/j.1749-6632.2003.tb07094.x
[55] Malykhin, N., Pietrasik, W., Aghamohammadi-Sereshki, A., Ngan Hoang, K., Fujiwara, E. and Olsen, F. (2023) Emotional Recognition across the Adult Lifespan: Effects of Age, Sex, Cognitive Empathy, Alexithymia Traits, and Amygdala Subnuclei Volumes. Journal of Neuroscience Research, 101, 367-383.
https://doi.org/10.1002/jnr.25152
[56] Deng, Y., Chang, L., Yang, M., Huo, M. and Zhou, R. (2016) Gender Differences in Emotional Response: Inconsistency between Experience and Expressivity. PLOS ONE, 11, e0158666.
https://doi.org/10.1371/journal.pone.0158666
[57] Bek, J., Donahoe, B. and Brady, N. (2022) Feelings First? Sex Differences in Affective and Cognitive Processes in Emotion Recognition. Quarterly Journal of Experimental Psychology, 75, 1892-1903.
https://doi.org/10.1177/17470218211064583
[58] Forni-Santos, L. and Osório, F.L. (2015) Influence of Gender in the Recognition of Basic Facial Expressions. A Critical Literature Review. World Journal of Psychiatry, 5, 342-351.
https://doi.org/10.5498/wjp.v5.i3.342
[59] Connolly, H.L., Lefevre, C.E., Young, A.W. and Lewis, G.J. (2020) Emotion Recognition Ability: Evidence for a Supramodal Factor and Its Links to Social Cognition. Cognition, 197, Article ID: 104166.
https://doi.org/10.1016/j.cognition.2019.104166
[60] Connolly, H.L., Young, A.W. and Lewis G.J. (2019) Recognition of Facial Expression and Identity in Part Reflects a Common Ability, Independent of General Intelligence and Visual Short-Term Memory. Cognition and Emotion, 33, 1119-1128.
https://doi.org/10.1080/02699931.2018.1535425
[61] Ruffman, T., Henry, J.D., Livingstone, V. and Phillips, L.H. (2008) A Meta-Analytic Review of Emotion Recognition and Aging: Implications for Neuropsychological Models of Aging. Neuroscience & Biobehavioral Reviews, 32, 863-881.
https://doi.org/10.1016/j.neubiorev.2008.01.001
[62] Hayes, G.S., McLennan, S.N., Henry, J.D., Phillips, L.H., Terrett, G., Rendell, P.G., Pelly, R.M. and Labuschagne, I. (2020) Task Characteristics Influence Facial Emotion Recognition Age-Effects: A Meta-Analytic Review. Psychology and Aging, 35, 295-315.
https://doi.org/10.1037/pag0000441
[63] Rutter, L.A., Dodell-Feder, D., Vahia, I.V., Forester, B.P., Ressler, K.J., Wilmer, J.B. and Germine, L. (2019) Emotion Sensitivity across the Lifespan: Mapping Clinical Risk Periods to Sensitivity to Facial Emotion Intensity. Journal of Experimental Psychology: General, 148, 1993-2005.
https://doi.org/10.1037/xge0000559
[64] Barbieri, G.F., Real, E., Lopez, J., García-Justicia, J.M., Satorres, E. and Meléndez, J.C. (2022) Comparison of Emotion Recognition in Young People, Healthy Older Adults, and Patients with Mild Cognitive Impairment. International Journal of Environmental Research and Public Health, 19, Article 12757.
https://doi.org/10.3390/ijerph191912757
[65] Connolly, H.L., Young, A.W. and Lewis, G.J. (2021) Face Perception across the Adult Lifespan: Evidence for Age-Related Changes Independent of General Intelligence. Cognition and Emotion, 35, 890-901.
https://doi.org/10.1080/02699931.2021.1901657
[66] Keane, J., Calder, A.J., Hodges, J.R. and Young, A.W. (2002) Face and Emotion Processing in Frontal Variant Frontotemporal Dementia. Neuropsychologia, 40, 655-665.
https://doi.org/10.1016/S0028-3932(01)00156-7
[67] Phillips, L.H., MacLean, R.D. and Allen, R. (2002) Age, and the Understanding of Emotions: Neuropsychological and Sociocognitive Perspectives. The Journals of Gerontology: Series B, 57, P526-P530.
https://doi.org/10.1093/geronb/57.6.P526
[68] Ferreira, B.L.C., Fabrício, D.M. and Chagas, M.H.N. (2021) Are Facial Emotion Recognition Tasks Adequate for Assessing Social Cognition in Older People? A Review of the Literature. Archives of Gerontology and Geriatrics, 92, Article ID: 104277.
https://doi.org/10.1016/j.archger.2020.104277
[69] Poncet, F., Baudouin, J.Y., Dzhelyova, M.P., Rossion, B. and Leleu, A. (2019) Rapid and Automatic Discrimination between Facial Expressions in the Human Brain. Neuropsychologia, 129, 47-55.
https://doi.org/10.1016/j.neuropsychologia.2019.03.006
[70] Aviezer, H., Ensenberg, N. and Hassin, R.R. (2017) The Inherently Contextualized Nature of Facial Emotion Perception. Current Opinion in Psychology, 17, 47-54.
https://doi.org/10.1016/j.copsyc.2017.06.006
[71] Abo Foul, Y., Eitan, R., Mortillaro, M. and Aviezer, H. (2022) Perceiving Dynamic Emotions Expressed Simultaneously in the Face and Body Minimizes Perceptual Differences between Young and Older Adults. The Journals of Gerontology: Series B, 77, 84-93.
https://doi.org/10.1093/geronb/gbab064
[72] Reschke, P.J. and Walle, E.A. (2021) The Unique and Interactive Effects of Faces, Postures, and Scenes on Emotion Categorization. Affective Science, 2, 468-483.
https://doi.org/10.1007/s42761-021-00061-x
[73] Ekman, P., Friesen, W.V., O’Sullivan, M., Chan, A., Diacoyanni-Tarlatzis, I., Heider, K., Krause, R., LeCompte, W.A., Pitcairn, T., Ricci-Bitti, P.E., et al. (1987) Universals and Cultural Differences in the Judgments of Facial Expressions of Emotion. Journal of Personality and Social Psychology, 53, 712-717.
https://doi.org/10.1037/0022-3514.53.4.712
[74] Ekman, P. and O’Sullivan, M. (1988) The Role of Context in Interpreting Facial Expression: Comment on RUSSELL and FEHR. Journal of Experimental Psychology: General, 117, 86-90.
https://doi.org/10.1037/0096-3445.117.1.86
[75] Rossignol, M., Bruyer, R., Philippot, P. and Campanella, S. (2009) Categorical Perception of Emotional Faces Is Not Affected by Aging. Neuropsychological Trends, 6, 29-49.
https://doi.org/10.7358/neur-2009-006-ross
[76] Cheal, J.L. and Rutherford, M.D. (2011) Categorical Perception of Emotional Facial Expressions in Preschoolers. Journal of Experimental Child Psychology, 110, 434-443.
https://doi.org/10.1016/j.jecp.2011.03.007
[77] Smith, L.S., Grady, C.L., Hoang, N. and Moscovitch, M. (2014) Broadly Tuned Face Representation in Older Adults Assessed by Categorical Perception. Journal of Experimental Psychology: Human Perception and Performance, 40, 1060-1071.
https://doi.org/10.1037/a0035710
[78] Barrett, L.F. (1998) Discrete Emotions or Dimensions? The Role of Valence Focus and Arousal Focus. Cognition and Emotion, 12, 579-599.
https://doi.org/10.1080/026999398379574

Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.