Rational and Continuous Measurement of Emotional-Fingerprint, Emotional-Quotient and Categorical vs Proportional Recognition of Facial Emotions with M.A.R.I.E., Second Half
1. Introduction
The development and supremacy of the human species in the animal kingdom have been facilitated by its ability to live in groups. This has enabled increasingly complex, precise, rapid, and efficient social functioning. Such societal organization became possible only with the appearance of facial emotional communication, which has been associated with the expression and recognition of facial emotions for millennia. It is noteworthy that verbal language followed nonverbal facial communication without replacing it. Communication with facial expressions made it possible to potentiate and improve the social and organizational life of increasingly large human groups. Facial emotional communication conveys meaningful information to group members, facilitates mass communication, and adds complexity to meanings. This attests to the importance of facial emotional expressions and to the seriousness of the consequences when either the production of a facial emotion or its recognition is dysfunctional. These are two related basic ego-functions needed for social development. This is stated without forgetting that emotional feeling is the origin of the expression and recognition of emotion; Sigmund Freud and Anna Freud acknowledged that the interoceptive sensations derived from the body (viscera) are essential for emotions, and furthermore that the basic human drives of Eros and Thanatos, or the sexual and aggressive drives, originate in the body even in early infancy. Bowlby further elaborated on the attachment system between mother and infant, in which facial emotional communication plays an important role. AI and robots are not yet equipped with any of these attributes. AI can be cognitively competent, but not at combining emotions with their cognitive appreciation or social implications, even though AI (Large Language Models) can be programmed to use conventional language to express emotions.
The combination of the binomial (cognition-emotion) has contributed to the expansion of human groups into increasingly large and complex social groups [1] [2]. Facial emotional expressiveness, as well as its good recognition and interpretation, enhanced: 1) behaviors ensuring the cohesion of human groups [3] and 2) social cognitions [1] [4]. This has resulted in good-quality societal functioning. Otherwise, we observe social disorganization with inter-individual behavioral disorders and violence [5]. Social networks facilitated by modern technology infinitely increase the use of visual face-to-face interactions at a distance. From now on, day-to-day mundane communication will rely more on images and less on written expression. The distant face-to-face computer platforms designed for audiovisual communication (FaceTime, Zoom, etc.) will increase emotional communication within human groups. This seemingly preferred mode of communication in current civilizations explains the interest of the neurosciences, neurology, and psychiatry in emotions and their regulation. Unfortunately, the promulgated conclusions of scientific studies are rarely consensually confirmed. Some are based on the theory of the six discrete expressions of “basic” emotions identified by Paul Ekman: “anger”, “disgust”, “joy”, “fear”, “surprise” and “sadness”. More importantly, scientific controversies still exist. They concern: 1) the categorical or proportional aspect of Visual-Facial-Emotions-Recognition, 2) the universal or idiosyncratic aspect, 3) the continuous measurement of the ViFaEmRe, and 4) the link between the two intracerebral black boxes (transmitters and receiver-synthesizers) that are the “expression” and the “recognition” of facial emotions. As such, only “joy” is recognized by most authors as having a real congruence between the “Visual-Facial-Emotion-Transmitter-Feeling” and the “Visual-Facial-Emotional-Receptive-Feeling”.
Additionally, the lack of a consensually accepted continuous measurement tool prevents: 1) the reproducibility of experiments, 2) the comparison of results, and 3) the emergence of operational concepts, due to the inability to compare results and foster discussion. For Mauss and Robinson in 2009 [6], a project aimed at these goals was not realistic. Of course, this was true when the technology had its limitations. However, the availability of computational devices with the morphing of facial emotions has allowed for progress in the understanding of Visual-Facial-Emotions-Recognition and its cerebral integration [7] [8]. The arrival of Artificial Intelligence will further improve the “human-machine” interface through two-way communication involving voice and facial emotions, as well as nonverbal body language. For it is the duo of “transmitter-man” and “machine-receiver” that is topical to our purpose. However, before long, it is the duo of “transmitter-machine” and “receiver-man” that will complete [9] [10] [11] [12] and complement the previous one. This presupposes an understanding of the intra-human physiology and pathophysiology, which will inform the duo of “transmitter-man” and “receiver-man”. Consequently, the writing of the programming code will necessarily be based on concepts borrowed from the physiological reality entailed in that complex ego-function. AI would then want to mimic such interactions, for which we are at a loss at present. No question, such technology will arrive soon if it is not already here. For our part, studies on the visual recognition of emotions can be simplified by considering the following reflexive loop: 1) “EXPRESSION” of an emotion by the face, 2) a “DOUBLE BLACK BOX” that we will call the “feeling underlying facial emotional expression” and the “feeling of visual receptivity to expressed facial emotions”, and 3) “RECEPTION” of the emotion through the eyes. Our work will focus only on the “black box” of the “feeling of visual receptivity of facial emotions”.
We will not study the cognitive labeling aspect of the perceived emotion, which is a function of the left cerebral hemisphere. Our research questions are as follows. Perfecting M.A.R.I.E. requires a detailed study of the 6 basic emotions in the sense of Ekman’s parlance [13] [14], through emotional streaks gradually added to “neutrality”. This procedure is needed for quantifying the intensity of any perceived emotion. M.A.R.I.E. allows continuous and rational measurement of the Visual-Facial-Emotions-Recognition (ViFaEmRe) of an observer and of a sample population. Are there interactions between ViFaEmRe and demographic variables? We shall attempt to identify continuous variables that partly explain the functioning of the “facial emotional receptive feeling”. What parameters influence the decision making for the identification of an emotion? Is ViFaEmRe categorical or proportional? Is ViFaEmRe universal or idiosyncratic, i.e., individual? Does ViFaEmRe change with age?
2. At the Scale of the Population Sample: Emotional-Fingerprint or Populational-Key-Graph
2.1. Lifespan Syn-Diachronic Populational-Key-Graph for All FaTes, Each of the 6 Emotional Series, All Age Groups and All Observers (Po-KeyGr #∑*6/6*∑*∑)
If we consider each of the 6 emotions for the 3 face-tests, and throughout life, we see that “joy” (69%) is the best recognized emotion. It is followed by “fear” (66%) and “sadness” (66%) (tied), then by “disgust” (62%), then “surprise” (58%). “Anger” (52%) is the least well-recognized emotion (see Figure 1(a)).
2.2. Lifespan Syn-Diachronic Populational-Key-Graph for Each of 3 FaTes, each 6 Emotional Series, all Age Groups and All Observers (Po-KeyGr #3/3*6/6*∑*∑)
There was a significant difference between FaTe-Blond, FaTe-Brunette, and FaTe-Man when we compared the variables “anger”, “disgust”, “joy”, “fear”, “surprise”, and “sadness”: Wilks’ Λ = .327, F(12, 1227) = 75.327, p < .001, partial η² = .428. A separate ANOVA was conducted for each dependent variable, with each ANOVA evaluated at an alpha level of .001. There was a significant difference between each emotional series of the FaTe-Blond, FaTe-Brunette, and FaTe-Man.
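The follow-up ANOVAs described above can be illustrated with a minimal sketch. The scores below are hypothetical (the study’s raw Emotional-Visual-Acuity data are not reproduced here); the function computes the one-way ANOVA F statistic for one dependent variable, e.g. “anger”, across the 3 Face-Tests:

```python
from statistics import mean

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA across groups (list of lists of scores)."""
    k = len(groups)                                  # number of groups (here: 3 FaTes)
    n = sum(len(g) for g in groups)                  # total observations
    grand = mean(x for g in groups for x in g)       # grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical Emotional-Visual-Acuity scores for "anger" on the 3 Face-Tests
anger = {
    "FaTe-Blond":    [0.55, 0.58, 0.52, 0.60, 0.57],
    "FaTe-Brunette": [0.48, 0.50, 0.46, 0.49, 0.51],
    "FaTe-Man":      [0.44, 0.47, 0.50, 0.45, 0.48],
}
F = one_way_anova_F(list(anger.values()))
```

In the actual analysis, the resulting F would then be compared against the Bonferroni-adjusted alpha level of .001, one test per emotional series.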
In Figure 1(b), each broken line is associated with a Face-Test. In addition, each point of a broken line represents the mean of the 9 packets of the Emotional-Visual-Acuity for each of the 6 constituent “sigmoids” of each of the 3 graphs: 1) GunTrGr#Blonde*6/6*∑*∑ (see Figure 10 A in the first-half article), 2) GunTrGr#Brunette*6/6*∑*∑ (see Figure 10 B in the first-half article), 3) GunTrGr#Man*6/6*∑*∑ (see Figure 10 C in the first-half article).
We will call this broken line the “Populational-Emotional-Fingerprint” or, more simply, the “Populational-Key-Graph” (see Figure 1(b)). Each of the 3 Face-Tests has a Populational-Key-Graph, which is represented as “consubstantial” with it.
Each broken line is the result of the “emotional interaction”: 1) of an “emotional expressiveness” (transmitter) specific to each FaTe and 2) of an “emotional sensitivity” (receiver) specific to the compilation of the 204 observers of the population sample. It is difficult, for the moment, to distinguish these two components. Accordingly, we will merge them under the term “Visual-Facial-Emotionality”, which can be defined by the reflective interaction in the duo of “transmitter-receiver”. In other words, “Visual-Facial-Emotionality” is the ability to: 1) express, 2) feel, and 3) share emotions [15] [16] [17]. Therefore, Emotional-Visual-Acuity would be a partial measure of Visual-Facial-Emotionality, which is the resultant of the Polynomial:
[Ethnic-Sample-Population*identity Face Test*Ethnic-Face-Test*Emotions*environment*other]
A true measure of Visual-Facial-Emotionality would consist in measuring the interaction of the polynomials of two live observers “with beating hearts” communicating face to face. It is not easy to take the polynomials of a population
Figure 1. (a) Lifespan Syn-Diachronic Populational-Key-Graph for all FaTes, each of the 6 Emotional Series, all Age Groups, and all Observers (Po-KeyGr #∑*6/6*∑*∑). “Joy” is the best-recognized emotion and “anger” is the least recognized emotion. (b) Lifespan Syn-Diachronic Populational-Key-Graph for each of 3 FaTes, each of the 6 Emotional Series, all Age Groups, and all Observers (Po-KeyGr #3/3*6/6*∑*∑). The sample of 204 observers perceives each of the 3 Face-Tests differently. This points towards an idiosyncrasy, and not a universalism, of the Visual-Recognition of Facial-Emotional-Expressions. (c) Lifespan “Plate-Stacking-Graph” of the Populational-Key-Graph for each of 3 FaTes, each of the 6 Emotional Series, all Age Groups, and all Observers (Po-KeyGr #3/3*6/6*∑*∑). This graph is the most composite and the most explicit on the interaction, during life, of 1) the 204 observers with the Polynomial [Ethnic-Sample-Population*identity Face Test*Ethnic-Face-Test*Emotions*environment*other] and 2) the 3 FaTes together. (d) “Plate-Stacking-Graph” of the Populational-Key-Graph for all FaTes, each of the 6 Emotional Series, each of the 7 Age Groups and all Observers (Po-KeyGr #∑*6/6*7/7*∑). The grouping of the 6 Emotional Series for each of the 7 age groups shows statistically significant differences.
sample who observe a third person in front of them. This constitutes a mathematical limit of the concept of Visual-Facial-Emotionality. Therefore, we will conflate the two concepts, Visual-Facial-Emotionality and Emotional-Visual-Acuity, to keep things simple and comprehensible. We will use them interchangeably, although they are not the same, as explained above.
2.3. Lifespan “Plate-Stacking-Graph” of the Populational-Key-Graph for Each of 3 FaTes, Each 6 Emotional Series, All Age Groups and All Observers (Po-KeyGr #3/3*6/6*∑*∑)
We observe a superimposition of horizontal lines resembling a “stack of plates”. We will therefore call this type of graph a “Plate-Stacking-Graph”. It is nothing other than a re-transposition of the data from the abscissa axis to the ordinate axis. However, it is easier to understand and provides more information than the graph in Figure 1(b).
There was a significant difference between the 3 FaTes in lifespan Emotional-Visual-Acuity when we compared the variables on the 3 FaTes: Wilks’ Λ = .811, F(12, 1208) = 75.383, p < .001, partial η² = .428. A separate ANOVA was conducted for each dependent variable, with each ANOVA evaluated at an alpha level of .001. There was a significant difference for each emotional series.
The lifespan transcription of the measurements of Facial-Visual-Emotionality (all age groups combined) for each FaTe and for each emotional series appears in the graph of Figure 1(c). We observe that “joy” is the emotion best perceived for each FaTe, apart from “sadness”, which precedes “joy” by 3% for the FaTe-Brunette. “Anger” is the least easily recognized emotion for the 3 FaTes, though within 1% of “surprise” for the FaTe-Brunette. Apart from this originality of the “joy”/“sadness” inversion for the FaTe-Brunette and the “surprise”/“anger” inversion for the FaTe-Man, the graphic representation of the Emotional-Visual-Acuity measurements is identical for the 3 FaTes. This observation evokes a neurophysiological constant, probably contaminated by the civil identity, the ethnic origin of the FaTe, or something else.
This graph visualizes all the lifespan “interstices”: 1) the Major-Visual-Facial-Emotionality and the Minor-Visual-Facial-Emotionality, 2) all the emotional-channels, and 3) the Emotional-Focus, that is, the distance between the Major-Visual-Facial-Emotionality and the Minor-Visual-Facial-Emotionality, for the 6 Emotional-Series and for each FaTe. One perceives immediately that, for each FaTe, all these explicit measures are obvious and easily measured. These measures seem specific to each FaTe. In other words, they are idiosyncratic and point to a non-universal perception of facial emotions.
As an example, for:
1) the FaTe-Blond: the Emotional-Focus is 69% − 57% = 12%; the Major-Visual-Facial-Emotionality is here “joy” at 69%, and the Minor-Visual-Facial-Emotionality is “anger” at 57%;
2) the FaTe-Brunette: the Emotional-Focus is 69% − 48% = 21%; the Major-Visual-Facial-Emotionality is “sadness” at 69%, and the Minor-Visual-Facial-Emotionality is “surprise” at 48%; and
3) the FaTe-Man: the Emotional-Focus is 71% − 49% = 22%; the Major-Visual-Facial-Emotionality is here “joy” at 71%, and the Minor-Visual-Facial-Emotionality is “anger” at 49%.
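These three readings follow one simple rule: the Emotional-Focus is the gap between the best- and worst-recognized emotions of a profile. A minimal sketch, with values loosely patterned on the FaTe-Brunette example above (these are not the study’s exact figures):

```python
# Hypothetical Emotional-Visual-Acuity profile for one FaTe (illustrative values)
acuity = {"anger": 0.50, "disgust": 0.58, "joy": 0.65,
          "fear": 0.62, "surprise": 0.48, "sadness": 0.69}

def emotional_focus(acuity):
    """Return (Major, Minor, Emotional-Focus) for one FaTe's acuity profile."""
    major = max(acuity, key=acuity.get)   # Major-Visual-Facial-Emotionality
    minor = min(acuity, key=acuity.get)   # Minor-Visual-Facial-Emotionality
    return major, minor, acuity[major] - acuity[minor]

major, minor, focus = emotional_focus(acuity)
```

The same function applies unchanged to the profile of any FaTe, any age group, or any individual observer.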
Identical measures are possible for any individual FaTe in any population sample: a FaTe-politician, a FaTe-movie-actor, etc. They will apply equally to a French population sample, an Italian population sample, an Indian population sample, etc. All measures and their combinations will be possible for all different and diverse populations.
This graphic may relate to: 1) one or more FaTes, 2) one or more Emotional-Series, 3) a single population sample with observers of different ages, or 4) one or more population samples within the same age group. The simple observation of Figure 1(c) allows an intuitive comprehension.
2.4. “Plate-Stacking-Graph” of the Populational-Key-Graph for All FaTes, Each of the 6 Emotional Series, Each of the 7 Age Groups and All Observers (Po-KeyGr #∑*6/6*7/7*∑)
The 3 FaTes are reduced to a single measure. There was a significant difference between the 7 age groups when we compared the variables in the 6 emotional series: Wilks’ Λ = .062, F(2, 312) = 845.892, p < .001, partial η² = .066. A separate ANOVA was conducted for each dependent variable (6 emotional series), with each ANOVA evaluated at an alpha level of .001. There was a significant difference for each emotional series.
We observe significant variations in most emotional channels. The Emotional-Visual-Acuity of “joy” improves from the 20 - 30 age group to the 51 - 55 age group. The Emotional-Visual-Acuity of “fear” is superimposed on that of “sadness”: the Emotional-Visual-Acuity of fear and sadness are the same throughout life.
The Emotional-Focus remains narrow for all 7 age groups. The Emotional-Visual-Acuity of “joy” improves with advancing age; it is at its best from the 51 - 55 age group. There is no uniformity in the measurements of the Emotional-Visual-Acuity while advancing or maturing with age. The adjacent emotional-channel [anger = surprise] is the widest of all the emotional-channels; it is measured at 8% on average. The Emotional-Focus is constant, with an average of 17%. It is framed by the Major-Visual-Facial-Emotionality “joy” and the Minor-Visual-Facial-Emotionality “anger” (see Figure 1(d)).
The comparison between the 20 - 30 and 65 - 70 age groups shows: 1) an improvement of the Emotional-Visual-Acuity for “anger” (49% vs 53%) and “joy” (64% vs 70%), and 2) a stability of the Emotional-Visual-Acuity for “disgust” (61% vs 62%), “fear” (66% vs 66%), “surprise” (56% vs 58%), and “sadness” (64% vs 66%) over the lifespan. In other words, the ability to recognize anger and joy improves with advancing age, while the ability to recognize disgust, fear, surprise, and sadness remains at a relatively stable level throughout the lifespan.
2.5. Syn-Diachronic Populational-Key-Graph for the FaTe-Blond, Each of the 6 Emotional Series, Each of the 7 Age Groups and All Observers (Po-KeyGr #Blond*6/6*7/7*∑)
There was a significant difference between the 6 Emotional-Series when we considered them all together on the variables “anger”, “disgust”, “joy”, “fear”, “surprise”, and “sadness”: Wilks’ Λ = .675, F(36, 845) = 2.196, p < .001, partial η² = .063. A separate ANOVA was conducted for each dependent variable, with each ANOVA evaluated at an alpha level of .001. There was only a significant difference for “joy”: F(6, 197) = 6.068, p < .001, partial η² = .156 (Figure 2(a)).
2.6. “Plate-Stacking-Graph” of the Populational-Key-Graph for the FaTe-Blond, Each of the 6 Emotional Series, Each of the 7 Age Groups and All Observers (Po-KeyGr #Blond*6/6*7/7*∑)
There was a significant difference between the 7 age groups when we compared the variables on the 6 emotional series: Wilks’ Λ = .223, F(35, 709) = 8.677, p < .001, partial η² = .259. A separate ANOVA was conducted for each dependent variable (6 emotional series), with each ANOVA evaluated at an alpha level of .001. There was a significant variation in each emotional series (Figure 2(b)).
The Emotional-Focus remains narrow for all 7 age groups. The Emotional-Visual-Acuity of “joy” improves with advancing age; it is at its best for the 51 - 55 age group. There is a nonuniformity in the recorded measures of the Emotional-Visual-Acuity through ages 60 to 70. The adjacent emotional-channel [anger = surprise] is the widest of all the emotional-channels; it is measured at 4% on average. The Emotional-Focus is constant, with an average of 12%. It is framed by the Major-Visual-Facial-Emotionality “joy” and the Minor-Visual-Facial-Emotionality “anger”.
The comparison between the 20 - 30 and 65 - 70 age groups shows: 1) an improvement of the Emotional-Visual-Acuity for “anger” (55% vs 59%), “joy” (64% vs 70%), and “surprise” (56% vs 63%), and 2) a decrease in Emotional-Visual-Acuity for “disgust” (68% vs 66%), “fear” (68% vs 66%), and “sadness” (66% vs 64%).
2.7. Syn-Diachronic Populational-Key-Graph for the FaTe-Brunette, Each of the 6 Emotional Series, Each of the 7 Age Groups and All Observers (Po-KeyGr #Brunette*6/6*7/7*∑)
There was no significant difference between the 6 Emotional-Series when we compared them for the variables “anger”, “disgust”, “joy”, “fear”, “surprise”, and “sadness”: Wilks’ Λ = .766, F(36, 845) = 1.460, p = .039, partial η² = .043. A separate ANOVA was conducted for each dependent variable, with each ANOVA evaluated at an alpha level of .001. There was no significant difference among the 7 age groups (Figure 3(a)).
The general trend is the superimposition of the 7 broken lines of the age groups at each emotional series. However, the layering is of higher quality for the FaTe-Blond. This results in less dispersion within each age group
Figure 2. (a) Syn-Diachronic Populational-Key-Graph for the FaTe-Blond, each of the 6 Emotional Series, each of the 7 Age Groups and all Observers (Po-KeyGr #Blond*6/6*7/7*∑). The grouping of the 6 Emotional Series for each of the 7 Age Groups shows statistically significant differences. The merging of the 7 broken lines leads to the yellow broken line of the FaTe-Blond in Figure 1(b). (b) “Plate-Stacking-Graph” of the Populational-Key-Graph for the FaTe-Blond, each of the 6 Emotional Series, each of the 7 Age Groups and all Observers (Po-KeyGr #Blond*6/6*7/7*∑). The “Plate-Stacking-Graph” view of the 7 Age Groups of Emotional Fingerprints of the FaTe-Blond for each of the 6 Emotional Series shows statistically significant differences.
Figure 3. (a) Syn-Diachronic Populational-Key-Graph for the FaTe-Brunette, each of the 6 Emotional Series, each of the 7 Age Groups and all Observers (Po-KeyGr #Brunette*6/6*7/7*∑). The grouping of the 6 Emotional Series for each of the 7 Age Groups shows statistically significant differences. The merging of the 7 broken lines leads to the violet broken line of the FaTe-Brunette in Figure 1(b). (b) “Plate-Stacking-Graph” of the Populational-Key-Graph for the FaTe-Brunette, each of the 6 Emotional Series, each of the 7 Age Groups and all Observers (Po-KeyGr #Brunette*6/6*7/7*∑). The grouping of the 7 Age Groups for each of the 6 Emotional Series shows statistically significant differences.
(see Figure 2(a)).
2.8. “Plate-Stacking-Graph” of the Populational-Key-Graph for the FaTe-Brunette, Each of the 6 Emotional Series Each of the 7 Age Groups and All Observers (Po-KeyGr #Brunette*6/6*7/7*∑)
There was a significant difference between the 7 age groups when we compared the variables for the 6 Emotional Series: Wilks’ Λ = .099, F(35, 709) = 14.853, p < .001, partial η² = .370. A separate ANOVA was conducted for each dependent variable, with each ANOVA evaluated at an alpha level of .001. There was a significant difference for each emotional series (Figure 3(b)).
The Emotional-Channels and the Emotional-Focus are wider than for the FaTe-Blond and the FaTe-Man. The ordering of the Emotional-Series remains globally uniform with advancing age. There is a quasi-superposition of the broken lines of the Emotional-Visual-Acuity of “anger” and of “surprise”. Unlike the FaTe-Blond, “sadness” is the emotion that has the best Emotional-Visual-Acuity across all age groups. “Anger” and “surprise” seem tied; they have the worst Emotional-Visual-Acuity. After a period of strangulation between the ages of 56 and 65, the Emotional-Focus opens.
The adjacent emotional-channel [anger = disgust] is the widest of all the emotional-channels; it is measured at 10% on average. The Emotional-Focus is constant, with an average of 21%. It is framed by the Major-Visual-Facial-Emotionality “sadness” and the Minor-Visual-Facial-Emotionality “anger”.
The comparison between the 20 - 30 and 65 - 70 age groups shows: 1) an improvement of the Emotional-Visual-Acuity for “anger” (48% vs 50%), “disgust” (56% vs 60%), “joy” (62% vs 65%), and “sadness” (70% vs 71%), 2) a decrease in Emotional-Visual-Acuity for “surprise” (50% vs 46%), and 3) stability for “fear” (62% vs 62%).
2.9. Syn-Diachronic Populational-Key-Graph for the FaTe-Man, Each of the 6 Emotional Series, Each of the 7 Age Groups and All Observers (Po-KeyGr #Man*6/6*7/7*∑)
There was a significant difference between the 6 Emotional-Series when we compared the variables “anger”, “disgust”, “joy”, “fear”, “surprise”, and “sadness”: Wilks’ Λ = .662, F(36, 845) = 2.312, p < .001, partial η² = .066. A separate ANOVA was conducted for each dependent variable, with each ANOVA evaluated at an alpha level of .001. There was only a significant difference for “sadness”: F(6, 197) = 6.70, p < .001, partial η² = .169. The general trend is towards the superposition of the broken lines. However, we observe a dispersion for “sadness”. The dispersion of “joy”, even if it is obvious, is not significant due to a “p” threshold set at .001 following the Bonferroni correction (Figure 4(a)).
Figure 4. (a) Syn-Diachronic Populational-Key-Graph for the FaTe-Man, each of the 6 Emotional Series, each of the 7 Age Groups and all Observers (Po-KeyGr #Man*6/6*7/7*∑). The grouping of the 6 Emotional Series for each of the 7 Age Groups shows statistically significant differences. The merging of the 7 broken lines leads to the blue broken line of the FaTe-Man in Figure 1(b). (b) “Plate-Stacking-Graph” of the Populational-Key-Graph for the FaTe-Man, each of the 6 Emotional Series, each of the 7 Age Groups and all Observers (Po-KeyGr #Man*6/6*7/7*∑).
2.10. “Plate-Stacking-Graph” of the Populational-Key-Graph for the FaTe-Man, Each of the 6 Emotional Series, Each of the 7 Age Groups and All Observers (Po-KeyGr #Man*6/6*7/7*∑)
There was a significant difference between the 7 age groups when we considered them all together on the variables for the 6 Emotional Series: Wilks’ Λ = .096, F(35, 709) = 15.094, p < .001, partial η² = .374. A separate ANOVA was conducted for each dependent variable, with each ANOVA evaluated at an alpha level of .001. There was a significant difference for each emotional series (Figure 4(b)).
The adjacent emotional-channel [anger = disgust] is the widest part of all the channels. It is measured at 10% on average. The general trend is towards a constancy of the measurements due to a quasi-parallelism with the abscissa axis of the 6 broken straight lines. The Emotional-Focus is constant with an average of 20%. It is framed by the Major-Visual-Facial-Emotionality “joy” and Minor-Visual-Facial-Emotionality “anger”.
The comparison between the 20 - 30 and 65 - 70 age groups shows an improvement in Emotional-Visual-Acuity for all emotions: “anger” (44% vs 50%), “disgust” (60% vs 61%), “joy” (67% vs 72%), “fear” (67% vs 69%), “surprise” (62% vs 66%), and “sadness” (56% vs 64%).
The measures in these 3 graphs and their conceptualization are quite explicit, and their understanding is simply intuitive.
3. At the Scale of a Single Observer: Emotional-Fingerprint or Individual-Key-Graph, Results, and Graphs
Synchronic Individual-Key-Graph for Each of the 3 FaTes, Each of the 6 Emotional Series, a Single Age Group and a Single Observer (In-Key-Gr #3/3*6/6*1*1)
The production of statistics is not possible because these are the results of a single individual and not of a population of individuals. However, at first glance we observe obvious differences (Figure 5).
We will name this broken line the “Individual-Emotional-Fingerprint” or “Individual-Key-Graph”. It represents the grouping and succession of the averaged Emotional-Visual-Acuity measurements for each of the 9 packets of the 6 Emotional-Series, for a single or several Face-Tests (each of the 3 FaTes in our graph), and for a single observer. Comparative studies can be carried out between: 1) observer “Alpha” before and after injury or treatment, 2) observer “Alpha” versus observer “Beta”, and 3) observer “Alpha” versus a sick or non-sick population sample. In other words, everyone perceives the emotions on the faces of his contemporaries in a way that is like no other. This chart attests to a “perceptible-emotional-idiosyncrasy” of visual perception.
Figure 5. Synchronic Individual-Key-Graph for each of the 3 FaTes, each of the 6 Emotional Series, a single Age Group and a single Observer (In-Key-Gr #3/3*6/6*1*1). The 3 Individual-Key-Graphs, corresponding respectively to the 3 Face-Tests (FaTe-Blond, FaTe-Brunette, and FaTe-Man), are statistically significantly different. One observer perceives these 3 Face-Tests differently.
4. Discussion
4.1. General Statistics
Our results differ from those of Eiland and Richardson in 1976 [18]. For the latter, neither the race, sex, nor age of the observers influences their Emotional-Visual-Recognition ability. Moreover, our results show that the sex and education level of observers do not influence this variable. For us, age, race, and identity (FaTe) have a statistically significant influence on Emotional-Visual-Acuity, while for Wingenbach et al. in 2018 [19] women have a better Emotional-Visual-Recognition. For Forni-Santos and Osório in 2015 [20], the analysis of the studies carried out to date does not allow definitive conclusions regarding the role of the sex of the observer in the recognition of facial emotion, mainly due to the absence of standardized methods of investigation. Our results show that the Emotional-Visual-Acuity is the same for male and female observers. A FaTe-male vs FaTe-female crossover study with male vs female observers would be relevant if we had many more FaTes available to us, for a future in-depth investigation.
4.2. Gun-Trigger-Graph or Graphs of Emotional-Decision-Making
4.2.1. Emotional-Decision-Making Categorical vs Proportional
1) Fundamental-Individual-Graph-Emotional-Decision-Making
The discussion that makes it possible to separate the categorical aspect from the proportional aspect of decision-making is still a topic of interest [21] [22] [23] [24]. This study with M.A.R.I.E. attempts to provide an answer.
The transcriptions of:
a) The Fundamental-Individual-Emotional-Decision-Making measurements for each of the 1 (FaTe) × 6 (Emotional-Series) × 1 (observer) × 19 (stimulus images) allow the construction of 6 Fundamental-Individual-Graphs-Emotional-Decision-Making with 114 binary responses. We recall and specify that each Emotional-Serie is composed of 19 stimulus images.
b) The Individual-Graph-of-Emotional-Decision-Making measurements for each of the 1 (FaTe) × 6 (Emotional-Series) × 1 (observer) × 9 (stimulus packets) allow the construction of 6 Individual-Graphs-of-Emotional-Decision-Making with 54 binary responses. We recall and specify that each Emotional-Serie is composed of 9 packets.
The Individual-Graph-of-Emotional-Decision-Making allows one to see the responses of an observer, for a FaTe and for an Emotional-Serie, across its 9 packets. It makes it possible to understand the transition from the Fundamental-Individual-Graph-Emotional-Decision-Making to the Individual-Graph-Emotional-Decision-Making and the Sample-Population-Graph-Emotional-Decision-Making according to the manipulated variables (see Figure 6 in the first-half article, the green broken line).
It is interesting to note that for certain observers and certain Emotional-Series, the Fundamental-Individual-Graph-Emotional-Decision-Making looks like a Heaviside-type threshold function (effect “I”) (see Figure 13 A in the first-half article) [25]. This type of graph is observed in Artificial Intelligence in the context of neural networks [26] [27]. The observation of this similarity could allow a better understanding of fundamental mechanisms in neuropsychology, thanks to comparative studies between biology and artificial intelligence. As such, the unitary functioning of an artificial neuron is of the Heaviside type, with a threshold curve identical to the effect “I”. The operation of a multiple neural network is closer to a sigmoid curve, as we observe for each of the 3 FaTes (see Figure 7 in the first-half article) [28] [29].
At the individual level of a single observer, apart from the effect “I”, we never have a sigmoid but an interval of uncertainty or ambivalence range: effect “N” or effect “M”. This is what we call the Ambivalence-Beach. It is the sum of the effects “I”, “N” and “M” for an age group of 30 observers, or for the population sample of 204 observers, which makes it possible to arrive at a sigmoid curve. The transition from a broken line with a sigmoid look to a real rounded sigmoid depends on the number of observers. In other words, it is because the observers do not all respond in the same way that the sigmoid curve is constructed. This finding assumes an unpredictability of an observer’s response to the Image-Stimuli. This unpredictability, and therefore this variability, is greatest for the Image-Stimulus-of-Flip, which is identified by the largest standard deviation. Nevertheless, the limits of the possible answers are well identified. This observation evokes a normal distribution.
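The claim that many Heaviside-type individual responses aggregate into a population sigmoid can be simulated. A hypothetical sketch, assuming each observer is a pure effect-“I” responder whose flip point is drawn at random (these are simulated, not measured, values):

```python
import random

random.seed(0)
LEVELS = list(range(19))          # the 19 stimulus images of an Emotional-Serie

def observer_response(flip):
    """Heaviside-type 'effect I' observer: 0 below the flip image, 1 from it on."""
    return [1 if level >= flip else 0 for level in LEVELS]

# 204 simulated observers, each with their own Image-Stimulus-of-Flip
responses = [observer_response(random.randint(5, 14)) for _ in range(204)]

# Population curve: proportion of observers recognizing the emotion at each level
curve = [sum(r[i] for r in responses) / len(responses) for i in range(len(LEVELS))]
```

Each individual curve is a step, yet `curve` rises smoothly from 0 to 1: the rounding of the population sigmoid comes entirely from the dispersion of flip points across observers.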
Conversely, if the effect-“I” response of each of the 204 observers occurred at the same Image-Stimulus-of-Flip, then the answer of a 205th observer would be predictable. This would allow a “robot” type of operation thanks to an algorithm common to all observers. This possibility is not to be excluded under the hypothesis that Visual-Facial-Emotions-Recognition is innate, with neural networks and wiring prior to birth that are universally true for all human beings. However, our findings do not support such a hypothesis.
The effects “A” and “V” appear to be errors or misunderstandings, depending on their distance from the Image-Stimulus-of-Flip. They are almost non-existent for the effect “A”, i.e., 0% for the 3 FaTes. They are measured at 1% for the effect “V” for packets # 7, # 8 and # 9 for the 3 FaTes. This observation testifies to a phenomenon that is not explained by a simple error or misunderstanding. The effect “V” explains the absence of 100% recognition to the right of the Image-Stimulus-of-Flip. Although the effects “A” and “V” are part of the experiment, they can appear to be attentional errors. This observation would point towards an attention deficit close to the work of Newman on people who present psychopathic personality traits [30]. These errors or misunderstandings, in other words “Misunderstanding-Errors”, occur within a succession of identical and coherent answers. It would be useful to know whether these Misunderstanding-Errors reflect reality or whether the methodology favors them. A continuation of this article would be to “refine” the figures by removing the effects “A” and “V” from all the data and considering only the effects “N” and “M”. It would be interesting to individualize each of them so as to constitute or represent different respective population samples.
The categorization of the effect types “I”, “N”, “M”, “A” and “V” can identify repetitive phenomena from one observer to another. We can deduce that these functional processes could have common points or even similar organic locations. It would be interesting to individualize each of them so as to constitute or represent the majority in a population sample. The selection of a sample of observers who have a type-“I” profile located on the same Image-Stimulus-of-Flip for the same Emotional-Serie and the same FaTe would indicate a common neural circuitry or neuropsychological functioning. In other words, there would be a neural wiring prior to birth. Performing functional imaging of this population sample for the same FaTe, the same Emotional-Serie, and the same Image-Stimulus-of-Flip would certainly be more localizing. It would show in a finer and more precise way the cerebral territories involved in 1) the physiology of decision-making, and 2) the interpretation of this emotion.
As such, to allow good communication between researchers and clinicians, we propose the following nomenclature:
a) “I effect”: “I-38” indicates that the Image-Stimulus-of-Flip contains 38% PixB, which corresponds to image-stimulus No. 6.
b) “N effect”: “N-38*44”, i.e., between Images-Stimuli No. 6 and No. 8; the Ambivalence-Beach begins at 38% PixB and ends at 44% PixB, i.e., a differential of 6% PixB.
c) “M effect”: “M-35*50”, i.e., between Images-Stimuli No. 5 and No. 10; the Ambivalence-Beach begins at 35% PixB and ends at 50% PixB, i.e., a differential of 15% PixB.
We postulate that it is always the “B” type responses (%RespB) and the “B” type pixels (%PixB) that we consider.
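As a minimal sketch, the proposed codes can be generated and parsed programmatically; the helper names and signatures below are illustrative assumptions, not part of the article’s nomenclature:

```python
def effect_code(effect, start_pixb, end_pixb=None):
    """Format the proposed nomenclature for decision-making effects.

    "I" effects take a single %PixB value (the Image-Stimulus-of-Flip);
    "N" and "M" effects take the start and end %PixB of the
    Ambivalence-Beach, separated by "*".
    """
    if effect == "I":
        return f"I-{start_pixb}"
    if effect in ("N", "M"):
        return f"{effect}-{start_pixb}*{end_pixb}"
    raise ValueError(f"unknown effect: {effect}")

def ambivalence_width(code):
    """Differential %PixB of an Ambivalence-Beach code such as 'N-38*44'."""
    span = code.split("-", 1)[1]
    start, end = (int(v) for v in span.split("*"))
    return end - start
```

`effect_code("N", 38, 44)` yields `"N-38*44"`, and `ambivalence_width("N-38*44")` returns the differential of 6% PixB.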
The effects “N” and “M” delimit an Ambivalence-Beach, which is important since decision-making there is difficult, time-consuming, and accompanied by increased cerebrovascular flow in one or more regions. This evokes a neuronal and synaptic over-activation of the circuits of the emotions, or of the circuit specific to this emotion. Experts and researchers in functional imaging (e.g., fMRI) and neurophysiology should be interested in a homogeneous population sample for the effects “N” and “M”. The identification of circuits specific to an emotion would then stand out better from the background noise. This work would make it possible to understand the continuous aspect of Visual-Facial-Emotions-Recognition.
On the one hand, the sample with the “I” effect and, on the other hand, the sample which groups the “M” and “N” effects bring into focus the controversy over the categorical or proportional aspect of Visual-Facial-Emotions-Recognition, which still prevails, as evidenced by the compilation of articles by many authors and more specifically the work of Harris and Young in 2012 [31]. These authors use computer morphing. In their discussion, they evoke parallel brain processing that is both categorical and continuous, depending on the brain circuits or regions that process emotional perception. They find that the amygdala processes Visual-Facial-Emotions-Recognition categorically, while the superior temporal sulcus processes it proportionally [21] [22] [23] [24] [31].
The measurement of the Sample-Population-Emotional-Decision-Making and the Fundamental-Individual-Graph-Emotional-Decision-Making could be an indirect reflection of synaptic functions. The administration of psychotropic drugs could have a positive or negative impact, with a reduction or increase in decision-making times, or even a modification of the effects “I”, “M”, “N”, “A” and “V”. This methodology for studying: 1) the Fundamental-Individual-Graph-Emotional-Decision-Making, 2) the Individual-Graph-of-Emotional-Decision-Making, 3) the Sample-Population-Emotional-Decision-Making, 4) the Individual-Emotional-Fingerprint and 5) the Populational-Emotional-Fingerprint or Populational-Key-Graph can now provide an original approach to mood disorders, psychotic illnesses, personality disorders, pervasive developmental disorders, autism spectrum disorders, etc. The same is true for vascular neurological diseases, neurodegenerative diseases, Huntington’s disease [32], obsessive-compulsive disorder [21], etc.
2) Better Brain Function When Emotional Decision-Making is Categorical
Our work evokes interest in a neuronal “mechanics” whose interactions are not limited to categorical or proportional functions. To simplify, we note that at the individual scale, Visual-Facial-Emotion-Recognition is categorical, while at the scale of the population sample it is proportional. However, the categorical aspect would be the most efficient, which testifies to optimum cerebral functioning. Indeed, we need to consider the work published by Granato, Vinekar et al. in 2020 [33]. That paper is pertinent to the neurodegenerative processes of Alzheimer’s disease. It finds 3 types of graphic representations in the charts: 1) a Heaviside curve, 2) a diagonal, and 3) a horizontal. The latter is the expression of random responses: the observer responds randomly to each of the 19 image-stimuli. It seems that the distinction between emotions “A” and “B” is no longer possible for a sample population with mild cognitive impairment suggestive of Alzheimer’s disease. In other words, for this population sample we will have as many answers identifying image “A” as image “B”. This results in a transformation of the sigmoid curve into a broken diagonal line, then its horizontalization. In other words, this is expressed, before the impairment sets in, by a diagonal line and, after the impairment sets in, by a horizontal straight line.
3) Explanatory Mental Image of the Concept of Categorical and Proportional and the Transition from One to the Other
Explaining the categorical and proportional dimensions requires taking as an example a balance with two pans and a needle in the middle. The categorical dimension can be explained by the presence of a mass on the right pan of the balance, which is in the low position. In the left pan, which is in the high position, the experimenter progressively deposits unit weights (the %PixB), which accumulate. Suddenly, when the accumulated weight of the left pan approaches that of the right pan, the right pan rises and the left pan lowers. This speed of conformational change is like the step of the Heaviside curve. The speed is expressed by the angle of inclination of the middle part of the Graphic-of-Emotional-Decision-Making or Gun-Trigger-Graph.
It is the neurophysiological equivalent of the weight of this mass which is interesting to study and understand. A proposed explanation would be that this weight is the “force of conviction” (or “strength of conviction”) that an individual grants, during this comparison task, to a sensory perception reinforced by the presence of a canonical emotional image in his personal memory library. In this case, all the individuals in the population sample have a similar weight with an a priori Gaussian distribution. This gives a sigmoid close to a Heaviside-type stair-step threshold function. Each emotion would have its own weight, expressed by a sigmoid shifted to the left or to the right on the abscissa axis. The disappearance of this canonical emotional image from the memory library (like the fading of a Polaroid photograph) leads to haphazard responses resulting in a horizontal graph. The presence of a diagonal may be the consequence of a population sample whose canonical emotional image weights differ from one observer to another. Consequently, the sigmoid is, firstly, a Heaviside-type threshold function degraded by the Gaussian distribution of the weights of conviction, which allows: 1) a diagonalization of the central part of this curve and 2) a rounding of the 2 ends of this central part. A possible but indirect measure of this “force of conviction” can be approximated by the projection onto the abscissa axis of the point of inflection of the diagonal of the sigmoid curve. The closer this projection is to the origin of the abscissa axis, the stronger the “force of conviction”, and vice versa.
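This indirect measure — the projection of the sigmoid’s inflection point onto the abscissa — can be sketched with a simple logit-linear fit. The response values below are invented for illustration only, not data from the study:

```python
import numpy as np

# Hypothetical population data: %PixB of the stimuli (abscissa) versus
# the proportion of "B" answers (ordinate).
pixb = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100], dtype=float)
resp_b = np.array([0.00, 0.01, 0.04, 0.15, 0.45,
                   0.80, 0.95, 0.99, 1.00, 1.00, 1.00])

# Fit logit(p) = k * (x - x0) by least squares on the interior points;
# the sigmoid's inflection abscissa x0 can then be read off directly.
interior = (resp_b > 0.0) & (resp_b < 1.0)
logit = np.log(resp_b[interior] / (1.0 - resp_b[interior]))
k, intercept = np.polyfit(pixb[interior], logit, 1)
x0 = -intercept / k  # abscissa of the inflection point
```

The smaller x0, the fewer %PixB the sample needs before recognition flips, i.e., the stronger the hypothesized “force of conviction”.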
Secondly, the organic and/or functional alteration of neurons in the networks specialized in the visual processing of facial emotions leads to a flattening of the sigmoid curve. Such a horizontal curve usually follows a diagonalization of the curve; as the disease advances, the curve approaches horizontality. This makes it possible to measure the degree of degradation. The graphic translation of this process is a straight line of type “y = ax + b”, with “a” as the leading coefficient. When the latter tends towards “0”, we obtain a horizontal line which represents a succession of random responses to each stimulus image. A study of the variables “a” and “b” would be of great interest for a good understanding of the weakening of emotion recognition across the various Face-Tests, whether associated with psychotropic drugs [34] [35] or with psychiatric and neurological pathologies [36] [37].
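A rough sketch of this degradation index — the leading coefficient “a” of a least-squares line fitted to the response curve — using idealized, invented response profiles:

```python
import numpy as np

def slope_index(pixb, resp_b):
    """Leading coefficient 'a' and intercept 'b' of the least-squares
    line y = a*x + b fitted to a response curve."""
    a, b = np.polyfit(pixb, resp_b, 1)
    return a, b

pixb = np.linspace(0.0, 100.0, 19)        # one point per Image-Stimulus
preserved = (pixb >= 50.0).astype(float)  # idealized categorical step
chance = np.full(19, 0.5)                 # random responses: horizontal line

a_preserved, _ = slope_index(pixb, preserved)
a_chance, _ = slope_index(pixb, chance)
```

Here a_preserved is clearly positive while a_chance is 0: as “a” tends towards 0, the fitted line flattens into the horizontal of purely random responses.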
Finally, the flattening and absence of slope of this diagonal constitutes a horizontal graph, which indicates that the responses of the observers are due to chance, with a Bernoulli-type distribution. In other words, a horizontal line with an ordinate located at 50% represents reliance of the subjects on chance rather than actual recognition of emotions.
It can also be said that the categorical aspect and the proportional aspect are only the expression of the same unique phenomenon. It is distinguished by a functioning that is: 1) optimum, which gives it a categorical aspect (Heaviside threshold function), 2) degraded, which gives it a proportional aspect (diagonalization of the sigmoid), or 3) non-operational, which gives it the appearance of a horizontal line.
The same Image-Stimulus of the same Emotional-Serie elicits a success-or-failure response identical to a coin toss, i.e., a Bernoulli trial, when a single observer makes the decision to identify the emotion. When 204 different observers independently repeat this test, the number of “B” responses follows a binomial law, since the trials are identical and independent.
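This distinction can be sketched in a few lines; the response probability of 0.6 is an arbitrary illustration, not a measured value:

```python
import random

random.seed(1)  # reproducible illustration

def observer_response(p_b=0.6):
    """One observer's decision on one Image-Stimulus: a Bernoulli trial
    with a hypothetical probability p_b of answering "B"."""
    return random.random() < p_b

# 204 independent observers repeating the same trial: the count of "B"
# answers follows a binomial law B(204, p_b).
count_b = sum(observer_response() for _ in range(204))
```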
This reflection was discussed in the article by Granato et al. in 2018 [38]. Today we could associate the weight of this mass, or “force of conviction”, with the categorical dimension. Consequently, the higher this index, the more optimal the brain functioning.
In summary, we believe that optimum functioning of all the neurons that process emotions will be expressed in a categorical way, by a Heaviside-type staircase curve. Impaired functioning will be expressed in a proportional way, by a diagonal with a variable slope. Severe impairment or non-functioning will be expressed by a horizontal line.
These hypotheses give meaning to the results obtained in an article which studied Visual-Facial-Emotion-Recognition in people with mild cognitive impairment of the Alzheimer type (see Granato et al., 2020 [33]).
4.2.2. Emotional-Decision-Making Graph or Gun-Trigger-Graph at the Scale of the Population Sample
For this sample population of 204 observers, the Emotional-Visual-Acuity of the 9 packets from each of the 6 Emotional-Series is statistically different for: 1) all 3 FaTes (Wilks’ Λ = .456, F(45, 5415) = 23.127, p < .001) (see Figure 8 in the first half article), 2) the FaTe-Blond (Wilks’ Λ = .661, F(40, 5281) = 13.131, p < .001) (see Figure 10 A in the first half article), 3) the FaTe-Brunette (Wilks’ Λ = .340, F(45, 5415) = 32.827, p < .001) (see Figure 10 B in the first half article), and 4) the FaTe-Man (Wilks’ Λ = .345, F(45, 5415) = 32.313, p < .001) (see Figure 10 C in the first half article).
This sample can discern the variation of %PixB for packets # 2, # 3, # 4, # 5 and # 6 for the 6 Emotional-Series on each of the 3 FaTes. The scores of each of the Emotional-Series for each Face-Test are different. This implies that the identity of the Face-Test can influence the scores. As a result, Emotional-Visual-Acuity depends at least on the face that expresses the emotions.
The analysis of the morphology of the 18 sigmoids (3 Face-Tests × 6 Emotional-Series) can be schematized in 3 parts: a plateau on the left, almost resting on the abscissa axis, for packets # 1 and # 2; a plateau on the right, parallel to the abscissa axis, for packets # 7, # 8 and # 9; and, between these two plateaus, broken lines that oscillate between: 1) a near parallelism with the y-axis: a) the Emotional-Series “neutral-joy” and “neutral-fear” for the 3 Face-Tests (see Figure 8 in the first half article), b) for the FaTe-Blond, the EmSe “neutral-disgust” (see Figure 10 A in the first half article), c) for the FaTe-Brunette, the EmSe “neutral-sadness” (see Figure 10 B in the first half article), d) and for the FaTe-Man, the EmSe “neutral-surprise” (see Figure 10 C in the first half article); as a result, these last three emotions seem to characterize these 3 Face-Tests respectively; 2) an obliqueness of broken straight lines, which are distinguished by their slope. The slope is minimum for the: 1) EmSe “neutral-anger” for the 3 FaTes, 2) EmSe “neutral-anger” for the FaTe-Blond, 3) EmSe “neutral-anger” for the FaTe-Brunette and 4) EmSe “neutral-anger” for the FaTe-Man. In other words, this sample population has good Emotional-Visual-Acuity for: 1) “joy” for the FaTe-Blond (see Figure 10 A in the first half article); 2) “sadness” for the FaTe-Brunette (see Figure 10 B in the first half article) and 3) “joy” for the FaTe-Man (see Figure 10 C in the first half article).
Each of these FaTes has: 1) a “Major-Visual-Facial-Emotionality”: a) “joy” for the FaTe-Blond, b) “sadness” for the FaTe-Brunette and c) “joy” for the FaTe-Man; and 2) a “Minor-Visual-Facial-Emotionality”: a) “anger” for the FaTe-Blond, b) “anger” or even “surprise” for the FaTe-Brunette and c) “anger” for the FaTe-Man. As a result, “anger” seems to be a constant independent of the FaTe.
Apart from the Major-Visual-Facial-Emotionality specific to each FaTe, the medians of the 18 sigmoids are distinguished by their positioning on the abscissa axis. The distance between the Major-Visual-Facial-Emotionality, on the left, and the Minor-Visual-Facial-Emotionality, on the right, is not an emotional-channel but an Emotional-Focus. It is narrow for the FaTe-Blond, wider for the FaTe-Brunette and even wider for the FaTe-Man.
4.2.3. Individual-Graph-Emotional-Decision-Making or Individual-Gun-Trigger-Graph
The Individual-Graph-Emotional-Decision-Making differs from the Fundamental-Individual-Graph-Emotional-Decision-Making by the presence on the abscissa axis of the 9 packets instead of the 19 stimulus-images.
The Fundamental-Individual-Graph-Emotional-Decision-Making is of more interest to fundamental research; the Individual-Graph-of-Emotional-Decision-Making is more clinical. Its speed and ease of interpretation make the Individual-Graph-of-Emotional-Decision-Making a good screening tool in clinical routine. Our work identifies a typical physiological profile, with the caveat of a “supra-normal” population sample due to the following statistical biases: 1) optimum cognition from 20 to 70 years old, 2) a low anxiety measure (HAMA), hence no anxiety syndrome, 3) a low mood measure (HDRS), hence no depressive syndrome. The identification of several standard or “normal” profiles is necessary to guide the Individual-Graph-of-Emotional-Decision-Making. The latter allows comparison between healthy observers or sick observers, as well as the comparison of an individual observer or patient with a reference type of Individual-Graph-of-Emotional-Decision-Making.
4.2.4. Populational-Graph-of-Emotional-Decision-Making or Po-Gun-Trigger-Graph, Po-GunTrGr#∑-∑-∑-∑ and Population Pixels
Packet # 1 consists of Image-Stimulus N˚ 1, which contains 0% PixB and 100% PixA, i.e., “neutrality”. This stimulus-image N˚ 1 is correctly recognized as “neutrality” by 100% of the population sample (see Figures 6, 10 A, B, C, D in the first half article). This finding assumes that “neutrality” would be emotionally robust and invariant. A confusion between “neutrality” and another emotion, for packet # 1, would be more concerning than a confusion between two different emotions. Nevertheless, the theoretical possibility that “neutrality” is not an emotion cannot be excluded.
In the Populational-Graphic-of-Emotional-Decision-Making, or Populational-Gun-Trigger-Graph (Po-GunTrGr#∑-∑-∑-∑) and Population Pixels (see Figure 6 in the first half article), the graphical expression of the distribution of the pixel quanta PixB is a broken line that looks like a diagonal. Its coordinates are (0; 0) and (100; 100) at the extremities of the abscissa and ordinate axes. The morphology of this diagonal is fixed throughout our study; it is independent of each Emotional-Serie studied. The first intercept, or “intercept-low”, is the point of intersection between the line and the curve on the left of the graph. It represents the point where the percentage of PixB constituting the Images-Stimuli equals the percentage of the observers’ “B” responses. Looking at the graph, we should have a second intercept, called the “intercept-high”. The right plateau of the sigmoid curve of the Po-GunTrGr#∑-∑-∑-∑ and Population Pixels does not reach the value 100% on the ordinate. This assumes that some of the 204 observers in the population sample do not recognize image-stimulus N˚ 19 (packet # 9), which is at the origin of the Emotional-Serie and is administered in the last position. In addition to Image-Stimulus N˚ 19 (packet # 9), this measurement of less than 100% also concerns the quanta of: 1) packet # 7 (image-stimuli N˚ 15, N˚ 16 and N˚ 17) and 2) packet # 8 (image-stimulus N˚ 18). This attests to the reality of the phenomenon, which is repeated on each of the last 5 stimulus-images (N˚ 15, N˚ 16, N˚ 17, N˚ 18 and N˚ 19) of the Emotional-Serie. Remember that the population sample studied is “supra-normal” in terms of cognition (MATTIS) and memory (Grober and Buschke), with low scores for anxiety (HAMA) and depression (HDRS). These scores are optimum for each observer. In other words, our entire population sample is assumed, on the tests administered, to be intelligent, cheerful, and euthymic. We attempted to control for the random variables:
“Cognition”, “Memory”, “Anxiety” and “Mood”. Nevertheless, 1% of our sample of a population homogeneous in terms of cognition does not recognize the basic emotions on the faces of their contemporaries. This observation opens the debate on a “normal” population, insofar as its “cognition”, “HAMA” and “HDRS” scores would be heterogeneous. The hypothesis that this percentage is greater than 1% cannot be ruled out and deserves to be discussed. The confirmation of this hypothesis could be an explanatory factor for the violence or lack of empathy of certain individuals in society. It could be a specific neuropsychological disorder present in certain pathologies such as antisocial personality, pervasive developmental disorders, autism spectrum disorders, psychosis, mood disorders, etc. Our results agree with Gery et al., who find an inability of sexual aggressors to decode the emotions on the faces of their victims [39], and with findings in antisocial personality [40]. In summary, we can evoke the trans-nosological existence of a “Disorder of Visual-Facial-Emotions-Recognition” (DVFER) or, maybe even more appropriately labelled, “Disorder of the Emotional Visual Acuity” (DEVA) as a distinct disorder pervasively present in 1% or more of the normal population and with a higher prevalence in other psychopathologies and/or personality disorders. This could become a legitimate consideration for committees working on future editions of the DSMs or ICDs.
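The “intercept-low” introduced above — the point where the percentage of PixB equals the percentage of “B” responses — can be located by linear interpolation. The curve values below are invented for illustration:

```python
import numpy as np

def intercept_low(pixb, resp_b):
    """First crossing of the response curve above the diagonal y = x.

    On the Populational-Gun-Trigger-Graph the diagonal runs from (0; 0)
    to (100; 100); the intercept-low is where %RespB equals %PixB.
    Returns the interpolated abscissa, or None if there is no crossing.
    """
    diff = np.asarray(resp_b, float) - np.asarray(pixb, float)
    for i in range(1, len(pixb)):
        if diff[i - 1] < 0 <= diff[i]:  # sign change: curve meets diagonal
            t = -diff[i - 1] / (diff[i] - diff[i - 1])
            return pixb[i - 1] + t * (pixb[i] - pixb[i - 1])
    return None

pixb = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100], dtype=float)
resp_b = np.array([0, 1, 4, 15, 45, 80, 95, 99, 99, 99, 99], dtype=float)
x_low = intercept_low(pixb, resp_b)  # crossing between 30 and 40 %PixB
```

Note that the right plateau at 99% rather than 100% in this invented curve mirrors the observation above that a fraction of observers never recognizes the last stimuli.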
The effect “V”, which appears in Figure 13 C (see the first half article), highlights a real error because it appears in a sequence of identical and coherent answers. This error appears far from the Image-Stimulus-of-Flip. This explains why the rate of 99% is constant in packets # 7, # 8 and # 9. It is possible that this error is made when displaying the first Images-Stimuli which evoke the emotion “B”. It would be a “fit-error” between the psychic apparatus of the observer and the first “Images-Stimuli” of the Emotional-Serie of the test. Another bias would simply be an “Attentional-Error”. These two hypotheses would in fact assume a symmetry across the Image-Stimulus-of-Flip, which does not seem to be the case. A better localization of the Image-Stimulus-of-Flip could be the subject of a future publication. Finally, the Image-Stimulus-of-Flip could be considered as a point of symmetry if packets # 7, # 8 and # 9 in Figure 6 (see the first half article) were equal to 100%.
Beyond these hypothetical explanations we must consider this reality, whether “Misunderstanding” or “Error”. Because the two terms are easily confused, we will identify this observation by the term “Misunderstanding-Error”. Additional work with the removal of this Misunderstanding-Error would be interesting. However, in an experimental psychology laboratory, it is not always possible to create real-life paradigms. We will speak of an error for a response in opposition to the totality of the sample of the reference population. We will speak of a Misunderstanding-Error if this response is shared by a non-negligible percentage of the sample of the reference population. This observation alone opens the debate on the relativity of an individual’s response, which is not necessarily good or bad. The quality of this response necessarily depends on the reference population sample. The 2 measures are inseparable. Observer variability is evident at the onset of an Alzheimer’s-like degenerative neurological disease. This refers to the polynomial dimension of the observer and the reference population sample.
4.2.5. Populational-Graph-of-Emotional-Decision-Making, or Po-Gun-Trigger-Graph, for Each Face-Test, Po-GunTrGr#3/3-∑-∑-∑
The presence of sigmoid curves for each of the 3 Face-Tests supposes an identical functional mechanism (see Figure 7 in the first half article). The greatest statistical variance concerns the middle part of the 3 curves. The Face-Test identity and/or ethnicity appears to be the first source of this variance. The left and right side parts are different because: 1) for packet # 2, 1% to 3% of the population sample does not recognize the emotion “neutral”, which is emotion “A”; 2) from packet # 7 to packet # 9, 1% to 2% of the sample population does not recognize the studied emotion “B”. This finding can be explained by possible adjustment errors or attentional errors. However, let us keep in mind that there are none for packet # 1, the Image-Stimulus which is the basic image of “neutrality”. This is not the case for packet # 9, which is Image-Stimulus N˚ 19, the basic image at the origin of the Emotional-Serie. Images-Stimuli N˚ 1 and N˚ 19, constituting packet # 1 and packet # 9 respectively, are presented in the penultimate and last positions. This supposes a period of learning and adjustment with the 17 previous Images-Stimuli. Despite this, there are many errors for Image-Stimulus N˚ 19.
The almost perfect superposition of the sigmoids of the FaTe-Blond and the FaTe-Man could be explained by the fact that these 2 FaTes come, ethnically, from the same population as the sample studied. Conversely, if the Face-Test studied does not come from the population studied, then the curves would not overlap. This could explain the positioning of the sigmoid of the FaTe-Brunette.
The sigmoid curves of the FaTe-Blond and the FaTe-Man would be closer to a Heaviside-type threshold function (stair step), with, in this case, more categorical and faster decision-making. Conversely, the sigmoid curve of the FaTe-Brunette approaches a diagonal. In this case, the decision-making seems less categorical and more continuous: more progressive, more proportional, more uncertain, longer and more hesitant for each of the 9 packets and therefore for each of the 19 Images-Stimuli. The diagonal explains the continuous aspect. In other words, the diagonal represents a proportionality between the quanta (average percentage of pixels PixB) in the packet and the percentage of “B” responses from the sample.
The aspect of the Po-GunTrGr#3/3-∑-∑-∑ seems to be an intermediate state between “diagonal-proportionality” and “stair-step-categoricality”. The tendency towards one or the other of these two extremes could depend on variables which it would be interesting to identify. Ethnicity as well as identity could have an influence. However, let us not forget to consider cognition and the existence of innate or acquired psychiatric or neurological disorders.
4.2.6. Populational-Graphic-of-Emotional-Decision-Making, or Po-Gun-Trigger-Graph, for the 3 FaTes, for Each of the 6 Emotional-Series, for the Entire Population Sample, Po-GunTrGr#∑-6/6-∑-∑
We have named Emotional-Focus the distance between the leftmost sigmoid and the rightmost sigmoid. Each sigmoid can be compared to a gun trigger; we could speak of a “sigmoid-trigger”. Indeed, for the same percentage of Emotional-Visual-Acuity of emotion “B” by the same sample population: 1) few PixB are enough for the leftmost sigmoid, which assumes that this trigger is very sensitive; 2) more PixB are needed for the rightmost sigmoid, which assumes that this sigmoid-trigger is insensitive (see Figure 8 in the first half article).
This observation gives rise to the concept of “Emotional-Sensitivity”, which could be extrapolated by calculating, for each Emotional-Serie of each FaTe, the number of PixB (x-axis) for an Emotional-Visual-Acuity (y-axis) measured at 50%.
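This extrapolation can be sketched by interpolating the %PixB at which an acuity curve crosses 50%. The two acuity curves below are invented for illustration, not data from the study:

```python
import numpy as np

def emotional_sensitivity(pixb, acuity):
    """%PixB needed to reach 50% Emotional-Visual-Acuity.

    The fewer PixB required, the more sensitive the sigmoid-trigger.
    `acuity` must be increasing for np.interp to interpolate correctly.
    """
    return float(np.interp(50.0, acuity, pixb))

pixb = np.array([0, 11, 22, 33, 44, 55, 66, 77, 88, 100], dtype=float)
# Hypothetical acuity curves (% of "B" responses):
joy = np.array([0, 5, 30, 70, 92, 96, 98, 99, 99.5, 100])
anger = np.array([0, 1, 4, 15, 40, 62, 85, 95, 98, 99])

s_joy = emotional_sensitivity(pixb, joy)      # small: a sensitive trigger
s_anger = emotional_sensitivity(pixb, anger)  # large: an insensitive trigger
difference = s_anger - s_joy                  # one candidate gap measure
```

The gap between the two values is one way a “difference-of-Emotional-Sensitivity” between a most- and a least-recognized emotion could be quantified.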
The “difference-of-Emotional-Sensitivity” is the difference between the Major-Visual-Facial-Emotionality and the Minor-Visual-Facial-Emotionality for a sample population and for a set of FaTes; it can be considered a more precise measure of the concept of the Emotional-Focus. It would be interesting to look for a positive or negative correlation between Emotional-Sensitivity and Emotional-Focus. Indeed, the Emotional-Focus can vary to the left or to the right and, in this case, it will affect the sensitivity.
4.2.7. Populational-Emotional-Decision-Making-Graph or Po-Gun-Trigger-Graph for each Face-Test, for Each of the 6 Emotional Series for the Entire Population Sample, Po-GunTrGr#3/3-6/6-∑-∑
The concepts of Emotional-Sensitivity and of difference-of-Emotional-Sensitivity also apply individually to the FaTe-Blond, the FaTe-Brunette, and the FaTe-Man. These new variables make it possible to further characterize each FaTe.
The positioning of each Emotional-Serie of the Po-GunTrGr#3/3-6/6-∑-∑ is very close for each FaTe. The stability of this ordering could be explained by an identical differential sensitivity of the organic structures. The latter would simultaneously process emotions in a categorical and a continuous way, as suggested by Fujimura [41], Harris and Young [31] and Matsuda Yoshi-Taka [42]. These authors located distinct neural loci that process the physical and psychological aspects of facial emotion perception in a categorical or continuous manner.
At the level of each individual, our results point towards an exclusively categorical organic treatment if we take into consideration the “I” effect. Conversely, the Misunderstanding-Errors for the “A”, “V”, “N” and “M” effects would be the expression of an inner questioning specific to each individual. We will call it “Doubt”. This doubt would therefore be physiological. In this case, the Misunderstanding-Errors would be the fundamental and individual expression of the sigmoid aspect which appears at the scale of a population sample. This hypothesis would blur the boundary in the choice between two contradictory positions such as stimulus image “A” or stimulus image “B”, and more generally between two sensory choices (vision, hearing, olfaction, taste and touch), but also between two diametrically opposed cognitive choices. This inner questioning is constant and continues every time we meet or talk with a person, whether we know it or not. This hypothesis could be confirmed by the example in the following paragraph.
Explanatory mental image of the concept of simultaneously categorical and proportional perception.
This paradigm can be likened to a person putting their hand under the hot water faucet of a sink that has not been used for a day. First, the individual perceives cold water; then he gradually perceives water that is increasingly hot; then he suddenly withdraws his hand. In this experiment, the cutting-point is the temperature of the water at which he withdraws his hand. This illustrates a simultaneously proportional and categorical treatment of the perception of heat, as mentioned by the latest authors cited [31] [41] [42].
The few variations in the ordering of emotions could be the consequence of: 1) a better Emotional-Visual-Acuity and Emotional-Visual-Feeling for certain emotions, 2) a strong influence of the identity of the FaTe, 3) the ethnic origin of the FaTe, 4) the ethnic origin of the sample population, and 5) the congruence between factors 3 and 4. This implies that a Face-Test could have characteristics specific to it that would influence Emotional-Visual-Acuity and Emotional-Visual-Feeling.
However, our results challenge these hypotheses because, at the scale of an individual observer, we never have a sigmoid but a Heaviside function with “A”, “V”, “N” or “M” effects. In other words, there is a different pattern of Image-Stimulus-of-Flip and Misunderstanding-Error for each observer. It is the summation of these individual effects which can lead to a sigmoid. The latter, for a population sample, seems to be an intermediate state between “diagonal-proportionality” and “stair-step-categoricality”. A possible analogy for this pattern is the Laplace-Gauss curve, which takes on a bell shape only by summing 30 or more individual measurements. At the individual level, there is only a simple and unique continuous measurement.
Otherwise, each polynomial [Ethnic-Sample-Population*identity-Face-Test*Ethnic-Face-Test*Emotions*environment*other], for a sample population of observers and a Face-Test, can be characterized by a Major-Visual-Facial-Emotionality and a Minor-Visual-Facial-Emotionality. This finding could explain Emotional-Visual-Feeling such as: 1) sympathy; 2) antipathy; 3) empathy; 4) the affection we can feel for a person we see for the first time; 5) the popularity of a politician or actor. This enumeration assumes a real impact of the identity of a face, and of the emotions it expresses, on the psychic apparatus of the observer or the population sample. In plain English, it can be “being in tune” with the person, as a mother is with her child or a good therapist with his patient. We could call this interaction “Psychic-Emotional-Intrusion”, insofar as: 1) it imposes itself on the psychic apparatus of the observer; 2) he receives it despite himself; 3) it can modify certain psychic balances; 4) it can create a feeling of love, neutrality or hate; 5) it can create sympathy or antipathy; 6) it can explain “charisma”; 7) it can explain leadership; etc. All of these can be subsumed under the phenomenon of instant “transference” in psychoanalytic parlance. It can be positive or negative on either side and is subject to “counter-transference”, and both may be positive and/or negative. Regardless, both subjects are in tune with each other’s emotional state. It is for this reason that the classical analytic setting installed the patient in such a way that he did not see the analyst’s face; this technique minimized the effect of what we call Psychic-Emotional-Intrusion.
It would be relevant to know whether, for a single FaTe, the Major-Visual-Facial-Emotionality and Minor-Visual-Facial-Emotionality remain the same when population samples are radically different or heterogeneous. In that eventuality, Psychic-Emotional-Intrusion would be different.
The expression of the Individual-Graph-of-Emotional-Decision-Making or Individual-Gun-Trigger-Graph and the Populational-Graph-of-Emotional-Decision-Making or Populational-Gun-Trigger-Graph (see Figure 4 in the first half article) is to be compared to work on artificial neural networks in artificial intelligence.
The Po-GunTrGr#∑-∑-∑-∑ (see Figure 6 in the first half article), the Po-GunTrGr#3/3-∑-∑-∑ (see Figure 7 in the first half article), the Individual-Gun-Trigger-Graph (Id-Gun-Tr-Gr) (see Figure 4, red line, in the first half article) and, more specifically, the Fundamental-Individual-Gun-Trigger-Graph (Fu-Id-Gun-Tr-Gr) (see Figures 13 A, B, C, D, E in the first half article) could be associated with an “activation function” of the Heaviside type. The equivalent in neurophysiology would be the “activation potential” or “pacing threshold” which, once reached, leads to a response from the neural network followed by the triggering of an event such as, at a general level, decision-making.
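The neural-network parallel can be made concrete with a toy threshold unit (a sketch of the analogy only; the weights, evidence values and threshold below are illustrative assumptions, not parameters from the article):

```python
def heaviside(x, threshold):
    # Heaviside-type activation: the output jumps from 0 to 1 at the threshold.
    return 1 if x >= threshold else 0

def decision_unit(evidence, weights, threshold=0.5):
    # A McCulloch-Pitts-style unit: weighted emotional evidence triggers an
    # all-or-nothing "I recognize emotion B" response once the pacing
    # threshold (activation potential) is reached.
    return heaviside(sum(w * e for w, e in zip(weights, evidence)), threshold)

# A weak signal stays below threshold; a strong one fires.
weak = decision_unit([0.2, 0.3], [0.5, 0.5])    # 0.25 < 0.5
strong = decision_unit([0.9, 0.8], [0.5, 0.5])  # 0.85 >= 0.5
```

The unit either fires or does not, with no intermediate output, which mirrors the binary Gun-Trigger response of a single observer.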
These graphs consider the binary response [“I recognize the “B” emotion”] or [“I don’t recognize the “B” emotion”] of a single observer, for a single Face-Test and for each of the 19 Images-Stimuli of a single Emotional-Serie. In this case, therefore, the threshold function appears at the neurophysiological level. On the other hand, a succession of “0”s and “1”s does not allow parametric tests. This is why we implemented packets #3, #4, #5, #6 and #7, each containing 3 different Images-Stimuli, to create variance and thereby allow parametric statistical tests. This constraint led us to set up the Individual-Graph-of-Emotional-Decision-Making, which is different from the Fundamental-Individual-Graph-of-Emotional-Decision-Making.
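The rationale for packets can be shown in a few lines. Grouping binary answers into packets of 3 yields graded scores (0, 1/3, 2/3, 1), so a set of packets has nonzero variance even though each single answer is all-or-nothing (a minimal sketch; the example answer series is invented):

```python
def packet_scores(binary_responses, packet_size=3):
    # Group one observer's binary answers (1 = "B" recognized) into packets of
    # 3 Images-Stimuli; each packet scores 0, 1/3, 2/3 or 1, so packets can
    # show variance where raw 0/1 answers cannot.
    n = len(binary_responses) - len(binary_responses) % packet_size
    return [
        sum(binary_responses[i:i + packet_size]) / packet_size
        for i in range(0, n, packet_size)
    ]

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# A Heaviside-like series of 19 answers with one misunderstanding-error:
answers = [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]
scores = packet_scores(answers)
```

Here `scores` contains the fractional values that make parametric statistics applicable.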
4.3. Graphs of the Emotional Fingerprint or Key-Graph
Explanatory Mental Image of the Concept of Emotional-Fingerprint (EmFi)
It is like the fingerprint that a thumb leaves on the surface of an object: the thumb is the Face-Test, and the “object surface” is the psychic apparatus of the single observer or of the population sample of observers. The result on the surface of the object depends on the condition of the thumb and on the type of support, in other words on the polynomial [Ethnic-Sample-Population*identity-Face-Test*Ethnic-Face-Test*Emotions*environment*other]. The quality of the resulting fingerprint is therefore extremely variable. This could even explain the discrepancies between the conclusions of different articles, owing to the lack of control of the polynomial’s components.
The differential resulting from the comparison of the Individual-Emotional-Fingerprint or Individual-Key-Graph between 2 observers for the same Face-Test would make it possible to see one or more additional components in the measurement and understanding of Emotional-Visual-Feeling. Future work would be needed to: 1) identify profiles for a normal population on different variables; 2) delineate the boundary between the physiological and the pathological; 3) identify normalized and standardized profiles of the previous graphs and associate them with most of the pathologies encountered in: a) psychiatry, b) child psychiatry, c) neurology, d) sociology and e) criminology, etc. These graphic profiles of facial emotions could point towards a given pathology.
In Figure 5, each Face-Test of the Individual-Emotional-Fingerprint or Individual-Key-Graph has a “broken straight line” that is “consubstantial” with it. For one Face-Test, each point of this broken straight line has as abscissa the name of the Emotional-Serie and as ordinate the average Emotional-Visual-Acuity of the 9 packets of that same Emotional-Serie. In other words, the yellow broken straight line consists of the 6 points that are the average %respB for each of the 6 graphs of each Emotional-Serie from the Po-GunTrGr#Blond-6/6-∑-∑. The same holds for: 1) the purple broken straight line for the GEDM#Brunette-6/6-∑-∑ and 2) the blue broken straight line for the GEDM#Man-6/6-∑-∑.
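The construction of one broken straight line reduces to a simple averaging step, sketched below (the packet scores are made-up illustrative numbers, not the article’s data, and only 3 of the 6 Emotional-Series are shown):

```python
def fingerprint_point(packet_acuities):
    # Ordinate of one point of the broken straight line: the mean
    # Emotional-Visual-Acuity (%respB) of the 9 packets of one Emotional-Serie.
    return sum(packet_acuities) / len(packet_acuities)

def emotional_fingerprint(acuity_by_series):
    # One point per Emotional-Serie; the resulting points, joined in order,
    # form the "broken straight line" of the fingerprint.
    return {series: fingerprint_point(p) for series, p in acuity_by_series.items()}

# Illustrative (invented) %respB packet scores for one Face-Test:
face_test = {
    "joy":      [95, 92, 98, 94, 96, 97, 93, 95, 96],
    "surprise": [80, 78, 82, 79, 81, 80, 77, 83, 80],
    "sadness":  [60, 62, 58, 61, 59, 63, 60, 57, 60],
}
line = emotional_fingerprint(face_test)
```

Plotting `line` with Emotional-Series on the abscissa and the averages on the ordinate reproduces the shape of one colored broken line of Figure 5.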
In addition, each of the 6 averages, for each FaTe, is the resultant of a couple: 1) the Emotional-Sensitivity specific to each observer of the population sample and 2) the Emotional-Expressiveness specific to each Face-Test. It is difficult to distinguish one from the other. The resultant of these two entities could be what we have named “Emotional-Visual-Feeling” or “Facial-Visual-Emotionality”, or more simply “Emotional-Visual-Acuity”. We measure the latter at the populational (see Figure 1(a), Figure 1(b), Figure 5(a), Figure 3(a), Figure 4(a)) and individual level (see Figure 5) through Emotional-Visual-Acuity. However, in reality, this proposal is imperfect because observer and Face-Test are not face-to-face with beating hearts. To circumvent this obstacle, we will continue to speak of Emotional-Visual-Acuity, without forgetting that this measure includes the component of Visual-Emotional-Feeling to a much lesser degree than if we were in a real-life face-to-face encounter with access to the feeling of our beating heart and other visceral sensations.
Therefore, each point of the Populational-Emotional-Fingerprint or Populational-Key-Graph (see Figure 1(b)) is the resultant of the average of the 204 Individual-Emotional-Fingerprints. The informative character of the Emotional-Serie is obvious, and its comprehension is simple. This type of document could become as essential as the MMSE [43] or the electrocardiogram in routine clinical practice in psychiatry and neurology. In addition, it may find clinical and research applications.
The literature on the psychopathology of facial emotion recognition disorders moves in two directions: on the one hand, a total inability to recognize facial emotions; on the other, difficulties in recognizing them. Both result in medical, societal, or legal consequences. Cocaine users and antisocial personalities do not seem to recognize facial emotions at all [44] [45]. The inability to perceive disgust seems to be one of the hallmarks of Huntington’s chorea [46] [47]. The measurable impact of the Individual-Emotional-Fingerprint could advance the understanding of these dysfunctions but, above all, facilitate screening for the first signs of certain diseases.
Presenting several Face-Tests to an observer will identify the ones that move him the most and the least. In other words, the Individual-Emotional-Fingerprint could be the result of a set of complex organic brain interactions that a face imprints on the emotional circuits of the psychic apparatus of the observer. Corporations like Nike may already be intuitively deploying this reasoning when choosing which basketball icons to promote so as to achieve the broadest appeal in their consumer market.
4.4. Universality or Idiosyncrasy of Visual Facial Emotions Recognition
This question has been the subject of controversy between authors such as Ekman in 1994 [48], a supporter of the universality of Visual-Facial-Emotion-Recognition [14] [16] [48] [49] [50] [51], and Russell [52] [53], who takes the opposite stance. Russell defends the thesis of idiosyncrasy. He was followed by other authors who demonstrated the importance of culture and ethnic group in Visual-Facial-Emotions-Recognition (ViFaEmRe).
4.4.1. Idiosyncrasy of Visual-Facial-Emotion-Recognition
One definition of “idiosyncrasy” is that it evokes a way of being specific to each individual, leading him or her to behave in a manner uniquely his or her own. Accordingly, faced with the same stimulus, there will be as many different reactions as there are different individuals.
Assuming that Emotional-Visual-Acuity is idiosyncratic, we should see significant differences in the broken straight lines of each Individual-Emotional-Fingerprint (InEmFi), each time: 1) the same Face-Test is tested on several observers from different population samples, or 2) several FaTes are tested on the same single observer. The same hypotheses could then be extended to different FaTes and to different sample populations.
Our results endorse the idiosyncrasy hypothesis, as shown in Figure 1(b) at the population level and in Figures 7 and 8 in the first half article. They are congruent with the findings of authors such as El Zein and Aviezer in 2018 [54]. Furthermore, these authors conclude that facial identity strongly influences the idiosyncratic aspect of emotion recognition, with strong variations specific to each human observer. They obtain graphs of the sigmoid type. Their results support our work.
For the same sample population, Figure 1(b) highlights a statistic with Wilks’ Λ = .327, F(12, 1227) = 75.327, p < .001, partial η2 = .428 and with 3 ANOVAs that are significantly different because their “p” is less than 0.001. Figure 5, which shows the Emotional-Fingerprint of a single observer out of the 204, attests to these statistical differences and confirms that every one of the 204 observers expresses a different Facial-Visual-Emotionality for the Blond-FaTe, Brunette-FaTe, and Man-FaTe.
Each Face-Test, in addition to its civil, sexual, morphological and other identities, seems to have an “emotional-identity” that would result from the previous variables and would be linked to the Visual-Emotional-Feeling of the observer. The latter appears to depend on the polynomial [Ethnic-Sample-Population*identity-Face-Test*Ethnic-Face-Test*Emotions*environment*other]. For Vogel and Stajduhar in 2012 and 2021 [55] [56], behaviorally, 5-month-olds distinguished faces within their own race and within another race, whereas 9-month-old babies only distinguished faces within their own race, having become limited in this capacity. For the observer, it is necessary to consider his ability to feel emotions (feeling), which would depend on the internal state of his psychic apparatus (education, thymic state, psychic pain, etc.).
This situation would be comparable to the perception of the voice. Apart from what is said, a listener who hears several speakers read the same text will take more interest and pleasure in one of them because of the melody that speaker associates with the text, making it come through as more alive, in other words more emotionally appealing. It is hard to compare how two people “identify” and “feel” the same text read with the same voice. This goes beyond voice identity. Tonal qualities of the voice, with inflections more likely to arouse emotional responses, may be at play here. A similar process is likely involved in interpersonal facial communication.
The idiosyncratic aspect of our results points towards the absence of an innate Visual-Emotional-Feeling. This does not exclude the presence of a “programmable” or “plastic” organic support shaped through the emotional interactions between mother and baby, mother and child, among children, and within pairs (twins) and couples. The question of the importance of learning Visual-Facial-Emotions-Recognition, Emotional-Visual-Acuity, Emotional-Visual-Feeling and Facial-Visual-Emotionality during the first stages of the psycho-affective and psychosocial development of infants, and later of children, would be of fundamental interest. This line of questioning seems especially relevant after a period of more than 2 years during which our contemporaries, including caregivers, had to wear an anti-COVID mask in front of adults, young children, and newborns [57]. These practices could have produced deficiencies in learning Visual-Facial-Emotions-Recognition and Emotional-Visual-Acuity and, consequently, Facial-Visual-Emotionality. This hypothesis is supported by the work of Montague et al. and Carbon in 2020 [58] [59] [60] [61].
For Camras and Shutter in 2010 [62], some facial expressions can have different meanings for: 1) infants; 2) children; and 3) adults. Finally, non-emotional factors can sometimes lead to the production of facial “emotional”-like expressions, such as those induced by pain.
This observation, even if it is empirical, suggests that there is a learning phase in Visual-Facial-Emotions-Recognition and in the adjustment between an emotional feeling and the expression of the ad hoc emotion. It would be relevant to follow these age groups during development. This motivates our future interest in extending our work to a sample aged 0 to 20 years from the population of the North of France.
Figure 1(b) shows 3 different Populational-Emotional-Fingerprints, which depend on our sample population of the North of France, the 3 Face-Tests and the 6 Emotional-Series. This implies that there will be as many Populational-Emotional-Fingerprints as there are Face-Tests for a single population sample, with a multiplicative effect between the number of Face-Tests and the number of sample populations; only the number of Emotional-Series remains constant. Therefore, the Populational-Emotional-Fingerprint of one Face-Test would be related to a given population; Populational-Emotional-Fingerprints would be multiple and different. This lack of universality of Emotional-Visual-Acuity would render Artificial Intelligence (AI) inoperative. At its current level of algorithmic sophistication, AI could not possibly discern the variations of emotional experience at the national, continental, or planetary level, except perhaps at the local or even regional level, and certainly not for all diverse ethnic populations. An exception may arise when there is an ethnic match between the observer and the Face-Test. Taking this hypothesis into account supposes that an AI would be in even greater difficulty for a multi-ethnic population sample, and even more so in an airport. The design of a local or regional AI tuned to its population will have to consider the nature of the continent, country, culture, ethnicity, religion, etc. From now on, the question of the adequacy of AI for Visual-Facial-Emotions-Recognition is posed, because it will have to consider the reference frames of the population, of the Face-Tests, and of the observer. Given the complexity of the field of facial emotional expressions and their recognition, the development of AI algorithms will be impossible without in-depth, multicenter field studies among multiethnic and heterogeneous populations, together with the analysis of big data.
4.4.2. Universality of Visual-Facial-Emotion-Recognition
The universality hypothesis of Visual-Facial-Emotions-Recognition holds that all humans communicate through the same six basic internal emotional states (“anger”, “fear”, “disgust”, “joy”, “sadness” and “surprise”), which we might associate with the “black box” of the brain, with the corollary of an innate biological wiring. For Paul Ekman, the universality of Visual-Facial-Emotions-Recognition is a reality of “nature”. He built his scientific work on this postulate and tried to prove it scientifically throughout his life [48] [49] [63]. In other words, for Ekman, a native of Suriname in South America would be able to identify the 6 facial emotions of a native of Kyoto in Japan! For Russell in 1995 [53], one of Ekman’s detractors, “culture” imposes itself on “nature”, even if he concedes a “minimal” universality for the human race as a whole. Subsequently, Ekman accepted the importance of culture in Visual-Facial-Emotions-Recognition, as evidenced by his work with other authors. Ekman ended up admitting the existence of intercultural differences between Native Americans and Japanese, both in terms of emotional categories and in terms of the intensity of facial emotion recognition [64]-[70]. Thus, this old debate seemed to have been put to rest.
Yet the controversy remains alive today, as evidenced by Limbrecht et al. in 2012 [71], who refute the assumption of universality and highlight the powerful influence of culture on the formation of basic emotional behaviors. Therefore, the controversy over whether Visual-Facial-Emotion-Recognition belongs to “nature” or “nurture” (“culture”) still stands, which allows us to make our contribution.
The study of Figure 1(b) reveals a total absence of superposition of the 3 broken straight lines. If “Visual-Facial-Emotions-Recognition” were universal, then the Populational-Emotional-Fingerprints in Figure 1(b) should be strictly identical and superimposable. This is not the case: the 3 broken straight lines are radically different. This argues not for the universality of the Populational-Emotional-Fingerprint but for its idiosyncrasy. For the 3 FaTes, the variance is low for “joy”, “fear” and “sadness”. Moreover, for the FaTe-Blond and the FaTe-Man, the variance is low for “surprise”. Finally, for the FaTe-Man and FaTe-Brunette, the variance is low for “anger” and “disgust”.
The study of Figure 7 (see the first half article) shows a perfect superposition of the lateral parts of the curves for the 3 FaTes. On the other hand, only the Face-Test-Blond and Face-Test-Man overlap strictly on the median part. This graph can provide a measure of emotional-sensitivity through measurement of the area under the curve.
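The area-under-the-curve measure mentioned above can be computed with a standard trapezoidal rule over the sampled %respB values (a generic sketch; the sample curves below are invented, not the article’s data):

```python
def area_under_curve(percent_resp_b, step=1.0):
    # Trapezoidal area under a %respB curve sampled at successive
    # Images-Stimuli; proposed in the text as a summary index of
    # emotional-sensitivity, allowing curves of different FaTes to be compared.
    return sum(
        (a + b) / 2 * step
        for a, b in zip(percent_resp_b, percent_resp_b[1:])
    )

# Two illustrative curves sampled at 4 stimuli: a late riser and an early riser.
late = area_under_curve([0, 0, 100, 100])    # flips between stimuli 2 and 3
early = area_under_curve([0, 100, 100, 100]) # flips between stimuli 1 and 2
```

A curve that flips earlier accumulates more area, so comparing areas quantifies how readily the “B” emotion is recognized on each Face-Test.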
These observations, in Figure 7 (see the first half article) and Figure 1(b), suggest an “intertwining” of universality and idiosyncrasy. The ethnic origin of the Face-Test-Brunette seems to be an explanatory cause. In this eventuality, the absence of consensus between our results and those of other authors would be explained by the existence of: 1) an “organic-core” with innate brain wiring that would underpin the universality of Visual-Facial-Emotions-Recognition, and 2) many “functional-layers” that would modulate the answers of the organic core. Education, ethnic group, family group, culture, etc. would be some of these layers. We will keep in mind that: 1) the first layer is the ethnicity of the Face-Test; 2) the second, the civil identity of the Face-Test; 3) the third, cognitive abilities; 4) the fourth, the mood of the observer; all of which vary like clouds in the sky according to variations in the weather over the same geographic region.
Considering all these concentric layers allows us to move beyond the opposition between organic and functional, and between nature and nurture (culture). We envision a concentric, planet-like sphere made up of: 1) a central core associated with the innate organic support, not programmable by culture; 2) different layers that become more and more plastic, in other words programmable by culture, as one moves towards the surface of the sphere, constituting the programmed component of Emotional-Visual-Acuity; 3) and finally a “climatic aspect”, as changeable as the climate or even our mood. A comparison with the terrestrial globe would be interesting.
Consequently, the non-superposition of the Populational-Emotional-Fingerprints would be explained by a universal organic base whose wiring would be shaped by the individual’s living environment. Cerebral plasticity would allow this adjustment, and therefore a good congruence between the expression of an emotion and its correct recognition when the sender and the receiver are from the same ethnic group. In this case, the conclusions of the authors would point towards the universality of Visual-Facial-Emotions-Recognition and Emotional-Visual-Acuity. If the transmitter and the receiver are of different ethnic groups, the difficulties of adjustment between expression and Visual-Facial-Emotions-Recognition would be significant and would direct the authors towards idiosyncrasy.
Our results are close to those of Bonassi et al. in 2021 [72], whose paradigm is also close to ours. Japanese observers studied Japanese Face-Tests and Caucasian Face-Tests; the emotions of the Caucasian Face-Tests were recognized with scores different from those of the Japanese Face-Tests. The culture and ethnicity of observers and Face-Tests influence the results.
The existence of distinct Visual-Facial-Emotions-Recognition and Emotional-Visual-Acuity disorders in different neurological and psychiatric pathologies attests to the existence of this organic and supramodal nucleus. This hypothesis is strongly supported on a functional level by Connolly et al. in 2020 [73]. The nucleus can be associated with a 6-sided “Symmetrical-Hexagonal-Pyramid” (for the 6 basic emotions (Ekman)) [74] [75] [76]. Consequently, we have 6 “receiving” [77] facets and 6 “transmitting” facets, that is, 12 facets: one hexagonal tower for reception (the “receiver-machine”) and the other for expression (the “transmitter-machine”). The junction at the top of each pyramid would be the place of integration of the 6 perceptions of emotions and the 6 expressions of emotions. Dima et al. in 2011 [77] locate this “control tower” at the level of the amygdala [78] [79] [80] [81]. Dysfunction of one of the 2 pyramids would explain the inability to identify facial emotions even when the capacity for expression remains good. Conversely, dysfunction of the contralateral pyramid would explain difficulty in expressing emotions that are nevertheless correctly perceived on faces. Finally, one can imagine a concomitant dysfunction of both pyramids. However, it must be kept in mind that the dysfunctions may affect only one or several of the emotional facets [82], on either or both the “perception pyramid” and the “expression pyramid”.
Each of the 12 facets of the hexagonal pyramids for reception and expression would be modulated by the ethnicity, identity, family and education of the Face-Test, the mood of the observer, etc. In other words, these factors, mediated by brain functions beyond the amygdala, can also be of etiological significance. The pyramidal dimension of emotional signal processing is gradually imposing itself, as evidenced by Phung and Bouzerdoum in 2007 [83].
The integrating center would be insensitive to the ethnicity, identity, family and education of the Face-Test, and to the mood of the observer [84]-[86]. Under this hypothesis, dysfunction of the integrating center would affect the visual recognition and expression of all emotions, as in: 1) alexithymia [87]; 2) autism in children and autism spectrum disorders [88]; 3) mental disorders accompanied by affective disorders; 4) the ideo-affective dissociation seen in schizophrenia [89]; 5) Alzheimer’s disease [36]; 6) Parkinson’s disease with associated melancholia [90].
Impairment of a single facet of the hexagon would be observed with: 1) “disgust” in Huntington’s chorea [32] [47]; 2) “disgust” and “anger” in borderline personalities [91]. In these eventualities, the measurement of the Individual-Emotional-Fingerprint or Emotional-Key-Graph and its comparison with 1) the reference Populational-Emotional-Fingerprint of a normal population sample, and 2) the specific Populational-Emotional-Fingerprint profile of the pathology, could contribute to a positive diagnosis as early as possible, in the sub-clinical phase.
4.5. Syn-Diachronic Graphs by Age Group
We have named the successive interstices between an Emotional-Serie “alpha” and an Emotional-Serie “beta” emotional-channels, except for the one between the best-recognized Emotional-Serie and the least-recognized Emotional-Serie. We have called this large interstice the Emotional-Focus (see Figures 1 C, 1 D, 2 B, 3 B and 4 B). The Emotional-Focus is specific to each FaTe, with a measurement of its own (mean and standard deviation). In our work, it is different for each of the 3 FaTes, yet remains generally constant across the 7 age groups of each FaTe (see Figures 2 B, 3 B and 4 B). We could qualify this variable as the “Constant-Syn-Diachronic-Emotional-Focus”. It would be a new characteristic variable of each Face-Test: 11% ± 4% for the Face-Test-Blond; 21% ± 2% for the Face-Test-Brunette; and 22% ± 2% for the Face-Test-Man.
A possible explanation for these differences would be the existence of a cognitive bias and/or a thymic bias with an additive or multiplicative effect. Each of the 7 age groups was selected for optimal cognitive and thymic scores. If the hypothesis of these two biases is correct, then the non-modification of the Emotional-Focus would express a real cognitive and thymic homogeneity of our population sample across the 7 age groups. Conversely, this does not exclude a modification of the Constant-Syn-Diachronic-Emotional-Focus in a normal population at the cognitive and/or thymic level.
The smaller extent of the Constant-Syn-Diachronic-Emotional-Focus of the FaTe-Blond compared to the FaTe-Man and FaTe-Brunette supposes the existence of a reason that escapes us for the moment. Assuming that the Constant-Syn-Diachronic-Emotional-Focus is independent of the cognitive level of the population sample, it would be specifically linked to the polynomial. The existence of this Constant-Syn-Diachronic-Emotional-Focus supposes proof of a link between each FaTe and the population sample, which justifies the use of the concept of Facial-Visual-Emotionality, more precisely Visual-Emotional-Feeling, itself more precise than Emotional-Visual-Acuity. This suggests that the Constant-Syn-Diachronic-Emotional-Focus would be an additional variable for measuring the Visual-Facial-Emotions-Recognition of a FaTe.
At the global and statistical level, we observe a slight improvement in Visual-Facial-Emotions-Recognition between 20 and 53 years (see Figures 11 and 12 A in the first half article). Our results are not very close to those of Chaby and Narme in 2009 [92], for whom healthy older people show subtle alterations or diminution in the visual recognition of facial emotions as early as age 50, with an increased diminution after age 70. At the individual level, the same changes are reported (see Figure 2(b), Figure 3(b), Figure 4(b)). In addition, our results support the work of Ruffman and Henry in 2008 [93], for whom the elderly have more and more difficulty recognizing the 6 basic emotions, with the exception that the elderly tend to be better than young adults at recognizing “disgust” in facial expressions.
However, our results run contrary to those of Sullivan and Ruffman in 2004 [94], for whom there is no doubt that aging causes a decrease in the ability to recognize all facial emotions. This difference in results could be explained by different methodologies, by cognitive and thymic biases, and by the use of one Face-Test versus several Face-Tests.
We postulate, with caution and subject to confirmation by subsequent studies, that during aging the Visual-Facial-Emotions-Recognition measurements depend on: 1) the identity of the observed face; 2) the ethnic origin of the Face-Test and of the sample population; 3) the number of Face-Tests administered; 4) the mood of the observers; and 5) the cognitive state of the observers. In other words, it is polynomial.
Therefore, in studies of Visual-Facial-Emotions-Recognition in aging, it seems important to distinguish between individual and global measurements, and between a single and multiple Face-Tests. This could explain the lack of consensus between authors, in addition to their different paradigms and tools.
5. General Discussion
The sex of the observers, the sex of the Face-Tests and the level of education do not influence Visual-Facial-Emotions-Recognition. This observation may favor: 1) good inter-individual communication in the family and societal group, and 2) a sexual rapprochement ensuring the sustainability of the social group. A discrepancy in this area would imply difficulties that would jeopardize the creation of a couple, conjugality, living well in a family, in groups and in society, even the survival of the species.
However, 3 pitfalls appear. The first is that the Face-Tests are from the same ethnic group as the sample population. The second is that our population sample is supra-normal: its cognition and memory scores are homogeneous and above the average of a normal population. The sample was selected along these lines to control cognitive variables with: 1) the MMSE; 2) the MATTIS; and 3) the Grober and Buschke test. The third pitfall concerns the absence of anxiety disorders (HAMA) and of mood disorders (HDRS).
The fact that these parameters (cognition, anxiety, and mood) were not considered by most authors may explain the lack of consensus when comparing results. Our work should be extended to a normal population sample with a bell-shaped distribution of: 1) cognition measures; 2) memory measures; 3) HAMA scores; and 4) HDRS scores.
When several Face-Tests are considered, at the statistical level, we have shown the existence of a maturation of Emotional-Visual-Acuity, which increases from 20 to 55 years, peaking at 53 years. It then decreases but remains higher than in the 40-to-50-year age group. A linear regression study would allow us to estimate Emotional-Visual-Acuity for age groups below 20 years. Our data do not allow this because our sample is supra-normal and ranges from 20 to 70 years old.
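The proposed regression could take the following minimal form (an ordinary least-squares sketch; the age midpoints and acuity values are hypothetical numbers for illustration, not measurements from this study):

```python
def linear_fit(xs, ys):
    # Ordinary least squares for y = a + b * x (intercept a, slope b).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (
        sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        / sum((x - mx) ** 2 for x in xs)
    )
    return my - b * mx, b

# Hypothetical age-group midpoints and mean Emotional-Visual-Acuity (%),
# invented for illustration only:
ages = [25, 35, 45, 53]
acuity = [78, 80, 82, 84]
a, b = linear_fit(ages, acuity)
estimated_at_15 = a + b * 15  # extrapolation below the sampled age range
```

Such an extrapolation only holds under the assumption that the maturation trend is approximately linear below age 20, which is precisely what a younger sample would be needed to verify.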
On reading all the graphs, which are easy and intuitive to interpret, we can see the interest of carrying out measurements of Emotional-Visual-Acuity with M.A.R.I.E. from birth. This would help us test hypotheses about five parameters, namely: 1) the innate; 2) the acquired; 3) the organic; 4) the functional; and 5) the influence of education, as well as the intertwining of these 5 components to varying degrees in each person and their natural course. In other words: 1) Visual-Facial-Emotions-Recognition; 2) Emotional-Visual-Acuity; 3) Facial-Visual-Emotionality; and 4) the Visual-Emotional-Feeling specific to each emotion, with intra- and extra-ethnic Face-Tests, can be analyzed against these five parameters. We could expect to see a linear evolution of this skill or a stepwise maturation including rapid increases, like the vocabulary explosion at the age of 24 months. The identification of an age of physiological onset would then allow us to work on screening in the event of socialization difficulties or behavioral disorders in children. We may also encounter a developmental critical period, just as the other sensory modalities such as vision and hearing, or the acquisition of language, have. Screening for Kanner’s autism or autism spectrum disorders would be greatly facilitated in mass screenings or in a clinical situation. Such a developmental critical period may impact children suffering from classical Kanner’s infantile autism, just as they face difficulties in acquiring language skills after age 6.
6. Conclusions
This work is a continuation of the author’s previous publications [95] [96] [97] [98] [99]. These studies have made it possible to simplify the methodology and to develop new concepts with their graphic representations. The latter are intuitive, self-explanatory, informative, and simple. These measurements make it possible to study Visual-Facial-Emotions-Recognition and Emotional-Visual-Acuity. These results allow us to postulate the existence of Facial-Visual-Emotionality, or more specifically Visual-Emotional-Feeling. This concept arises from the observation of a facial emotion by an observer with a beating heart on the face of another person, also with a beating heart, sitting facing him or her. This situation is closer to everyday reality but difficult to study experimentally. However, the “fear”, the “disgust” and the “amorous transport” (the “vibes”) that such a face-to-face encounter can arouse, as described above and in the discussion, are attested to by certain people. These feelings are absent or minimal for the same emotions in remote communication with a camera in the virtual world.
The compared measurements of 1) Visual-Facial-Emotions-Recognition, 2) Emotional-Visual-Acuity, 3) Facial-Visual-Emotionality, and 4) Visual-Emotional-Feeling with the Images-Stimuli of the different 1) Emotional-Series, 2) Face-Tests and 3) observer age groups open new opportunities for research in the following domains: 1) basic behavioral science; 2) pharmacology; 3) clinical science; 4) criminology; 5) artificial intelligence, etc. This observation could allow us to identify observers with 1) instantaneous emotional decision-making (“I” effect); 2) progressive decision-making (“N” and “M” effects); 3) attentional errors (“A” and “V” effects).
The face of an individual can be defined by the morphology that determines its identity, but also by the Emotional-Fingerprint that it leaves on an observer or on a sample of observers. This Emotional-Fingerprint is unique, but it depends on the Polynomial factors [Ethnic-Sample-Population*identity-Face-Test*Ethnic-Face-Test*Emotions*environment*other]. It is comparable to 1) the speaker: a) his linguistic accent; b) his identity; c) what he says; d) the prosody and melody of his voice, etc., and 2) the listener who receives the speech: a) his auditory hearing; b) his identity; c) his mood; d) his cognition; e) his level of acceptance of the speaker, etc. The presence of an Emotional-Fingerprint specific to each Face-Test for the same population implies that Emotional-Visual-Acuity is not identical: it is specific to the Polynomial factors [Ethnic-Sample-Population*Ethnic-Face-Test*Emotions*environment*other]. It would thus be idiosyncratic and not universal. The measurements of these differences do not change significantly with the advancing age of the population sample.
We find that Emotional-Visual-Acuity does not depend on gender or level of education. In addition, 1% of the supra-normal population, though normal in terms of cognition, anxiety levels and mood, does not recognize basic emotions in the sense of Paul Ekman. The question then arises for a normal population in terms of cognition, anxiety, and mood: it cannot be excluded that this rate of 1% is higher in a normal population. Confirmation of this hypothesis could explain certain types of homicides, rapes, other antisocial behavior and wanton violence predicated upon a lack of empathy, innate or acquired.
This methodology should be extended to population samples differing in race, ethnicity, culture, education, religion, and cognition, with ages ranging from birth to 100 years. This work will reinforce the distinction between 1) idiosyncrasy vs universality, 2) categorical vs continuous, 3) functional aspect vs organic support, and 4) the syn-diachronically monitored physiological and neurobiological development of a) Visual-Facial-Emotions-Recognition, b) Emotional-Visual-Acuity and c) Facial-Visual-Emotionality. The collection of numerical and graphic references will allow the compiling of an “atlas of data” (an analogue and digital database) associated with each Polynomial factor [Ethnic-Sample-Population*identity-Face-Test*Ethnic-Face-Test*Emotions*environment*other]. This is possible due to the idiosyncrasy of Visual-Facial-Emotions-Recognition.
In addition, the impact of the face masking of mothers, maternal substitutes, and caregiving adults during the first 2 years of infant development under COVID confinement can be studied and monitored over time. Use of M.A.R.I.E. is easy and simple; its power and further developments look promising. To date it has brought up the following concepts: 1) Visual-Facial-Emotions-Recognition; 2) Emotional-Visual-Acuity; 3) Facial-Visual-Emotionality; 4) Visual-Emotional-Feeling; 5) Fundamental-Individual-Graph-Emotional-Decision-Making; 6) Individual-Graph-Emotional-Decision-Making; 7) Emotional-Decision-Making-Graph or Gun-Graph; 8) Emotional-Fingerprint or Key-Graph (Key-Gr); 9) Emotional-Fingerprint-syn-diachronous-graph; 10) Plate-Stacking-Graph of the Populational-Key-Graph.
This preliminary research work concerns only 6 emotional series. It should be extended to the remaining 21 − 6 = 15 emotional series. The 21 series arise from the combinations of 2 out of 7 facial states (the 6 primary emotions in Ekman’s parlance + neutral); covering them all will allow a mathematical modeling of the emotional exchanges between two living persons with beating hearts.
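The series count above follows directly from elementary combinatorics; a minimal check:

```python
from math import comb

# Each emotional series pairs 2 of the 7 facial states
# (Ekman's 6 primary emotions + neutral), so the total is C(7, 2).
total_series   = comb(7, 2)
studied_series = 6
remaining      = total_series - studied_series

print(total_series, remaining)  # 21 15
```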
7. Implications
The graphs of emotional decision-making and the Emotional-Fingerprint, with their numerical measurements, open perspectives in medicine, neurosciences, education, and socialization, and more specifically in the psycho-social rehabilitation and cognitive remediation of children, adolescents, and adults in terms of Facial-Visual-Emotionality. As with teaching reading, it would be possible to teach children and adults to visually recognize facial emotions correctly. This type of rehabilitation may be incorporated in Occupational Therapy and Physical Therapy. Then, with the measurements and graphs, clinicians could ensure the good quality of their cerebral integration. M.A.R.I.E. could be used in the screening of pervasive developmental disorders, autism spectrum disorders or classical autism, personality disorders (antisocial type, borderline type), Huntington’s chorea, Alzheimer’s disease, etc. Wearing any cultural or religious face covering, face-covering clothing, or anti-COVID protective mask that obscures the facial features of mothers, nursing mothers, or mother surrogates could affect children’s learning of Facial-Visual-Emotionality, as evidenced by more and more authors. This could also explain the subsequent emergence of behavioral problems during childhood, adolescence, and adulthood. The same holds for families where there are very few parent-child emotional exchanges from birth, as initially suspected in the etiology of infantile autism. In these situations, there is very little or no inter-emotional social communication. Mothers and fathers need to be in a close bond with their newborn from birth throughout childhood and adolescence. As early as possible, young children should attend preschool, then school, and form social ties with their peers or friends. These must be taken into consideration and valued for normal emotional development.
Lack of such opportunities to inculcate and promote prosocial behaviors leads to socialization difficulties, with a greater likelihood of violence and dangerousness in these individuals, both men and women, from a lack of development of empathy. Bowlby’s famous monograph on Maternal Deprivation and Child Mental Health, published by the WHO (UN) in 1951, had similar findings, but they were not tied to deficits in the facial emotional interpersonal communication of infants and children with their caregiving and loving parents.
It becomes possible to select candidates for professions in Strong and Sustained Emotional Interface with Others (SSEIWO). The application of M.A.R.I.E. in departments of justice or law enforcement could make it possible to study the existence of a link between a deficit of Facial-Visual-Emotionality and the type of hetero-aggressive violence. A possible explanation would be the inability of some criminals to feel empathy for other people and their victims. It cannot be excluded that they have a deficit in, or even an absence of, 1) Visual-Facial-Emotions-Recognition and/or 2) Emotional-Visual-Acuity and/or 3) Facial-Visual-Emotionality.
8. Limitation of This Study
This study focuses on a small sample of residents of Lille in northern France. This sample is made up of culturally homogeneous French-speaking white Caucasians, so these results may not be fully applicable cross-culturally. Furthermore, variables concerning memory, cognition, anxiety, and depression were controlled and optimal; hence we use the qualifier “Supra-Normal”. The disadvantage of this methodology is that we do not have the numerical and graphical characteristics of a “normal” population. Nor does this sample truly follow the same observers from their 20s to their 70s. In addition, this work should have presented 3 Face-Tests to more than 30 observers in each of 6 groups: 1) men, 2) women, 3) children, 4) adolescents, 5) recently settled immigrants of an ethnic group different from the hosts, and 6) long-established immigrants of an ethnic group different from the hosts.
Finally, the population sample should have been extended to age groups from 0 to 2.5 years, from 2.6 to 5 years, from 5.1 to 10 years, from 10.1 to 15 years, from 15.1 to 20 years, from 21 to 50 years in age groups of 10 years, and from 71 to 100 years in age groups of 5 years. Ideally, each age group should have more than 30 observers, but that may not be practical.
9. Negative Findings
Visual-Facial-Emotions-Recognition does not seem universal but idiosyncratic. Nevertheless, because of the importance of the consequences of this observation, additional research studies are necessary. If the idiosyncratic hypothesis is confirmed, the consequences in the field of Artificial Intelligence (AI) would be phenomenal, because AI would have to adapt to the “geo-populational” variables of the global territory it addresses. This means that a single AI model would not be able to identify the emotions of travelers at an airport or a rail or subway station. Some people with normal intelligence, no anxiety and normal mood do not have perfect Visual-Facial-Emotions-Recognition, since 1% of this study population does not recognize facial emotions. It is possible that this rate increases in a normal population not selected for normality in terms of cognition, anxiety, and mood.
10. Future Directions for Research
A study like ours should be conducted on population samples as diverse as possible in terms of language, culture, ethnic group, religion, political regime, etc., including populations closer to “nature” than to “culture”, such as aboriginal tribes. M.A.R.I.E. should be used in different countries by a single research team to avoid methodological biases. A possible alternative, because of the cost, would be for research teams from these countries to use M.A.R.I.E. with a strictly identical protocol over the internet. For either possibility, however, the choice of Face-Tests is fundamental. On the one hand, it will have to include Face-Tests of people from the population sample studied; on the other hand, the scientific investigator will have to administer the same Face-Tests as in our study. In this way, 1) we will be able to reinforce the hypothesis of the idiosyncrasy or the universality of Emotional-Visual-Acuity, and 2) we will measure the differences in Emotional-Visual-Acuity for identical Face-Tests and different population samples, in an attempt to reconcile these differences with one or more variables: ethnic, cultural, religious, etc. In this eventuality, the work of Paul Ekman on the universality of Visual-Facial-Emotions-Recognition could be discussed with more objectivity and specificity thanks to quantified measurements. The study of the cross-cultural dimension of Emotional-Visual-Acuity becomes possible insofar as a black African, white European, Asian, Japanese, Native Indian or East Indian face can be tested on the population it belongs to but also on other populations. The comparison of “emotional-fingerprint-graphs” and the emotional quotient will allow these comparisons.
These studies will constitute a base for better interpersonal communication when two individuals do not speak the same language, and for Artificial Intelligence (LLMs), which will have to adapt to the Polynomial factors [Ethnic-Sample-Population*identity-Face-Test*Ethnic-Face-Test*Emotions*environment*other].
For a better understanding of the categorical and continuous mechanisms, it would be relevant to isolate two homogeneous population samples: the first making emotional decisions with only the “I” effect, the second with only the “M” effect. This would involve identifying the impacting variables: educational, cognitive, psychological, neurobiological, morphological imaging, functional imaging, etc.
However, as of now, this experimental paradigm can be applied to the faces of politicians as well as film actors and actresses, television presenters, etc. The results would be a first step in understanding their success and their leadership. In addition, it could allow an approach to understanding the feeling of love when faced with the face of the loved one.
As applications outside the domain of behavioral science, our paradigm can be extended to the morphology of designed objects, such as the shape of a car, to study which shapes, colors or designs have emotional appeal to which kind of population, etc. M.A.R.I.E. can also be extended to hearing, to study the integration of phonemes in the context of auditory dyslexia, such as the difficulty in differentiating a “B” from a “P”, etc.
Performing an assessment of the Individual-Emotional-Fingerprint and the Individual-Graph-Emotional-Decision-Making is easy, quick, and feasible at the bedside in clinical routine. A booklet with an image-stimulus on each page and a pencil would be sufficient for an initial cursory assessment; a computer tablet, on the other hand, can also measure decision-making time. The Individual-Emotional-Fingerprint and Individual-Graph-Emotional-Decision-Making can be combined with a screening assessment and with assessments specific to certain disorders or diseases. This has preventive and public health implications.
Moreover, the principle of morphing can be extended to sounds, colors, and geometric shapes. Firstly, it seems to us that it would be interesting to study the perception of the body image that we have of others and of ourselves. To do this, it suffices to perform a morphing between images of the same person when lean and when obese. Then we ask a population sample of lean, normal, and obese people whether each image of the series is considered “fat” or “skinny”. This would allow us to better understand disorders of body image perception (some body dysmorphic disorders) and, even more importantly, gender dysphoria.
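The core of such a graded image series can be sketched as a linear cross-dissolve between two endpoint images. This is an illustrative simplification, not the authors' morphing software: a real morph would also warp facial or body geometry, whereas this sketch only blends pixel values, and the tiny arrays stand in for the lean/obese photographs:

```python
import numpy as np

def morph(img_a, img_b, t):
    """Linear cross-dissolve between two same-sized images.
    t = 0 returns img_a, t = 1 returns img_b."""
    return (1.0 - t) * img_a + t * img_b

# Toy 2x2 grayscale "images" standing in for the lean and obese photographs.
lean  = np.zeros((2, 2))
obese = np.full((2, 2), 100.0)

# A 7-step graded series, analogous to the Images-Stimuli used with M.A.R.I.E.
series = [morph(lean, obese, t) for t in np.linspace(0.0, 1.0, 7)]
print(series[3])  # midpoint image: 50.0 everywhere
```

Each observer would then categorize every image in the series as “fat” or “skinny”, and the transition point of their answers can be located along the t axis.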
In addition, M.A.R.I.E. is a tool that allows the identification of many parameters that could allow us to measure and understand more subtle and universal emotions such as the feelings of love, affection and compassion. Hopefully, it could someday be engineered to measure and understand maternal affection or love (“Vatsalya” in Sanskrit for this specific love) by analyzing a mother’s emotional expressions while she communicates emotionally with her infant.
Acknowledgements
This work was supported by grant 1998/1954 of the Programme Hospitalier de Recherche Clinique (P.H.R.C.) of the French government. I thank the colleagues and researchers from the commissions appointed by the French government who, without knowing the principal author’s full credentials and despite his limited experience as a starting investigator, believed in the idea of a young, totally unrecognized scientist. By their choice, they offered a grant which enabled not only the detailed study of the original ideas and concepts of these authors but also the establishment of the standards. This is the reason that compels us to thank the French Government, recognizing the role that chance and destiny may have played. Thanks are due to Paul Ekman, who gave permission to use photographs from “Unmasking the Face” (Ekman & Friesen, 1975), and to Olivier Lecherf, who designed the computer program for processing and displaying the pictures. This study was made possible thanks to Professor Christian Libersa of the Centre d’Investigation Clinique (CIC-CHU/INSERM, Lille). I would like to thank Professor Olivier Godefroy, Neurologist at Amiens University Hospital Center South St Vincent de Paul, 80054 Amiens Cedex, for his help and unconditional support for more than 24 years. Without him, this fundamental research work would not have seen the light of day. My thanks and appreciation go to Mr. Jay Vinekar, who provided a critical and constructive reading of this work. Finally, Professor of Neurology Patrick Vermersch from Salengro Hospital at the Centre Hospitalier Régional de Lille (France) supported Dr. Granato, the principal author, in conducting this research. This article represents the capstone of 25 years of professional research work by the authors. It would not have been possible without the unconditional psychological and emotional support of their wives.
Life partners must no longer be forgotten by doctors and researchers in the world of science who sacrifice personal comfort and well-being, and that of their wives, for psychiatry, science, and the progress of humanity. We would like to pay tribute and respect to Mrs. SASHITTAL Shyamala Hemchandra, the wife of Professor VINEKAR Shreekumar, and Mrs. WALENCKI Martine, the wife of Dr. GRANATO Philippe.