1. Introduction

Clear evidence exists for a division in cognitive functioning between the left and right hemispheres of the brain (Corballis, 1991; Sperry, 1982). It is argued that this division enables more efficient processing of information, as each hemisphere is specialized for specific information input (Bradshaw, 2001; Levy, 1977). In what Bryden (1990) coined the Modal Model, the left hemisphere is often thought of as the language center, as well as an analytic and serial processor responsible for coordinating rapid, sequential processing tasks. In contrast, the right hemisphere is thought of as the visuospatial and emotional center, as well as a holistic and integrative processor, responsible for parallel processing of information.

Stemming from this view, the left and right hemispheres of the brain are largely thought to play complementary roles in cognitive processing (Hellige, 1993). Left-hemisphere dominance for language processing has long been established (Broca, 1861; Hecaen, DeAgostini, & Monzon-Montes, 1981; Kimura, 1961). Despite a historical reputation as the emotional perception hemisphere, evidence regarding the lateralization of emotional processing is divided, suggesting a complex processing network (Wagner, Phan, Liberzon, & Taylor, 2003). Evidence for right-hemisphere dominance in perceiving emotional stimuli comes from tachistoscopic studies using emotional faces (McKeever & Dixon, 1971; Suberi & McKeever, 1977) as well as dichotic listening studies, which employ the simultaneous presentation of different auditory stimuli to each ear, using emotional intonation (Bryden & McRae, 1988; Bulman-Fleming & Bryden, 1994) in neurologically healthy populations.

Although visual-field and dichotic-listening studies provide consistent evidence of right-hemisphere dominance in emotional perception and processing, it has been argued that the left hemisphere also plays a role in emotional processing, particularly in the expression and experience of emotions. The Valence Hypothesis (Ahern & Schwartz, 1979) and the Valence-arousal Hypothesis (Heller, 1993) both posit a dual-hemisphere processing of emotional information, with the right hemisphere processing negative or withdrawal emotions and the left hemisphere processing positive or approach emotions. Davidson (1984; 1992) proposed a modified variant of the Valence Hypothesis wherein he posits a similar right- and left-hemisphere division for the expression and experience of actual emotion, but argues for a right-hemisphere dominance for the perceptual processing of emotional stimuli (for a detailed review see Demaree, Everhart, Youngstrom, & Harrison, 2005). Evidence in support of a left-hemisphere role in emotional expression has been found in studies of patients with unilateral brain damage. Patients with left-hemisphere lesions are more likely to experience depression following injury than patients with right-hemisphere lesions (Robinson & Price, 1982). Conversely, patients with right-hemisphere lesions are more likely to experience positive changes in mood than patients with left-hemisphere lesions (Sackheim et al., 1982). A similar pattern has been observed in patients undergoing sodium amytal testing, with left carotid artery injections resulting in a dramatic negative emotional experience (e.g. crying, pessimistic thoughts, and worry); right carotid artery injections resulted in a dramatic positive emotional experience (laughter, smiling, general euphoria; Perria, Rosadini, & Rossi, 1961; Rossi & Rosadini, 1967; Terzian, 1964). With regard to emotional perception, research indicates a greater deficit in the perception of negative emotions following right-hemisphere damage, with little or no deficit in emotional perception following left-hemisphere damage (Adolphs, Damasio, Tranel, & Damasio, 1996; Borod et al., 1998). This evidence suggests a division of processing between the left and right hemispheres for the expression and experience of positive and negative emotions, respectively.

A more consistent pattern highlighting the right hemisphere’s emotional dominance emerges for the perception of paralinguistic information (Belin, Zatorre, & Ahad, 2002; Friederici & Alter, 2004; Poeppel, 2003; for a review see Lindell, 2006). These features include prosody (Blonder, Bowers, & Heilman, 1991; Ley & Bryden, 1982; Grimshaw, 1998) as well as non-linguistic emotional vocalizations (such as sighing or laughing; Carmon & Nachshon, 1973; King & Kimura, 1972; Poyatos, 2002; Trager, 1958).

These population-level biases in linguistic and paralinguistic-emotional processing suggest a complementary organization of these two functions within the brain. When examined between participants, there is a left-hemisphere bias for linguistic processing and a right-hemisphere bias for emotional perception at the population level. What remains unclear, however, is how these two asymmetrical functions are organized within individuals: does the complementary pattern observed across individuals still hold when the relationship in degree and direction of lateralization of left- and right-lateralized functions is examined within individuals, or does a different pattern emerge?

When examining the relationship between the asymmetrical lateralization of two complementary functions within individuals, there are three possible patterns that can be observed. First is a causal pattern of complementarity (Bryden, 1990; Bryden, Hecaen, & DeAgostini, 1983). Here, it is argued that the asymmetrical lateralization of one function drives the opposite asymmetrical lateralization of the other function. For example, the modal model presents the left hemisphere as the linguistic side of the brain, whereas the right hemisphere is the visuospatial side of the brain. MacNeilage (1991) proposed a theory of language lateralization suggesting that a right-hemisphere visuospatial bias guiding predation resulted in the left hemisphere governing postural support. This left-hemisphere postural bias provided the necessary framework upon which language processing and production abilities were developed. If the right hemisphere is specialized for the processing demands of visuospatial processing, the left hemisphere will then be driven to specialization for the processing demands of linguistic processing. Further, this position would also predict that, in the case where the left hemisphere becomes specialized for visuospatial processing, the right hemisphere would then become specialized for linguistic processing. As this pattern predicts that complementary functions should be opposite in lateralization, a negative correlation in the degree of lateralization between the complementary functions should be observed.

The second pattern is a bias pattern of complementarity (Bryden, 1990). Here, it is argued that asymmetries of either attention or ascending sensory systems result in an overall bias in processing information. In the former case, it is argued that an overall attentional bias to the right side would produce a large right ear advantage (REA) for a linguistic task and a smaller REA for a non-verbal emotional processing task (Efron, Koss, & Yund, 1983; Kinsbourne, 1975). Similarly, an asymmetry in the ascending sensory systems would result in an overall bias within each sensory system. As such, a right-side auditory asymmetry would result in a large REA for a linguistic task and a smaller REA for a non-verbal emotional processing task (Sidtis, 1982; Teng, 1981). Contrary to the causal pattern, bias complementarity does not predict that complementary functions be lateralized to opposite hemispheres. Rather, complementary functions would be more likely to be asymmetrically lateralized within the same hemisphere; thus, a positive correlation in the degree of lateralization between the two functions should be observed.

The third pattern is a statistical pattern of complementarity (Bryden, 1982; Bryden, 1990; Bryden, Hecaen, & DeAgostini, 1983). Here, the sources of influence underlying asymmetrical lateralization of complementary functions are independent of one another. Each function has an independent statistical probability of being lateralized to a specific hemisphere based on the source of influence driving its lateralization. For example, the underlying influence biasing linguistic processing to be asymmetrically lateralized to the left hemisphere is unrelated to the underlying influence biasing emotional processing to be asymmetrically lateralized to the right hemisphere. Although the population-level pattern observed may appear causal in nature, examination of the relationship between the degree of lateralization for two statistically independent complementary functions should result in no correlation being observed.

There are few studies in the literature that have directly measured the correlation in degree and direction of lateralization between left-hemisphere and right-hemisphere lateralized tasks within individuals. Ley and Bryden (1982) examined the relationship between linguistic and emotional processing using sentence stimuli presented in different emotional tones. Although they observed the expected REA for the linguistic sentence stimuli and a left ear advantage (LEA) for the emotional content, when the relationship between the two lateralized tasks was examined, no significant correlation was observed. Ley and Bryden (1982) suggested that this result supports the statistical model of complementarity. Similarly, Saxby and Bryden (1984) assessed the complementarity of emotional and linguistic processing in children using the same method as Ley and Bryden (1982); they also found no significant relationship in the degree and direction of lateralization for linguistic and emotional processing, further suggesting a statistical pattern of complementarity between these processes. McNeely and Parlow (2001) examined the complementarity of linguistic and prosodic processing using the Fused Words Dichotic-listening Task (FWDT) and the Dichotic Emotion Recognition Test (DERT). They observed the expected overall REA for linguistic processing and LEA for prosodic processing at the population level, but no significant correlation was found between the two functions when examined within individuals. More broadly, Andresen and Marsolek (2005) examined the relationship in degree and direction of lateralization for right- and left-hemisphere lateralized shape-recognition and spatial-relation tasks within individuals and found no relationship between the left-hemisphere and right-hemisphere lateralized tasks.

Support for statistical complementarity has also come from recent studies using functional transcranial Doppler (fTCD) to measure lateral differences in cerebral blood flow. Whitehouse and Bishop (2009) used fTCD laterality indices (LI) to examine the relationship in degree and direction of lateralization between visuospatial memory and linguistic processing and found no significant correlation between the LIs for the two tasks. Similarly, Rosch, Bishop, and Badcock (2012) found no correlation between fTCD LIs measured for a word generation task and a visuospatial landmark task.

However, not all research in this area has found support for statistical complementarity. At this time, two studies have provided evidence against statistical complementarity. In the first discrepant finding, Elias, Bulman-Fleming, and Guylee (1999) recruited participants with atypical laterality profiles and examined the relationship between lateralized linguistic and prosodic processing within these individuals. They found the expected population-level laterality effects for the linguistic and prosodic processing tasks: an overall REA for linguistic processing and an overall LEA for prosodic processing. Contrary to prior research, however, they found a significant positive correlation in degree and direction of lateralization between the two tasks, suggesting bias complementarity. In the other discrepant finding, Badzakova-Trajkov, Haberling, Roberts, and Corballis (2010) used functional magnetic resonance imaging (fMRI) to examine the lateralization of face, linguistic, and visuospatial processing. They found significant negative correlations between LIs for word generation and LIs for both emotional facial processing and visuospatial processing, providing partial support for causal complementarity. A non-significant positive correlation was observed between the LIs for emotional face processing and visuospatial processing. The authors suggested that these results provide evidence for the influence of multiple lateralizing influences rather than a single lateralizing force.

In this study, we specifically examined the pattern of relationship between the perception of speech and the perception of non-linguistic emotional vocalizations within subjects. Non-linguistic emotional sounds were chosen for this study to allow for the separation of paralinguistic information from any linguistic processing. Using asymmetry scores obtained from both a speech task and an emotional vocalizations task, we examined the pattern of lateralization observed for each task in the overall sample. We also examined the relationship in the degree and direction of lateralization for speech processing and for non-linguistic emotional vocalization processing within subjects in order to assess which of the above patterns of complementarity is observed for these asymmetrically lateralized functions. If the general assumption that the modal model reflects a causal relationship between left- and right-lateralized functions is accurate, then we should observe a significant negative correlation. If an attentional or sensory system bias is governing these lateralized processes, then we should observe a significant positive correlation. Finally, if independent processes are responsible for the lateralization of each of these left- and right-lateralized processes, then no correlation in the laterality scores should be observed.

2. Method

2.1. Participants

Fifty-two (11 males and 41 females) neurologically healthy undergraduate students from the University of Saskatchewan participated in the present study for course credit (mean age 20.4, SD = 4.65). All participants were right-handed as assessed by the short version of the Waterloo Handedness Questionnaire-Revised (Elias, Bryden, & Bulman-Fleming, 1998; mean = 21.62, SD = 4.33). All participants reported normal hearing with no history of hearing loss. The data from one female participant were excluded because of reported temporal lobe epilepsy.

2.2. Materials

2.2.1. Fused Rhymed Words Task

Lateralization for speech processing was assessed using the Fused Rhymed Words Test (FRWT; Wexler & Halwes, 1983). The test consists of 15 dichotic pairs of rhymed words that differ in the initial phoneme (e.g. boy-toy). The simultaneous presentation of these words results in a fusing of the two words, such that one word is typically perceived. Each pair of rhyming stimuli was presented 16 times (eight times on each channel) for a total of 240 trials. The trials were divided into two main blocks of 120 trials each (Block A and Block B). Each main block was further divided into four blocks of 30 trials. The stimuli were played from a CD using Windows Media Player through Sennheiser headphones (model HD-437). Participants were presented with an answer booklet and were asked to circle which word was heard from a list of four possible choices.

2.2.2. Emotional Sounds Task

Lateralization for emotional vocalization perception was assessed using an emotional sounds task (EST) designed for the present study. King and Kimura (1972) used non-linguistic emotional sounds in a dichotic listening task to assess lateralization of emotional processing and found a significant LEA. In their study, participants were presented with a dichotic pair of emotional sounds (e.g. crying and sighing) and were then asked to identify the two sounds presented from a set of four options presented binaurally. As it has been argued that the memory load for such a task may influence performance (Bryden, 1982), we used the same stimuli but altered the response procedure. A stimulus set was created using non-linguistic emotional sounds (e.g. growling, gasping, sighing, and moaning). Fifteen emotional sounds were downloaded from public access sound effects websites; an additional three sounds were recorded using Audition (Adobe, 2009) for a total of 18 sounds (sampling rate: 44.1 kHz, resolution: 16 bit); nine positive emotions (e.g. laughter and cheering) and nine negative emotions (e.g. crying and screaming) were sampled. The sounds were played for 10 observers who were asked to indicate whether each sound matched the emotional label assigned to it. Of the 18 total sounds, 12 were unanimously rated as matching their assigned emotional label and were used to create the task. All sounds were edited to a common length of 1000 msec and were equalized for intensity. Each emotional token was paired dichotically with every other token to produce 132 emotional sound pairs. The sound pairs were presented using E-prime (Psychology Software Tools, 2003). In order to assess the influence of valence effects, one positive (“content”) emotional sound and one negative (“depressed”) emotional sound were chosen from the 12 tokens as the target sounds for the task.
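Pairing each of the 12 retained tokens dichotically with every other token yields 132 ordered pairs (12 × 11), one per channel assignment. A minimal sketch of this pairing, noting that the token labels not named in the text above are hypothetical placeholders:

```python
from itertools import permutations

# 12 emotion labels; "giggling" and "groaning" are hypothetical placeholders,
# as the full stimulus list is not reported in the text
tokens = ["content", "laughter", "cheering", "sighing", "moaning", "giggling",
          "depressed", "crying", "screaming", "growling", "gasping", "groaning"]

# Ordered pairs (left-channel token, right-channel token); no token is paired with itself
pairs = list(permutations(tokens, 2))
print(len(pairs))  # 132 = 12 * 11
```

Using ordered pairs doubles each unordered pairing, so every token appears in each channel against every other token, matching the 132-pair count reported above.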

2.3. Procedure

Participants were tested individually. Once informed consent was provided, participants filled out a demographics questionnaire that assessed handedness and footedness (Elias et al., 1998) and addressed sex, age, and vision or hearing impairments. Once the questionnaire was completed, participants completed the FRWT and EST, counterbalanced for order of presentation. All tests were completed in a single session lasting approximately one hour.

2.3.1. FRWT

The participants were seated at a table and given a response booklet to record their responses on the task. Following an explanation of the nature of the dichotic listening task, participants completed 30 practice trials. Each of the 30 words used to create the rhymed pairs was presented once, binaurally. Participants were asked to circle which word was heard in the response booklet. The practice trials were then followed by the test trials. Participants were asked to respond as quickly and as accurately as possible. Word pairs were presented with an interstimulus interval of 2.5 seconds. Participants were given a brief break after each block of 30 word pairs before beginning the next block of trials. The position of the headphones was reversed four times throughout the task (after blocks 1, 3, 5, and 7).

2.3.2. EST

Unlike the FRWT, which was administered using a CD with participants providing their responses with pen and paper, the EST is a computer-based task. Participants were seated in front of the computer and were instructed to listen for a target emotional sound (“content” or “depressed”). They were then asked to listen to each of the 12 emotional sound tokens while reading the associated emotional label on the computer screen to familiarize them with their target emotional sound. Next, the participants completed 26 practice trials. They were instructed to rest the index and middle fingers of their right hand on the “y” and “u” keys. Each of the sound tokens was then presented binaurally and the participants were asked to indicate whether their target emotional sound was present by pressing either the yes key (y) or the no key (u) on the keyboard. Participants were asked to respond as quickly and as accurately as possible. Upon completion of the practice trials, the participants were instructed to press the space bar to begin the test trials. The computer monitor then displayed a black screen while the sound pair was presented. Immediately following presentation of a dichotic emotional sound pair, participants were presented with a visual message prompting them to indicate whether they heard the target emotional sound (“yes or no?”). A response prompted the start of the next trial. After the first block of trials was completed, participants were instructed to listen for the other target emotional sound; the procedure was repeated. Order of emotional targets was counterbalanced between participants.

2.4. Calculation of Asymmetry Scores

The data from the FRWT and EST were converted to lambda (λ) scores, as described by Bryden and Sprott (1981):

λ = ln(RE / LE)

where RE indicates right ear responses and LE indicates left ear responses. This measure of degree of lateralization, based on the log-odds ratio, provides a measure of lateralization that is approximately normal in its distribution and is not dependent on overall performance. Positive λ values reflect a right-ear performance advantage (REA).
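Under this log-odds definition, λ can be computed directly from the two ear-response counts. A sketch; the additive 0.5 correction guarding against zero counts is a common convention, assumed here rather than taken from the original:

```python
import math

def lambda_score(re_responses, le_responses, correction=0.5):
    """Log-odds laterality index: positive values indicate a right ear advantage (REA)."""
    return math.log((re_responses + correction) / (le_responses + correction))

print(lambda_score(20, 10) > 0)  # True: more right-ear responses -> REA
print(lambda_score(15, 15))      # 0.0: equal counts, no ear advantage
```

Because λ is a ratio of counts, it captures the direction and degree of the ear advantage independently of how many trials a participant answered correctly overall.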

3. Results

3.1. FRWT

The FRWT elicited the expected REA in 90% of participants (46/51). One participant exhibited no ear advantage on the task. This observed overall REA was significant, t(50) = 8.122, p < .001 (M = .285, SD = .251). Performance did not differ between Block A and Block B of the task, t(50) = .978, p = .333. A one-sample t-test examining overall accuracy revealed that performance on the task was above chance levels, t(50) = 93.206, p < .001 (chance = 120, M = 230.02, SD = 8.42).
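As a consistency check, the one-sample t statistic reported above follows from t = M / (SD / √n); plugging in the reported values (M = .285, SD = .251, n = 51) recovers the reported t(50) within rounding of the summary statistics:

```python
import math

def one_sample_t(mean, sd, n):
    """t statistic for testing H0: population mean = 0."""
    return mean / (sd / math.sqrt(n))

t = one_sample_t(0.285, 0.251, 51)
print(round(t, 2))  # ~8.11, close to the reported t(50) = 8.122 (rounding of M and SD)
```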

3.2. EST

A paired-samples t-test revealed no influence of block on performance accuracy for the task, t(50) = −.231, p = .818, so data were collapsed across blocks for the remaining analyses. A 2 × 2 ANOVA examining the influence of target (content, depressed) and ear (left, right) on performance accuracy revealed no main effect of target, F(1, 50) = .16, p = .691, and no main effect of ear, F(1, 50) = 2.27, p = .138. The interaction was not significant, F(1, 50) = .84, p = .363. When the λ values were examined, the EST elicited the expected LEA in 55% of participants (28/51). Nine participants showed no ear advantage on the task. The observed overall LEA was not significant, t(50) = −1.580, p = .120 (M = −.313, SD = 1.415). As approximately half of the participants performed at 100% accuracy for one or both blocks of trials, it is possible that ceiling performance effects attenuated the observed LEA. We assessed the overall LEA with all ceiling performances removed and found a significant effect, t(23) = −2.183, p = .04 (M = −.734, SD = 1.648). A one-sample t-test examining overall accuracy revealed that performance on the task was above chance levels, t(50) = 34.926, p < .001 (chance = .08, M = .888, SD = .165).

3.3. Lateral Preferences

Participants were all right-handed (M = 21.53, SD = 4.33). Forty-nine participants showed a right-foot preference; the remaining two participants showed no foot preference (M = 5.31, SD = 2.33). Handedness and footedness were significantly correlated (r = .436, p = .001). Neither handedness nor footedness varied significantly with performance on the FRWT (Hand: r = .106, p = .46; Foot: r = −.156, p = .274). Similarly, neither handedness nor footedness varied significantly with performance on the EST (Hand: r = .045, p = .752; Foot: r = −.023, p = .873).

3.4. Complementarity of Speech and Emotional Vocalization Processing

The correlation between FRWT and EST lambda scores was not significant, r = .101, p = .482 (see Figure 1). Most participants displayed the typical complementary pattern of left-hemispheric dominance for speech processing and right-hemispheric dominance for emotional vocalization processing (29/51 = 57%), and only one participant showed the reverse pattern of hemispheric dominance (1/51 = 2%). The remaining participants demonstrated same-side dominance for both tasks, with 18/51 (35%) of participants displaying left-hemisphere dominance and 3/51 (6%) of participants displaying right-hemisphere dominance. As the presence of ceiling performances may have influenced the overall pattern of results, we also examined the correlation between FRWT and EST lambda scores with all ceiling performances removed. The correlation was not significant, r = .247, p = .245 (see Figure 2). The majority of participants displayed left-hemisphere dominance for speech processing and right-hemisphere dominance for emotional vocalization processing (15/24 = 63%), and no participants displayed the reverse pattern of hemispheric dominance. Again, the remaining participants demonstrated same-side dominance for both tasks, with 7/24 (29%) of participants displaying left-hemisphere dominance and 2/24 (8%) of participants displaying right-hemisphere dominance.
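The dominance patterns tallied above reduce to comparing the signs of each participant's two λ scores: positive λ indicates an REA (left-hemisphere dominance), negative λ an LEA (right-hemisphere dominance). A sketch of this classification; the function name and labels are our own, not taken from the original analysis:

```python
def complementarity_pattern(frwt_lambda, est_lambda):
    """Classify a participant from the signs of the FRWT and EST lambda scores."""
    if frwt_lambda > 0 and est_lambda < 0:
        return "typical complementary"          # LH speech, RH emotion
    if frwt_lambda < 0 and est_lambda > 0:
        return "reversed complementary"         # RH speech, LH emotion
    if frwt_lambda > 0 and est_lambda > 0:
        return "left-hemisphere dominant for both"
    if frwt_lambda < 0 and est_lambda < 0:
        return "right-hemisphere dominant for both"
    return "no ear advantage on at least one task"

print(complementarity_pattern(0.285, -0.313))  # typical complementary
```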

Figure 1. Individual λ scores on the Fused Rhyming Words Test (FRWT) versus λ scores on the Emotional Sounds Task (EST) for all participants. Positive values indicate a right ear advantage (REA).

Figure 2. Individual λ scores on the Fused Rhyming Words Test (FRWT) versus λ scores on the Emotional Sounds Task (EST) with participants performing at ceiling on the EST removed. Positive val- ues indicate a right ear advantage (REA).

4. Discussion

The present study examined the pattern of asymmetrical lateralization between speech processing, measured using the FRWT (Wexler & Halwes, 1983), and paralinguistic emotional vocalization processing, measured using an emotional sounds task (EST), within individuals. Of the few studies examining the within-subjects pattern of lateralization using a variety of cognitive processing tasks, most have provided additional evidence for a statistical pattern of complementarity between right- and left-lateralized functions (Hellige, Bloch, & Taylor, 1988; Nestor & Safer, 1990). For example, Andresen and Marsolek (2005) examined lateralization of shape recognition and spatial relations within individuals and found no significant correlation between these left- and right-lateralized tasks, suggesting a statistical pattern of lateralization between the tasks. However, examining the broader relationship between right- and left-lateralized cognitive functions, a few studies have also found evidence of either bias (Elias, Bulman-Fleming, & Guylee, 1999; Whitehouse & Bishop, 2009) or causal (Badzakova-Trajkov, Haberling, Roberts, & Corballis, 2010) patterns of lateralization.

As highlighted by Andresen and Marsolek (2005), a statistical pattern of complementarity need not predict only null correlations. In its strictest version, statistical complementarity may predict that all cognitive functions are lateralized by independent sources. However, a less strict view of the statistical pattern would predict a finite number of independent sources underlying the lateralization of cognitive functions. Under this view, it is not unexpected, then, that a single source may influence asymmetrical lateralization of more than one function. This would result in a mixed pattern of results overall whereby some functions show independent (statistical) relationships with one another and causal or bias relationships with other functions. The fairly consistent finding of a statistical pattern across the literature (Andresen & Marsolek, 2005; Hellige, Bloch, & Taylor, 1988; Ley & Bryden, 1982; McNeely & Parlow, 2001; Nestor & Safer, 1990; Saxby & Bryden, 1984), punctuated by a few significant positive (Elias, Bulman-Fleming, & Guylee, 1999) or negative (Badzakova-Trajkov et al., 2010) correlations between specific cognitive processes, may reflect a less strict form of statistical complementarity whereby lateralization for most cognitive functions is driven by independent processes, with a few processes lateralized by a common influence.

For this study, non-linguistic human emotional sounds were chosen to isolate the paralinguistic information from the influence of any linguistic processing. The stimuli were similar to those employed by King and Kimura (1972) and Carmon and Nachshon (1973), both of whom observed a slight LEA for processing of these non-linguistic emotional sounds. However, the methodologies differed. In our task, we chose to have participants provide a present/absent response for their target emotion rather than reporting the two sounds heard following presentation of four binaural auditory choices (King & Kimura, 1972) or presentation of a visual selection display (Carmon & Nachshon, 1973). This method was chosen to reduce the potential for performance influence due to a high memory load (Bryden, 1982). These methodological differences did, however, appear to make the task too simple, as evidenced by the high proportion of participants achieving ceiling performance.

Analysis of the overall lateral biases for the speech and emotional vocalization processing tasks revealed a significant REA on the FRWT and a significant LEA on the EST, following removal of ceiling performances. These observed lateral biases replicated the expected population-level pattern of lateralization across the sample. However, further examination of the relationship between the degrees of lateralization on these two tasks within participants revealed a statistical pattern of complementarity suggesting the influence of independent processes on the lateralization of speech and emotional vocalization processing.

These results are consistent with the findings of McNeely and Parlow (2001), who examined the complementarity of linguistic and prosodic processing. They measured linguistic lateralization using the FRWT and measured prosodic lateralization using the Dichotic Emotion Recognition Test (DERT). In the DERT, pairs of nonsense sentences are presented dichotically, one in a neutral tone and the other in an emotional tone (happy, sad, angry, or afraid). Participants are asked to report the emotional tone for each pair presented. They observed an overall right ear advantage for linguistic processing and an overall left ear advantage for prosodic processing, but no significant correlation between the two functions was observed when assessed within individuals. Other studies examining lateralization of linguistic and prosodic processing have reported similar findings: Ley and Bryden (1982) presented sentences spoken in emotional tones to participants and observed a similar population-level pattern with a REA for the linguistic content of the stimuli and a LEA for the emotional content; however, the overall pattern when examining performance within subjects reflected a statistical pattern. Using a similar method and procedure, Saxby and Bryden (1984) examined the lateralization of emotional and linguistic processing in kindergarten, Grade 4, and Grade 8 children; no association between linguistic and emotional lateralization was observed, suggesting a statistical pattern of complementarity.

However, our results are inconsistent with the findings of Elias, Bulman-Fleming, and Guylee (1999), who examined the relationship between lateralized linguistic and prosodic processing within individuals displaying atypical laterality profiles. They also measured linguistic lateralization using the FRWT. Prosodic lateralization was measured using the Emotional Words Task (EWT; Bryden & MacRae, 1989), in which two-syllable rhyming words differing in the first phoneme (bower, dower, tower, and power) are presented in one of four emotional tones (happy, sad, angry, or neutral). A pair of words is presented, one word to each ear, and participants are asked to report which emotional tone they heard. Elias and colleagues found the expected population-level pattern of laterality for linguistic and prosodic processing across the sample, with an overall REA for the FRWT and an overall LEA for the EWT. However, in contrast to our findings, they observed a significant positive correlation between the tasks, suggesting a bias pattern of complementarity. Their specific recruitment of participants with atypical laterality profiles, rather than right-handers alone, may explain the difference in findings between the two studies. For example, left-handers are much more likely than right-handers to show right-hemisphere lateralization for language, and this increased variability in lateralization may influence the pattern observed.

5. Conclusion

An examination of the lateralization of speech and emotional vocalization processing revealed the expected overall complementary pattern of lateralization across the sample. However, further examination of the relationship in degree and direction of lateralization between these two functions within individuals revealed evidence of a statistical pattern of complementarity. These results provide additional evidence that the discrete asymmetries observed across individuals may not reflect the patterns and relationships that emerge when these asymmetries are examined together within individuals. Rather, these findings support a statistical complementarity pattern reflecting independent processes governing the lateralization of linguistic and emotional processing.


Acknowledgements

This research was supported by a Canada Graduate Scholarship from the Natural Sciences and Engineering Research Council of Canada to V. Harms, and by a grant from the Natural Sciences and Engineering Research Council of Canada to L. J. Elias.

Conflicts of Interest

The authors declare no conflicts of interest.



Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.