Psychometric Evidence of the Brief Resilience Scale (BRS) and Modeling Distinctiveness of Resilience from Depression and Stress

Abstract

The purpose of this study was to evaluate the factor structure and measurement invariance across gender and age of the Brief Resilience Scale (BRS) in 2272 Greek adults of the general population. The sample was split into three parts (20%, 40%, 40%). EFA was carried out in the first subsample (20%), evaluating three models. CFA was next carried out in the second subsample (40%), evaluating seven models. All models were then examined further in a second CFA with a subsample of equal power (40%). The single factor of BRS was deemed unstable across the two CFA subsamples. A two-factor model was the optimal model that emerged in the Greek context. Measurement invariance across gender and age was successfully established. Internal consistency reliability (α and ω) and AVE-based convergent validity were adequate for the entire BRS. A consistent pattern of relationships emerged from correlation analysis with 12 different measures, suggesting convergent and discriminant validity. The distinctiveness of BRS from depression and stress was evidenced using CFA and EFA with different compound models of BRS and scales of depression, anxiety, and stress. These findings further confirmed that the Greek version of BRS has construct validity.

Share and Cite:

Kyriazos, T. , Stalikas, A. , Prassa, K. , Galanakis, M. , Yotsidi, V. and Lakioti, A. (2018) Psychometric Evidence of the Brief Resilience Scale (BRS) and Modeling Distinctiveness of Resilience from Depression and Stress. Psychology, 9, 1828-1857. doi: 10.4236/psych.2018.97107.

1. Introduction

Resilience has been a central focus of empirical research, applied psychology, and public health for decades (Duarte Alonso, 2015; McGreavy, 2015; Abramson et al., 2014; cited in Salisu & Hashim, 2017). Unsurprisingly, resilience has also received many different definitions. The American Psychological Association described resilience as a process of adapting well when confronting adversity, trauma, tragedy, threat, or serious sources of stress (American Psychological Association, 2015). In a similar vein, it has also been conceptualized as the ability of a social system (i.e., an organization, community, or society) to adapt proactively and recover from within-system instabilities that are unexpected and non-normal (Connor & Davidson, 2003). All definitions of resilience focus on the ability to recover from adversity or to adapt successfully (Fletcher & Sarkar, 2013; Chadwick, 2014; Singh et al., 2016).

Resilience has been reported to relate closely to all well-being dimensions (emotional, social, and psychological well-being; Keyes, 2002, 2005; Ryff, 1989), both in general populations (Pinheiro & Matos, 2013) and in health-care professionals (Koen et al., 2011). Resilience has also been found to correlate strongly with Positive Affect (Huppert & So, 2013), physical health (Montross et al., 2006), optimism (Lee et al., 2008), satisfaction with life (Abolghasemi & Varaniyab, 2010), and mindfulness (Keyes & Pidgeon, 2013), and equally strongly but negatively with depression (Hardy et al., 2002, 2004) and Negative Affect (Singh & Yu, 2010), as reviewed by Singh et al. (2016).

One product of this prolific literature is a large number of resilience measures. Windle et al. (2011) reported over 19 scales in their comparative review, and Salisu & Hashim (2017) as many as 25. However, most measures developed to assess resilience focused on protective factors and/or resources that enable resilience (Ahern, Kiehl, Sole, & Byers, 2006). Smith et al. (2008) noted this measurement shortcoming and developed the Brief Resilience Scale (BRS), a measure focused on the core and essential property of resilience: the capacity to bounce back from stress and adversity (Smith, Tooley, Christopher, & Kay, 2010; cited in Salisu & Hashim, 2017).

The BRS is a brief, single-factor instrument with 3 positively worded items and 3 negatively worded items to minimize response bias (Smith et al., 2008). According to Windle et al. (2011), the Brief Resilience Scale (Smith et al., 2008) was among the scales with the most satisfactory psychometric properties. More recently, it was also rated (Salisu & Hashim, 2017) as one of the most frequently used resilience scales among a total of 25 scales reviewed.

In the original study (Smith et al., 2008), the unidimensional factor of BRS explained 55% - 67% of the variance over the 4 samples tested with PCA. Internal consistency reliability was satisfactory (α = .80 - .91). Convergent, discriminant, and concurrent validity were estimated with three other measures of resilience, measures of personal characteristics (Scheier, Carver, & Bridges, 1994; Ryff & Keyes, 1995; Bagby, Parker, & Taylor, 1994; Denollet, 2005; Watson, Clark, & Tellegen, 1988), coping styles (Carver, 1997), social relationships (Cohen, Mermelstein, Karmarck, & Hoberman, 1985; Sherbourne & Stewart, 1991; Finch, Okun, Barrera, Zautra, & Reich, 1989), and measures of health-related outcomes. Samples included two special populations with chronic pain: cardiac rehabilitation patients and women with fibromyalgia. Subsequently, the BRS was validated in many different samples in Malaysia (Amat et al., 2014), Brazil (de Holanda Coelho et al., 2016), Spain (Rodríguez-Rey, Alonso-Tapia, & Hernansaiz-Garrido, 2016), Germany (Chmitorz et al., 2018), and Holland (Consten, 2016).

The BRS was adapted for the Malaysian context (Amat et al., 2014) in a sample of 120 international university students, 63% male. The single-factor structure of the original version was verified using PCA. The factor that emerged explained 74% of the variance. Internal consistency reliability, as measured by Cronbach's alpha, was reported to be .93.

The validation of BRS in the Brazilian cultural context (de Holanda Coelho et al., 2016) was carried out in two samples of university students. Initially, PCA was performed and a single factor emerged, accounting for 43% of the total variance after the removal of item 5. Internal consistency reliability was adequate. Next, this factor structure was successfully replicated in a second sample with CFA. Measurement equivalence with the student sample of the original study (Smith et al., 2008) was evaluated and partial strong measurement invariance was established. The authors reported significant but weak correlations between BRS and positivity (Caprara et al., 2012), flourishing (Diener et al., 2010), extraversion, openness, and agreeableness, and negative correlations with neuroticism (John, Donahue, & Kentle, 1991).

The Spanish version of BRS (Rodríguez-Rey, Alonso-Tapia, & Hernansaiz-Garrido, 2016) was validated in both adults of special populations and adults of the general population. Confirmatory Factor Analysis was carried out, confirming the single-factor structure of BRS. It should be noted that although this structure was scored as a single factor, it was actually a two-factor first-order structure under a second-order BRS Resilience factor. One factor comprised the positively worded items and the second the negatively worded items, to account for the wording effect (Alonso-Tapia & Villasana, 2014; Marsh, 1996; Wu, 2008; cited in Rodríguez-Rey et al., 2016). The wording effect was attributed to the mix of positively and negatively worded BRS items intended to avoid response bias (Cronbach, 1950; see Rodríguez-Rey et al., 2016). Reversing items to this extent (50%) forced them to separate into two factors even though they measure the same dimension. To evaluate convergent, discriminant, and predictive validity the following constructs were used: resilience (Campbell-Sills & Stein, 2007; Connor & Davidson, 2003), trauma (Davidson et al., 1997), stress (Cohen, Kamarck & Mermelstein, 1983), emotionality (Fredrickson, Tugade, Waugh, & Larkin, 2003), hospital anxiety and depression (Zigmond & Snaith, 1983), posttraumatic growth (Tedeschi & Calhoun, 1996), situational resilience (Hernansaiz-Garrido et al., 2014b), situational coping (Hernansaiz-Garrido et al., 2014a), and resilience personality factors. Measurement invariance between the two samples was also examined. Correlation analysis showed positive and statistically significant relationships between the BRS and the CD-RISC (Campbell-Sills & Stein, 2007; Connor & Davidson, 2003), positive emotions (Fredrickson, Tugade, Waugh, & Larkin, 2003), problem-centered coping, sense of mastery, sense of relatedness, and emotional reactivity (Rodríguez-Rey et al., 2016). The authors reported negative correlations with stress, negative emotions, and emotion-centered coping.

Lastly, the German BRS version (Chmitorz et al., 2018) was validated in two large samples of the general population using CFA. A one-factor, a two-factor, and a method model were evaluated. The method model was specified to account for the wording effect and consisted of a general resilience factor with all 6 items and a specific method factor with only the negatively worded items; it showed optimal fit. Internal consistency reliability was reported as α = .85 and ω = .85. Convergent validity was supported by a positive and significant relationship of BRS with well-being, social support, optimism, and active coping strategies. Negative relationships were reported with somatic symptoms, anxiety and insomnia, social dysfunction, depression, and the coping strategies of religion, denial, venting, substance use, and self-blame (Chmitorz et al., 2018).

Summing up the factor structure of the versions adapted for different cultures, the BRS was reported to be unidimensional in a Malaysian sample (Amat et al., 2014) and in a Brazilian sample (de Holanda Coelho et al., 2016). The Spanish (Rodríguez-Rey et al., 2016) and German (Chmitorz et al., 2018) versions were reported to have a two-factor structure to account for method effects. There is also a Dutch BRS version (Consten, 2016), validated in a special population of a rehabilitation facility. Thus the BRS has been validated in collectivistic and individualistic cultural contexts (Hofstede, 2001; Triandis, 1995), and in special populations such as cardiac rehabilitation patients and women with fibromyalgia (chronic pain; Smith et al., 2008), people diagnosed HIV-positive (Rodríguez-Rey et al., 2016), cancer outpatients (Rodríguez-Rey et al., 2016), parents of children with intellectual disabilities or developmental disorders and parents of oncological outpatient children (Rodríguez-Rey et al., 2016), and members of a rehabilitation facility (Consten, 2016).

Sociodemographic differences in BRS scores emerged regarding gender and age. Smith et al. (2008) found that male cardiac patients scored higher than females; however, the student samples presented no gender differences. Rodríguez-Rey et al. (2016) also reported higher BRS scores for males. No age-related differences were reported by Smith et al. (2008), in contrast to Smith et al. (2010). Rodríguez-Rey et al. (2016) also found lower BRS scores for respondents aged 20 to 30 years than for those older than 31 years. More generally, findings on the relationship of resilience to gender and age are inconsistent (Lundman et al., 2007; Mehta et al., 2008). Lower income and education were also related to lower resilience levels (Wagnild, 2003; Campbell-Sills et al., 2009), as described by Singh et al. (2016).

The purpose of this study is: 1) to validate the BRS factor structure and measurement invariance across gender and age using the 3-faced validation method (Kyriazos, Stalikas, Prassa, & Yotsidi, 2018a, 2018b); 2) to model the distinctiveness of BRS from depression and stress with EFA and CFA, evidencing construct validity further; 3) to examine internal consistency reliability; and 4) to evaluate convergent and discriminant validity.

2. Methods

2.1. Participants

The sample included 2272 Greek adults of the general population (63% female), aged on average M = 35.54 years (SD = 12.35). Most of the respondents were older than 33 years (51%). Regarding marital status, 51% of the respondents were single, 41% were married or living together, and 8% were divorced. Most respondents had no children (59%); the rest had one child (14%), two children (22%), or more (5%). Regarding education, respondents held a Bachelor's degree (42%), had finished high school (24%), held a postgraduate degree (19%), were undergraduate university students (14%), or had attended only primary education (1%). Details about the method used to recruit participants are given in the Procedure section.

2.2. Materials

1) Brief Resilience Scale (BRS)

The BRS (Smith, Dalen, Wiggins, Tooley, Christopher, & Bernard, 2008) is a 6-item measure of resilience, focusing on the ability to recover from stress and adversity. Responses are rated on a 5-point Likert scale from Strongly Disagree (1) to Strongly Agree (5). The higher the mean BRS score, the more resilient the respondent. The BRS is a single-factor scale. Half of the items are reverse scored to avoid social desirability response bias (Cronbach, 1950). Smith et al. (2008) reported Cronbach's alphas from .80 to .91 over four samples. The BRS was translated into Greek by Stalikas & Kyriazos (2017) with the translation/back-translation method (Brislin, 1970). Items 2, 4, and 6 were reversed in all analyses, as proposed by Smith et al. (2008), to avoid desirability response bias (Cronbach, 1950).
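For clarity, the scoring rule can be expressed as a minimal Python sketch; the column names brs1-brs6 are hypothetical and the block is an illustration of the reverse-coding and mean-scoring described above, not part of the original analyses.

```python
import pandas as pd

BRS_ITEMS = ["brs1", "brs2", "brs3", "brs4", "brs5", "brs6"]  # hypothetical column names
REVERSED = ["brs2", "brs4", "brs6"]                           # negatively worded items

def score_brs(df: pd.DataFrame) -> pd.Series:
    """Mean BRS score after reverse-coding items 2, 4 and 6 (1-5 Likert responses)."""
    scored = df[BRS_ITEMS].copy()
    scored[REVERSED] = 6 - scored[REVERSED]  # 1<->5, 2<->4, 3 stays 3
    return scored.mean(axis=1)
```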

2) The scale of Positive and Negative Experience (SPANE-12)

This is a 12-item scale of emotionality by Diener et al. (2009, 2010) with two factors of 6 one-word items each: positive experiences (SPANE-P) and negative experiences (SPANE-N). Items are scored on a Likert scale from 1 (very rarely or never) to 5 (very often or always). Experiences are evaluated over a 4-week time frame. Possible scores per dimension range from 6 to 30. Their difference (Affect Balance or SPANE-B) can range from −24 to 24. In this study, Cronbach's alpha for SPANE-P and SPANE-N was .90 and .85, respectively.

3) The scale of Positive and Negative Experience 8 (SPANE-8)

This is a briefer version of SPANE with 8 items (4 in SPANE-P and 4 in SPANE-N). It is a post-hoc empirical version (Kyriazos, Stalikas, Prassa, & Yotsidi, 2018b; Diener et al., 2010) with one general feeling per factor instead of three in the original SPANE (Diener et al., 2010: p. 145). The 4 positive experiences are Pleasant, Happy, Joyful, and Contented, and the 4 negative ones Bad, Sad, Afraid, and Angry. Cronbach's alpha in this study was .85 for SPANE-8 P and .75 for SPANE-8 N.

4) Depression Anxiety Stress Scale (DASS)

The DASS (Lovibond & Lovibond, 1995) measures depression, anxiety, and stress in three 7-item factors. Items are rated on a four-point Likert scale from 0 (did not apply to me at all) to 3 (applied to me very much, or most of the time) over the past week. The higher the score the more intense the emotional distress. Cronbach’s alpha was reported α = .97 for adults of the general population (Henry & Crawford, 2005). In this study, internal consistency reliability for Depression, Anxiety, Stress and DASS-21 Total was .90, .88, .89 and .95 respectively (see also Kyriazos et al., 2018a ).

5) Depression Anxiety Stress Scale, Short form (DASS-9)

DASS-9 (Yusoff, 2013; in Greek, Kyriazos, Stalikas, Prassa, & Yotsidi, 2018a) is a briefer version of DASS-21 (Lovibond & Lovibond, 1995), empirically derived by Yusoff (2013). DASS-9 evaluates depression, anxiety, and stress in three factors with 3 items each. All 9 items are rated on a 4-point Likert scale from 0 (did not apply to me at all) to 3 (applied to me very much, or most of the time). Symptoms are evaluated over the last week. Internal consistency reliability for Depression, Anxiety, and Stress was .52, .57, and .55, respectively (Yusoff, 2013). In this study, internal consistency reliability for Depression, Anxiety, Stress, and DASS-9 Total was .79, .77, .73, and .89, respectively (see also Kyriazos et al., 2018a).

6) World Health Organization Quality of Life-Brief scale (WHOQOL-BREF)

The WHO Quality of Life-Brief (WHOQOL Group, 1998a, 1998b) measures perceived life quality. It is the short version of the WHOQOL-100 (c.f. Skevington, 1999). It comprises 26 items rated on a 5-point Likert scale indicating either intensity, capacity, frequency, or judgment (Skevington et al., 2004). Items tap four QOL dimensions: Physical health, Psychological health, Social Relations, and Environment. Cronbach's alphas were reported as .82, .81, .68, and .80, respectively (Skevington, Lotfy, & O'Connell, 2004). Cronbach's alpha for the total scale in this study was α = .91.

7) Flourishing Scale (FS)

The FS (Diener et al., 2009, 2010) is an 8-item unidimensional measure of flourishing. Items are rated on a 7-point Likert scale from 1 (Strongly Disagree) to 7 (Strongly Agree). Diener et al. (2010) reported an internal consistency reliability of α = .87. In this study, Cronbach's alpha was .81.

8) Warwick-Edinburgh Mental Well-being Scale (WEMWBS)

The WEMWBS (Universities of Warwick and Edinburgh; Tennant et al., 2007 ) is a unidimensional scale of mental well-being and psychological functioning. The 14 items of the scale are rated on a 5-point Likert scale from None of the time to All of the time. Internal consistency reliability was adequate (.91 in an adult sample and .89 in an undergraduate student sample; Tennant et al., 2007 ). Internal consistency reliability in this study was α = .91.

9) Mental Health Continuum-Short Form (MHC-SF)

Mental Health Continuum-Short Form (Keyes et al., 2008; Keyes, 2002) is a 14-item measure of well-being containing 3 factors: emotional (EWB), social (SWB) and psychological (PWB). Responses are rated on a 6-point Likert scale (never, once or twice a month, about once a week, two or three times a week, almost every day, every day) over the last month. Internal consistency reliability for the total MHC-SF was reported to be greater than .80 (Keyes, 2005). Internal reliability for the total scale in this study was α = .90.

10) Satisfaction with Life Scale (SWLS)

The Satisfaction with Life Scale (Diener, Emmons, Larsen, & Griffin, 1985) is a brief measure of life satisfaction. The 5 items of the scale are rated on a 7-point Likert scale from 1 (Strongly Disagree) to 7 (Strongly Agree). Internal consistency reliability (Cronbach's alpha) has been reported from .79 to .89 (Pavot & Diener, 1993). In this study, Cronbach's alpha was α = .88.

11) Meaning in Life Questionnaire (MLQ)

The MLQ (Steger et al., 2006) measures the presence of and search for meaning in life, with 10 items tapping two factors (Presence of meaning and Search for meaning). Items are rated on a 7-point Likert scale (from "Absolutely True" to "Absolutely Untrue"). Steger et al. (2006) reported Cronbach's alphas of .86 for the Presence factor and .87 for the Search factor. Internal consistency reliability in this study was α = .78.

12) Trait Hope Scale (HS)

Trait Hope Scale (Snyder et al., 1991) is a 12-item measure of dispositional hope having two factors: Agency and Pathways. Items are rated on a Likert scale ranging from 1 (Definitely False) to 8 (Definitely True). Snyder et al. (1991) reported Cronbach’s alphas for the total scale from .74 to .84. Internal reliability in this study was α = .89.

13) The Gratitude Questionnaire (GQ-6)

The GQ-6 (McCullough, Emmons, & Tsang, 2002) is a 6-item scale of gratitude experience. Items are rated on a 7-point Likert scale from 1 (strongly disagree) to 7 (strongly agree). GQ-6 has a unidimensional factor structure. Items 3 and 6 are reverse scored. The internal consistency reliability of GQ-6 in the original study was .82 (McCullough et al., 2002); in this study, it was α = .68.

2.3. Procedure

One hundred and fifty undergraduate psychology students assisted with the online data collection by forwarding a link to an electronic test battery (in Google Forms© format) to 15 - 20 adults from their social environment. Students participating in the study received extra credit. All fields of the digital test battery were set as required. The following process took place for the data collection. Initially, students attended a training course on the administration of digital psychology questionnaires. Then, pilot testing of the digital test battery followed, to track ambiguities in the questionnaires used or potential flaws in the digital procedure. The completion time was approximately 15 minutes. After successful pilot testing, students received a link to the official study.

2.4. Research Design

The research scope was twofold: a) in three subsamples (EFA, CFA 1, and CFA 2), to establish construct validity with EFA and confirm it with two CFAs, the second on a subsample of equal power; b) in the entire sample (Total sample), to evaluate strict measurement invariance across gender and age. This is a construct validation procedure we termed the "3-faced construct validation method" (see Kyriazos et al., 2018a, 2018b). Table 1 presents an overview of the method as implemented in the present study.

Regarding the factor analysis methods applied, in the first subsample (EFA subsample), Exploratory Factor Analysis (EFA) and Bifactor EFA were implemented, testing three alternative models. In the second subsample (CFA 1 subsample), Independent Cluster Model Confirmatory Factor Analysis (ICM-CFA), CFA Bifactor, Exploratory Structural Equation Modeling (ESEM), and ESEM Bifactor analyses were carried out on seven alternative models. The third subsample (CFA 2 subsample) was used to cross-validate the CFA model established in the CFA 1 subsample. Then, a multigroup CFA (MGCFA) was carried out in the entire sample (N = 2272) using the CFA 2 optimal model as a baseline model, to test for strict measurement invariance across gender and age (see Table 1 for an overview of this method). Reliability analyses using Cronbach's α (Cronbach, 1951) and McDonald's ω (McDonald, 1999; Werts, Lim, & Joreskog, 1974) coefficients were carried out in the entire sample. Convergent validity was examined first with Average Variance Extracted (AVE; Fornell & Larcker, 1981). Convergent/discriminant validity was then examined based on correlation analysis over the entire sample. The correlation of BRS with mental distress, well-being, emotionality, positivity, and quality of life was examined. Next, 20 composite models were evaluated using BRS and DASS-21 (Lovibond & Lovibond, 1995) and BRS and DASS-9 (Yusoff, 2013; Kyriazos et al., 2018a) with EFA and CFA in two different subsamples. Finally, normative data were calculated. Google Forms® was used for data collection. SPSS Version 25 (IBM, 2017), Stata Version 14.2 (StataCorp, 2015), and Mplus Version 7.0 (Muthen & Muthen, 2012) were used for all analyses.
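As an illustration of the sample-splitting step of the 3-faced method (the analyses themselves were run in SPSS, Stata, and Mplus as noted above), the following Python sketch reproduces the random 20/40/40 partition; df is a hypothetical DataFrame holding all cases.

```python
import numpy as np
import pandas as pd

def three_faced_split(df: pd.DataFrame, seed: int = 2018):
    """Randomly partition the full sample into EFA (20%), CFA 1 (40%) and CFA 2 (40%)."""
    idx = np.random.default_rng(seed).permutation(df.index.to_numpy())
    n_efa = round(0.20 * len(idx))
    n_cfa1 = round(0.40 * len(idx))
    efa = df.loc[idx[:n_efa]]
    cfa1 = df.loc[idx[n_efa:n_efa + n_cfa1]]
    cfa2 = df.loc[idx[n_efa + n_cfa1:]]
    return efa, cfa1, cfa2

# For N = 2272 this gives subsamples close to the 452/910/910 split used here;
# the cases-per-indicator ratio is simply len(subsample) / 6 (six BRS items).
```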

3. Results

3.1. Data Screening and Sample Power

There were no missing values in any variable of the data set because all fields of the digital test battery were set as required (see the Procedure section). To examine the construct validity of BRS, the total sample (N = 2272) was randomly split into three parts (20%, 40%, and 40%). EFA was carried out in the first subsample (nEFA = 452, 20%). CFA followed in both the second subsample (nCFA1 = 910, 40%) and the third subsample of equal sample power (nCFA2 = 910, 40%; CFA 1 and CFA 2, respectively). CFA 2 was carried out to cross-validate the optimal model established in CFA 1. The number of cases per BRS indicator for the total sample, the first subsample (EFA), and the second and third subsamples (CFA 1 and CFA 2) was 378.67, 75.33, and 151.67, respectively.

3.2. Univariate and Multivariate Normality

The data in all four samples (Total, EFA, CFA 1, and CFA 2) violated the univariate and multivariate normality assumption. Kolmogorov-Smirnov tests

Table 1. Overview of the 3-faced construct validation method for BRS.

EFA = Exploratory Factor Analysis, ICM-CFA = Independent Cluster Model Confirmatory Factor Analysis, ESEM = Exploratory Structural Equation Modeling.

(Massey, 1951) on all 6 BRS items were statistically significant (p < .001), indicating an absence of univariate normality.

Multivariate normality was examined by 1) Mardia’s multivariate kurtosis test (Mardia, 1970); 2) Mardia’s multivariate skewness test (Mardia, 1970); 3) Henze-Zirkler’s consistent test (Henze & Zirkler, 1990), and 4) Doornik-Hansen omnibus test (Doornik & Hansen, 2008). The null hypothesis was rejected in all four tests (almost all p values < .0001; see Table 2), suggesting a multivariate normality violation in all four samples (Total, EFA subsample, CFA 1 subsample, CFA 2 subsample).
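A rough Python equivalent of these screening checks is sketched below, assuming the pingouin package for the Henze-Zirkler test and hypothetical item column names; Mardia's and Doornik-Hansen tests were run in Stata and are not reproduced here.

```python
import pandas as pd
import pingouin as pg          # assumed available; provides the Henze-Zirkler test
from scipy import stats

BRS_ITEMS = ["brs1", "brs2", "brs3", "brs4", "brs5", "brs6"]  # hypothetical names

def normality_screen(df: pd.DataFrame) -> None:
    # Univariate: Kolmogorov-Smirnov test of each standardized item against N(0, 1)
    for item in BRS_ITEMS:
        z = (df[item] - df[item].mean()) / df[item].std(ddof=1)
        ks = stats.kstest(z, "norm")
        print(f"{item}: KS = {ks.statistic:.3f}, p = {ks.pvalue:.4f}")
    # Multivariate: Henze-Zirkler consistent test on the full item matrix
    hz, p, normal = pg.multivariate_normality(df[BRS_ITEMS], alpha=0.05)
    print(f"Henze-Zirkler = {hz:.3f}, p = {p:.4f}, multivariate normal = {normal}")
```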

3.3. Exploratory Factor Analysis (EFA)

After sample splitting, in this phase of the 3-faced construct validation method (Kyriazos et al., 2018a, 2018b), EFA was carried out in the first subsample (20%, nEFA = 452) to establish a structure (Howard et al., 2016). EFA factors were extracted with the MLR rescaling-based estimator (Muthen & Muthen, 2012). The MLR chi-square and standard errors are corrected for non-normality, unlike those of similar estimation methods (Muthén & Asparouhov, 2011; Wang & Wang, 2012; Brown, 2015). Furthermore, MLR can handle samples of all sizes (Bentler & Yuan, 1999; Muthen & Asparouhov, 2002; Wang & Wang, 2012; Brown, 2015). Geomin factor rotation was used in the EFA models and Bi-Geomin (Jennrich & Bentler, 2011) in the EFA Bifactor model. Goodness of fit of the EFA models was evaluated with: RMSEA ≤ .06 (90% CI ≤ .06), SRMR ≤ .08, CFI ≥ .95, and TLI ≥ .95 (Hu & Bentler, 1999; Brown, 2015). Additionally, the chi-square/df ratio ≤ 3 rule was also used (Kline, 2016).
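A minimal sketch of such an extraction in Python, assuming the factor_analyzer package, with an oblimin rotation as a readily available stand-in for Geomin and plain ML in place of the robust MLR estimator (so fit values would differ from those reported here); df_efa and the item names are hypothetical.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # assumed installed (pip install factor-analyzer)

BRS_ITEMS = ["brs1", "brs2", "brs3", "brs4", "brs5", "brs6"]  # hypothetical names

def efa_two_factor(df_efa: pd.DataFrame) -> pd.DataFrame:
    """ML extraction of two obliquely rotated factors; returns the rotated loading matrix."""
    fa = FactorAnalyzer(n_factors=2, method="ml", rotation="oblimin")
    fa.fit(df_efa[BRS_ITEMS])
    # Correlating the estimated factor scores gives a rough analogue of the
    # factor intercorrelation reported for MODEL 2 below.
    scores = fa.transform(df_efa[BRS_ITEMS])
    print(np.corrcoef(scores, rowvar=False).round(3))
    return pd.DataFrame(fa.loadings_, index=BRS_ITEMS, columns=["F1", "F2"])
```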

Three EFA models were tested. MODEL 1 is the original single-factor model proposed by Smith et al. (2008) and replicated by others (Amat et al., 2014; de Holanda Coelho et al., 2016). MODEL 2 is a two-factor model with items 1, 3, and 5 in Factor 1 and items 2, 4, and 6 in Factor 2, separating non-reversed items (Factor 1) from reversed items (Factor 2). This extracted structure was identical to a two-factor first-order CFA BRS structure proposed by Rodríguez-Rey et al. (2016) to account for response bias method effects (Alonso-Tapia & Villasana, 2014;

Table 2. Multivariate normality tests over the entire sample and the three subsamples.

Note. All p values < .0001, *p = .0723.

Marsh, 1996; Wu, 2008; as reported by Rodríguez-Rey et al., 2016). Note, however, that the model tested here is an EFA structure that emerged from the data, while the proposed model was defined by CFA. MODEL 3 is a higher-order EFA Bifactor model (Jennrich & Bentler, 2011) with items 1, 3, and 5 in Factor 1 (non-reversed items), items 2, 4, and 6 in Factor 2 (reversed items), and a general BRS resilience factor. Concerning model fit, MODEL 1 barely showed a fit within acceptable limits. MODEL 2 had a good fit, with all goodness-of-fit measures well within acceptability limits, a factor intercorrelation of .146, and factor loadings from .512 to .729 in Factor 1 and from .555 to .730 in Factor 2. MODEL 3 failed to be identified. (See Table 3 for all EFA model fit statistics.)

3.4. Confirmatory Factor Analysis (CFA 1)

In this phase of the 3-faced construct validation method (Kyriazos et al., 2018a, 2018b), a CFA was carried out in the second subsample (40%, nCFA1 = 910) to validate the BRS structure extracted by EFA in the previous phase. MLR was also used to estimate model parameters, and the goodness of fit of all CFA models was examined with: RMSEA ≤ .06 (90% CI ≤ .06), SRMR ≤ .08, CFI ≥ .95, and TLI ≥ .95 (Hu & Bentler, 1999; Brown, 2015). Additionally, the chi-square/df ratio ≤ 3 rule was also used (Kline, 2016).

Based on previous literature and the EFA carried out in the previous phase, the following seven models were tested. MODEL 1 was the single-factor model originally proposed by Smith et al. (2008) and validated by Amat et al. (2014) and de Holanda Coelho et al. (2016). MODEL 2 is a variation of MODEL 1 with error covariances added (items 3 - 4, 4 - 5, and 4 - 6). MODEL 3 was the two-factor model that emerged in EFA, with Factor 1 containing the non-reversed items (1, 3, 5) and Factor 2 the reversed items (2, 4, 6). This model also replicated the first-order factor structure proposed by Rodríguez-Rey et al. (2016) within a second-order model to account for the response bias method effect (Alonso-Tapia & Villasana, 2014; Marsh, 1996; Wu, 2008; cited in Rodríguez-Rey et al., 2016). MODEL 4 was a variation of MODEL 3 estimated with the Exploratory Structural Equation Modeling method (ESEM; Asparouhov & Muthen, 2009). We did not test the higher-order model proposed by Rodríguez-Rey et al. (2016) because a traditional higher-order CFA model over only two first-order factors is not possible due to under-identification

Table 3. Model fit for the EFA models evaluated of BRS.

Note. Factor 1 = Items 1, 3, 5; Factor 2 = items 2, 4, 6; FI = Factor Intercorrelations; Estimator = MLR; EFA Factor rotation = Geomin.

(Wang & Wang, 2012). Instead, we tested a higher-order CFA Bifactor (Harman, 1976; Holzinger & Swineford, 1937) and an ESEM Bifactor model with two factors (MODELS 5 and 6, respectively), since bifactor models do not have this restriction (see Brown, 2015). MODEL 7 was a CFA Bifactor model with the two-factor structure proposed by Chmitorz et al. (2018).
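For readers who wish to reproduce the competing specifications outside Mplus, the sketch below writes MODEL 1 and MODEL 3 in lavaan-style syntax for the semopy package; this is an assumption (semopy fits the models with plain ML rather than MLR, so fit indices will not match exactly), and the item names are hypothetical.

```python
import pandas as pd
from semopy import Model, calc_stats  # assumed available (pip install semopy)

# lavaan-style descriptions of two of the competing specifications.
MODEL_1 = """
resilience =~ brs1 + brs2 + brs3 + brs4 + brs5 + brs6
"""

MODEL_3 = """
F1 =~ brs1 + brs3 + brs5
F2 =~ brs2 + brs4 + brs6
F1 ~~ F2
"""

def fit_cfa(description: str, data: pd.DataFrame) -> pd.DataFrame:
    """Fit a CFA specification and return semopy's global fit statistics."""
    model = Model(description)
    model.fit(data)
    return calc_stats(model)

# e.g. print(fit_cfa(MODEL_3, cfa1_subsample))
```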

Regarding model fit, MODEL 1 showed an acceptable fit except for the RMSEA. MODEL 2 showed a remarkably improved fit after the addition of error covariances to MODEL 1, with all measures within limits and factor loadings from .572 to .739. MODEL 3 achieved an adequate fit, with almost all measures within acceptability and the RMSEA on the verge of acceptability; factor loadings ranged from .626 to .685 (Factor 1) and .630 to .739 (Factor 2), with a factor intercorrelation of .828 (see Table 4 for the goodness-of-fit statistics of all models). MODELS 4 - 7 either failed to be identified or failed to converge. Thus, two competing optimal models emerged: a) the single factor with error covariances (MODEL 2) and b) the two-factor model with reversed and non-reversed items separated into two factors (MODEL 3).

3.5. Cross-Validating the Optimal CFA Models in a Different Subsample (CFA 2)

In this phase of the 3-faced construct validation method (Kyriazos et al., 2018a, 2018b), a second CFA was carried out in a different subsample of equal power to the previous one (40%, nCFA2 = 910). Here, the fit of all the models tested in CFA 1 was evaluated further. MODEL 1 showed a poor fit. MODEL 2 (Figure 1(a)) had chi-square/df, TLI, and RMSEA beyond acceptable limits, showing a fit divergence in comparison to CFA 1. The fit of MODEL 3 (Figure 1(b)) was satisfactory, with all measures within expected limits; factor loadings ranged from .559 to .706 (Factor 1) and .671 to .733 (Factor 2), with a factor intercorrelation of .745 (< .80; see Table 5 for details).

3.6. Measurement Invariance across Age and Gender

In this phase of the 3-faced construct validation method (Kyriazos et al., 2018a, 2018b), we examined BRS measurement invariance across gender and age in the entire sample (N = 2272) using the two-factor model as a baseline model. Invariance was examined with Chen's (2007: p. 501) criteria for large samples (N = 2272 > 300): a decrease in CFI of no more than .01 (ΔCFI ≥ −.01) and an increase in RMSEA of no more than .015 (ΔRMSEA ≤ .015).
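The decision rule amounts to a simple comparison of each constrained model against the less constrained one; a minimal sketch (the example values are illustrative, not the estimates reported below):

```python
def invariance_retained(cfi_constrained: float, cfi_baseline: float,
                        rmsea_constrained: float, rmsea_baseline: float) -> bool:
    """Chen's (2007) large-sample rule: keep the more constrained (more invariant)
    model when CFI drops by no more than .01 and RMSEA rises by no more than .015."""
    return (cfi_constrained - cfi_baseline >= -0.01 and
            rmsea_constrained - rmsea_baseline <= 0.015)

# e.g. invariance_retained(.985, .989, .045, .038) -> True (illustrative values)
```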

Table 4. Model fit for the BRS CFA Models tested in the first CFA.

Note. Factor 1 = Items 1, 3, 5; Factor 2 = Items 2, 4, 6; FI = Factor intercorrelation; Estimator = MLR; Bold indicates optimal fit.

Table 5. Model fit for the Models tested in the second CFA.

Note. Factor 1 = Items 1, 3, 5; Factor 2 = items 2, 4, 6; FI = Factor intercorrelation; Estimator = MLR; Bold indicates optimal fit.

Figure 1. BRS Path diagrams emerged from the cross-validating CFA 2: (a) of the single factor BRS model with error covariances and (b) of the two-factor BRS model.

To evaluate measurement invariance across gender, we tested the two-factor model separately in each gender group (males, N = 832 versus females, N = 1440). The fit of this model was good for males (Chi-square = 17.08, Chi-square/df = 2.14, CFI = .988, RMSEA = .037) and equally good for females (Chi-square = 25.75, Chi-square/df = 3.22, CFI = .989, RMSEA = .039). Next, the model was evaluated in both gender groups simultaneously. This model (M1) showed acceptable fit (see Table 6), indicating that configural invariance was supported. Then, factor loadings were constrained to equality. As shown in Table 6, the goodness-of-fit measures of this model (M2) suggested that weak invariance was supported. Then, all indicator means were constrained to equality. In this model (M3), ΔCFI and ΔRMSEA were beyond the acceptability thresholds, so strong invariance was not supported, as expected (see also de Holanda Coelho et al., 2016). Thus, BRS indicator means must be compared between men and women cautiously. Finally, indicator residuals were constrained to equality, and this model (M4) suggested that strict measurement invariance was supported.

The process was repeated to evaluate invariance across age, testing the two-factor model separately in two age groups (18 - 32 years, 49% versus 33 - 69 years, 51%). The fit of this model was good for those aged 18 - 32 years (Chi-square = 21.39, Chi-square/df = 2.67, CFI = .988, RMSEA = .039) and equally good for those aged 33 - 69 years (Chi-square = 22.31, Chi-square/df = 2.79, CFI = .989, RMSEA = .039). Next, the model was evaluated in both age groups simultaneously. This model (M1) showed good fit, suggesting that configural invariance was supported. Then, factor loadings (M2), indicator means (M3), and indicator residuals (M4) were consecutively constrained to equality, evaluating weak, strong, and strict invariance, respectively. Comparison of M2 to M1 showed no meaningful fit deterioration, supporting weak invariance. Comparisons of M3 to M2 and M4 to M3 indicated that ΔCFI (but not ΔRMSEA) was beyond acceptability for strong and strict invariance. This means that age comparisons of indicator means and indicator residuals should be made with caution (see the measurement invariance results in Table 6).

Table 6. Fit measures of the nested BRS models tested to validate strict measurement invariance across gender and age.

Note. Estimator = MLR.

3.7. Reliability and AVE-Based Validity

To examine internal consistency reliability, Cronbach's alpha (α; Cronbach, 1951) and the Omega coefficient (ω total; McDonald, 1999; Werts, Lim, & Joreskog, 1974) were estimated. Average Variance Extracted (AVE; Fornell & Larcker, 1981) was also calculated to examine convergent validity (Malhotra & Dash, 2011). Alpha and Omega values ≥ .70 are considered adequate (Hair et al., 2010), whereas Kline (1999) suggested that alphas can be as low as .60 for psychological constructs. The suggested threshold for AVE is ≥ .50 (Fornell & Larcker, 1981; Hair et al., 2010; Awang et al., 2015).
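All three coefficients can be computed directly from the raw item scores and the standardized loadings; a minimal Python sketch follows, in which the loadings in the example call are illustrative values and not the estimates reported below.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha from raw item scores (items in columns)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def omega_total_and_ave(loadings) -> tuple[float, float]:
    """Omega total and AVE from standardized loadings of a single factor,
    taking each uniqueness as 1 - loading**2."""
    lam = np.asarray(loadings, dtype=float)
    uniq = 1.0 - lam ** 2
    omega = lam.sum() ** 2 / (lam.sum() ** 2 + uniq.sum())
    ave = float(np.mean(lam ** 2))
    return omega, ave

# e.g. omega_total_and_ave([.60, .65, .66, .68, .70, .73])  # illustrative loadings
```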

Cronbach's alpha (Cronbach, 1951) in the total sample (N = 2272) for the entire BRS was .80. Omega Total (McDonald, 1999; Werts, Lim, & Joreskog, 1974) in the total sample (N = 2272) for the entire BRS was ω = .78, and the Average Variance Extracted (AVE; Fornell & Larcker, 1981) was AVE = .44. Alpha, Omega Total, and AVE values per factor are presented in Table 7.

3.8. Convergent and Discriminant Validity with Correlation Analysis

The correlation between BRS and other constructs was evaluated in the total sample (N = 2272) with 12 measures separated into five groups (Table 8). Correlations between the BRS total and the groups of measures indicating Mental Distress, Well-Being, Positivity, Affect, and Quality of Life were on average medium to strong: M = −.40, M = .36, M = .32, M = .47 (SPANE-12 B & SPANE-8 B), and M = .36, respectively (all significance levels at p < .001). Correlations ranged from .49 (Trait Hope by Snyder et al., 1991 and WEMWBS by Tennant et al., 2007) to −.45 (DASS-21 Depression by Lovibond & Lovibond, 1995) and −.42 (DASS-9 Depression; Yusoff, 2013 and Kyriazos et al., 2018a). For BRS Factor 1 (items 1, 3, 5), correlations with the Mental Distress, Well-Being, Positivity, Affect, and Quality of Life groups of measures were of weak to moderate magnitude: M = −.30, M = .34, M = .30, M = .39 (SPANE-12 B & SPANE-8 B), and M = .35, respectively (all significance levels at p < .001). They ranged from .46 (Trait Hope) to −.35 (DASS-21 Depression) and −.32 (DASS-9 Depression). BRS Factor 2 (items 2, 4, 6) correlated on average with moderate magnitude with the Mental Distress, Well-Being, Positivity, Affect, and Quality of Life groups of measures: M = −.40, M = .31, M = .27, M = .44 (SPANE-12 B & SPANE-8 B), and M = .32, ranging from .44 (SPANE-B) to −.44 (DASS-21 Depression), all significance

Table 7. Reliability and AVE convergent validity for BRS.

Note. BRS Factor 1 = items 1, 3, 5 and BRS Factor 2 = items 2, 4, 6.

Table 8. Correlations between BRS and other measures.

Note. All correlation coefficients were significant at the p < .001 level. BRS Factor 1 = items 1, 3, 5 (non-reversed items). BRS Factor 2 = items 2, 4, 6 (reversed items).

levels at p < .001. The second largest positive correlations were with Trait Hope and WEMWBS (.43) and the second largest negative with SPANE-N (−.43). See all correlations in Table 8.
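The correlational analysis itself reduces to a standard Pearson matrix; a sketch with hypothetical column names for the scale totals and subscale scores:

```python
import pandas as pd

# Hypothetical column names for the scale totals used in the correlation analysis.
MEASURES = ["dass21_dep", "dass21_anx", "dass21_str", "dass9_dep", "dass9_anx", "dass9_str",
            "wemwbs", "mhc_sf", "flourishing", "swls", "hope", "mlq", "gq6",
            "spane12_b", "spane8_b", "whoqol"]

def brs_correlation_table(df: pd.DataFrame) -> pd.DataFrame:
    """Pearson correlations of the BRS total and its two factors with the other measures."""
    brs_cols = ["brs_total", "brs_f1", "brs_f2"]
    return df[brs_cols + MEASURES].corr().loc[MEASURES, brs_cols].round(2)
```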

3.9. Modeling BRS Distinctiveness from DASS-Depression and DASS-Stress

The BRS was developed to measure resilience, in other words, the ability to recover from stress and adversity (Smith et al., 2008). In this line, the absence of depression or anxiety during hardship has been conceptualized as the presence of resilience (Chmitorz et al., 2018). Therefore, to examine to what extent resilience as measured by the BRS is distinct from Stress and Depression, thus supporting construct validity further, we carried out an EFA and a CFA. Depression and Stress were measured both by DASS-21 (Lovibond & Lovibond, 1995) and by DASS-9 (Yusoff, 2013; Kyriazos et al., 2018a). The sample (N = 2272) was split into two new subsamples so that EFA and CFA could be performed in different subsamples, one for EFA (n = 500) and one for CFA (n = 1772).

3.10. Exploratory Factor Analysis of the Compound BRS Models

During this phase, 10 alternative EFA models were extracted (MLR extraction with Geomin rotation), either single-factor (Table 9, MODELS 1-3) or two-factor (Table 9, MODELS 4-6). The single-factor models collapsed into one factor either BRS (Smith et al., 2008) and DASS-21 (Lovibond & Lovibond, 1995), BRS and DASS-21 Stress, or BRS and DASS-21 Depression. They all had a poor fit with negative factor loadings, as expected, suggesting the distinctiveness of BRS from DASS-21, DASS-21 Stress, and DASS-21 Depression. The two-factor models extracted had one factor with BRS and a second with either DASS-21, DASS-21 Stress, or DASS-21 Depression (Table 9, MODELS 4-6). Two clear factors emerged with adequate primary loadings. As expected, the factor intercorrelations were all negative and strong: for BRS with DASS-21 −.377, with DASS-21 Stress −.407, and with DASS-21 Depression −.393. The strongest negative factor intercorrelation was with DASS-21 Stress, evidencing that resilience measured with the BRS (Smith et al., 2008) shows a negative relationship with Stress and Depression. The goodness-of-fit measures of the two-factor models of BRS with DASS-21 Stress and BRS with DASS-21 Depression were within acceptable limits.

Then this process was repeated with BRS (Smith et al., 2008), DASS-9 Stress (Yusoff, 2013; Kyriazos et al., 2018a), and DASS-9 Depression (Yusoff, 2013; Kyriazos et al., 2018a); see Table 9, MODELS 7-10. The single-factor models extracted (BRS and DASS-9 Stress, BRS and DASS-9 Depression) showed poor fit with negative factor loadings, suggesting that BRS and DASS-9 measure distinct constructs. The two two-factor models extracted had BRS in Factor 1 and either DASS-9 Stress or DASS-9 Depression in Factor 2 (Table 9, MODELS 9 and 10). As expected, factor intercorrelations were negative and strong: for BRS with DASS-9 Stress −.482, and with DASS-9 Depression −.344. The goodness of fit

Table 9. The fit of the EFA models with BRS-DASS-21 and BRS-DASS-9.

Note. *Factor 1 = BRS Factor 2 = DASS or DASS-Depression or DASS-stress; FI = Factor intercorrelation; Estimator = MLR, Factor rotation = Geomin.

measures of the two-factor models were tolerable, and the two-factor structure was clear with adequate primary loadings (see Table 9).

Confirmatory Factor Analysis

The two-factor models were next evaluated further with CFA (MLR parameter estimation) in a subsample of n = 1772. The first three models had BRS (Smith et al., 2008) in one factor and either DASS-21 (Lovibond & Lovibond, 1995), DASS-21 Stress, or DASS-21 Depression in a second, orthogonal factor (Table 10, MODELS 1-3). Additionally, three alternative two-factor models were tested with BRS in one factor and either DASS-21, DASS-21 Stress, or DASS-21 Depression in a second, correlated factor (Table 10, MODELS 4-6). The fit of the models with orthogonal factors was poor, with most goodness-of-fit measures outside acceptable limits or on the verge of acceptability. The same was true for the model with two correlated factors of BRS and the entire DASS-21. The models with two correlated factors of BRS and DASS-21 Stress (Table 10, MODEL 5) or BRS and DASS-21 Depression (Table 10, MODEL 6) had a good fit, with all goodness-of-fit measures within acceptable limits. As expected, the factor intercorrelations were all negative and strong: for BRS with DASS-21 −.561, with DASS-21 Stress −.505 (see Figure 2(a)), and with DASS-21 Depression −.545 (see Figure 2(b)).

Table 10. Model fit for the CFA two-factor models of BRS-DASS 21, and BRS-DASS 9.

Note. *Factor 1 = BRS Factor 2 = DASS or DASS-Depression or DASS-stress; FI = Factor intercorrelation; Estimator = MLR.

Factor loadings were strong in all models (See Table 10 for more details).

Similar findings emerged for the two-factor models tested with BRS (Smith et al., 2008) in the first factor and DASS-9 Stress or DASS-9 Depression (Yusoff, 2013; Kyriazos et al., 2018a) in a second, orthogonal factor (Table 10, MODELS 7 and 8). Two variations of these models were also tested with BRS in the first factor and DASS-9 Stress or DASS-9 Depression in a second, correlated factor (Table 10, MODELS 9 and 10). The goodness-of-fit measures of these two-factor models of BRS with DASS-9 Stress (MODEL 9) and BRS with DASS-9 Depression (MODEL 10) in two correlated factors were adequate (see Table 10). As expected, factor intercorrelations were negative and strong: for BRS with DASS-9 Stress −.469 (see Figure 3(a)), and with DASS-9 Depression −.532 (see Figure 3(b)). All factor loadings were acceptable (Table 10). This is evidence that resilience, as measured by the BRS, and Depression and Stress are distinct but correlated constructs, suggesting that BRS has construct validity.
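As a sketch of how one of these compound models could be specified outside Mplus, the block below reuses the semopy assumptions from the earlier CFA sketch, with hypothetical names for the DASS-21 Stress items; the sign of the estimated BRS-Stress factor covariance is the quantity of interest.

```python
import pandas as pd
from semopy import Model, calc_stats  # same assumptions as the earlier CFA sketch

# BRS and DASS-21 Stress as two correlated factors; str1-str7 are hypothetical
# labels for the seven DASS-21 Stress items.
COMPOUND_STRESS = """
BRS =~ brs1 + brs2 + brs3 + brs4 + brs5 + brs6
STRESS =~ str1 + str2 + str3 + str4 + str5 + str6 + str7
BRS ~~ STRESS
"""

def fit_compound(data: pd.DataFrame) -> None:
    model = Model(COMPOUND_STRESS)
    model.fit(data)
    print(calc_stats(model))  # global fit indices
    # In the parameter estimates, the BRS-STRESS covariance row is expected
    # to carry a negative sign, mirroring the intercorrelations reported above.
    print(model.inspect())
```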

3.11. Normative Data

Across the total sample (N = 2272), mean BRS score was 3.46 (SD = .76), corres-

Figure 2. Path diagrams: (a) of the Dual Factor Stress Model with BRS (f1) and DASS-21 Stress (f2), and (b) of the Dual Factor Depression Model with BRS (f1) and DASS-21 Depression (f2).

ponding to a point between "Neutral" (3) and "Agree" (4) on the 5-point Likert scale. Of the respondents in this sample, 25%, 50%, and 75% scored ≤ 3.00, ≤ 3.50, and ≤ 4.00, respectively. Smith et al. also reported mean scores of 3.53 - 3.61 across four samples.
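These normative figures reduce to simple descriptives of the BRS total score; a minimal sketch:

```python
import pandas as pd

def brs_norms(brs_total: pd.Series) -> pd.Series:
    """Mean, SD and quartiles of the BRS total score used as normative data."""
    return pd.Series({
        "mean": brs_total.mean(),
        "sd": brs_total.std(ddof=1),
        "p25": brs_total.quantile(0.25),
        "p50": brs_total.quantile(0.50),
        "p75": brs_total.quantile(0.75),
    }).round(2)
```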

4. Discussion

The purpose of this research was: a) to evaluate construct validity with EFA and confirm it with CFA using a construct validation procedure we termed the "3-faced construct validation method" (see Kyriazos et al., 2018a, 2018b); b) to examine measurement invariance across gender and age; c) to assess reliability and validity; d) to establish convergent and discriminant validity; and e) to model the distinctiveness of BRS (Smith et al., 2008) from DASS-21 (Lovibond & Lovibond, 1995) and from DASS-9 (Yusoff, 2013; Kyriazos et al., 2018a) as additional evidence of BRS construct validity.

To validate the BRS (Smith et al., 2008) factor structure, the total sample was randomly split into three parts (20%, 40%, and 40%). Sample-splitting (Guadagnoli & Velicer, 1988; MacCallum, Browne, & Sugawara, 1996) is used as a cross-validation method of construct validity because the researcher repeats the CFA process in a different sample (Byrne, 2010; Brown, 2015). The number of

Figure 3. Path diagrams: (a) of the Dual Factor Stress Model with BRS (f1) and DASS-9 Stress (f2), and (b) of the Dual Factor Depression Model with BRS (f1) and DASS-9 Depression (f2).

cases per BRS indicator was multiple times the minimum requirement of 10:1 (Osborne & Costello, 2005) or 20:1 (Schumacker & Lomax, 2015). This is an indication of the robustness and reliability of the emerging solutions (Brown, 2015; Kline, 2016).

After sample splitting, the 3-faced construct validation method was implemented (Kyriazos et al., 2018a, 2018b). In the first phase of the method, EFA was carried out in the first subsample (20%) to retrieve a factor structure (Howard et al., 2016). A total of three EFA models were evaluated. In the next phase of the 3-faced construct validation method (Kyriazos et al., 2018a, 2018b), a CFA was carried out in a second subsample (40%) to validate the BRS structures extracted in the previous EFA. Based on the existing BRS literature and the EFA, seven models were estimated. The BRS unifactorial model with error covariances (items 3 - 4, 4 - 5, and 4 - 6) showed a good fit. The two-factor model with non-reversed and reversed items also showed acceptable fit. Four models either failed to be identified or failed to converge, namely the ESEM, CFA Bifactor, ESEM Bifactor, and the method model proposed by Chmitorz et al. (2018).

Thus, two competing optimal models emerged: a) the single factor (Smith et al., 2008) with error covariances and b) the two-factor model with non-reversed and reversed items separated into two factors. This two-factor model was also proposed by Rodríguez-Rey et al. (2016) as the first-order factor structure of a second-order, "traditional" CFA model. Rodríguez-Rey et al. (2016) attribute the two-factor structure to a response bias method effect (Alonso-Tapia & Villasana, 2014; Marsh, 1996; Wu, 2008; as quoted by Rodríguez-Rey et al., 2016).

In the next phase of the 3-faced construct validation method (Kyriazos et al., 2018a, 2018b), a second CFA was carried out in a different subsample of equal power to the previous one (40%) to evaluate model fit further and obtain a clearer picture. The single-factor model with error covariances showed a notable fit divergence in comparison to its fit in the previous CFA. The fit of the two-factor model was adequate, with all goodness-of-fit measures within acceptability. More importantly, the two-factor model showed a fit comparable to, if not better than, its equivalent CFA 1 model. Thus, considering model fit across the two CFAs, the unidimensional model with error covariances was considered unstable; its fit in CFA 1 was probably a local optimum. In any case, this fit divergence empirically evidences the usefulness of the 3-faced construct validation method (Kyriazos et al., 2018a, 2018b). The two-factor model showed a consistently adequate fit across both CFAs and was therefore considered more reliable. Generally, a discrepancy in the proposed factor structures of BRS emerges from the existing empirical literature, which suggests both single-factor (Amat et al., 2014; de Holanda Coelho et al., 2016) and two-factor structures for BRS (Rodríguez-Rey et al., 2016; Chmitorz et al., 2018). We did not evaluate a traditional higher-order CFA model because, for a structure with two first-order factors like the BRS, evaluating whether a second-order factor improves model fit over the first-order solution is not possible due to under-identification (Wang & Wang, 2012; Brown, 2015).

In the optimal model, the correlation between the exogenous constructs did not exceed .85. Thus, we rejected the possibility that the two exogenous constructs are redundant or have a serious multicollinearity problem (Claes & Larker, 1981). As far as construct validity is concerned, all fit measures reached the suggested levels (Hair, Black, Babin, & Anderson, 2010), indicating that the items adequately measure the latent constructs.

In the next phase of the 3-faced construct validation method (Kyriazos et al., 2018a, 2018b), we evaluated whether the BRS showed measurement invariance across gender and age, with the two-factor model as a baseline model. Regarding gender, full configural, weak, and strict invariance were supported; strong invariance was not. This suggests that mean comparisons of the BRS indicators between men and women must be made cautiously. Configural and weak invariance across age were fully supported, but the fit measures (ΔCFI and ΔRMSEA) were in disagreement about strong and strict invariance. This means that age comparisons of indicator means and indicator residuals should also be made with caution. Similar findings on strong invariance were also reported by de Holanda Coelho et al. (2016) for the Brazilian version. Additionally, BRS scores have been reported to differ across gender (Smith et al., 2008; Rodríguez-Rey et al., 2016) and age (Rodríguez-Rey et al., 2016). Findings of age and gender differences reported in resilience studies are in general conflicting too (Singh et al., 2016).

Internal consistency reliability and convergent validity were examined with the following: a) Cronbach's alpha (α; Cronbach, 1951) to examine the internal consistency of the responses; alpha values above .70 are generally acceptable (Hair et al., 2010); b) the Omega Total coefficient (ω total; McDonald, 1999; Werts, Lim, & Joreskog, 1974); for omega, a value of .70 or greater is acceptable (Hair et al., 2010); c) Average Variance Extracted (AVE; Fornell & Larcker, 1981) to evaluate convergent validity. Malhotra & Dash (2011) commented that ω alone is a weak criterion, potentially allowing an error variance as high as 50%; therefore, AVE in combination with the ω coefficient offers a more reliable measure of convergent validity (Malhotra & Dash, 2011). The threshold for AVE is .50 (Fornell & Larcker, 1981; Hair et al., 2010; Awang et al., 2015). Internal consistency reliability (alpha) for the entire BRS was adequate, and Omega Total (McDonald, 1999; Werts, Lim, & Joreskog, 1974) was at comparable, equally adequate levels. Taking into account the brevity of the per-factor scales, their internal consistency reliability was also acceptable, given the dependence of Cronbach's alpha coefficient (Cronbach, 1951) on scale length (Cortina, 1993; Green, Lissitz, & Mulaik, 1977; Nunnally & Bernstein, 1994). Average Variance Extracted (AVE; Fornell & Larcker, 1981) was on the verge of acceptability. However, since AVE estimates convergent validity, the reversal of 50% of the scale items was probably one of the reasons that kept the AVE lower than expected.

Then the convergent and discriminant validity of BRS was estimated using 12 different scales, separated into five groups: 1) mental distress, containing the 3 factors of DASS-21 (Lovibond & Lovibond, 1995) and DASS-9 (Yusoff, 2013; Kyriazos et al., 2018a); 2) well-being, with WEMWBS (Tennant et al., 2007), MHC-SF (Keyes et al., 2008), the Flourishing Scale (Diener et al., 2010), and the Satisfaction with Life Scale (SWLS; Diener et al., 1985); 3) positivity scales, including the Trait Hope Scale (Snyder et al., 1991), the Meaning in Life Questionnaire (MLQ; Steger et al., 2006), and the Gratitude Questionnaire (GQ-6; McCullough et al., 2002); 4) affect measures, containing the Scale of Positive and Negative Experience (SPANE; Diener et al., 2010) and SPANE-8 (Kyriazos et al., 2018b); and 5) the WHOQOL-BREF (WHOQOL Group, 1998a, 1998b). The relationship between BRS (Smith et al., 2008) and mental distress, well-being, hope, life meaning, gratitude, affect, and life quality was on average medium to strong. As expected, the strongest positive correlations were observed with Trait Hope (Snyder et al., 1991) and WEMWBS (Tennant et al., 2007), and the strongest negative with DASS-21 Depression. The two BRS factors were weakly to moderately related to mental distress, well-being, hope, gratitude, life meaning, affect, and quality of life, following the same pattern of relationships as the total BRS.

The BRS was developed to measure resilience, in other words, the ability to recover from stress and adversity (Smith et al., 2008). To examine this hypothesis and support BRS construct validity further, an EFA and a CFA were carried out to examine how resilience, as measured by the BRS, relates to Stress and Depression, as measured both by DASS-21 (Lovibond & Lovibond, 1995) and by DASS-9 (Yusoff, 2013; Kyriazos et al., 2018a). Prior to this analysis, the total sample was split into two new subsamples so that the EFA and the CFA could be performed in different subsamples.

The EFA models extracted had either a single factor or two factors. The single-factor models had one factor with all items of either BRS and DASS-21 (Lovibond & Lovibond, 1995), BRS and DASS-21 Stress, BRS and DASS-21 Depression, BRS and DASS-9 (Yusoff, 2013; Kyriazos et al., 2018a), BRS and DASS-9 Stress, or BRS and DASS-9 Depression. They all had a poor fit with negative factor loadings, supporting the distinctiveness of BRS from DASS-21 and DASS-9. The two-factor EFA models extracted had one factor with BRS and a second with either DASS-21, DASS-21 Stress, DASS-21 Depression, DASS-9, DASS-9 Stress, or DASS-9 Depression. The two-factor structures that emerged were optimal. Crucially, resilience measured with the BRS (the ability to bounce back from stress; Smith et al., 2008) showed a negative relationship with Stress and Depression, and these findings suggest that BRS has construct validity.

Next, the two-factor models were evaluated further with a CFA under two different conditions: with the two factors either orthogonal or correlated. The two-factor correlated models had in two factors either BRS and DASS-21 Stress, BRS and DASS-21 Depression, BRS and DASS-9 Stress, or BRS and DASS-9 Depression. All these compound models of resilience and mental distress in two correlated factors showed a good fit with negative factor intercorrelations. On the other hand, the orthogonal models showed a barely tolerable fit. This verified the EFA findings, suggesting that BRS resilience has a negative relationship with Stress and Depression, evidencing again BRS construct validity. Moreover, the similarity of the findings for BRS (Smith et al., 2008) and DASS-21 (Lovibond & Lovibond, 1995) with those for BRS and DASS-9 (Yusoff, 2013; Kyriazos et al., 2018a) is additional evidence of the construct validity of DASS-9 in measuring mental distress in a manner similar to DASS-21.

5. Conclusion

To conclude, the BRS, as measured in Greek adults, has a two-factor structure. The BRS is gender- and age-invariant as long as indicator means are compared cautiously, in line with previous literature findings. BRS construct validity was also demonstrated by modeling its distinctiveness from depression and stress with EFA and CFA. In line with existing resilience literature in general and BRS empirical findings in particular, the absence of depression or anxiety during hardship has been conceptualized as the presence of resilience (Chmitorz et al., 2018). Accordingly, resilience as measured by the BRS was distinct from Stress and Depression and had a negative factor correlation in all compound BRS-DASS models evaluated; thus BRS construct validity was confirmed further. The BRS also has adequate reliability, as indicated by alpha and Omega total, and convergent validity, as suggested by the Average Variance Extracted. Additional evidence of convergent/discriminant validity, obtained using 12 different scales, verified that the Greek version of the BRS is a valid scale.

A limitation of the present research is that students were involved in the data collection, and the effects of this method (if any) must be taken into account when attempting to generalize the findings. Secondly, the numbers of men and women in the gender invariance analysis were not fully balanced. Despite these limitations, the BRS is a reliable resilience measure for adults of the general population in the Greek cultural context.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Abolghasemi, A., & Varaniyab, T. (2010). Resilience and Perceived Stress: Predictors of Life Satisfaction in the Students of Success and Failure. Journal of Social and Behavioral Sciences, 5, 748-752.
[2] Abramson, D. M., Grattan, L. M., Mayer, B., Colten, C. E., Arosemena, F. A., Bedimo-Rung, A., & Lichtveld, M. (2014). The Resilience Activation Framework: A Conceptual Model of How Access to Social Resources Promotes Adaptation and Rapid Recovery in Post-Disaster Settings. The Journal of Behavioral Health Services and Research, 42, 42-57.
https://doi.org/10.1007/s11414-014-9410-2
[3] Ahern, N. R., Kiehl, E. M., Sole, M. L., & Byers, J. (2006). A Review of Instruments Measuring Resilience. Issues in Comprehensive Pediatric Nursing, 29, 103-125.
https://doi.org/10.1080/01460860600677643
[4] Alonso-Tapia, J., & Villasana, M. (2014). Assessment of Subjective Resilience: Cross-Cultural Validity and Educational Implications. Infancia y Aprendizaje, 37, 629-664.
https://doi.org/10.1080/02103702.2014.965462
[5] Amat, S., Subhan, M., Jaafar, W. M. W., Mahmud, Z., & Johari, K. S. K. (2014). Evaluation and Psychometric Status of the Brief Resilience Scale in a Sample of Malaysian International Students. Asian Social Science, 10, 240-245.
https://doi.org/10.5539/ass.v10n18p240
[6] American Psychological Association (2015). The Road to Resilience.
http://www.apa.org/helpcenter/road-resilience.aspx
[7] Asparouhov, T., & Muthen, B. (2009). Exploratory Structural Equation Modeling. Structural Equation Modeling, 16, 397-438.
https://doi.org/10.1080/10705510903008204
[8] Awang, Z., Afthanorhan, A., & Asri, M. A. M. (2015). Parametric and Non Parametric Approach in Structural Equation Modeling (SEM): The Application of Bootstrapping. Modern Applied Science, 9, 58-67. https://doi.org/10.5539/mas.v9n9p58
[9] Bagby, M. R., Parker, J. D. A., & Taylor, G. J. (1994). The Twenty-Item Toronto Alexithymia Scale-I: Item Selection and Cross-Validation of the Factor Structure. Journal of Psychosomatic Research, 38, 23-32.
[10] Bentler, P. M., & Yuan, K. H. (1999). Structural Equation Modeling with Small Samples: Test Statistics. Multivariate Behavioral Research, 34, 181-197.
[11] Brislin, R. W. (1970). Back-Translation for Cross-Cultural Research. Journal of Cross-Cultural Psychology, 1, 185-216. https://doi.org/10.1177/135910457000100301
[12] Brown, T. A. (2015). Confirmatory Factor Analysis for Applied Research (2nd ed.). New York, NY: Guilford Publications.
[13] Byrne, B. M. (2010). Structural Equation Modeling with AMOS: Basic Concepts, Applications, and Programming (2nd ed.). New York, NY: Routledge.
[14] Campbell-Sills, L., & Stein, M. B. (2007). Psychometric Analysis and Refinement of the Connor-Davidson Resilience Scale (CD-RISC): Validation of a 10-Item Measure of Resilience. Journal of Traumatic Stress, 20, 1019-1028.
https://doi.org/10.1002/jts.20271
[15] Campbell-Sills, L., Forde, D. R., & Stein, M. B. (2009). Demographic and Childhood Environmental Predictors of Resilience in a Community Sample. Journal of Psychiatric Research, 43, 1007-1012. https://doi.org/10.1016/j.jpsychires.2009.01.013
[16] Chadwick, S. (2014). Social and Emotional Resilience. In Impacts of Cyberbullying, Building Social and Emotional Resilience in Schools (pp. 31-55). Berlin: Springer International Publishing. https://doi.org/10.1007/978-3-319-04031-8_3
[17] Chen, F. F. (2007). Sensitivity of Goodness of Fit Indexes to Lack of Measurement Invariance. Structural Equation Modeling, 14, 464-504.
https://doi.org/10.1080/10705510701301834
[18] Chmitorz, A., Wenzel, M., Stieglitz, R.-D., Kunzler, A., Bagusat, C., Helmreich, I. et al. (2018). Population-Based Validation of a German Version of the Brief Resilience Scale. PLoS ONE, 13, e0192761. https://doi.org/10.1371/journal.pone.0192761
[19] Fornell, C., & Larcker, D. F. (1981). Evaluating Structural Equation Models with Unobservable Variables and Measurement Error. Journal of Marketing Research, 18, 39-50.
https://doi.org/10.2307/3151312
[20] Cohen, S., Kamarck, T., & Mermelstein, R. (1983). A Global Measure of Perceived Stress. Journal of Health and Social Behavior, 24, 385-396.
https://doi.org/10.2307/2136404
[21] Connor, K. M., & Davidson, J. R. (2003). Development of a New Resilience Scale: The Connor-Davidson Resilience Scale (CD-RISC). Depression and Anxiety, 18, 76-82.
https://doi.org/10.1002/da.10113
[22] Consten, C. P. (2016). Measuring Resilience with the Brief Resilience Scale: Factor Structure, Reliability and Validity of the Dutch Version of the BRS (BRSnl).
http://essay.utwente.nl/70095/
[23] Cortina, J. M. (1993). What Is Coefficient Alpha? An Examination of Theory and Applications. Journal of Applied Psychology, 78, 98-104.
https://doi.org/10.1037/0021-9010.78.1.98
[24] Cronbach, L. J. (1950). Further Evidence on Response Sets and Test Design. Educational and Psychological Measurement, 10, 3-31.
https://doi.org/10.1177/001316445001000101
[25] Cronbach, L. J. (1951). Coefficient Alpha and the Internal Structure of Tests. Psychometrika, 16, 297-334. https://doi.org/10.1007/BF02310555
[26] Davidson, J. R. T., Book, S. W., Colket, J. T., Tupler, L. A., Roth, S., David, D., & Feldman, M. (1997). Assessment of a New Self-Rating Scale for Post-Traumatic Stress Disorder. Psychological Medicine, 27, 153-160.
https://doi.org/10.1017/S0033291796004229
[27] De Holanda Coelho, G. L. H., Cavalcanti, T. M., Rezende, A. T., & Gouveia, V. V. (2016). Brief Resilience Scale: Testing Its Factorial Structure and Invariance in Brazil. Universitas Psychologica, 15, 397-408.
https://doi.org/10.11144/Javeriana.upsy15-2.brst
[28] Denollet, J. (2005). DS14: Standard Assessment of Negative Affectivity, Social Inhibition, and Type D personality. Psychosomatic Medicine, 67, 89-97.
https://doi.org/10.1097/01.psy.0000149256.81953.49
[29] Diener, E., Emmons, R. A., Larsen, R. J., & Griffin, S. (1985). The Satisfaction with Life Scale. Journal of Personality Assessment, 49, 71-75.
https://doi.org/10.1207/s15327752jpa4901_13
[30] Diener, E., Wirtz, D., Biswas-Diener, R., Tov, W., Kim-Prieto, C., Choi, D., & Oishi, S. (2009). New Measures of Well-Being. In E. Diener (Ed.), Assessing Well-Being. Social Indicators Research Series (Vol. 39, pp. 247-266). Dordrecht: Springer.
https://doi.org/10.1007/978-90-481-2354-4_12
[31] Diener, E., Wirtz, D., Tov, W., Kim-Prieto, C., Choi, D., Oishi, S., & Biswas-Diener, R. (2010). New Well-Being Measures: Short Scales to Assess Flourishing and Positive and Negative Feelings. Social Indicators Research, 97, 143-156.
https://doi.org/10.1007/s11205-009-9493-y
[32] Doornik, J. A., & Hansen, H. (2008). An Omnibus Test for Univariate and Multivariate Normality. Oxford Bulletin of Economics and Statistics, 70, 927-939.
https://doi.org/10.1111/j.1468-0084.2008.00537.x
[33] Duarte Alonso, A. (2015). Resilience in the Context of Two Traditional Spanish Rural Sectors: An Exploratory Study. Journal of Enterprising Communities: People and Places in the Global Economy, 9, 182-203. https://doi.org/10.1108/JEC-11-2014-0026
[34] Fletcher, D., & Sarkar, M. (2013). Psychological Resilience: A Review and Critique of Definitions, Concepts, and Theory. European Psychologist, 18, 12-23.
https://doi.org/10.1027/1016-9040/a000124
[35] Fornell, C., & Larcker, D. F. (1981). Structural Equation Models with Unobservable Variables and Measurement Error: Algebra and Statistics. Journal of Marketing Research, 18, 382-388. https://doi.org/10.2307/3150980
[36] Fredrickson, B. L., Tugade, M. M., Waugh, C. E., & Larkin, G. (2003). What Good Are Positive Emotions in Crises? A Prospective Study of Resilience and Emotions Following the Terrorist Attacks on the United States on September 11th, 2001. Journal of Personality and Social Psychology, 84, 365-376.
https://doi.org/10.1037/0022-3514.84.2.365
[37] Green, S. B., Lissitz, R. W., & Mulaik, S. A. (1977). Limitations of Coefficient Alpha as an Index of Test Unidimensionality. Educational and Psychological Measurement, 37, 827-836.
https://doi.org/10.1177/001316447703700403
[38] Guadagnoli, E., & Velicer, W. F. (1988). Relation of Sample Size to the Stability of Component Patterns. Psychological Bulletin, 103, 265-275.
https://doi.org/10.1037/0033-2909.103.2.265
[39] Hair, J., Black, W., Babin, B., & Anderson, R. (2010). Multivariate Data Analysis (7th ed.). Upper Saddle River, NJ: Prentice-Hall, Inc.
[40] Hardy, S. E., Concato, J., & Gill, T. M. (2002). Stressful Life Events among Community—Living Older Persons. Journal of General Internal Medicine, 17, 841-847.
https://doi.org/10.1046/j.1525-1497.2002.20105.x
[41] Hardy, S. E., Concato, J., & Gill, T. M. (2004). Resilience of Community-Dwelling Older Persons. Journal of the American Geriatrics Society, 52, 257-262.
https://doi.org/10.1111/j.1532-5415.2004.52065.x
[42] Harman, H. H. (1976). Modern Factor Analysis (3rd ed.). Chicago: University of Chicago Press.
[43] Henry, J. D., & Crawford, J. R. (2005). The Short-Form Version of the Depression Anxiety Stress Scales (DASS-21): Construct Validity and Normative Data in a Large Non-Clinical Sample. British Journal of Clinical Psychology, 44, 227-239.
https://doi.org/10.1348/014466505X29657
[44] Henze, N., & Zirkler, B. (1990). A Class of Invariant Consistent Tests for Multivariate Normality. Communications in Statistics: Theory and Methods, 19, 3595-3617.
https://doi.org/10.1080/03610929008830400
[45] Hernansaiz-Garrido, H., Rodríguez-Rey, R., Alonso-Tapia, J., Ruiz-Diaz, M. A., & Nieto-Vizcaíno, C. (2014a). Coping Assessment: Importance of Identifying the Person Situation Interaction Effects. The European Health Psychologist, 16, 754.
[46] Hernansaiz-Garrido, H., Rodríguez-Rey, R., Alonso-Tapia, J., Ruiz-Diaz, M. A., & Nieto-Vizcaíno, C. (2014b). Resilience Assessment: Importance of Identifying the Person Situation Interaction. The European Health Psychologist, 16, 482.
[47] Hofstede, G. (2001). Culture’s Consequences: Comparing Values, Behaviors, Institutions, and Organizations across Nations (2nd ed.). Thousand Oaks, CA: Sage.
[48] Holzinger, K. J., & Swineford, F. (1937). The Bifactor Method. Psychometrika, 2, 41-54.
https://doi.org/10.1007/BF02287965
[49] Howard, J., Gagné, M., Morin, A. J. S., Wang, Z. N., & Forest, J. (2016). Using Bifactor Exploratory Structural Equation Modeling to Test for a Continuum Structure of Motivation. Journal of Management, 1-59.
[50] Hu, L., & Bentler, P. M. (1999). Cutoff Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria versus New Alternatives. Structural Equation Modeling, 6, 1-55.
https://doi.org/10.1080/10705519909540118
[51] Huppert, F. A., & So, T. T. C. (2013). Flourishing across Europe: Application of a New Conceptual Framework for Defining Well-Being. Social Indicators Research, 110, 837-861.
https://doi.org/10.1007/s11205-011-9966-7
[52] IBM (2017). IBM SPSS Statistical Software, Release 25. Spring House, PA: IBM Corporation.
[53] Jennrich, R. I., & Bentler, P. M. (2011). Exploratory Bi-Factor Analysis. Psychometrika, 76, 537-549. https://doi.org/10.1007/s11336-011-9218-4
[54] John, O. P., Donahue, E. M., & Kentle, R. L. (1991). The “Big Five” Inventory: Versions 4a and 54. Berkeley, CA: University of California, Institute of Personality and Social Research.
[55] Keyes, C. L. M. (2002). The Mental Health Continuum: From Languishing to Flourishing in Life. Journal of Health and Social Behavior, 43, 207-222.
https://doi.org/10.2307/3090197
[56] Keyes, C. L. M. (2005). Mental Illness and/or Mental Health? Investigating Axioms of the Complete State Model of Health. Journal of Consulting and Clinical Psychology, 73, 539-548. https://doi.org/10.1037/0022-006X.73.3.539
[57] Keyes, C. L. M., & Pidgeon, A. M. (2013). An Investigation of the Relationship between Resilience, Mindfulness, and Academic Self-Efficacy. Open Journal of Social Sciences, 1, 1-4. https://doi.org/10.4236/jss.2013.16001
[58] Keyes, C. L. M., Wissing, M., Potgieter, J., Temane, M., Kruger, A., & van Rooy, S. (2008). Evaluation of the Mental Health Continuum-Short Form (MHC-SF) in Setswana-Speaking South Africans. Clinical Psychology and Psychotherapy, 15, 181-192.
https://doi.org/10.1002/cpp.572
[59] Kline, P. (1999). Handbook of Psychological Testing. London: Routledge.
[60] Kline, R. B. (2016). Principles and Practice of Structural Equation Modeling (4th ed.). New York, NY: The Guilford Press.
[61] Koen, M. P., Van Eeden, C., & Wissing, M. P. (2011). The Prevalence of Resilience in a Group of Professional Nurses. Health SA Gesondheid, 16, 11 p.
https://doi.org/10.4102/hsag.v16i1.576
[62] Kyriazos, T. A., Stalikas, A., Prassa, K., & Yotsidi, V. (2018a). Can the Depression Anxiety Stress Scales Short Be Shorter? Factor Structure and Measurement Invariance of DASS-21 and DASS-9 in a Greek, Non-Clinical Sample. Psychology, 9, 1095-1127.
https://doi.org/10.4236/psych.2018.95069
[63] Kyriazos, T. A., Stalikas, A., Prassa, K., & Yotsidi, V. (2018b). A 3-Faced Construct Validation and a Bifactor Subjective Well-Being Model Using the Scale of Positive and Negative Experience, Greek Version. Psychology, 9, 1143-1175.
https://doi.org/10.4236/psych.2018.95071
[64] Lee, H. S., Brown, S. L., Mitchell, M. M., & Schiraldi, G. R. (2008). Correlates of Resilience in the Face of Adversity for Korean Women Immigrating to the US. Journal of Immigrant Minority Health, 10, 415-422. https://doi.org/10.1007/s10903-007-9104-4
[65] Lovibond, S. H., & Lovibond, P. F. (1995). Manual for the Depression Anxiety Stress Scales. Sydney: Psychology Foundation.
[66] Lundman, B., Strandberg, G., Eisemann, M., Gustafson, Y., & Brulin, C. (2007). Psychometric Properties of the Swedish Version of the Resilience Scale. Scandinavian Journal of Caring Sciences, 21, 229-237.
https://doi.org/10.1111/j.1471-6712.2007.00461.x
[67] MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power Analysis and Determination of Sample Size for Covariance Structure Modeling. Psychological Methods, 1, 130-149. https://doi.org/10.1037/1082-989X.1.2.130
[68] Malhotra, N. K., & Dash, S. (2011). Marketing Research an Applied Orientation. London: Pearson Publishing.
[69] Mardia, K. V. (1970). Measures of Multivariate Skewness and Kurtosis with Applications. Biometrika, 57, 519-530. https://doi.org/10.1093/biomet/57.3.519
[70] Marsh, H. W. (1996). Positive and Negative Global Self-Esteem: Substantively Meaningful Distinction or Artifactors. Journal of Personality and Social Psychology, 70, 810-819.
https://doi.org/10.1037/0022-3514.70.4.810
[71] Massey, F. J. (1951). The Kolmogorov-Smirnov Test for Goodness of Fit. Journal of the American Statistical Association, 46, 68-78.
https://doi.org/10.1080/01621459.1951.10500769
[72] McCullough, M. E., Emmons, R. A., & Tsang, J. (2002). The Grateful Disposition: A Conceptual and Empirical Topography. Journal of Personality and Social Psychology, 82, 112-127.
https://doi.org/10.1037/0022-3514.82.1.112
[73] McDonald, R. P. (1999). Test Theory: A Unified Treatment. Mahwah, NJ: Erlbaum.
[74] McGreavy, B. (2015). Resilience as Discourse. Environmental Communication, 10, 104-121.
https://doi.org/10.1080/17524032.2015.1014390
[75] Mehta, M., Whyte, E., Lenze, E., Hardy, S., Roumani, Y., Subahan, P. et al. (2008). Depressive Symptoms in Late Life: Associations with Apathy, Resilience and Disability Vary between Young-Old and Old-Old. International Journal of Geriatric Psychiatry, 23, 238-243.
https://doi.org/10.1002/gps.1868
[76] Montross, L. P., Depp, C., Daly, J., Reichstadt, J., Golshan, S., Moore, D. et al. (2006). Correlates of Self-Rated Successful Aging among Community-Dwelling Older Adults. American Journal of Geriatric Psychiatry, 14, 42-51.
https://doi.org/10.1097/01.JGP.0000192489.43179.31
[77] Muthen, B., & Asparouhov, T. (2002). Using Mplus Monte Carlo Simulations in Practice: A Note on Non-Normal Missing Data in Latent Variable Models. Mplus Web Notes: No. 2.
[78] Muthén, B., & Asparouhov, T. (2011). Beyond Multilevel Regression Modeling: Multilevel Analysis in a General Latent Variable Framework. In J. Hox, & J. K. Roberts (Eds.), Handbook of Advanced Multilevel Analysis (pp. 15-40). New York, NY: Taylor & Francis.
[79] Muthén, L. K., & Muthén, B. O. (2012). Mplus Statistical Software, 1998-2012. Los Angeles, CA: Muthén & Muthén.
[80] Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric Theory (3rd ed.). New York, NY: McGraw-Hill.
[81] Osborne, J. W., & Costello, A. B. (2005). Sample Size and Subject to Item Ratio in Principal Components Analysis. Practical Assessment, Research & Evaluation, 9, 8.
[82] Pavot, W., & Diener, E. (1993). The Affective and Cognitive Context of Self-Reported Measures of Subjective Well-Being. Social Indicators Research, 28, 1-20.
https://doi.org/10.1007/BF01086714
[83] Pinheiro, M., & Matos, A. P. (2013). Exploring the Construct Validity of the Two Versions of the Resilience Scale in a Portuguese Adolescent Sample. The European Journal of Social & Behavioural Sciences, 179-189.
[84] Rodríguez-Rey, R., Alonso-Tapia, J., & Hernansaiz-Garrido, H. (2016). Reliability and Validity of the Brief Resilience Scale (BRS) Spanish Version. Psychological Assessment, 28, e101-e110. https://doi.org/10.1037/pas0000191
[85] Ryff, C. D. (1989). Happiness Is Everything, or Is It? Explorations on the Meaning of Psychological Wellbeing. Journal of Personality and Social Psychology, 57, 1069-1081.
https://doi.org/10.1037/0022-3514.57.6.1069
[86] Ryff, C. D., & Keyes, C. L. M. (1995). The Structure of Psychological Well-Being Revisited. Journal of Personality and Social Psychology, 69, 719-727.
https://doi.org/10.1037/0022-3514.69.4.719
[87] Salisu, I., & Hashim, A. (2017). Critical Review of Scales Used in Resilience Research. Journal of Business and Management, 19, 23-33.
https://doi.org/10.9790/487X-1904032333
[88] Scheier, M. F., Carver, C. S., & Bridges, M. W. (1994). Distinguishing Optimism from Neuroticism (and Trait Anxiety, Self-Mastery, and Self-Esteem): A Reevaluation of the Life Orientation Test. Journal of Personality and Social Psychology, 67, 1063-1078.
https://doi.org/10.1037/0022-3514.67.6.1063
[89] Schumacker, R. E., & Lomax, R. G. (2015). A Beginner’s Guide to Structural Equation Modeling (4th ed.). New York, NY: Routledge.
[90] Singh, K., & Yu, X. N. (2010). Psychometric Evaluation of the Connor-Davidson Resilience Scale (CD-RISC) in a Sample of Indian Students. Journal of Psychology, 1, 23-30.
https://doi.org/10.1080/09764224.2010.11885442
[91] Singh, K., Junnarkar, M., & Kaur, J. (2016). Measures of Positive Psychology, Development and Validation. Berlin: Springer. https://doi.org/10.1007/978-81-322-3631-3
[92] Skevington, S. M., Lotfy, M., & O’Connell, K. A. (2004). The World Health Organization’s WHOQOL-BREF Quality of Life Assessment: Psychometric Properties and Results of the International Field Trial. A Report from the WHOQOL Group. Quality of Life Research, 13, 299-310. https://doi.org/10.1023/B:QURE.0000018486.91360.00
[93] Smith, B. W., Dalen, J., Wiggins, K., Tooley, E., Christopher, P., & Bernard, J. (2008). The Brief Resilience Scale: Assessing the Ability to Bounce Back. International Journal of Behavioral Medicine, 15, 194-200. https://doi.org/10.1080/10705500802222972
[94] Smith, B. W., Tooley, E. M., Christopher, P. J., & Kay, V. S. (2010). Resilience as the Ability to Bounce Back from Stress: A Neglected Personal Resource? The Journal of Positive Psychology, 5, 166-176. https://doi.org/10.1080/17439760.2010.482186
[95] Snyder, C. R., Harris, C., Anderson, J. R., Holleran, S. A., Irving, L. M., Sigmon, S. T., Yoshinobu, L., Gibb, J., Langelle, C., & Harney, P. (1991). The Will and the Ways: Development and Validation of an Individual-Differences Measure of Hope. Journal of Personality and Social Psychology, 60, 570-585.
https://doi.org/10.1037/0022-3514.60.4.570
[96] Stalikas, A., & Kyriazos, T. A. (2017). The Scale of Positive and Negative Experience (SPANE), Greek Version. Athens: Hellenic Association of Positive Psychology.
[97] StataCorp (2015). Stata: Release 14. Statistical Software. College Station, TX: StataCorp.
[98] Steger, M. F., Frazier, P., Oishi, S., & Kaler, M. (2006). The Meaning in Life Questionnaire. Assessing the Presence of and Search for Meaning in Life. Journal of Counseling Psychology, 53, 80-93. https://doi.org/10.1037/0022-0167.53.1.80
[99] Tedeschi, R. G., & Calhoun, L. G. (1996). The Posttraumatic Growth Inventory; Measuring the Positive Legacy of Trauma. Journal of Traumatic Stress, 9, 455-471.
https://doi.org/10.1002/jts.2490090305
[100] Tennant, R., Hiller, L., Fishwick, R., Platt, S., Joseph, S. et al. (2007). The Warwick-Edinburgh Mental Well-Being Scale (WEMWBS): Development and UK Validation. Health and Quality of Life Outcomes, 5, 63. https://doi.org/10.1186/1477-7525-5-63
[101] Triandis, H. C. (1995). Individualism and Collectivism. Boulder, CO: Westview.
[102] Wagnild, G. (2003). Resilience and Successful Aging: Comparison among Low and High Income Older Adults. Journal of Gerontological Nursing, 29, 42-49.
https://doi.org/10.3928/0098-9134-20031201-09
[103] Wang, J., & Wang, X. (2012). Structural Equation Modeling. Hoboken, NJ: Wiley, Higher Education Press. https://doi.org/10.1002/9781118356258
[104] Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and Validation of Brief Measures of Positive and Negative Affect: The PANAS Scales. Journal of Personality and Social Psychology, 54, 1063-1070. https://doi.org/10.1037/0022-3514.54.6.1063
[105] Werts, C. E., Linn, R. L., & Jöreskog, K. G. (1974). Intraclass Reliability Estimates: Testing Structural Assumptions. Educational and Psychological Measurement, 34, 25-33.
https://doi.org/10.1177/001316447403400104
[106] WHOQOL Group (1998a). The World Health Organization Quality of Life Assessment (WHOQOL): Development and General Psychometric Properties. Social Science & Medicine, 46, 1569-1585. https://doi.org/10.1016/S0277-9536(98)00009-4
[107] WHOQOL Group (1998b). Development of the World Health Organization WHOQOL BREF Quality of Life Assessment. Psychological Medicine, 28, 551-558.
https://doi.org/10.1017/S0033291798006667
[108] Windle, G., Bennett, K. M., & Noyes, J. (2011). A Methodological Review of Resilience Measurement Scales. Health and Quality of Life Outcomes, 9, 1-18.
https://doi.org/10.1186/1477-7525-9-8
[109] Wu, C. (2008). An Examination of the Wording Effect in the Rosenberg Self-Esteem Scale among Culturally Chinese People. The Journal of Social Psychology, 148, 535-551.
https://doi.org/10.3200/SOCP.148.5.535-552
[110] Yusoff, M. S. B. (2013). Psychometric Properties of the Depression Anxiety Stress Scale in a Sample of Medical Degree Applicants. International Medical Journal, 20, 295-300.
[111] Zigmond, A. S., & Snaith, R. P. (1983). The Hospital Anxiety and Depression Scale. Acta Psychiatrica Scandinavica, 67, 361-370.
https://doi.org/10.1111/j.1600-0447.1983.tb09716.x
