An Alternative Factorization of the Metacognitive Awareness of Reading Strategies Inventory Associated with the Greek National Curriculum and Its Psychometric Properties

Abstract

In the current study, the Metacognitive Awareness of Reading Strategies Inventory was adapted for the Greek secondary student population and the instrument’s psychometric properties were examined. The inventory was administered to a sample of 632 students, aged 12 - 24, attending all secondary levels, from 68 schools in various urban, semi-urban and rural regions. Exploratory factor analysis was performed, and a new factorial structure with only two factors (MARSI-2fGR) emerged. The new structure comprises 26 items divided between the textor subscale, for text-oriented reading strategies, and the textout subscale, for extratextual reading strategies. These two factors are discussed in relation to students’ reading habits associated with the Greek national curriculum. The results shed new light on the way students read academic or school-related material and provide evidence for the utility of the scale as a valid and reliable tool to assess metacognitive awareness of reading strategies.


Mavrogianni, A., Vassilaki, E., Spantidakis, G., Sarris, A., Papadaki-Michailidi, E. and Yachnakis, E. (2020) An Alternative Factorization of the Metacognitive Awareness of Reading Strategies Inventory Associated with the Greek National Curriculum and Its Psychometric Properties. Creative Education, 11, 1299-1323. doi: 10.4236/ce.2020.118096.

1. Introduction

Since the foundational work of Flavell (1979) and Brown (1980, 1987; Brown, Armbruster, & Baker, 1986) and after long and in-depth research in the field of metacognition (Azevedo, 2020; Efklides, 2008; Flavell, Miller, & Miller, 2002; Kuhn, 2000; Livingston, 1997; Pressley, 2005; Rhodes, 2019; Siegesmund, 2016; Veenman, Van Hout-Wolters, & Afflerbach, 2006), research interest has focused on educational programs aimed at enhancing metacognitive learning strategies (Leu et al., 2008; Siegesmund, 2016; Van Campenhout, 2020). When students design, implement, and evaluate reading strategies before, during, and after reading, they enhance their reading comprehension of texts (Alexander & Jetton, 2000; Zhang & Francis, 2010). The last phase, the self-evaluation of their strategies (Vandergrift, 2003), is considered of significant importance to their metacognitive awareness. An objective way of recording and evaluating students’ metacognitive awareness may play an essential role in this last phase. Although recent research on strategy instruction is also exploring the possibility of qualitative measurements (Pinninti, 2019), strategy instruction predominantly employs quantitative instruments (Bimmel, 2001; Ngo, 2019; Plonsky, 2011; Rubin, Chamot, Harris, & Anderson, 2007).

In recent years, various scales have been developed to monitor and evaluate the reading strategies used by students. Such tools include the MARSI scale (Mokhtari & Reichard, 2002; Mokhtari, Dimitrov, & Reichard, 2018), the SORS scale (Mokhtari & Sheorey, 2002) and the OSORS scale (Anderson, 2003), among others. To the best of our knowledge, no instrument adapted for the Greek-speaking secondary student population has been validated for all levels of secondary education (Stalikas, Triliva, & Roussi, 2012). So far, the only relevant attempts have been: 1) the preliminary results presentation of the two-factor and three-factor structure of the Greek adaptation of the MARSI scale for students 13 - 24 years old attending Gymnasium (i.e. Junior High School), Vocational and General Lyceum (i.e. Senior High School) by Mavrogianni, Vasilaki, Spantidakis, Papadaki-Michailidi and Linardakis (2018), and 2) the study of Koulianou, Roussos and Samartzi (2019), who adapted the MARSI for Gymnasium students from three areas of Greece, to compare the use of reading strategies among students with and without learning difficulties. The common feature of the above studies is that they both confirm the three-factor structure of the scale in Greek educational data. The present study attempts to bridge this gap by exploring a possibly different factorial structure of the scale, catering for the specific needs of Greek students at all levels of secondary education.

The MARSI scale (Mokhtari & Reichard, 2002) was selected for our study because of its international recognition as a versatile, valid and reliable self-report instrument for assessing metacognitive awareness of reading strategies. The scale contains 30 items, each rated on a 5-point Likert-type scale ranging from 1 (“always false”) to 5 (“always true”), on which each respondent reports how well each of the 30 items applies to them. Each item briefly describes a strategy that students use when they study academic or school-related material. The original scale is subdivided into three subscales: Global Reading Strategies (13 items), Problem Solving Strategies (8 items) and Support Reading Strategies (9 items). The interaction of those strategies helps students construct the meaning of the text. Participants’ responses are scored for each subscale: a raw score is computed for each item and a mean score for each factor subscale.

2. Objectives

The current study aimed to adapt and standardize the Metacognitive Awareness of Reading Strategies Inventory (MARSI 1.0), as devised by Mokhtari and Reichard (2002), for a sample of the Greek secondary student population, and to examine the factorial structure and psychometric properties of the adapted instrument. Specifically, it included:

1) Application of exploratory factor analysis to the data, to explore the factor structure of the Greek MARSI inventory (determination of the remaining items and the number of emerging factors, the item composition of each factor, and the interpretation of factors).

2) Evaluation of the psychometric properties of the emerged instrument to demonstrate its reliability and validity (Cronbach’s alpha, language validity through face and content validity, discriminant validity).

3) Interpretation of the emerged factorial structure in relation to students’ reading habits associated with the Greek national curriculum.

3. Material and Methods of Analysis

First, we describe the process of adapting the original MARSI scale into Greek by ensuring the face validity and content validity of the translated instrument. Next, we provide participants’ demographic and educational information and describe the data gathered using the Greek version of the inventory. Finally, we present the methods followed for data analyses (Drost, 2011; Pagano, 2009).

3.1. Adaptation into Greek

Written permission to adapt the instrument for the Greek population was granted by the creators of the MARSI scale. Given that a simple “single forward and back-translation procedure” could result in an inadequate translation (van Widenfelt, Treffers, de Beurs, Siebelink, & Koudijs, 2005: p. 136), a more demanding procedure was followed. The present study followed the practice in test adaptation and cross-cultural validation of scales recommended by the International Test Commission (ITC, 2017), adopting a complex translation method, linguistic adjustment and validation of the translation. The recommendations of Beaton, Bombardier, Guillemin and Ferraz (2000) and Stalikas et al. (2012) were followed in translating the scale into Greek using a forward and backward translation procedure. The instrument was first translated into Greek by four English language experts separately. The four translated versions were reviewed, and a final draft version was produced, which was back-translated into English by an independent and qualified translator. Finally, a bilingual English-Greek translator compared the original MARSI scale, the draft version in Greek and the back-translation into English, and proposed one minor revision which we incorporated into the final version. Language validity was examined through face validity and content validity as well.

3.1.1. Face Validity

To test the level of linguistic understanding of the final version of the inventory by adolescent students, we tested the scale in two phases. Initially, we administered the translated inventory to a convenience sample of 48 students attending grades 1 to 3 of General Lyceum to evaluate face validity and receive feedback on the clarity of the statements. We noted that item 26 confused some students. The necessary linguistic normalization was made, and the reformed scale was tested on a different group of 73 students attending the General Lyceum, for a second validation test. At this stage, there were no queries on any of the scale statements.

3.1.2. Content Validity

During the content validity test, a panel of ten experts (3 teachers in Gymnasium, 4 in General Lyceum, and 3 in Vocational Lyceum) reviewed the Greek version of the inventory. According to the ten education professionals’ assessments, the items’ content validity ratio (CVR) ranged between 0.82 and 1.00, while the minimum acceptable CVR recommended by Lawshe (1975) for a panel of 10 experts is 0.62. Therefore, the experts agreed to a great extent and accepted each of the inventory’s items as accurate. Moreover, concordance among the experts’ opinions, as measured by Kendall’s coefficient, was quite high.
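Lawshe’s content validity ratio is simple to reproduce. The sketch below is illustrative; the panel counts in the usage lines are hypothetical, not the study’s item-level data:

```python
def content_validity_ratio(n_essential, n_experts):
    """Lawshe's (1975) CVR = (n_e - N/2) / (N/2), where n_e is the number
    of panelists rating an item as essential and N is the panel size.
    Ranges from -1 (no panelist agrees) to +1 (all panelists agree)."""
    half = n_experts / 2
    return (n_essential - half) / half

# With a 10-expert panel, Lawshe's minimum acceptable CVR is 0.62, so an
# item must be rated essential by at least 9 of the 10 experts to pass
# (8 of 10 gives CVR = 0.6 < 0.62).
print(content_validity_ratio(9, 10))   # 0.8
print(content_validity_ratio(10, 10))  # 1.0
```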

3.2. Participants

Students from sixty-eight secondary public and private schools from urban, semi-urban and rural school districts in Greece participated in the research. The Research Ethics Committee of the University of Crete (decision 2/2018/13-03-2018) and the Ministry of Education of Greece (No. 89964/D2/01-06-2018) granted permission to conduct this study. The aims were presented to school directors, teachers, parents, and the students themselves. The group of students was recruited voluntarily, and both parents and students granted their consent after they were assured of data confidentiality. The inventory was completed online in each school’s computer laboratory, by rating each of the 30 items on a 1-to-5 scale, according to each participant’s preferences.

From an initial group of 1308 participants who took part in the study, 45 had one or more items with missing values and were excluded from data analyses. Thus, 1263 participants remained. For the exploratory procedure, which was the main objective of this study, a simple random sampling method was used to select ~50% of the students (632) as the testing group, leaving the remaining ~50% (631) to be the validation group for future analyses.

The testing group consisted of 275 males (43.50%) and 357 females (56.50%). The age of participants ranged from 12 to 24 years (M = 15.28, SD = 1.71). For 613 of them (96.99%) the birth country was Greece and for 10 of them (1.58%) it was Albania. Each of the other birth countries, European (Italy, Germany, Netherlands, Romania, Bulgaria) and non-European (USA, Russia, Ethiopia, Morocco), was represented by just one participant, summing to a total of 1.43%. It is worth mentioning that, owing to the challenge of mastering the Greek language, many non-native students are unable to graduate until the age of 24, whereas the average graduation age for a native student is 18. Greek was the native language of 583 students (92.20%), while for 49 (7.80%) Greek was their second language. The participants’ place of residence was a city (n = 404, 63.90%), a suburb (n = 39, 6.20%), a town (n = 62, 9.80%), or a village (n = 127, 20.10%). Most of them (n = 605, 95.70%) lived in the family home, while 19 (3%) lived with relatives, just 1 (0.20%) with friends, 2 (0.30%) in a boarding house, and 5 (0.80%) by themselves. Regarding family status, 150 (23.70%) students were disadvantaged (at least one of the two parents had passed away or was unemployed), and 482 (76.30%) seemed to enjoy a standard nuclear family life (both parents were permanently or occasionally employed or were retired).

Concerning the educational demographics, 287 (45.40%) students attended Gymnasium, and 345 (54.60%) attended Lyceum: 317 (50.20%) in General and 28 (4.40%) in Vocational Lyceum. Seventy of those participants (11.08%) had diagnosed learning disabilities, while 562 (88.92%) had no diagnosis. Regarding grade, 78 (12.34%) of the students attended 1st grade of Gymnasium, 94 (14.87%) 2nd grade, and 115 (18.20%) 3rd grade of Gymnasium. Additionally, 152 (24.05%) attended 1st grade of Lyceum (General and Vocational), 94 (14.87%) 2nd grade, and 99 (15.66%) 3rd grade of Lyceum. Of these, only students attending the 2nd and 3rd grades of General Lyceum (n = 190) are subdivided into three educational orientations: 76 (40%) had chosen Science Studies, 36 (18.90%) Economic and Computer Studies, and 78 (41.10%) Humanities, Law and Social Sciences. Three hundred and fifty-four (56%) were assisted by a private tutor, while 278 (44%) had no additional assistance. Regarding computer literacy, 11 (1.74%) of the students declared “none”, 48 (7.59%) “little”, 201 (31.80%) “good”, 227 (35.92%) “very good”, and 145 (22.94%) “excellent”. Regarding the level of foreign language knowledge, 82 (12.97%) had “none or little”, 153 (24.21%) “good”, 239 (37.82%) “very good”, and 158 (25%) “excellent”.

3.3. Data Gathering Using the MARSI Scale

Schools from different urban, semi-urban and rural areas of Greece were selected by random sampling, subject to: 1) the permission of school principals to engage their school in the research; 2) teachers’ willingness to help students complete the scale in the school’s computer lab; 3) the participants’ as well as their parents’ consent. The 30-item inventory (Appendix A), followed by the demographic and educational data questionnaire, was administered in digital form by the class teacher to the participating students as a self-assessment activity. The inventory items were administered in the order suggested by the original MARSI creators (Mokhtari & Reichard, 2002), rated on a five-point Likert scale ranging from 1 (“I never or almost never do this”) to 5 (“I always or almost always do this”). Before the digital completion, the teacher advised the students to read the statements very carefully to make sure they fully understood them, and clarifications were provided where necessary. Students were instructed to complete the questionnaire in all honesty, avoiding any tendency to embellish reality. The average time to fully complete the digital questionnaire was approximately 15 minutes.

3.4. The Final Data Set

Finally, the 632 participants were characterized by their demographic and educational data, as described above. The instrument data comprised 30 variables, giving a ratio of about 21:1 (21 subjects per variable), which was considered acceptable: regarding sample size, Costello and Osborne (2005) suggest a ratio of 10:1 as a minimum and recommend a ratio of 20:1 as optimal.
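The subjects-per-variable check amounts to one line of arithmetic; a minimal sketch:

```python
n_subjects, n_variables = 632, 30
ratio = n_subjects / n_variables  # about 21.07 subjects per variable
# Costello and Osborne (2005): 10:1 is the minimum, 20:1 the optimum
assert ratio >= 20
print(f"{ratio:.1f}:1")  # 21.1:1
```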

3.5. Data Analysis Methods

For all data analyses, IBM SPSS Statistics for Windows, Version 25, was used. Initially, descriptive statistics were used to describe demographic and educational data. Before starting exploratory factor analysis, the values of the bivariate correlation matrix of all items (inter-item correlations) were inspected. Items in pairs with bivariate correlations either lower than 0.30 or greater than 0.80 were to be considered candidates for removal, following Field (2013: pp. 685-686).
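The screening rule above can be sketched in a few lines of numpy; this is a from-scratch illustration, not the SPSS procedure:

```python
import numpy as np

def flag_item_pairs(data, low=0.30, high=0.80):
    """Flag item pairs whose bivariate correlation falls outside [low, high]
    in absolute value (Field, 2013).

    data: (n_respondents, n_items) array of Likert responses.
    Returns a list of (i, j, r) tuples for pairs to review.
    """
    r = np.corrcoef(data, rowvar=False)
    n_items = r.shape[1]
    flagged = []
    for i in range(n_items):
        for j in range(i + 1, n_items):
            if abs(r[i, j]) < low or abs(r[i, j]) > high:
                flagged.append((i, j, r[i, j]))
    return flagged
```

Flagged pairs are only candidates for removal; as in the present study, all items may still be carried into the EFA.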

Exploratory factor analysis (EFA) was performed (Yong & Pearce, 2013) to find the underlying factors that summarize the essential information contained in the variables (Beavers et al., 2013; Johnson & Wichern, 2007) and thus to reveal the factor structure of our Greek version of the inventory. Sampling adequacy for factor analysis was measured by the Kaiser-Meyer-Olkin (KMO) measure and Bartlett’s test of sphericity. Both tests aim to determine the factorability of the data as a whole (Johnson & Wichern, 2007). If the KMO measure is greater than 0.50 (Field, 2013) and Bartlett’s test of sphericity is large and significant, the data set can be assumed to be factorable.
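Both diagnostics can be computed from scratch with numpy and scipy. The sketch below follows the standard formulas (KMO from squared correlations versus squared partial correlations; Bartlett’s χ2 from the determinant of the correlation matrix) and is illustrative rather than the SPSS implementation:

```python
import numpy as np
from scipy.stats import chi2

def kmo_and_bartlett(data):
    """Overall KMO measure and Bartlett's test of sphericity for an
    (n_respondents, n_items) data matrix."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    # Partial correlations via the inverse of the correlation matrix
    Rinv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(Rinv), np.diag(Rinv)))
    partial = -Rinv / d
    off = ~np.eye(p, dtype=bool)
    # KMO = sum of squared correlations over (that sum + squared partials)
    kmo = np.sum(R[off] ** 2) / (np.sum(R[off] ** 2) + np.sum(partial[off] ** 2))
    # Bartlett: chi2 = -(n - 1 - (2p + 5)/6) * ln|R|, df = p(p - 1)/2
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return kmo, stat, df, chi2.sf(stat, df)
```

A KMO above 0.50 and a large, significant Bartlett statistic would, as in the study’s data, license the EFA.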

For the EFA, the factor extraction method chosen here was Principal Axis Factoring (PAF), and the rotation method was Promax with Kaiser normalization. PAF is proposed as best practice by many researchers (Costello & Osborne, 2005; Fabrigar, Wegener, MacCallum, & Strahan, 1999; Fabrigar & Wegener, 2012) because it estimates factor loadings more accurately. Promax rotation was chosen because it is an oblique rotation, appropriate when the factors are expected to be correlated (Costello & Osborne, 2005; Pedhazur & Schmelkin, 1991). Oblique rotation is more suitable than orthogonal rotation for research involving human behaviours: behaviour is rarely separated into aspects that operate independently of each other, so in the social sciences some correlation between the factors is generally expected.
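For illustration, the PAF extraction step can be reproduced in numpy as iterated principal-axis factoring on the reduced correlation matrix; the Promax rotation step is omitted for brevity, so this is a sketch of the extraction only, not the SPSS routine:

```python
import numpy as np

def principal_axis_factoring(R, n_factors, n_iter=100, tol=1e-6):
    """Iterated principal-axis factoring on a correlation matrix R.
    Returns the unrotated loading matrix and the final communalities."""
    R = np.asarray(R, dtype=float)
    # Initial communality estimates: squared multiple correlations
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    for _ in range(n_iter):
        Rr = R.copy()
        np.fill_diagonal(Rr, h2)  # reduced correlation matrix
        eigval, eigvec = np.linalg.eigh(Rr)
        idx = np.argsort(eigval)[::-1][:n_factors]
        L = eigvec[:, idx] * np.sqrt(np.maximum(eigval[idx], 0))
        h2_new = np.sum(L ** 2, axis=1)  # updated communalities
        converged = np.max(np.abs(h2_new - h2)) < tol
        h2 = h2_new
        if converged:
            break
    return L, h2
```

On a correlation matrix generated by a single factor with loadings of 0.8, the routine recovers those loadings and communalities of 0.64.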

The first stage of EFA was performed on all 30 items. Items with communalities less than 0.2 were to be removed and the EFA repeated. The target was to determine the proper number of factors. To optimize the number of factors, Gorsuch’s (1983) suggestion was followed of jointly evaluating the scree plot, the eigenvalues and the interpretability of factors, to avoid the excessive number of factors given when using Kaiser’s K1 rule alone (Ruiz & San Martín, 1992). The point where the steep curve of the scree plot gives way to a flat trend also suggests the number of factors. According to Kaiser’s criterion, factors with eigenvalues less than 1.0 are not considered statistically significant; only components with high eigenvalues (>1.0) are likely to represent a real underlying factor. Interpretability here relates to the strategies Greek students usually employ during their academic reading. In the Greek educational reality, students seem to focus on the text itself when studying it, without opting for extratextual support material to complement their understanding (Anagnostopoulou, Hatzinikita, & Christidou, 2010). Based on this observation, we attempted to explore an alternative, modified approach to the initial 3-factor structure of the MARSI scale (Mokhtari & Reichard, 2002).
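Kaiser’s K1 count is straightforward to compute from the eigenvalues of the item correlation matrix; a minimal numpy sketch:

```python
import numpy as np

def kaiser_k1_count(data):
    """Number of factors suggested by Kaiser's K1 rule: eigenvalues of the
    item correlation matrix greater than 1.0. Used here only jointly with
    the scree plot and interpretability, since K1 alone tends to suggest
    too many factors (Ruiz & San Martin, 1992)."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))
    eigvals = np.sort(eigvals)[::-1]  # descending, as in a scree plot
    return int(np.sum(eigvals > 1.0)), eigvals
```

The returned descending eigenvalues are exactly the values plotted in a scree plot, so the same call supports both criteria.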

The second stage of EFA followed, with the optimized number of two factors. The target was to observe the loadings of all items and detect possible cross-loadings, in order to identify specific items that might be rejected in the next steps of the exploration, and thus to trace an initial factor structure. The item loadings, read from the pattern matrix, were examined to determine which items constitute each factor, and the items identified by this first EFA as having weak loadings were excluded from further analysis. Field (2013) argued that the loading of each item should preferably exceed 0.30, while according to Tabachnick and Fidell (1996) loadings greater than 0.32 are adequate in socio-behavioural research. Items with significant cross-loadings (a secondary loading whose absolute value exceeds 75% of the primary loading) were to be removed and the EFA repeated. Successive EFA procedures were conducted as a refinement method, giving the final factorial structure. Across these successive EFAs, the percentage of total variance explained was estimated, and reliability and validity analyses were performed.
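The two screening rules (the loading cutoff and the 75% cross-loading ratio) can be expressed compactly; a sketch assuming a pattern matrix arranged as items by factors:

```python
import numpy as np

def screen_pattern_matrix(loadings, cutoff=0.35, cross_ratio=0.75):
    """Flag items for removal from a pattern matrix (items x factors):
    - weak: no loading reaches the cutoff in absolute value
      (0.35 here; cf. 0.30 in Field, 2013, 0.32 in Tabachnick & Fidell);
    - cross-loading: the second-largest absolute loading exceeds
      cross_ratio (75%) of the largest.
    Returns (weak_item_indices, cross_loading_item_indices).
    """
    L = np.abs(np.asarray(loadings, dtype=float))
    weak, cross = [], []
    for i, row in enumerate(L):
        top = np.sort(row)[::-1]
        if top[0] < cutoff:
            weak.append(i)
        elif top[1] / top[0] > cross_ratio:
            cross.append(i)
    return weak, cross
```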

The percentage of total variance explained was examined both per factor and cumulatively for the entire model. Each factor explains a percentage of the total variance of the data, while the cumulative percentage shows the amount of variance explained by the model as a whole. Factors that do not explain much variance might not be worth including in the final model.

Reliability analysis was conducted to assess the reliability of the inventory as a measurement tool and to check its quality from various aspects. EFA itself supports the reliability of a scale by identifying inappropriate items that can be removed, and clarifies the dimensionality of constructs by examining the relationships between items and factors (Netemeyer, Bearden, & Sharma, 2003). Cronbach’s alpha (α) coefficient was computed as an internal consistency estimate of reliability. Internal consistency of a construct implies that all the items of this construct measure the same concept (Cooper & Schindler, 2014; Nunnally & Bernstein, 1994); excellent internal consistency means that the construct’s items tend to pull together. In the literature, different cutoff points for the acceptable values of Cronbach’s alpha are considered (Blunch, 2008; DeVellis, 2012; Field, 2013; George & Mallery, 2003; Kline, 2000; Tavakol & Dennick, 2011). In general, α ≥ 0.7 is labelled satisfactory (gradually upwards from “acceptable” to “good” to “excellent”), while α < 0.7 is marked as problematic (gradually downwards from “questionable” to “poor” to “unacceptable”). Moreover, if the deletion of a particular item increases Cronbach’s alpha, that item is preferably omitted from the final structure in order to improve the reliability of the entire scale: any item whose alpha-if-deleted value is greater than the overall alpha may need to be deleted (Field, 2013: pp. 713-715). Here, Cronbach’s alpha was calculated for 1) the final scale, to ensure that all its items consistently reflect the measured construct, 2) the subscales, to ensure that all items in each of them pull together towards the subscale’s concept, and 3) the whole scale with each item deleted in turn.
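Cronbach’s alpha and the alpha-if-item-deleted diagnostic follow directly from the variance decomposition of the total score; a from-scratch numpy sketch:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix:
    alpha = k/(k - 1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def alpha_if_deleted(items):
    """Alpha of the scale with each item removed in turn; a value above
    the overall alpha marks an item whose deletion would improve the
    reliability of the entire scale (Field, 2013)."""
    items = np.asarray(items, dtype=float)
    return [cronbach_alpha(np.delete(items, j, axis=1))
            for j in range(items.shape[1])]
```

When every item measures exactly the same thing, alpha reaches its maximum of 1.0; items that pull in other directions drag it down.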

Additionally, to test reliability at the item level, item-to-total correlations (i.e. item to summated instrument score) were used. According to Field (2013), in a reliable scale all items should correlate with the total; items with correlations lower than 0.30 may have to be omitted because they do not correspond well with the scale overall. Correlations between factors were also considered, along with confidence intervals (95% CI) (Anderson & Gerbing, 1988). Where such correlations were high, the discriminant validity of the instrument was examined, that is, the degree to which the subscales (groups of strategies-items) indeed comprise discrete factors or whether the instrument is ultimately unifactorial. Distinctness of two factors is supported when the 95% CI of their correlation coefficient (±two standard errors around the correlation estimate) does not contain the value 1.0 (Anderson & Gerbing, 1988).
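Both item-level diagnostics can be sketched in numpy. The confidence-interval helper follows the ±2 standard errors heuristic of Anderson and Gerbing (1988), with the SE approximated here as (1 − r²)/√(n − 1); the exact bounds depend on the SE formula used:

```python
import numpy as np

def corrected_item_total(items):
    """Correlation of each item with the total of the *other* items;
    values below 0.30 suggest the item fits the scale poorly (Field, 2013)."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                     for j in range(items.shape[1])])

def factor_correlation_ci(r, n):
    """95% CI as r +/- 2 standard errors (Anderson & Gerbing, 1988).
    Two factors are distinct if the interval excludes 1.0."""
    se = (1 - r ** 2) / np.sqrt(n - 1)
    return r - 2 * se, r + 2 * se
```

For a factor correlation of 0.64 with n = 632, this approximation gives an interval of roughly [0.59, 0.69], which excludes 1.0 and thus supports the distinctness of the two factors.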

Reliability concerning selected demographic and educational variables was also examined by means of Cronbach’s alpha.

4. Results

Pearson’s correlations for all pairs of items (inter-item correlations) were computed. All of them were below 0.80; even though many were below 0.30, we preferred to include all items in the subsequent EFA. The KMO value (0.93, above the 0.50 threshold, with values in the 0.90s considered “marvelous”) indicated that the scale was suitable for exploratory factor analysis, and Bartlett’s test of sphericity denoted that the data were appropriate for factor analysis, χ2 (435, N = 632) = 6070.47, p < 0.001 (Field, 2013).

For the whole series of EFAs that followed, the factor extraction method was Principal Axis Factoring (PAF) and the rotation method was Promax with Kaiser normalization. In the first stage of EFA with all 30 items, all communality values were higher than 0.2 (item4 = 0.25; item14 = 0.26; item30 = 0.30; and all the rest in the range from 0.31 up to 0.57). This initial exploratory analysis revealed five factors with eigenvalues exceeding 1.0 in the unrotated matrix (Table 1), while Cattell’s scree plot (Cattell, 1966) quite clearly suggested a two-factor solution (Figure 1). By combining all criteria mentioned in the Data Analysis Methods section (eigenvalues, scree plot, interpretability of factors), a two-factor solution was chosen; that is, in the EFA stages that followed, two factors were retained.

In the first two-factor run, with all 30 items included, a cutoff of 0.35 for item loadings in the pattern matrix was chosen (Table 2). Items 10, 13, 14 and 21 had lower factor loadings and were omitted from the next two-factor run. This second two-factor run, on the remaining 26 items and with the same 0.35 cutoff, produced the final item loadings, presented in the same Table 2. It is worth mentioning that, up to this point, in none of the runs did any item load negatively or cross-load.

The final two-factor inventory comprised 26 items and was named MARSI-2fGR. The statements, presented in Table 3, are categorized differently than in the original inventory (Mokhtari & Reichard, 2002). The first factor was termed “textor” and the second “textout”, for reasons explained in detail in the Discussion section. Items’ factor loadings varied from 0.35 to 0.80. The first factor contains 14 items (1, 2, 3, 4, 5, 6, 8, 11, 12, 16, 19, 20, 22, 27) with loadings ranging from 0.35 to 0.79. The second factor represents a

Table 1. Five factors with eigenvalues exceeding 1.0 in the unrotated matrix as revealed by initial exploratory analysis.

Note. aExtraction Method: Principal Axis Factoring.

Table 2. Item factor loadings for both two-factor runs.

Note. NI = Not Included. aBoth rotations converged in 3 iterations. bEmpty cells correspond to loadings < 0.35 in the 1st two-factor run.

Table 3. Principal axis factoring results with Promax rotation for the two-factor instrument.

Note. M = Mean. SD = Standard Deviation. Rotation converged in 3 iterations. aRefers to the mean scores of N = 632 participants on individual items. bFor all items, p < 0.001 (statistically significant).

Figure 1. Cattell’s scree plot of data, clearly suggesting a two-factor solution.

set of 12 items (7, 9, 15, 17, 18, 23, 24, 25, 26, 28, 29, 30) with loadings ranging from 0.37 to 0.80. The same Table 3 also displays: 1) the grouping of the items into the two factors, 2) the reading strategies to which the items refer, 3) the means and standard deviations of items, 4) the corrected item-total correlations, and 5) Cronbach’s alpha if the item is deleted. Table 4 presents, for each factor, the eigenvalue, the percentage of total variance explained, and the cumulative variance explained.

Both factors of this instrument showed high reliability: Cronbach’s alpha was 0.859 for the textor subscale, 0.835 for the textout subscale, and 0.902 for the total inventory. Findings shown in Table 3 demonstrate that the corrected item-total correlations, ranging from 0.32 to 0.61, were all above 0.30, which is good for reliability (Field, 2013). The values in the last column of Table 3 reflect the change in Cronbach’s alpha if a particular item is deleted; these values ranged from 0.896 up to 0.902, and none of them exceeds 0.902, the Cronbach’s alpha of the whole scale. The first factor, “textor”, accounted for 27.76% of the total variance, and the second factor, “textout”, for 5.17% (Table 4). This two-factor structure explained cumulatively 32.93% of the variance in the data. The correlation between factor 1 (textor) and factor 2 (textout) was 0.64, 95% CI [0.59 - 0.68].

Estimations of Cronbach’s alpha reliability by referring to demographic and educational variables are presented in Table 5.

Table 4. Eigenvalue and percentage of total variance explained per factor.

Note. aExtraction Method: Principal Axis Factoring.

Table 5. Cronbach’s alpha reliabilities by selected demographic and educational variables.

Note. FI = Functionally illiterate. CEG = Compulsory education graduate. LG = Lyceum graduate. CG = College graduate. UG = University graduate. GY = Gymnasium. VL = Vocational Lyceum. GL = General Lyceum. 1GY = 1st grade of Gymnasium. 2GY = 2nd grade of Gymnasium. 3GY = 3rd grade of Gymnasium. 1LY = 1st grade of Lyceum (General & Vocational). 2LY = 2nd grade of Lyceum (General & Vocational). 3LY = 3rd grade of Lyceum (General & Vocational). SCS = Science Studies. ECS = Economic and Computer Studies. HLS = Humanities, Law and Social Sciences. an = 190.

As presented in Appendix B, for each participant the raw score of each subscale was calculated by summing the values of its items, and an overall (entire-scale) score was calculated by summing the raw scores of the two subscales. The mean score of each structure (subscales and entire scale) was its raw score divided by its number of items (14 for factor 1; 12 for factor 2; 26 for the whole scale). A categorization of these mean scores on any of the structures could be useful, as it relates to students’ reading habits. Based on the criteria outlined by Mokhtari and Reichard (2002), mean scores were categorized into three predetermined levels of strategy use for each subscale and for the whole scale: low (2.4 and less), medium (2.5 to 3.4), and high (3.5 and above).
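The scoring and banding described above can be sketched as:

```python
def mean_score(raw_score, n_items):
    """Mean score of a subscale or of the whole scale: raw sum divided by
    the item count (14 for textor, 12 for textout, 26 for MARSI-2fGR)."""
    return raw_score / n_items

def usage_level(mean):
    """Mokhtari and Reichard's (2002) strategy-use bands:
    low (2.4 and less), medium (2.5 to 3.4), high (3.5 and above).
    The mean is rounded to one decimal first, since the bands are
    defined at that precision."""
    m = round(mean, 1)
    if m <= 2.4:
        return "low"
    if m <= 3.4:
        return "medium"
    return "high"

# e.g. a textor raw score of 49 over its 14 items gives a mean of 3.5:
print(usage_level(mean_score(49, 14)))  # high
```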

5. Discussion

5.1. Study Population

For the sake of comparison, it should be noted that the study population in this research is quite different from that of the MARSI 1.0 (Mokhtari & Reichard, 2002). The differences concern the educational system (Greek vs American), size (N = 632 vs N = 443), demographic characteristics (e.g. residence, parents’ educational levels, etc.) and educational features (e.g. level, grade, orientation, computer literacy, level of foreign language knowledge, tutor’s help, etc.). On the other hand, the study of Koulianou et al. (2019) concerns the same (Greek) educational system, but with different research goals and a different size (N = 275), demographic characteristics (e.g. average age 13.50) and educational characteristics (e.g. only Gymnasium).

5.2. The Reasoning of Two-Factor Solution

Interpretability, along with the scree plot, played a pivotal role in determining the number of factors in the EFA procedure. The fact that the subscales of the MARSI-2fGR instrument were identified differently (2 instead of 3) can be attributed to the differences between the American and the Greek educational systems, aligned with the Greek curriculum and syllabus, which affect the way teachers and students are involved in the educational process (Anagnostopoulou et al., 2010; Anagnostopoulou, Hatzinikita, Christidou, & Dimopoulos, 2013; Bikos, 2018; Katsarou, 2009; Katsarou & Tsafos, 2009).

It is noticeable that a two-factor solution was also attempted by Mokhtari and Reichard (2002: p. 252) in their first exploratory analysis; still, they preferred the three-factor solution because the USA data provided more evidence of interpretability. Koulianou et al. (2019) chose to confirm the three-factor structure of the MARSI scale when they adapted the inventory for Greek students attending Gymnasium. Nevertheless, during our attempt to standardize the inventory on a broader sample of Greek students attending all levels of secondary education, we were confronted with data that convinced us to explore the two-factor solution as a more appropriate alternative tool.

The Greek national curriculum, despite its recent turn towards multiliteracy, is still constructed in such a way that it aims at homogeneity (Katsarou, 2009). This results in homogenized educational material for all school types, public and private, throughout the country (Anagnostopoulou et al., 2013; Katsarou & Tsafos, 2009). Notably, in secondary education the attainment of teaching is exam-controlled, gradually affecting the process of instruction, while schools have become “exam-tutoring centres” (Skourtou & Kourtis-Kazoullis, 2003: pp. 1329-1330). Based on existing research (Alahiotis & Karatzia-Stavlioti, 2006; Katsarou, 2009; Koutrouba, 2012), we could assume that this practice has broadly and decisively affected many parameters of both the teaching and the learning process (Anagnostopoulou et al., 2013: p. 45).

On one end of the spectrum, the issue of a single textbook as the primary source of information and instruction raised concerns (as cited in Bikos, 2018: p. 404) even before the changes imposed on the Greek curriculum in recent years, and it was not resolved by the last curriculum reforms (Anagnostopoulou et al., 2013). In this context, the teacher’s predesignated role is to act as a means of satisfying the teaching objectives stated by the curriculum. In that sense, teachers feel, if they are not in fact, obliged to instruct students according to the dictated single textbook (Bikos, 2018: p. 403). This practice also serves the purpose of explicitly and sufficiently preparing students for school exams, since both test items and assessment criteria are extracted from the content of the textbook; the desired result of creating high achievers is thereby accomplished (Anagnostopoulou et al., 2013: p. 641; Zisimopoulos et al., 2004).

On the other end of the spectrum, as students are enculturated throughout their school career towards becoming successful exam performers, they are urged to focus on the textbook rather than reach out to extra material. This practice turns students into mere listeners of the information taught from the book instead of seekers of supplementary material (Bikos, 2018; Bonidis, 2004), and it likewise serves the purpose of preparing them for examinations based on the content of the textbook (Anagnostopoulou et al., 2010, 2013).

Conclusively, the way the Greek educational system is organized, namely the predominant role of the single textbook per subject and the examination-centred curriculum, decisively determines school practices (Anagnostopoulou et al., 2013). This system urges students to opt for strategies that will ultimately lead them to the desired success. The two subscales revealed by our findings are related to the ways Greek students deal with the textbook.

5.3. Adopting the Terms “Textor”, “Textout”

The first factor (“textor” strategies; 14 items: 1, 2, 3, 4, 5, 6, 8, 11, 12, 16, 19, 20, 22, 27) concerns more straightforward reading strategies oriented toward the analysis of the text itself and was named after the phrase “text-oriented”. This factor includes statements referring to simple, mostly functional reading strategies, such as taking notes (item 2), previewing the text before reading it (item 4), reading aloud when the text becomes difficult (item 5), summarising to reflect on important information (item 6), and using typographical aids to identify key information (item 22). These functional reading strategies mostly refer to skilful navigation through the text that helps students construct meaning without seeking additional help from reference materials or other persons. Additionally, the students’ existing knowledge helps them unlock the deeper meaning of the text, which is represented by statements such as “I think about what I know to help me understand what I read” (item 3) or “I use context clues to help me better understand what I’m reading” (item 19). Therefore, the first factor refers to strategies oriented to the text itself and to the reader’s existing knowledge about the facts written in it.

The second factor (“textout” strategies; 12 items: 7, 9, 15, 17, 18, 23, 24, 25, 26, 28, 29, 30) relates to extratextual reading strategies, including supporting mechanisms outside the text. Strategies such as “using dictionaries” (item 15), “using the tables, figures and pictures that enrich the text” (item 17) and “discussing with others to check one’s understanding” (item 9) involve the use of outside reference materials that support reading. Additionally, the second factor includes some more profound and theoretical strategies, such as “I stop from time to time and think about what I’m reading” (item 18), “I critically analyze and evaluate the information presented in the text” (item 23), “I check my understanding when I come across conflicting information” (item 25), and “I check to see if my guesses about the text are right or wrong” (item 29). Thereby, the second factor is the outcome of evaluating, analyzing, checking and testing the textual information with procedures that use tangible or mental “textout” material.
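For illustration, the item-to-subscale assignment described above can be written as a simple scoring sketch. This is hypothetical code, not part of the published scoring materials; the item numbers are taken directly from the two factor listings, and the `responses` structure (a dict keyed by item number with 1-5 Likert ratings) is an assumption for the example.

```python
# Sketch of MARSI-2fGR subscale scoring (illustrative, not the authors' code).
# Item numbers per subscale are taken from the paper; items 10, 13, 14, 21 are omitted.

TEXTOR = [1, 2, 3, 4, 5, 6, 8, 11, 12, 16, 19, 20, 22, 27]   # 14 text-oriented items
TEXTOUT = [7, 9, 15, 17, 18, 23, 24, 25, 26, 28, 29, 30]     # 12 extratextual items

def subscale_scores(responses):
    """responses: dict {item_number: Likert rating 1-5} for the 26 retained items.

    Returns the mean per subscale and the overall mean over all 26 items,
    mirroring the averaging procedure of the scoring rubric in Appendix B.
    """
    textor = sum(responses[i] for i in TEXTOR) / len(TEXTOR)
    textout = sum(responses[i] for i in TEXTOUT) / len(TEXTOUT)
    overall = sum(responses[i] for i in TEXTOR + TEXTOUT) / 26
    return {"textor": textor, "textout": textout, "overall": overall}
```

A respondent who answers “3” on every item would, under this sketch, obtain a mean of 3.0 on both subscales and overall.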

5.4. Indices for the MARSI-2fGR Scale

The (corrected) item-total correlation values, shown in Table 3, indicate that the correspondence of all 26 items with the total scale is very good and that none of them should be dropped (Field, 2013). The values in the column labelled “Cronbach’s alpha if item deleted” in Table 3 indicate that deleting any item would not increase reliability, because all values are less than or equal to the overall reliability of 0.902 (Field, 2013).

Given the excellent reliability values for the total inventory (0.902) as well as for the subscales (0.859; 0.835), the two-factor structure provides a reasonably reliable measure of metacognitive awareness of reading strategies. The cumulative variance explained (32.93%) shows that the two components considerably reduce the complexity of the data set, at the cost of losing about 67% of the information.
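The reliability indices reported above follow standard formulas. As a minimal sketch of how Cronbach's alpha and a corrected item-total correlation are computed (illustrative code using standard definitions, not the authors' actual analysis scripts; responses are assumed to be arranged as one list per item):

```python
from statistics import pvariance, mean

def cronbach_alpha(items):
    """Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/variance of totals).

    items: list of k per-item response lists, each of length n_respondents.
    """
    k = len(items)
    totals = [sum(col) for col in zip(*items)]          # total score per respondent
    item_var = sum(pvariance(it) for it in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

def corrected_item_total(items, idx):
    """Pearson r between item idx and the total score with that item removed."""
    rest = [sum(col) - col[idx] for col in zip(*items)]  # rest-of-scale score
    x, y = items[idx], rest
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

With two perfectly parallel items, for example, this sketch yields an alpha of 1.0 and a corrected item-total correlation of 1.0, as the formulas require.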

5.5. Practical Implications

Our findings suggest that the adopted methodology could enhance learning outcomes in a number of courses targeting ages 12 - 24. Moreover, our approach can serve the needs of distance-learning courses, especially in the COVID-19 era, when teaching that requires the physical presence of students is not always possible.

5.6. Future Research

Confirmatory factor analysis is needed to verify our two-factor structure in the Greek educational environment (Brown, 2015). In the same context, it would be interesting to investigate parameters possibly affecting students’ preferences for reading strategies, in terms of textor and textout strategies, such as the family’s socioeconomic status, the parents’ educational level, the student’s literacy in foreign languages and computers, and other independent variables.

6. Conclusion

In this study, we examined the factorial structure of the Metacognitive Awareness of Reading Strategies Inventory, consisting of 30 items, appropriately translated into Greek and administered to a sample of 632 Greek secondary students. In the final structure, MARSI-2fGR, 26 items remained, while four items (10, 13, 14 and 21) were omitted. In contrast to the three factors of the original structure and their designation, two factors emerged here: the textor factor of 14 text-oriented strategies and the textout factor of 12 extratextual strategies. The whole scale, as well as its two subscales, was found reliable by means of Cronbach’s alpha, and various aspects of validity (language validity through face and content validity, and discriminant validity) were ensured.

Thus, the MARSI-2fGR structure can be used as a valid and reliable tool for measuring the metacognitive awareness of reading strategies employed by Greek-speaking students at all levels of secondary education. The two factors revealed by our findings reflect the way the Greek educational system is organized, which decisively determines school practices, namely the predominant role of the single textbook per subject and the examination-centred curriculum. Moreover, the two-factor structure is related to the ways Greek students deal with the textbook while reading academic or school-related material.

Funding/Financial Support

The authors have no funding to report.

Other Support/Acknowledgement

The authors are grateful to all school directors, teachers, parents, students and people that directly or indirectly participated and collaborated in this study.

Appendix A

Greek wording of MARSI 1.0 before the two-factor approach

ΚΛΙΜΑΚΑ ΜΕΤΑΓΝΩΣΙΑΚΗΣ ΕΝΗΜΕΡΟΤΗΤΑΣ ΣΤΡΑΤΗΓΙΚΩΝ ΑΝΑΓΝΩΣΗΣ

Ελληνική έκδοση: Α. Μαυρογιάννη ©2018

Appendix B

Scoring rubric for the Greek version of the two-factor approach MARSI-2fGR

It refers to the 26-item structure after the omission of the four items 10, 13, 14 and 21 from the initial MARSI 1.0 inventory.

ΚΛΙΜΑΚΑ ΜΕΤΑΓΝΩΣΙΑΚΗΣ ΕΝΗΜΕΡΟΤΗΤΑΣ ΣΤΡΑΤΗΓΙΚΩΝ ΑΝΑΓΝΩΣΗΣ

Κανόνας αξιολόγησης

Όνομα μαθητή: ……………….................................. Ηλικία: …… Ημερομηνία: ……………………..................................

Τάξη στο σχολείο: ……………..…………………………………………………....................................................................

1) Γράψτε την απάντησή σας σε κάθε δήλωση (δηλ. 1, 2, 3, 4 ή 5).

2) Αθροίστε τη βαθμολογία κάτω από κάθε στήλη. Τοποθετήστε το αποτέλεσμα στη γραμμή κάτω από κάθε στήλη.

3) Διαιρέστε την βαθμολογία με τον αριθμό των προτάσεων σε κάθε στήλη για να βρείτε το μέσο όρο για κάθε υποκλίμακα.

4) Υπολογίστε το μέσο όρο για όλο τον κατάλογο προσθέτοντας τη βαθμολογία από τις υποκλίμακες και διαιρώντας με το 26.

5) Συγκρίνετε τα αποτελέσματά σας με εκείνα που παρουσιάζονται παρακάτω.

6) Συζητήστε τα αποτελέσματα με τον καθηγητή ή τον εκπαιδευτή σας.

ΕΡΜΗΝΕΥΟΝΤΑΣ ΤΙΣ ΒΑΘΜΟΛΟΓΙΕΣ ΣΑΣ: Ο συνολικός μέσος όρος υποδεικνύει πόσο συχνά χρησιμοποιείτε στρατηγικές ανάγνωσης όταν διαβάζετε ακαδημαϊκό υλικό. Ο μέσος όρος για κάθε υποκλίμακα του καταλόγου αποκαλύπτει ποια ομάδα στρατηγικών (δηλ. “κειμενοκεντρικές” και “εξωκειμενικές” στρατηγικές) χρησιμοποιείτε περισσότερο όταν διαβάζετε. Με αυτές τις πληροφορίες γνωρίζετε αν η βαθμολογία σας είναι πολύ υψηλή ή πολύ χαμηλή σε κάποια από αυτές τις στρατηγικές. Σημειώστε, όμως, ότι η καλύτερη χρήση αυτών των στρατηγικών εξαρτάται από την ικανότητα κατανόησης της ελληνικής γλώσσας, το είδος του υλικού που διαβάζετε και το σκοπό για τον οποίο διαβάζετε. Μια χαμηλή βαθμολογία σε κάποια από τις υποκλίμακες ή σε μέρη της κλίμακας υποδεικνύει ότι ίσως υπάρχουν κάποιες στρατηγικές που ίσως θέλετε να μάθετε και σκέφτεστε να χρησιμοποιήσετε όταν διαβάζετε.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Alahiotis, S., & Karatzia-Stavlioti, E. (2006). Effective Curriculum Policy and Cross-Curricularity: Analysis of the New Curriculum Design of the Hellenic Pedagogical Institute. Pedagogy, Culture and Society, 14, 119-147.
https://doi.org/10.1080/14681360600738277
[2] Alexander, P. A., & Jetton, T. L. (2000). Learning from Text: A Multidimensional and Developmental Perspective. In M. Kamil, P. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of Reading Research (Vol. 3, pp. 285-310). Mahwah, NJ: Erlbaum.
https://psycnet.apa.org/record/2000-07600-005
[3] Anagnostopoulou, K., Hatzinikita, A., & Christidou, V. (2010). Assessed Students’ Competencies in the Greek School Framework and the PISA Survey. Review of Science, Mathematics & ICT Education, 4, 43-61.
[4] Anagnostopoulou, K., Hatzinikita, V., Christidou, V., & Dimopoulos, K. (2013). PISA Test Items and School-Based Examinations in Greece: Exploring the Relationship between Global and Local Assessment Discourses. International Journal of Science Education, 35, 636-662.
https://doi.org/10.1080/09500693.2011.604801
[5] Anderson, J. C., & Gerbing, D. W. (1988). Structural Equation Modelling in Practice: A Review and Recommended Two-Step Approach. Psychological Bulletin, 103, 411-423.
https://doi.org/10.1037/0033-2909.103.3.411
[6] Anderson, N. (2003). Scrolling, Clicking, and Reading English: Online Reading Strategies in a Second/Foreign Language. Reading Matrix: An International Online Journal, 3, 1-33.
http://www.readingmatrix.com
[7] Azevedo, R. (2020). Reflections on the Field of Metacognition: Issues, Challenges, and Opportunities. Metacognition and Learning, 15, 91-98.
https://doi.org/10.1007/s11409-020-09231-x
[8] Beaton, D. E., Bombardier, C., Guillemin, F., & Ferraz, M. B. (2000). Guidelines for the Process of Cross Cultural Adaptation of Self-Report Measures. Spine, 25, 3186-3191.
https://journals.lww.com/spinejournal/toc/2000/12150
https://doi.org/10.1097/00007632-200012150-00014
[9] Beavers, A., Lounsbury, J., Richards, J., Huck, S., Skolits, G., & Esquivel, S. (2013). Practical Considerations for Using Exploratory Factor Analysis in Educational Research. Practical Assessment, Research & Evaluation, 18, 1-13.
[10] Bikos, G. (2018). The Educational Outcomes of the Relationship between Schoolbooks and Teachers. International Journal of Innovation and Research in Educational Sciences, 5, 399-405.
https://www.ijires.org
[11] Bimmel, P. (2001). Effects of Reading Strategy Instruction in Secondary Education—A Review of Intervention Studies. L1-Educational Studies in Language and Literature, 1, 273-298.
https://doi.org/10.1023/A:1013860727487
[12] Blunch, N. J. (2008). Introduction to Structural Equation Modelling Using SPSS and AMOS. Thousand Oaks, CA: Sage Publications Ltd.
https://doi.org/10.4135/9781446249345
[13] Bonidis, K. (2004). The Content of the School Textbook as the Object of Research: Longitudinal Examination of the Related Research and Methodological Approaches. Athens: Metaixmio. (In Greek)
[14] Brown, A. L. (1980). Metacognitive Development and Reading. In R. J. Spiro, B. B. Bruce, & W. F. Brewer (Eds.), Theoretical Issues in Reading Comprehension (pp. 453-481). Hillsdale, NJ: Lawrence Erlbaum.
[15] Brown, A. L. (1987). Metacognition, Executive Control, Self-Regulation, and Other More Mysterious Mechanisms. In F. E. Weinert, & R. H. Kluwe, (Eds.), Metacognition, Motivation, and Understanding (pp. 65-116). Hillsdale, NJ: Lawrence Erlbaum.
[16] Brown, A. L., Armbruster, B. B., & Baker, L. (1986). The Role of Metacognition in Reading and Studying. In J. Orasanu (Ed.), Reading Comprehension: From Research to Practice (pp. 49-76). Hillsdale, NJ: Erlbaum.
[17] Brown, T. A. (2015). Methodology in the Social Sciences. Confirmatory Factor Analysis for Applied Research (2nd ed.). New York: The Guilford Press.
[18] Cattell, R. B. (1966). The Scree Test for the Number of Factors. Multivariate Behavioral Research, 1, 245-276.
https://doi.org/10.1207/s15327906mbr0102_10
[19] Cooper, D. R., & Schindler, P. S. (2014). Business Research Methods (12th ed.). New York: McGraw-Hill/Irwin.
[20] Costello, A. B., & Osborne, J. W. (2005). Best Practices in Exploratory Factor Analysis: Four Recommendations for Getting the Most from Your Analysis. Practical Assessment, Research & Evaluation, 10, 1-9.
https://scholarworks.umass.edu/pare
[21] DeVellis, R. F. (2012). Scale Development: Theory and Applications. Los Angeles, CA: Sage.
[22] Drost, E. (2011). Validity and Reliability in Social Science Research. Education Research and Perspectives, 38, 105-123.
https://www.erpjournal.net
[23] Efklides, A. (2008). Metacognition: Defining Its Facets and Levels of Functioning in Relation to Self-Regulation and Co-Regulation. European Psychologist, 13, 277-287.
https://doi.org/10.1027/1016-9040.13.4.277
[24] Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the Use of Exploratory Factor Analysis in Psychological Research. Psychological Methods, 4, 272-299.
https://doi.org/10.1037/1082-989X.4.3.272
[25] Fabrigar, L., & Wegener, D. (2012). Exploratory Factor Analysis. New York: Oxford University Press.
https://doi.org/10.1093/acprof:osobl/9780199734177.001.0001
[26] Field, A. (2013). Discovering Statistics Using IBM SPSS Statistics (4th ed.). London: Sage Publications Ltd.
[27] Flavell, J. H. (1979). Metacognition and Cognitive Monitoring: A New Area in Cognitive Developmental Inquiry. American Psychologist, 34, 906-911.
https://doi.org/10.1037/0003-066X.34.10.906
[28] Flavell, J. H., Miller, P. H., & Miller, S. A. (2002). Cognitive Development (4th ed.). Upper Saddle River, NJ: Pearson Education Inc.
[29] George, D., & Mallery, P. (2003). SPSS for Windows Step by Step: A Simple Guide and Reference (11.0 Update, 4th ed.). Boston, MA: Allyn & Bacon.
[30] Gorsuch, R. (1983). Factor Analysis. Newark, NJ: Erlbaum.
[31] International Test Commission (2017). The ITC Guidelines for Translating and Adapting Tests (2nd ed.).
http://www.InTestCom.org
[32] Johnson, R., & Wichern, D. (2007). Applied Multivariate Statistical Analysis (6th ed.). London: Pearson Education Publications.
[33] Katsarou, E. (2009). A Multiliteracy Intervention in a Contemporary “Mono-Literacy” School in Greece. International Journal of Learning 16, 55-65.
https://cgscholar.com/bookstore/cgrn/242/249
https://doi.org/10.18848/1447-9494/CGP/v16i05/46285
[34] Katsarou, E., & Tsafos, V. (2009). Students’ Subjectivities vs. Dominant Discourses in Greek L1 Curriculum. International Journal of Learning, 16, 35-46.
https://cgscholar.com/bookstore/cgrn/242/249
https://doi.org/10.18848/1447-9494/CGP/v16i11/46706
[35] Kline, P. (2000). The Handbook of Psychological Testing (2nd ed.). London: Routledge.
[36] Koulianou, M., Roussos, P., & Samartzi, S. (2019). Metacognitive Reading Strategies: Greek Adaptation of the MARSI-(GR) and a Comparative Study of Adolescent Students with and without Special Learning Difficulties. Psychology, 24, 138-156.
https://elpse.com/JournalPsychology/index_el.html
[37] Koutrouba, K. (2012). A Profile of the Effective Teacher: Greek Secondary Education Teachers’ Perceptions. European Journal of Teacher Education, 35, 359-374.
https://doi.org/10.1080/02619768.2011.654332
[38] Kuhn, D. (2000). Metacognitive Development. Current Directions in Psychological Science, 9, 178-181.
https://doi.org/10.1111/1467-8721.00088
[39] Lawshe, C. H. (1975). A Quantitative Approach to Content Validity. Personnel Psychology, 28, 563-575.
https://onlinelibrary.wiley.com/journal/17446570
https://doi.org/10.1111/j.1744-6570.1975.tb01393.x
[40] Leu, D. J., Zawilinski, L., Castek, J., Banerjee, M., Housand, B., Liu, Y., & O’Neil, M. (2008). What Is New about the New Literacies of Online Reading Comprehension? In L. Rush, J. Eakle, & A. Berger (Eds.), Secondary School Literacy: What Research Reveals for Classroom Practices (pp. 3-68). Urbana, IL: National Council of Teachers of English.
[41] Livingston, J. (1997). Metacognition: An Overview. State University of New York at Buffalo.
[42] Mavrogianni, A., Vasilaki, E., Spantidakis, I., Papadaki-Michailidi, E., & Linardakis, M. (2018). Adaptation to the Greek Population of the Metacognitive Awareness of Reading Strategies Inventory (MARSI) Version 1.0. In the 5th Hellenic Cognitive Science Society Conference (pp. 57-58). Leukes, Paros: Hellenic Cognitive Science Society.
http://helleniccognitivesciencesociety.gr/documents/BoA_CogSciGr2018.pdf
[43] Mokhtari, K., & Reichard, C. A. (2002). Assessing Students’ Metacognitive Awareness of Reading Strategies. Journal of Educational Psychology, 94, 249-259.
https://www.apa.org/pubs/journals/edu
https://doi.org/10.1037/0022-0663.94.2.249
[44] Mokhtari, K., & Sheory, R. (2002). Measuring ESL Students’ Awareness of Reading Strategies. Journal of Developmental Education, 25, 2-10.
https://www.jstor.org/journal/jdeveeduc
[45] Mokhtari, K., Dimitrov, D. M., & Reichard, C. A. (2018). Revising the Metacognitive Awareness of Reading Strategies Inventory (MARSI) and Testing for Factorial Invariance. Studies in Second Language Learning and Teaching, 8, 219-246.
http://oaji.net/journal-detail.html?number=6354
https://doi.org/10.14746/ssllt.2018.8.2.3
[46] Netemeyer, R. G., Bearden, W. O., & Sharma, S. (2003). Scaling Procedures: Issues and Applications. Thousand Oaks, CA: Sage Publications.
https://doi.org/10.4135/9781412985772
[47] Ngo, N. (2019). Understanding the Impact of Listening Strategy Instruction on Listening Strategy Use from a Socio-Cultural Perspective. System, 81, 63-77.
https://www.journals.elsevier.com/system
https://doi.org/10.1016/j.system.2019.01.002
[48] Nunnally, J., & Bernstein, I. (1994). Psychometric Theory (3rd ed.). New York: McGraw-Hill.
[49] Pagano, R. (2009). Understanding Statistics in the Behavioral Sciences. Belmont, CA: Wadsworth Cengage Learning.
[50] Pedhazur, E. J., & Schmelkin, L. (1991). Measurement, Design, and Analysis: An Integrated Approach. Hillsdale, NJ: Lawrence Erlbaum.
[51] Pinninti, L. (2019). Criteria for Qualitative Evaluation of Strategy Training. Electronic Journal of Foreign Language Teaching, 16, 185-195.
https://e-flt.nus.edu.sg
[52] Plonsky, L. (2011). The Effectiveness of Second Language Strategy Instruction: A Meta-Analysis. Language Learning, 61, 993-1038.
https://doi.org/10.1111/j.1467-9922.2011.00663.x
[53] Pressley, M. (2005). Metacognition in Literacy Learning: Then, Now, and in the Future. In I. E. Israel, C. C. Block, K. L. Bauserman, & K. Kinnucan-Welsch (Eds.), Metacognition in Literacy Learning: Theory, Assessment, Instruction, and Professional Development (pp. 391-411). New Jersey: Lawrence Erlbaum Associates.
[54] Rhodes, M. G. (2019). Metacognition. Teaching of Psychology, 46, 168-175.
https://doi.org/10.1177/0098628319834381
[55] Rubin, J., Chamot, A. U., Harris, V., & Anderson, N. J. (2007). Intervening in the Use of Strategies. In A. Cohen, & E. Macaro (Eds.), Language Learner Strategies: Thirty Years of Research and Practice (pp. 141-160). Oxford: Oxford University Press.
[56] Ruiz, M. A., & San Martín, R. (1992). The Behavior of the K1 Rule Estimating the Number of Factors: A Study with Simulated Data. Psicothema, 4, 543-550.
http://www.psicothema.com/english/presentation.asp
[57] Siegesmund, A. (2016). Increasing Student Metacognition and Learning through Classroom-Based Learning Communities and Self-Assessment. Journal of Microbiology and Biology Education, 17, 204-214.
https://doi.org/10.1128/jmbe.v17i2.954
[59] Skourtou, E., & Kourtis-Kazoullis, V. (2003). The Step from Traditional Pedagogy to Transformative. The International Journal of the Humanities, 1, 1329-1330.
https://theijhss.com
[60] Stalikas, A., Triliva, S., & Roussi, P. (2012). The Psychometric Tools in Greece. Athens: Pedio. (In Greek)
[61] Tabachnick, B. G., & Fidell, L. S. (1996). Using Multivariate Statistics (3rd ed.). New York: Harper Collins.
[62] Tavakol, M., & Dennick, R. (2011). Making Sense of Cronbach’s Alpha. International Journal of Medical Education, 2, 53-55.
https://doi.org/10.5116/ijme.4dfb.8dfd
[63] Van Campenhout R. (2020). Supporting Metacognitive Learning Strategies through an Adaptive Application. In R. Sottilare, & J. Schwarz (Eds.), Adaptive Instructional Systems. HCII 2020 (pp. 218-227). Lecture Notes in Computer Science, Vol. 12214, Cham: Springer.
https://doi.org/10.1007/978-3-030-50788-6_16
[64] van Widenfelt, B. M., Treffers, P. D. A., de Beurs, E., Siebelink, B. M., & Koudijs, E. (2005). Translation and Cross-Cultural Adaptation of Assessment Instruments Used in Psychological Research with Children and Families. Clinical Child and Family Psychology Review, 8, 135-147.
https://doi.org/10.1007/s10567-005-4752-1
[65] Vanderrgrift, L. (2003). Orchestrating Strategy Use: Toward a Model of the Skilled Second Language Listener. Language Learning, 53, 463-496.
https://doi.org/10.1111/1467-9922.00232
[66] Veenman, M. V. J., Van Hout-Wolters, B. H. A. M., & Afflerbach, P. (2006). Metacognition and Learning: Conceptual and Methodological Considerations. Metacognition and Learning, 1, 3-14.
https://doi.org/10.1007/s11409-006-6893-0
[67] Yong, A. G., & Pearce, S. (2013). A Beginner’s Guide to Factor Analysis: Focusing on Exploratory Factor Analysis. Tutorials in Quantitative Method for Psychology, 9, 79-94.
https://doi.org/10.20982/tqmp.09.2.p079
[68] Zhang, Y., & Francis, A. (2010). The Weighting of Vowel Quality in Native and Non-Native Listeners’ Perception of English Lexical Stress. Journal of Phonetics, 38, 260-271.
https://doi.org/10.1016/j.wocn.2009.11.002
[69] Zisimopoulos, G., Kafetzopoulos, K., Moutzouri-Manousou, E., & Papastamatiou, N. (2004). Science Education Topics. Athens: Patakis.

Copyright © 2021 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.