
Assessment Practices of Teachers in Selected Primary and Secondary Schools in Jamaica

DOI: 10.4236/oalib.1105038

ABSTRACT

The purpose of the study was to determine if there were differences in the perceptions of primary and secondary school teachers’ classroom assessment practices in region one in Jamaica. Seven research questions guided the study. An analytic survey research design was used to answer the research questions. A stratified random sampling method was used to select 225 teachers; however, a total of 157 (64 primary & 93 secondary) school teachers participated in the study. Data were collected through a web-based questionnaire with an overall reliability coefficient of 0.749. The findings showed that closed-book tests, portfolios, multiple-choice items, short-answer items, restricted essays, and fill-in-the-blanks were popularly used by the teachers. The findings also showed that there were significant differences among the teachers on the following: arranging test items according to type and writing specific instructions, informing students about the areas that will be assessed, administration of assessments, grading assessments, explaining how scores were derived, giving students the opportunity to appeal their grades, and using the results of statistical analyses to improve assessment practices. Based on these findings, recommendations were made on how to improve teachers’ assessment practices.

1. Introduction

In the undergraduate Bachelor of Education programmes in the teacher training colleges and universities in Jamaica, students take at least one course on classroom assessment. This introductory course exposes students to the types of classroom assessments; skills and techniques for developing appropriate, reliable, and valid classroom assessment instruments; test administration; grading; and analysing, interpreting, and reporting assessment data. After this initial training, many teachers participate in different professional development courses, including workshops on classroom assessment designed to improve their assessment practices. In a study by Chew and Lee [1] investigating facilitators’ beliefs and practices of classroom assessment, the findings showed that support from the school leadership, availability of assessment-related training and resources, and accountability to industry partners are important factors affecting the facilitators’ classroom assessment practices. The assumption is that, having taken at least one course in classroom assessment in their college years, teachers are expected to develop effective classroom assessment instruments and techniques to be used in assessing their students.

1.1. Statement of the Problem

Despite the initial and, probably, continued professional development opportunities, classroom assessment practices remain a concern for the staff of the Ministry of Education, Youth and Information (MOEYI). For instance, during the 2013 Jamaica Teachers’ Association conference, the Permanent Secretary in the Ministry of Education, Elaine Foster Allen, was quoted as saying “of a sample of 135 schools inspected across the island, assessment was rated as poor among 42 percent of them” [2] (n.p.). Among other things, Foster Allen also stated that “assessment practices were found to be weak, irregular and inconsistent; hence most students were not stimulated to learn in these classrooms” [2] (n.p.). Since these comments, teachers’ classroom assessment practices have continued to be a problem, as several heads of learning institutions continue to request assessment workshops (personal communications, October 28, 2016). There is, therefore, a need to investigate the perceptions of teachers’ assessment practices in selected schools in Jamaica.

1.2. Purpose of the Study

The main purpose of the study was to determine if teachers differ by school type in their perceptions of classroom assessment practices in Jamaica.

1.3. Research Questions

The overarching question is: do the primary and secondary school teachers’ perceptions of assessment practices differ? From this broad question, the following research questions were raised:

1) What types of classroom assessment methods and test items are used by the teachers?

2) To what extent do teachers’ perceptions of preparation for assessment differ?

3) To what extent do teachers’ perceptions of administering assessment differ?

4) To what extent do teachers’ perceptions of grading assessment differ?

5) To what extent do teachers’ perceptions of providing assessment feedback differ?

6) To what extent do teachers’ perceptions of fairness in assessment differ?

7) To what extent do teachers’ perceptions of analysing assessment data differ?

1.4. Operational Definition of Terms

In this study, the term assessment method is defined as the strategies used by teachers to collect information on students’ achievement. These methods include open-book tests, closed-book tests, cooperative testing, take-home tests, collaborative or negotiated tests, portfolios, peer and self-assessments, etc. This is measured by a web-based questionnaire designed to collect data on the frequency of use of these strategies by the teachers.

In this study, the term classroom assessment practice is defined as the determination of the extent of students’ performance by the teachers through a range of activities starting from the preparation of the assessment instrument to the analysis of the results of the assessment. This is measured by a web-based questionnaire designed to collect data on preparation for assessment, administration of the assessment, scoring of assessment, providing feedback, fairness, and analysing students’ performance.

2. Literature Review

Classroom assessment is defined as “the collection, interpretation, and use of information to help teachers make better decisions” [3] (p. 8). Teachers can use both formative and summative assessment during the instructional process, depending on the purpose the assessment is designed to serve [4] [5]. RestiMartanti [6] listed examples of methods used during formative assessment as “conducting an observation during classroom activities, homework exercises, reflection journals, question and answer sessions, conferences, in-class activities when students informally present their results, and student feedback” (p. 58). Examples of summative assessment, as stated by RestiMartanti [6], are “examinations, final examination, term-papers, projects, portfolios, performances, student evaluation, and instructor self-evaluation” (p. 59). Regardless of the type of assessment, classroom teachers are required to conduct an evaluation of their students’ performance.

There are numerous textbooks on classroom assessment written by experts such as Chappuis and Stiggins [7], Miller, Linn, and Gronlund [8], Nitko and Brookhart [9], Popham [10], Peacock [11], Reynolds, Livingston, and Willson [12], Russell and Airasian [13], and Waugh and Gronlund [14], just to name a few. There are also published online guidelines and standards on assessment developed by organisations such as the National Council on Measurement in Education (NCME), the Joint Committee on Testing Practices (JCTP), and the Joint Committee on Standards for Educational Evaluation (JCSEE). These resources were designed to be used by persons with the responsibility of developing assessment instruments as well as using them to assess student performance.

Apart from these textbooks and guidelines on assessment, there are numerous articles on classroom assessment practices. For instance, Benzehaf [15] used an exploratory design to explore the assessment practices and skills of 40 high school teachers in El Jadida, Morocco. The findings showed that the teachers used a variety of assessment strategies. In a doctoral thesis, Calveric [16] investigated fifth-grade teachers’ assessment beliefs and practices in Virginia Commonwealth. The findings showed that although the teachers had limited exposure to assessment training, this did not prevent them from holding assessment beliefs. Han and Kaya [17] examined Turkish EFL teachers’ assessment preferences and practices. The findings revealed that the assessment methods used were based on the teachers’ preferences. In Botswana, Koloi-Keaikitse [18] examined primary and secondary school teachers’ assessment practices by examining their skills and beliefs about assessment. Similarly, Zhang and Burry-Stock [19], in the US, investigated teachers’ assessment practices and self-perceived assessment skills and concluded that teachers with assessment training had a higher level of self-perceived assessment skills.

As it pertains to the assessment practices of teachers in particular subjects such as English and Mathematics: in a doctoral thesis, Susuwele-Banda [20], using the naturalistic inquiry approach, investigated mathematics teachers’ perceived classroom assessment practices. The findings showed that the teachers did not use a variety of assessment methods. On the contrary, in Japan, Wicking [21] examined English teachers’ beliefs and assessment practices and concluded that the teachers used a variety of assessments, including teacher-made paper-and-pencil tests as well as items from the textbooks. Yang [22] investigated the factors affecting English teachers’ use of multiple classroom assessment practices among young language learners in Taiwanese elementary schools. The findings showed that the teachers’ “perceived assessment competency (self-efficacy) teacher beliefs about the pedagogical benefits of assessment, and teacher education are significantly positively correlated with teachers’ assessment practices, while teacher beliefs about the difficulty of implementing assessment is negatively correlated with teachers’ practices” [22] (p. 85).

The researcher could not locate a study in the existing literature that addressed Jamaican teachers’ classroom assessment practices. This study attempts to shed light on this issue by investigating the perceptions of teachers in selected Jamaican public primary and secondary schools of their classroom assessment practices.

3. Methods

3.1. Design and Sampling

An analytic survey research design was used for the study to examine teachers’ classroom assessment practices in Kingston and St. Andrew (region 1) in Jamaica. In Jamaica, public schools are classified under six regions, which represent the 14 parishes. This was done to enhance the effectiveness of the functions of the Ministry of Education, Youth and Information (MoEYI) [23]. In this study, due to budgetary constraints, only the public primary and secondary school teachers in Kingston and St. Andrew were targeted, given their proximity to the researcher; these are also the two parishes with the largest number of teachers. A stratified random sampling method was used to select the teachers (n = 225). This was done by selecting five percent of the teachers in the public primary schools (n = 79) and five percent of the teachers in the secondary schools (n = 146) in Kingston and St. Andrew (Table 1). The MoEYI Jamaica School Profiles [24] was used as the sampling frame. Privately owned schools were excluded from the sampling.

As shown in Table 1, the actual number of participants was 157 (n = 157); therefore, the total response rate was 69.8%. According to Wiersma [25], “when surveying a professional population, 70 percent is considered a minimum response rate” (p. 176). The summary of the demographic characteristics of the participants is presented in Table 2.
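The sampling and response-rate arithmetic above can be verified with a short sketch (written in Python for illustration; the stratum counts are taken from Table 1):

```python
# Stratum sample sizes (5% per stratum, from Table 1) and actual participants
selected = {"primary": 79, "secondary": 146}
participated = {"primary": 64, "secondary": 93}

total_selected = sum(selected.values())          # 225 teachers selected
total_participated = sum(participated.values())  # 157 teachers responded
response_rate = 100 * total_participated / total_selected

print(f"Response rate: {response_rate:.1f}%")  # 69.8%
```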

As shown in Table 2, a majority of the participants were female teachers. Approximately 36% of the participants were 30 years old and under, and a majority (52.9%) had attained a Bachelor’s degree in education. Approximately 41% of the teachers had been teaching for five years and under, and a majority (n = 93) of the participants were teaching at the secondary school level. It should be noted that the participating teachers at the primary school level taught a variety of subjects, including Mathematics and English Language, to pupils in grades one to six. The participating teachers at the secondary school level taught subjects such as Information Technology, Integrated Science, Mathematics, English, Office Administration, Principles of Accounts, Principles of Business, Home Economics, Industrial Technology, Technical Drawing, Social Studies, etc., to students in grades seven to eleven.

3.2. Data Collection

A web-based questionnaire developed by the researcher was used to collect data from the teachers. The items in the questionnaire were developed from assessment

Table 1. Number of teachers in public primary & secondary schools in Kingston & St. Andrew.

Source: Ministry of Education, Youth, & Information. Jamaica School Profiles (2015-2016).

Table 2. Demographic characteristics of study participants.

textbooks as well as from the researcher’s experience as a lecturer in classroom assessment. In order to ensure that the questionnaire was properly designed, the five basic steps outlined by Spector [26] were followed. First, the construct of interest, in this case classroom assessment practices, was defined. Second, the response format was decided based on the nature of the items; the idea was for the responses to be quantifiable. Third, the questionnaire was piloted with 15 primary and 17 secondary school teachers in Kingston and St. Andrew. These teachers were not used in the main study; however, they were asked to complete the questionnaire, provide comments on the clarity of the items, and determine if there was item bias. Fourth, item analysis was conducted, which yielded an internal consistency of 0.70. Fifth, the questionnaire was validated by experts. See the reliability and validity section for more details.

The first section of the questionnaire contained seven closed-ended items that measured the background of the participants. The second section contained two closed-ended items that measured the assessment methods and types of items used by the teachers. The third section had 34 Likert-type items on classroom assessment practices. These included items on preparing assessment (items 1 - 16), test administration (items 17 - 19), grading (items 20 - 24), feedback (items 25 - 28), fairness (items 29 - 31), and analysis (items 32 - 34). These items were adapted from the Joint Committee on Standards for Educational Evaluation [27], Kubiszyn and Borich [28], Miller, Linn, and Gronlund [8], Nitko [29], Nitko and Brookhart [9], Reynolds, Livingston, and Willson [12], and the guidelines summarized by Reynolds, Livingston, and Willson [12] from the Code of Professional Responsibilities in Educational Measurement by the National Council on Measurement in Education (NCME, 1995), the Code of Fair Testing Practices in Education (CFTPE, 1988), and the Rights and Responsibilities of Test Takers: Guidelines and Expectations (1998) by the Joint Committee on Testing Practices (JCTP). These items had a four-point response format of Almost Never (1), Sometimes (2), Often (3), and Almost Always (4), with an additional Not Applicable (0) option. See a copy of the questionnaire in the Appendix.

3.3. Reliability and Validity

Cronbach’s alpha was used to determine the reliability of the web-based questionnaire. An overall reliability coefficient of 0.749 was obtained. Several authors ( [30] [31] [32] [33] [34] ) have recommended different minimum alpha levels, ranging from 0.70 to 0.95. For instance, Bastick and Matalon [32] recommended 0.75 as the minimum acceptable alpha level, while Nunnally and Bernstein [33], supported by Streiner [34], suggested a minimum of 0.70. In all cases, the alpha value obtained in this study was close to the recommended values.
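For readers who wish to reproduce a Cronbach’s alpha calculation outside SPSS, a minimal sketch is shown below; the respondent-by-item score matrix is illustrative only, not the study’s data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative responses on a 1-4 scale: 5 respondents, 4 items
scores = np.array([[3, 4, 3, 4],
                   [2, 2, 3, 2],
                   [4, 4, 4, 3],
                   [1, 2, 1, 2],
                   [3, 3, 2, 3]])
print(round(cronbach_alpha(scores), 3))
```

A coefficient computed this way can then be compared against the 0.70 - 0.75 minimums cited above.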

Content validity was ascertained through expert judgment. According to Thorn and Deitz [35], the use of content experts is a practical approach to content validation. Three experts were asked to review the questionnaire for clarity, accuracy, and relevance of the content as it pertains to classroom assessment practices. These three persons are knowledgeable in the areas of classroom assessment, educational psychology, and educational research. The percentage of agreement among the experts, calculated from the comments provided, was high (83%). However, the experts suggested improving the sentence structure of a few items in sections one and three of the questionnaire.
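A percent-agreement calculation of the kind described can be sketched as follows; the three experts’ ratings here are hypothetical, since the ratings behind the reported 83% figure are not published:

```python
# Hypothetical expert ratings (1 = item judged clear/relevant, 0 = not)
expert_ratings = [
    [1, 1, 1, 0, 1, 1],  # expert A, six items
    [1, 1, 1, 1, 1, 0],  # expert B
    [1, 1, 1, 0, 1, 1],  # expert C
]

n_items = len(expert_ratings[0])
# An item counts as agreed when all experts gave it the same rating
unanimous = sum(1 for item in zip(*expert_ratings) if len(set(item)) == 1)
agreement = 100 * unanimous / n_items
print(f"Percent agreement: {agreement:.0f}%")
```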

3.4. Ethical Issues

Although this research falls under the category of studies that are exempt from Institutional Review Board (IRB) review [36], all ethical considerations pertaining to studies of this nature were observed. These included obtaining permission from the Ministry of Education, Youth and Information (MOEYI) in 2016 to distribute the web-based questionnaire to the teachers in the public primary and secondary schools in St. Andrew through the existing Ministry network; taking precautions to protect the privacy of participants and the confidentiality of their personal information; obtaining informed consent before participation; and ensuring that the participants had the right to withdraw without penalty ( [36] [37] ).

3.5. Data Analysis

As indicated earlier, a total of 225 participants were selected; however, 157 teachers eventually participated, resulting in a 69.8% response rate. The researcher used all 157 questionnaires in the analyses. The responses from the web-based questionnaire were downloaded from Google as a Microsoft Excel sheet and imported into the Statistical Package for the Social Sciences (SPSS) program. The data were coded and vetted before the statistical analyses were conducted using descriptive statistics (mean, standard deviation, frequency, percent, & cross-tabulation) as well as inferential statistics (independent samples t-test). The independent samples t-test was used to determine if there were differences between the primary and secondary school teachers’ perceptions of assessment practices. The analysis was conducted at the item level at a significance level of 0.05. The results of Levene’s test for equality of variances were greater than 0.05, supporting the use of the t-test with equal variances assumed.
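The item-level procedure described (Levene’s test for equality of variances followed by an independent samples t-test at the 0.05 level) can be sketched with SciPy; the scores below are randomly generated stand-ins, not the study’s data:

```python
import numpy as np
from scipy import stats

# Randomly generated stand-in scores on the 1-4 response scale
rng = np.random.default_rng(42)
primary = rng.integers(1, 5, size=64)    # 64 primary school teachers
secondary = rng.integers(1, 5, size=93)  # 93 secondary school teachers

# Levene's test: p > 0.05 means equal variances may be assumed
levene_stat, levene_p = stats.levene(primary, secondary)

# Independent samples t-test; equal_var follows Levene's result
t_stat, p_value = stats.ttest_ind(primary, secondary, equal_var=levene_p > 0.05)

df = len(primary) + len(secondary) - 2  # 155, matching the t(155) values reported
print(f"Levene p = {levene_p:.3f}; t({df}) = {t_stat:.3f}, p = {p_value:.3f}")
print("Significant at 0.05" if p_value < 0.05 else "Not significant at 0.05")
```

With the study’s group sizes (64 and 93), the degrees of freedom work out to 155, which matches the t(155) statistics reported throughout the Results section.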

4. Results

4.1. Types of Classroom Assessment Methods and Test Items Used by Teachers

Research Question One: What types of classroom assessment methods and test items were used by the teachers?

Two items in the questionnaire were used to answer the above research question. First, the teachers were asked to indicate the type of classroom assessment methods they used, and second, they were asked to indicate the types of test items that they used in their assessments. The analysis was done by using cross-tabulation (Table 3).

As shown in Table 3, the closed-book test and the portfolio were the most popular assessment methods among the primary and secondary school teachers. The least used methods were self-assessment (3.2%) and collaborative or negotiated test (1.9%). Presented in Table 4 are the types of items used by the teachers.

As shown in Table 4, the most popular items used by the teachers were multiple-choice items (87.9%), short-answer items (77.7%), restricted essays (70.7%), and fill-in-the-blanks (70.1%). The secondary school teachers mostly used extended essay questions. Presented below are the findings for research question two.

4.2. Teachers’ Perceptions of Preparation for Assessment

Research Question Two: To what extent do teachers’ perceptions of preparation for assessment differ?

Sixteen Likert-type items in the questionnaire were used to collect data from

Table 3. Types of classroom assessment methods used by participants.

Table 4. Types of test items used by participants.

the teachers. The descriptive statistics for the items that measured assessment preparation are presented in Table 5, followed by the t-test results.

As shown in Table 5, the mean values for the teachers’ responses indicated that their assessment practices, as they pertain to test preparation, were between “Often” and “Almost Always,” except for two items (#3 - 4). For the use of a table of specifications for preparing assessments, the primary and secondary teachers’ responses (M = 1.55, SD = 1.126 and M = 1.53, SD = 1.049, respectively) were between “Almost Never” and “Sometimes,” as were their responses for reviewing the guidelines for writing test items when preparing assessments (M = 2.00, SD = 0.926 and M = 2.20, SD = 0.854, respectively).

Further analysis was done using the independent samples t-test to determine if there were differences between the primary and secondary school teachers’ perceptions of assessment preparation. The results showed that there were significant differences between the teachers on three items. These were “I arrange item types under different sections of my test/exam paper,” t(155) = 2.123, p = 0.035; “I write specific instructions for the different sections of the test/exam

Table 5. Descriptive statistics for assessment preparation.

paper,” t(155) = 2.422, p = 0.017; and “I inform my students about the areas that will be assessed,” t(155) = −3.190, p = 0.022. The findings showed that arranging item types under different sections, writing specific instructions for the different sections, and informing students about the areas that will be assessed were “often” and “almost always” practised by the secondary school teachers.

4.3. Teachers’ Perceptions of Administering Assessment

Research Question Three: To what extent do teachers’ perceptions of administering assessment differ?

Three Likert-type items in the questionnaire were used to collect data from the teachers. The descriptive statistics for the items that measured the administration of assessments are presented in Table 6, followed by the t-test results.

As shown in Table 6, the mean values for the teachers’ responses indicated that their assessment practices, as they pertain to the administration of assessments, were between “Often” and “Almost Always” for all three items.

Additional analysis was done using the independent samples t-test to determine if there were differences between the primary and secondary school teachers’ perceptions of the administration of assessments. The result showed that there was a significant difference between the teachers on one item, “I monitor students for cheating during the assessment,” t(155) = −2.249, p = 0.026. The findings showed that more secondary school teachers indicated that they “often” and “almost always” monitored students for cheating during the assessment when compared to the primary school teachers.

4.4. Teachers’ Perceptions of Grading Assessment

Research Question Four: To what extent do teachers’ perceptions of grading assessment differ?

A total of five Likert-type items in the questionnaire were used to collect data from the teachers. The descriptive statistics for the items that measured grading assessment are presented in Table 7, followed by the t-test results.

As shown in Table 7, the mean values for the teachers’ responses indicated that their assessment practices, as they pertain to grading assessment, were between “Often” and “Almost Always” for the first two items (#20 - 21). For instance, for the use of a scoring key/guide when grading, the mean values were M = 3.29 (SD = 0.937) and M = 3.14 (SD = 0.851), respectively. For grading assessment with a scoring guide or rubric, the mean value for the primary school teachers (M = 3.04, SD = 0.933) was slightly higher than that for the secondary school teachers (M = 2.58, SD = 0.756). For items 22 to 24, the mean values were between “Almost Never” and “Sometimes,” showing that a majority did not practise these often.

Additional analysis was done using the independent samples t-test to determine if there were differences between the primary and secondary school teachers’ perceptions of grading assessment. The result showed that there was a significant difference between the teachers on one item (#21), that is, “I grade assessment with a scoring guide/rubric,” t(155) = 3.447, p = 0.001.

Table 6. Descriptive statistics for administration of assessment.

Table 7. Descriptive statistics for grading assessment.

4.5. Teachers’ Perceptions of Providing Assessment Feedback

Research Question Five: To what extent do teachers’ perceptions of providing assessment feedback differ?

Four Likert-type items in the questionnaire were used to collect data from the teachers. The descriptive statistics for the items that measured assessment feedback are presented in Table 8, followed by the t-test results.

As shown in Table 8, the mean values for the teachers’ responses indicated that their assessment practices, as they pertain to assessment feedback, were between “Often” and “Almost Always” for items 25 and 28, and between “Sometimes” and “Often” for items 26 and 27.

Further analysis using the independent samples t-test showed that there was a significant difference between the primary and secondary teachers on one item (#27), “I explain to the students how their scores were derived,” t(155) = 2.288, p = 0.023.

4.6. Teachers’ Perceptions of Fairness in Assessment

Research Question Six: To what extent do teachers’ perceptions of fairness in assessment differ?

Three Likert-type items in the questionnaire were used to collect data from the teachers. The descriptive statistics for the items that measured fairness are presented in Table 9, followed by the t-test results.

As shown in Table 9, the mean values for the teachers’ responses indicated that their assessment practices, as they pertain to fairness, were between “Often” and “Almost Always” for the first item (#29), with mean values of M = 3.34 (SD = 0.717) and M = 3.13 (SD = 0.741), respectively. For items 30 to 31, the mean values were between “Almost Never” and “Sometimes,” showing that a majority did not practise making changes to scoring applicable to all students or giving students the opportunity to appeal their grades; the latter item had a slightly higher mean value for the primary school teachers when compared to the secondary school teachers.

Table 8. Descriptive statistics for assessment feedback.

Table 9. Descriptive statistics for fairness in assessment.

The independent samples t-test used to determine if there was a difference between the teachers’ perceptions of fairness showed that there was a significant difference between the teachers on one item (#31), “I give students the opportunity to appeal their grades,” t(155) = −1.998, p = 0.047.

4.7. Teachers’ Perceptions of Analyzing Assessment Data

Research Question Seven: To what extent do teachers’ perceptions of analyzing assessment data differ?

Three Likert-type items in the questionnaire were used to collect data from the teachers. The descriptive statistics for the items that measured the analysis of assessment data are presented in Table 10, followed by the t-test results.

As shown in Table 10, the mean values for the teachers’ responses indicated that their assessment practices, as they pertain to the analysis of assessment data, were between “Almost Never” and “Sometimes” for all three items. This showed that a majority of the teachers did not practise these in their classrooms.

An independent samples t-test was used to determine if there were differences between the primary and secondary school teachers’ perceptions of the analysis of assessment data. The findings showed that there were significant differences between the teachers on two items (#33 & #34), “I analyze students’ results using descriptive statistics,” t(155) = −2.115, p = 0.036; and “I use the results of the

Table 10. Descriptive statistics for analysis of assessment data.

statistical analysis to improve my teaching and assessment practices.” The findings showed that fewer primary school teachers analysed assessment data when compared to the secondary school teachers.

5. Discussions

Regarding research question one, the closed-book test was the most popular assessment method among the primary and secondary school teachers. This is in line with Matthew’s [38] study, which stated that closed-book was the “most common type of exam in psychology courses” (p. 1240). On the contrary, in a study by Bulawa, Seeco, Kgosidialwa, and Losike-Sedimo [39], the assessment techniques mainly used by lecturers were written assignments, exams, group discussions, and presentations.

The second most popular method was portfolio assessment. The reason provided by some of the teachers was that the students take national exams at the primary and secondary school levels and should be tested using the same types of methods used in the national exams. The results of the current study also showed that even though the teachers mostly used closed-book tests, they also used other types of assessments. This is consistent with the findings of the study by McMillan and Workman [40], which showed that teachers used several methods to assess and grade students. According to Wallace and White [41], teachers should use different assessment methods in order to provide evidence of student learning.

The results also showed that the primary and secondary school teachers mostly used multiple-choice items, short-answer items, restricted essays, and fill-in-the-blanks, while the secondary school teachers also used extended essay items. The findings were consistent with the results of the study done by Frey and Schmitt [4], who found that essays, short answer/fill-in-the-blank, and multiple-choice items were the most frequently used by the teachers. The literature also indicates that “multiple-choice items are more popular among many teachers” [42] (p. 89).

Regarding research question two, the findings showed that a majority of the teachers indicated that they developed their assessments according to some of the guidelines prescribed by the experts. These include:

1) considering the purpose of the assessment;

2) ensuring that the instructional objectives are clearly stated;

3) including a variety of items/tasks in the assessments;

4) ensuring that each item matches the stated instructional objectives;

5) arranging item types under different sections of the test/exam paper;

6) having a balance of easy, moderate, and difficult items in the test/exam papers;

7) ensuring the security of the assessments;

8) informing the students of the purpose of the assessment and what the results will be used for;

9) informing the students about the areas that will be assessed;

10) informing the students about when the assessment will be administered;

11) using a venue that will not impact students’ performance;

12) motivating the students to do their best before the assessment; and

13) giving the students tips on assessment-taking skills.

This is in keeping with the recommendations made by Kubiszyn and Borich [28], who stated that persons who are to conduct assessments should decide on the purpose of that assessment. On the issue of instructional objectives, Reynolds, Livingston, and Willson [12] and Waugh and Gronlund [14] stated that clearly stated instructional objectives would enhance the instructional and assessment process. Reynolds, Livingston, and Willson [12] add that instructional objectives play an important role in the development of classroom assessments.

However, it is worth noting that the mean values for the item on using a table of specifications were low, showing that several of the teachers did not practise this skill. Reynolds, Livingston, and Willson [12] gave the following reasons why a test blueprint should be used: it minimises “the chance of overlooking important concepts or including irrelevant concepts;” encourages “teachers to use items of varying complexity;” and provides students with a “basis for study and reviews” [12] (p. 175).

On the item that measured the teachers’ perceptions of reviewing the guidelines for writing test items when preparing their assessments, different authors have suggested guidelines for writing different types of items. These include, but are not limited to, Kubiszyn and Borich [28], Miller, Linn, and Gronlund [8], Nitko and Brookhart [9], Reynolds, Livingston, and Willson [12], and Waugh and Gronlund [14]. These guidelines, when used, will reduce the errors associated with item writing, which may impact students’ performance.

On the issue of including a variety of items/tasks in the assessments, the findings showed that a majority of the teachers indicated that they often practised that. This is in keeping with the guidelines provided by Reynolds, Livingston, and Willson [12], and Waugh and Gronlund [14], which recommended that test developers should use multiple sources and types of assessments, especially if the assessment results are used to make educational decisions with high consequences.

With regard to matching each item with the instructional objectives, the findings showed that many of the teachers indicated practising this. Kubiszyn and Borich [28], Reynolds, Livingston, and Willson [12], and Waugh and Gronlund [14] are just a few of the authors who recommended that this should be done.

On the issue of arranging item types under different sections of the test/exam paper, which several of the teachers indicated doing, Kubiszyn and Borich [28] recommended grouping “together all items of similar format” (p. 130). The findings also showed that many of the teachers indicated having a balance of easy, moderate and difficult items in the test/exam papers. This was recommended by Kubiszyn and Borich [28], who stated that items should be arranged “from easy to hard” (p. 130). These views were consistent with the general guidelines provided by Waugh and Gronlund [14].

Regarding the issue of providing information to the students on the purpose of the assessment, Miller, Linn, and Gronlund stated that “Before administering a particular test, the teacher should explain to students the purpose of the test and the uses to be made of the results” [8] (p. 449). The teachers’ responses also showed that they often maintained assessment security, and provided the students with information on areas to be covered, the date, and the venue, as well as giving students some tips on assessment-taking skills. The obtained mean values for the responses to these items showed that the teachers often secured their assessments, notified their students before an assessment, motivated the students to do their best, and gave them assessment tips. This is in keeping with the guidelines provided by Reynolds, Livingston, and Willson [12], which state that test security should be maintained and that information about the assessment, as well as tips on assessment-taking skills, should be provided to students. Nitko [29] recommended that teachers teach their students “minimum assessment-taking skills” (p. 306). These include, among other things, paying attention to the test instructions, finding out how the assessment will be graded, writing clear responses, following study tips, using the assessment time wisely, guessing appropriately, organising written responses clearly, checking assigned marks, and reviewing answers [29].

Regarding research question three, the mean values showed that a majority of the teachers indicated that they often or almost always ensured that each student got an assessment paper, that the students understood the instructions for the assessment, and that the students were monitored for cheating. These findings are consistent with the recommendations by Kubiszyn and Borich [28], who stated that test papers should be distributed to each student and that the students should be monitored “while they are completing their tests” (p. 134). Furthermore, Miller, Linn, and Gronlund [8] recommended steps to be taken to prevent cheating in the classroom. These include 1) securing the test, 2) clearing the tops of the students’ desks, 3) ensuring that students turn in any scratch paper used or not used in the test, 4) monitoring students during the test, 5) using special seating arrangements such as providing space between students’ seats, 6) making use of two forms of the test paper and alternating them when distributing papers before the assessment, 7) ensuring that the test paper has good face validity, and 8) encouraging students to have a positive attitude toward the assessment.

Regarding research question four, the mean values for the first two items were high, indicating that the teachers often or almost always graded assessments with a scoring guide and consistently followed the scoring guide. However, the low mean values recorded on the other three items revealed that the teachers almost never or only sometimes asked colleagues to check the answers before grading to reduce scoring bias, ensured that their scoring was fair, and matched students’ performance on each item against each instructional objective/standard. According to Waugh and Gronlund [14], these practices should be carried out by persons involved with classroom assessment in order to reduce assessment errors. Furthermore, Kubiszyn and Borich [28] recommended five different ways the teacher could compare students’ performance. These are “1) with other students, 2) established standards, 3) aptitude, 4) actual versus potential effort, and 5) actual versus potential improvement” (p. 204). Kubiszyn and Borich [28] noted that each has its advantages and disadvantages.

For research question five, the mean values were high for the teachers’ responses on the items that measured communicating the assessment results to students in a timely manner, reporting students’ grades in ways that they can easily understand, explaining to the students how their scores were derived, and going through the graded assessments with the students. This is in keeping with the guidelines provided by Reynolds, Livingston, and Willson [12]. Furthermore, Kubiszyn and Borich [28] recommended “going over the test with your students” (p. 149).

Regarding research question six, there was a fairly high mean value for the teachers’ responses to allowing students to ask questions about their graded papers. This practice is commendable. As clearly stated by Reynolds, Livingston, and Willson [12], “[s]tudents clearly have a right to know the procedures that will be used to determine their grades” (p. 288).

However, on the issue of making changes to scoring applicable to all students who were assessed, and giving students the opportunity to appeal their grades, the mean values were low, indicating that a majority of the teachers were not always practising fairness as outlined under the standard (P5) developed by the Joint Committee on Standards for Educational Evaluation, chaired by Gullickson [27]. According to this Committee, the “[e]valuations of students should be consistent with applicable laws and basic principles of fairness and human rights, so that the students’ rights and welfare are protected” [27] (p. 51).

Regarding research question seven, on the issue of analysing assessment data, the mean values for the items were low, indicating that the teachers’ frequency of practice was between almost never and sometimes. This has been a long-standing problem: as far back as the 1980s, Gullickson [43] conducted a study and found that teachers rarely used descriptive statistical analysis to describe assessment results. Similarly, Mertler [44] discovered that teachers in the State of Ohio did not spend much time conducting statistical analyses of their assessment data. Reynolds, Livingston, and Willson [12] recommended “the use of both quantitative and qualitative item analyses to improve the quality of test items” (p. 159). Furthermore, Kubiszyn and Borich [28] suggested the use of descriptive statistics to summarise and report assessment data.
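To illustrate the kind of descriptive summary Kubiszyn and Borich suggest, the sketch below computes basic statistics for a class test; the scores and variable names are invented for illustration only:

```python
# Sketch: descriptive statistics a teacher might report for a class test.
# The scores are invented for illustration only.
import statistics

scores = [45, 52, 58, 60, 63, 67, 70, 72, 75, 81, 84, 90]

summary = {
    "n": len(scores),
    "mean": round(statistics.mean(scores), 2),
    "median": statistics.median(scores),
    "stdev": round(statistics.stdev(scores), 2),  # sample standard deviation
    "range": max(scores) - min(scores),
}

print(summary)  # mean 68.08, median 68.5, range 45
```

Even this minimal summary tells a teacher where the class centre lies and how spread out the performance is, which is the starting point for the fuller analyses the authors recommend.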

6. Conclusion

The study was done to find out if there were differences in the perceptions of primary and secondary school teachers’ assessment practices in region one in Jamaica. The findings showed that both primary and secondary school teachers used a variety of assessment methods and item types. However, the closed-book test, portfolio, multiple-choice, short answer, restricted essay, and fill-in-the-blank were most often used. The findings also showed that there were significant differences among the teachers’ perceptions on preparing an assessment, administration of assessments, grading assessments, providing feedback, fairness, providing students with the opportunity to appeal their grades, and using the results of the statistical analyses to improve assessment practices. The findings showed that there were areas in the assessment practices of the primary and secondary school teachers that need improvement. The implication of these findings is the need for improvement through more professional development opportunities on classroom assessment, as well as committed monitoring of quality by their supervisors.

Limitation

The limitation of the current study is worth noting. Only teachers in public primary and secondary schools from two parishes in Jamaica were studied. The number did not represent all teachers in the two parishes. Therefore, one should be cautious when making generalisations. A much bigger sample size is recommended for future studies. Regardless of the limitation mentioned above, the study has contributed to the existing literature, which is lacking in the Caribbean.

Recommendations

Based on the findings and the implication, the following recommendations are made:

1) The classroom teachers, especially those in secondary schools, should be encouraged to use items that promote critical thinking, such as the interpretive exercise. This could be achieved by ensuring that the instructional objectives to be tested are written at the higher levels of Bloom’s taxonomy before the teacher-made tests are constructed.

2) The senior teachers should ensure that more classroom teachers use a table of specifications when preparing their assessments. This could be done by asking the classroom teachers to submit a table of specifications before the assessment for verification by the senior teachers.

3) The school administrators should encourage the classroom teachers to follow the guidelines for writing test items when preparing their tests. These guidelines can be found in many classroom assessment textbooks, which are available online and offline.

4) To ensure that scoring is fair, the school administrators should encourage the use of second marking if possible. This process could be used to reduce scoring bias.

5) The school administrators should encourage classroom teachers to provide information to students on how their scores were derived. This could be achieved by meeting the students as a group or individually immediately after the assessment.

6) The classroom teachers should give their students the opportunity to appeal their grades. This could be done by informing the students about the protocol to be followed within a specified period of time.

7) The school administrators should encourage the classroom teachers to conduct quantitative and qualitative analyses after an assessment. This would improve the reliability of the classroom tests/assessments as well as the item quality. One easy way to achieve this is to conduct item analysis by determining the difficulty level and the discrimination index.
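Recommendation 7 can be illustrated with a short sketch. Item difficulty is the proportion of students who answered an item correctly, and one common discrimination index contrasts the top- and bottom-scoring groups; conventions vary by textbook, and the response data below are invented:

```python
# Sketch: item difficulty and a simple discrimination index.
# Rows are students, columns are items; 1 = correct, 0 = incorrect.
# The response data are invented for illustration.

responses = [
    [1, 1, 0, 1],  # each row: one student's scores on 4 items
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
]

def difficulty(item):
    """Proportion of students answering the item correctly (p-value)."""
    col = [row[item] for row in responses]
    return sum(col) / len(col)

def discrimination(item, fraction=1 / 3):
    """Upper-group minus lower-group difficulty (D index)."""
    ranked = sorted(range(len(responses)),
                    key=lambda s: sum(responses[s]), reverse=True)
    k = max(1, round(len(responses) * fraction))
    upper = [responses[s][item] for s in ranked[:k]]
    lower = [responses[s][item] for s in ranked[-k:]]
    return sum(upper) / k - sum(lower) / k

for i in range(4):
    print(f"item {i}: p = {difficulty(i):.2f}, D = {discrimination(i):.2f}")
```

An item that everyone answers correctly (p near 1) or that high and low scorers answer equally well (D near 0) tells the teacher little, and flagging such items is exactly the quality check this recommendation calls for.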

Appendix

Assessment Practices Questionnaire

This survey is designed to collect information on your assessment practices. The information collected will be used in writing an article. The questionnaire will take less than 10 minutes to complete. If you have agreed to participate in this study, kindly read each item carefully then answer as accurately as possible. Responses to this questionnaire are confidential.

Section A: Background Data

Instructions: The items listed below are designed to obtain information on your profile. Please read the items carefully then tick (√) and/or write the appropriate response for items 1 to 7.

1) What is your gender? Male Female

2) What is your age range?

30 and under

31 - 35

36 - 40

41 - 45

46 and over

3) What educational qualifications do you have?

Certificate

Diploma

Bachelor’s Degree

Master’s Degree

Doctoral Degree

Other (Please state): ______________________________

4) How many years of teaching experience do you have?

5 years & under

6 - 10 years

11 - 15 years

16 years & Over

5) In what type of institution do you teach?

Public primary school Independent primary school

Public secondary school Independent secondary school

Public tertiary institution Private tertiary institution

Other (Please state): ______________________________

6) What subject do you teach? (Please state):

____________________________________________________________

7) What grade/level of students do you teach? (Please state):

_____________________________________________________________

Section B: Assessment Methods

Instructions: The items listed below are designed to obtain information on types of assessment methods, as well as types of test items you use. Please read the items carefully then tick (√) the appropriate response.

Section C: Assessment Practices

Instructions: The items listed below are designed to obtain information on your assessment practices. Please read the items carefully then rate by circling or ticking the appropriate response beside each item.

Please write comments about your assessment practices that were not covered in sections B and C above.

Thank You for Completing this Questionnaire!

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Onyefulu, C. (2018) Assessment Practices of Teachers in Selected Primary and Secondary Schools in Jamaica. Open Access Library Journal, 5, 1-25. doi: 10.4236/oalib.1105038.

References

[1] Chew, A. and Lee, I. (2013) Teachers’ Beliefs and Practices of Classroom Assessment in Republic Polytechnic, Singapore. The 39th Annual Conference of the International Association of Educational Assessment (IAEA) on Educational Assessment 2.0: Technology in Educational Assessment, Tel Aviv, 20-25 October 2013.
[2] Flemming, B. (2013) Classroom Assessment Takes Spotlight at Teachers’ Conference.
http://jamaica-gleaner.com/gleaner/20130403/lead/lead91.html
[3] McMillan, J.H. (1997) Classroom Assessment: Principles and Practice for Effective Instruction. Allyn & Bacon, Boston.
[4] Frey, B.B. and Schmitt, V.L. (2010) Teachers’ Classroom Assessment Practices. Middle Grades Research Journal, 5, 107-117.
[5] Young, V.M. and Kim, D.H. (2010) Using Assessments for Instruction Improvement: A Literature Review. Education Policy and Analysis Archives, 18, 1-36.
https://doi.org/10.14507/epaa.v18n19.2010
[6] RestiMartanti, I.F. (2015) Holistic English Mid-Term Assessment for Junior Schools. Indonesian Journal of English Language Studies, 1, 57-69.
[7] Chappuis, J. and Stiggins, R. (2017) Introduction to Students-Involved Assessment for Learning. 7th Edition, Pearson Education, Inc., Boston.
[8] Miller, M.D., Linn, R.L. and Gronlund, N.E. (2000) Measurement and Assessment in Teaching. 10th Edition, Pearson Education Ltd., Upper Saddle River.
[9] Nitko, A.J. and Brookhart, S.M. (2015) Educational Assessment of Students. 7th Edition, Pearson Education, Inc., Upper Saddle River.
[10] Popham, J.W. (2014) Classroom Assessment: What Teachers Need to Know. 7th Edition, Pearson Education Ltd., Boston.
[11] Peacock, A. (2016) Assessment for Learning without Limits. Open University Press, London.
[12] Reynolds, C.R., Livingston, R.B. and Willson, V. (2006) Measurement and Assessment in Education. Pearson Education, Inc., Boston.
[13] Russell, M. and Airasian, P. (2012) Classroom Assessment: Concepts and Applications. 7th Edition, McGraw-Hill, Columbus, OH.
[14] Waugh, C.K. and Gronlund, N.E. (2013) Assessment of Student Achievement. 10th Edition, Pearson Education, Inc., Boston.
[15] Benzehaf, B. (2017) Exploring Teachers’ Assessment Practices and Skills. International Journal of Assessment tools in Education, 4, 1-18.
[16] Calveric, S.B. (2010) Elementary Teachers’ Assessment Beliefs and Practices. Doctoral Thesis, Virginia Commonwealth University, Richmond, VA.
[17] Han, T. and Kaya, H.I. (2013) Turkish EFL Teachers’ Assessment Preferences and Practices in the Context of Constructivist Instruction. Journal of Studies in Education, 4, 77-93.
https://doi.org/10.5296/jse.v4i1.4873
[18] Koloi-Keaikitse, S. (2012) Classroom Assessment Practices: A Survey of Botswana Primary and Secondary School Teachers. Doctoral Degree, Ball State University, Muncie, IN.
[19] Zhang, Z. and Burry-Stock, J. (2003) Classroom Assessment Practices and Teachers’ Self perceived Assessment Skills. Applied Measurement in Education, 16, 323-342.
https://doi.org/10.1207/S15324818AME1604_4
[20] Susuwele-Banda, W.J. (2005) Classroom Assessment in Malawi: Teachers’ Perceptions and Practices in Mathematics. Doctoral Thesis, Virginia Polytechnic Institute & State University, Blacksburg, VA.
[21] Wicking, P. (2017) The Assessment Beliefs and Practices of English Teachers in Japanese Universities. JLTA Journal, 20, 76-89.
[22] Yang, T.-L. (2008) Factors Affecting EFL Teachers’ Use of Multiple Classroom Assessment Practices of Young Language Learners. English Teaching & Learning, 32, 85-123.
[23] The Planning Institute of Jamaica (n.d.) Spatial Boundaries of Jamaica (Draft).
https://www.pioj.gov.jm/Portals/0/Sustainable_Development/SPATIAL%20BOUNDARIES%20OF%20JA
[24] Jamaica School Profiles: 2015-2016. Ministry of Education, Youth, & Information.
https://www.moey.gov.jm/sites/default/files/School%20Profiles%202015-2016.pdf
[25] Wiersma, W. (2000) Research Methods in Education: An Introduction. 7th Edition, Ally & Bacon, Boston, MA.
[26] Spector, P.E. (1992) Summated Rating Scale Construction: An Introduction. Sage Publications, Newbury Park.
https://doi.org/10.4135/9781412986038
[27] The Joint Committee on Standards for Educational Evaluation (2003) The Student Evaluation Standards: How to Improve Evaluations of Students. Corwin Press, Inc., Thousand Oaks, CA.
[28] Kubiszyn, T. and Borich, G. (2000) Educational Testing and Measurement: Classroom Application and Practice. 6th Edition, John Wiley & Sons, Inc., New York.
[29] Nitko, A.J. (2004) Educational Assessment of Students. 4th Edition, Pearson Education, Inc., Upper Saddle River, NJ.
[30] Bland, J.M. and Altman, D.G. (1997) Statistics Notes: Cronbach’s Alpha. British Medical Journal, 314, 572.
https://doi.org/10.1136/bmj.314.7080.572
[31] DeVellis, R. (2003) Scale Development: Theory and Applications. Sage Publication, Thousand Oaks, CA.
[32] Bastick, T. and Matalon, B.A. (2004) Research: New and Practical Approaches. Chalkboard Press, Kingston, Jamaica.
[33] Nunnally, J.C. and Bernstein, I.H. (1994) Psychometric Theory. 3rd Edition, McGraw-Hill, New York.
[34] Streiner, D. (2003) Starting at the Beginning: An Introduction to Coefficient Alpha and Internal Consistency. Journal of Personality Assessment, 80, 99-103.
https://doi.org/10.1207/S15327752JPA8001_18
[35] Thorn, D.W. and Deitz, J.C. (1989) Examining Content Validity through the Use of Content Experts. Occupation, Participation and Health, 9, 334-346.
https://doi.org/10.1177/153944928900900602
[36] Amdur, R. and Bankert, E.A. (2011) Institutional Review Board: Member Handbook. 3rd Edition, Jones & Bartlett Publishers, Boston, MA.
[37] The World Medical Association (2004) Declaration of Helsinki—Ethical Principles for Medical Research Involving Human Subjects. 1-8.
http://www.uma.net/e/policy/b3.htm
[38] Matthew, N. (2012) Student Preferences and Performance: A Comparison of Open-Book, Closed-Book, and Cheat Sheet Exam Types. Proceedings of the National Conference on Undergraduate Research, Weber State University, Ogden, UT, 29-31 March 2012.
[39] Bulawa, P., Seeco, E.G., Kgosidialwa, K.T. and Losike-Sedimo, N.C. (2017) Teaching and Assessment Techniques used at the University of Botswana: Students’ Voices in the Faculty of Education. Journal of Education & Human Development, 6, 138-145.
https://doi.org/10.15640/jehd.v6n1a14
[40] McMillan, J.H. and Workman, D. (1998) Teachers’ Classroom Assessment and Grading Practices: Phase 1. Metropolitan Educational Research Consortium, Virginia Commonwealth University, Richmond, VA.
[41] Wallace, M. and White, T. (2014) Secondary Mathematics Pre-Service Teachers’ Assessment Perspectives and Practices: An Evolutionary Portrait. Mathematics Education Research Group of Australasia, Inc.
https://files.eric.ed.gov/fulltext/EJ1052604.pdf
[42] Blerkom, M.L.V. (2009) Measurement and Statistics for Teachers. Routledge, New York.
[43] Gullickson, A.R. (1982) The Practice of Testing in Elementary and Secondary Schools. ERIC Reproduction Service No. ED 229 391.
[44] Mertler, C.A. (1998) Classroom Assessment Practices of Ohio Teachers. Paper Presented at the Annual Meeting of the Mid-Western Educational Research Association, 14-17 October 1998.

  

Copyright © 2019 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.