Bridge the Gap—Incorporating Classroom Response Systems for Classroom-Embedded Formative Assessment

Abstract

Research on summative assessment has attracted a large audience. The recent trend towards a “process-oriented view of learning” has given rise to a heated discussion about assessment as a facilitative instrument that scaffolds learning, rather than solely a “high-stakes” measurement tool. Black and Wiliam contributed heavily to theorizing the pedagogical use of assessment in the classroom with their widely read papers “Assessment and Classroom Learning” (1998) and “Classroom Assessment and Pedagogy” (2018), and other scholars have contributed substantial expertise and experience to further exploration of the field. Yet a gap remains between the ideal theoretical model of assessment for learning (AfL) and actual practice in real classroom learning environments. This paper discusses how that gap can be bridged by incorporating a web-based mobile technology, Classroom Response Systems (CRSs), to implement “web-based formative assessment” in secondary schools. A literature review on formative assessment and technology-enhanced pedagogy is synthesized. A research project on using CRSs to facilitate classroom learning is presented in terms of 1) the effectiveness of CRSs as instructional instruments for adolescent learners in secondary schools; 2) secondary school students’ preferences for CRS question types and response formats; and 3) secondary school students’ reflection on and attitudes towards CRSs as a means of formative assessment and instruction. The research provides evidence that web-based interactive tasks involving CRSs enable learners to develop learner agency and effective learning strategies.


1. Introduction

Tests and summative assessments have for centuries been regarded as powerful tools for evaluating students’ performance in the target disciplines. They have also attracted a vast amount of public attention, owing to their potential to affect one’s social, economic, and political standing. This is true not only in East Asia, where education and test results carry much weight in societal life (Marzano & Costa, 1988) [1] and are considered “crucial milestones in the journey to success” (Brown & Lee, 2015) [2]; it was also the case in medieval Europe (Earl, 2010) [3]. As such, students, parents, teachers, and educational institutions have devoted substantial energy to handling tests, resulting in the phenomenon known as “teaching to the test” (Black & Wiliam, 2018) [4].

This is not without an indirect cost. While students, parents and teachers celebrate their “successes” in various tests, they are also subject to constant anxiety and sometimes deep stress because they are fixated on test results (Crooks, 1988 [5]; Crooks et al., 1996 [6]) rather than engaging in and enjoying an autotelic process of learning and harvesting the fruits of intellectual development in “general cognitive abilities” (Marzano & Costa, 1988) [1]. Moreover, the diversity of learners’ characteristics is largely neglected, leading to a “one-size-fits-all,” single model of assessment (Bachman & Palmer, 1996 [7]; Butler, 2017 [8], 2019 [9]).

But nowadays, winds are blowing in a different direction. Owing to the recent rise of a “process-oriented view of learning” (Bennett, 2009 [10]; Butler, 2017 [8]; Earl, 2010 [3]; Wiliam, 2011 [11]), “assessment” has again caught educators’ attention (Brown & Lee, 2015 [2]; Wiliam, 2011 [11]). There has been a shift from the traditional measurement-based “assessment of learning” (AoL) towards a more learner-centered, facilitative “assessment for learning” (AfL) (Black & Wiliam, 1998 [12]; Earl, 2010 [3]). A number of researchers and educators have come to realize that, in addition to being a measurement tool that serves to “place test-takers along a mathematical continuum in rank order” (Brown & Lee, 2015) [2], assessments also function as formative tools that can scaffold students’ daily classroom learning and substantially improve learning outcomes (Black & Wiliam, 1998 [12]; Butler, 2017 [8]; Fuchs & Fuchs, 1986 [13]; Lee, Harrison, & Black, 2004 [14]; Wiliam, 2011 [11]; Fies & Marshall, 2006 [15]). Mei and Wang (2022) [16] argue in their interpretation that assessment should be considered an integral part of the teaching-learning dyad, providing not only evidence of students’ learning but also affordances that activate students’ agency, enhance their reflection and self-regulation, and at the same time help teachers refine their instructional design.

However, a significant gap remains between ideal and reality. Conventional classroom settings and teaching methods are less than adequate for enacting “ideal” formative assessment. For example, it is difficult to aggregate evidence of learning in a large class, making it impossible for instructors to provide what van Lier (2004) [17] calls “just-in-time” and “just-right” evidence-based feedback and interventions. Furthermore, the extent to which peers interact among themselves and with their instructors in class is limited, as the time constraint of a formal 40-minute session is nowhere near enough for each individual student to share their work and receive timely feedback. Worse still, adolescent learners under the influence of traditional East Asian codes of conduct tend to display higher affective filters when invited to voice personal views in class, undermining the reliability of the assessments. As for peer assessment, it can either embarrass students because of their stereotypical conceptualization of assessment, or put validity at risk when “group experts” or “teachers’ favorites” unknowingly dominate the process (Fies & Marshall, 2006) [15].

The need therefore arises for a practical solution to the aforementioned problems, which directs our attention to classroom response systems (CRSs), also referred to as clickers, student response systems (SRSs), audience response systems (ARSs), wireless keypad response systems, classroom communication systems, or electronic response systems. CRSs are web-based applications that support instantaneous interaction in class (Fies & Marshall, 2006 [15]; Iwamoto & Hargis, 2018 [18]; Li & Wong, 2020 [19]), bridging the gap between the traditional “one-way” delivery of subject content (Black & Wiliam, 2018) [4] and the new paradigm of interactive, facilitative formative assessment for instructional purposes.

Nevertheless, although CRSs have a long history dating back to the 1960s, first as infra-red clickers and now as web-based, classroom-embedded applications on digital devices (Deal, 2007 [20]; Li & Wong, 2020 [19]), technology-enhanced formative assessment is still brand-new among Chinese adolescent language learners. Research into the pedagogical principles underlying formative assessment remains inadequate and incomplete (Black & Wiliam, 2018 [4]; Black & Atkin, 2014 [21]; Butler, 2017 [8], 2019 [9]; Fies & Marshall, 2006 [15]; Mills, 2014 [22]). Poehner (2008) [23] suggests that instructors’ limited knowledge of pedagogical theories and principles may also have produced the antagonistic dichotomy between assessment and instruction in practice. According to Li and Wong’s (2020) [19] review, secondary schools account for only 8% of reported CRS use, far below the 84% in tertiary institutions, which implies potential for further study and discovery. As Black and Wiliam commented on the pedagogical use of technology in formative assessment, “building coherent theories, adequate descriptions, and firmly grounded guides to practice, for formative assessment is a formidable undertaking” (1998) [12], and “there is ample room for such considerations” (2009) [24].

This paper thus introduces a qualitative study probing how the integration of technology and pedagogy provides a new paradigm of formative assessment that enhances learning in secondary school. The research is based on technology-enhanced teaching practices in two classes from two middle schools, focusing on how the use of CRSs has enriched the learning environment and increased interactivity among students and between students and the instructor.

2. Theoretical Framework

2.1. Formative Assessment: Definition, Rationale, and Functions

2.1.1. What Is Formative Assessment?

Traditionally, assessments were “high-stakes” and served as “gates” through which students had to pass before moving on to the next level of learning. During the 1950s, it was widely accepted that tests functioned to “provide an index of learning” and “to ensure fair, accurate, and consistent opportunities for students” (Earl, 2010) [3].

It was not until the 1970s that “formative assessment” became popularized in contrast with “summative assessment” (Wiliam, 2014) [25]. The term refers to frequent, interactive classroom activities that take place during teaching, allowing instructors to monitor students’ learning progress, identify their needs (both individual and group), and make appropriate choices about, or adjustments to, the next phase of teaching (Black & Wiliam, 2018 [4]; Black & Wiliam, 1998 [12]; Broadfoot et al., 1999 [26]; Cowie & Bell, 1999 [27]; Earl, 2010 [3]; Looney, 2005 [28]; Kahl, 2005 [29]; Shepard et al., 2005 [30]; Wiliam, 2014 [25]). It also provides learners with feedback on specific course or lesson objectives or criteria (Brown & Lee, 2015) [2].

2.1.2. How Does Formative Assessment Work?

Lantolf et al. (2015) [31] maintain that “the most important forms of human cognitive activity develop through interaction within social and material environments.” Thanks to developments in psychology, pedagogy and educational philosophy, we now understand that human minds are inclined to make meaning out of their immediate surroundings and personal experiences, and to respond to context on the basis of prior knowledge and skills (Earl, 2010 [3]; Lantolf et al., 2015 [31]; Lantolf et al., 2020 [32]; Thorne, 2008 [33]; WIDA, 2020 [34]). Young learners constantly construct new knowledge on top of prior knowledge through interaction with the environment and with peers, and in the process their beliefs are adjusted and their actions modified to meet real-world challenges (Earl, 2010 [3]; Strange & Banning, 2015 [35]; Thorne, 2008 [33]).

In this light, learning is conceptualized as an ongoing process in which each student builds knowledge through personal experiences embedded in interaction with the environment and with peers. Learners’ socio-cognitive interactive abilities, such as attitudes to peer collaboration, communication skills, and autonomy or self-regulation, all play a role in the process (Butler, 2019) [9]. These abilities also mature along unique, idiosyncratic patterns rather than in a universal way (Fies & Marshall, 2006) [15]. As Marzano and Costa (1988) [1] pointed out, “learning is not a static trait” but rather “a dynamic process that itself can be learned and developed”; they therefore advocate “alternative tests,” or “dynamic assessment” (Lantolf, 2009 [36]; Poehner, 2008 [23]), which test “the vast array of thinking skills important for the information age.”

Following this vein, we can conclude that “assessment in this conception of learning is much more than a summary or index of learning at the end”; rather, it is an integral part of the learning process (Birenbaum, 1996 [37]; Earl, 2010 [3]; Earl & Katz, 2006 [38]; Mei & Wang, 2022 [16]; Wiliam, 2013 [39]; Wolf et al., 1991 [40]). It is essential for teachers to understand how individual students learn and to cater to their needs by adjusting and modifying the content and methods for the next stage, a matter of pedagogical choice aimed at fostering “disciplinary habits of mind” (Black & Wiliam, 2018 [4]; Fies & Marshall, 2006 [15]; Shulman, 2005 [41]; Wiliam, 2016 [42]).

Therefore, classroom activities should be designed to “engage attention, motivate and present a challenge to which the learners can respond with high probability of success, so securing intrinsic reward,” in order to sustain learners’ commitment to learning (Black & Wiliam, 2018) [4].

2.1.3. What Can Formative Assessment Do

Contrary to the common illusion that a class is “a homogeneous unit” (Earl, 2010) [3], each individual student brings to the class a unique background, skill set and set of needs (Poehner, 2008 [23]; WIDA, 2020 [34]). It is therefore the teacher’s responsibility to take students’ diverse interests, backgrounds, understandings, and intellectual and emotional maturity into consideration, and to employ appropriate assessments to understand these differences and cater to their particular needs and learning patterns (Earl, 2010 [3]; WIDA, 2020 [34]).

By deploying meticulously designed, classroom-embedded formative assessment, teachers can gain insight into students’ existing beliefs and prior knowledge, access the best possible evidence about students’ learning, and then use this information to decide what to do next (Wiliam, 2011) [11], thus avoiding false interpretations and judgements that may distort learning and teaching. Teachers can also use formative assessment in class to establish links between students’ prior knowledge and new learning experiences.

Earl (2010) [3] summarizes three functions that formative assessments serve:

1. To reveal learners’ knowledge, skills, and beliefs that they bring to the class;

2. To serve as a starting point for the next stage of instruction; and

3. To monitor students’ learning progress.

2.2. Web-Based Formative Assessment: When Technology Stages the Show

As explained above, implementing ideal formative assessment in class is not easy because “barriers” hinder its effectiveness. This is where information and communication technology comes in handy (Jiang et al., 2017) [43]. CRSs belong to the category of “Synchronous Computer-Mediated Communication (SCMC)” applications (Thorne, 2008) [33] that address the limitations on the pedagogical use of formative assessment in class (Fies & Marshall, 2006 [15]; Hargis, Cavanaugh, Kamali & Soto, 2014 [44]; Li & Wong, 2020 [19]). As Butler (2019 [9], 2017 [8]) proposes, the trend of integrating digital technology, web-based applications in this case, into daily classroom teaching has provided opportunities to tailor assessments to learners’ needs and experiences, and is reshaping the ways of learning significantly (Jiang & Zhang, 2020) [45].

Thorne (2008) [33] cites Kern’s (1995) [46] and Darhower’s (2002) [47] findings that, through SCMC learning practices, students were able to produce more “meaningful, highly intersubjective discourse” along with more “sophisticated” grammatical and functional output in the target language. Thorne explains that this may be because SCMC can “promote an increase in corrective feedback and negotiation at all levels of discourse, a condition that prompts learners to produce form-focused modifications to their turns” (2008) [33]. This theorization is echoed by other researchers in the field (Chun, 1994 [48]; Salaberry, 2000 [49]; Payne & Ross, 2005 [50]).

CRSs made their debut in the 1960s and quickly evolved from a “one-button infra-red or radio-signal based clicker” into the modern “web-based” responding applications (Deal, 2007 [20]; Li & Wong, 2020 [19]). A variety of question types cater to different instruction and assessment needs in class, ranging from “simple factual recall” to “questions designed specifically to reveal and challenge common misconceptions in a given topic” (Deal, 2007) [20], and these align with the cognitive skills listed in Bloom’s taxonomy (Krathwohl, 2002) [51]. The relationship is displayed in Figure 1.

Figure 1. Alignment of CRS question types with Bloom’s taxonomy.

The question types include simple True/False or multiple-choice questions, item ranking and, in particular, open-ended questions (Li & Wong, 2020) [19]. While True/False and multiple-choice questions are more “akin to the stimulus response pattern of behaviorist theory” (Fies & Marshall, 2006) [15], reinforcing behavior and consolidating students’ factual knowledge, item-ranking and open-ended questions are more likely to “foster the strengths of diversity in thought” (Stroup et al., 2002 [52], 2004 [53], 2006 [54]) and encourage students to follow a collaborative, constructive learning approach. This enables instructors to decide how to collect data from students more effectively (Fies & Marshall, 2006) [15] and how to adapt instructional procedures to meet pedagogical requirements.

One salient feature of CRSs is that instructors can decide whether students engage anonymously or under their real names during assessments (Fies & Marshall, 2006 [15]; Jiang & Zhang, 2020 [45]; Melchor-Couto, 2018 [55]). This can affect students’ performance, as they may become more open and confident, rather than reserved, when invited to tackle questions or put forward opinions through the online application. Jiang and Zhang (2020) [45] find that learners perform better in cognitively demanding tasks when allowed to engage anonymously in a web-based collaborative learning environment.

Another advantage of using CRSs in formative assessment is that they help to elicit opinions from each individual in a way that every voice can be “heard” and the class does not risk being dominated by “group experts” or “teachers’ favorites” (Fies & Marshall, 2006 [15]; Shulman, 2005 [41]; Wiliam, 2016 [42]).

In terms of providing feedback after assessment, the use of CRSs avoids Brown and Lee’s (2015) [2] concern about the traditional practice of providing only numerical grades; they protested that “grades and scores reduce a mountain of linguistic and cognitive performance data to an absurd minimum.”

3. The Research

3.1. Setting and Participants

The researchers implemented the web-based formative assessment in two different classes. A total of 46 students were involved: 11 multilingual (native language, Chinese and English) Grade Eleven students (4 males and 7 females, aged 16 to 17) from an international school, and 35 Chinese bilingual (Chinese and English) Grade Nine students (8 males and 27 females, aged 15 to 16) who had attained a high level of English proficiency but were more accustomed to a traditional lecturing mode of teaching. Detailed information about the participants is given in Table 1.

The two groups of participants are of similar ages but diverse life and educational backgrounds. The researchers expected that the two groups’ respective responses to the web-based formative assessment would reveal whether CRS technology can enhance students’ in-class performance across this spectrum of backgrounds.

Prior to the research, the students in both classes were inclined to remain “silent” even when encouraged to “voice their opinions.” The researchers presumed that this reluctance to speak up was due to a fear of making mistakes in class and to the influence of an East Asian culture in which students are expected to listen quietly and avoid questioning others or commenting on controversial issues.

Before engaging all the participants in the research, the teacher informed them of the purpose and the plan of using the web-based interactive technology in the classroom. A letter of consent was presented to all the participants to make sure that they understood what they were expected to do and that they agreed to be part of the research.

3.2. Research Questions

The purpose of the research is to collect evidence about how CRSs can facilitate classroom formative assessment and promote learning outcomes, and how students respond to the new technology-enhanced learning environment. After reviewing the literature, the researchers decided that observation should focus on the following questions:

1) To what extent do CRSs as instructional instruments enhance the secondary school students’ participation in classroom activities?

2) What CRSs question types and response formats best appeal to the secondary school students?

3) What are the secondary school students’ attitudes towards CRSs as a means of formative assessment and instruction?

Table 1. Demographics of participants.

3.3. Procedures

Over two weeks, the researchers implemented the web-based formative assessments. CRSs were used in both classes as assessment and pedagogical tools, after which a survey was given to collect students’ comments on and attitudes towards the novel instructional practices.

The researchers used the CRSs in class to implement interactive instruction. They first led the whole class to read or listen to the materials and then asked the students to respond using the CRSs. The responses were immediately aggregated and displayed on the screen in predetermined formats. All participants were allowed to opt for the “anonymous” mode.

Altogether, the researchers experimented with four question formats: “word cloud,” “brief answers or comments,” “item ranking,” and “multiple choice.”

With “word cloud,” a kind of open-ended question, participants are asked to submit a single word in answer to the question, and submissions are displayed on the screen instantaneously. The “word cloud” continues to grow, changing its shape and size as more responses pour in from the participants; the most heavily “voted-for” word is rendered in the largest font. This is a highly engaging period for the participants, as they can interact with their peers by observing how others think about and reply to the questions. Technically, they are also allowed to “change” their responses if they find others’ opinions more reasonable and convincing. This constitutes a period of peer interaction and collaborative learning in which students join their efforts and invent diverse solutions to the questions and situations assigned by the teacher. The teacher also contributes to the process by intentionally encouraging each individual student to submit ideas and by explaining whether or not the submitted answers fit the context provided, which aims at enhancing classroom learning outcomes (Black et al., 2003) [56]. This embodies Vygotsky’s (1978) [57] ideal of scaffolding students’ learning through meaningful interaction between learners and the more knowledgeable other.
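To make the mechanism concrete, here is a minimal sketch of the vote-to-font-size logic just described; the CRS platform’s actual implementation is not public, so the function name and scaling choices are illustrative only:

```python
from collections import Counter

def word_cloud_sizes(submissions, min_pt=12, max_pt=48):
    """Scale each submitted word's font size by its vote count.

    The most heavily voted-for word is rendered in the largest font,
    matching the behavior described in the text.
    """
    votes = Counter(word.strip().lower() for word in submissions)
    top = max(votes.values())
    return {word: min_pt + (max_pt - min_pt) * count / top
            for word, count in votes.items()}

# "recycling" dominates, so it gets the largest font (cf. Figure 2).
print(word_cloud_sizes(["recycling", "recycling", "reusing", "recycling", "saving"]))
# {'recycling': 48.0, 'reusing': 24.0, 'saving': 24.0}
```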

Figure 2 is a screenshot of a “word cloud” co-constructed by the 9th graders when they were asked to complete the paragraph on the left of the page by supplying a word for the gap. The word “recycling” received the most votes and therefore sat in the center in the largest font, while responses with fewer votes spread out across the periphery in smaller fonts. The words on the periphery clearly revealed how the students’ syntactic and semantic knowledge influenced their choices for the gap.

Figure 2. CRSs question type: word cloud.

“Brief answers or comments” is a type of opinion poll. Students are invited to submit their answers in phrases, clauses or sentences based on their comprehension of the given multimodal materials, i.e. written scripts, audio and visual pieces. This question type serves multiple purposes, from assessing factual knowledge to soliciting comments or opinions, and it also kindles the students’ passion for collecting evidence and sharing viewpoints. During the interaction, the teacher scaffolds the students by asking them to explain their viewpoints or inviting them to challenge one another’s views. Figure 3 is an example of how the students contributed their respective opinions and collaborated to resolve the question.

With “multiple choice” questions, students are invited to choose from four or five options provided by the teacher; the percentage distribution of their choices is immediately calculated by the CRS app and displayed as a histogram. This helps the teacher observe how well the students have acquired the target concepts. The students also see the differences in their choices and are prompted to re-examine their own answers or challenge others’. The teacher may facilitate this process by asking the students to explain or challenge the choices. Figure 4 shows how the students differed in their answers to the question, which sparked further discussion.

Teachers and researchers should be aware that “multiple choice” questions can also be deployed as dyadic questions, in “True/False” or “Yes/No” formats, not only for factual knowledge but for opinion polls as well. For example, at the end of the experimental sessions, the teacher asked the students to decide whether they would like to engage in this innovative web-based formative assessment in the future, and the students cast votes that were immediately displayed on the screen, as shown in Figure 5.

“Item ranking” is similar to “multiple choice.” Students are invited to choose from a list of options, and the option with the most votes automatically moves to the top of the sequence. Questions of this type can be employed to encourage students to put forward their viewpoints and challenge others’, as can be seen in Figure 6. This format is especially useful when teachers hope to foster students’ critical thinking skills. Additionally, based on the collected answers, the teacher can group and re-group the students and decide the topic to be discussed in the next phase. The tallying logic shared by the multiple-choice, True/False, and item-ranking formats is sketched below.
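As a minimal sketch (assumed logic, not any CRS vendor’s code), the tallying behind these closed formats amounts to counting votes, converting counts to the percentages shown in the histogram, and sorting options by popularity for the ranking view:

```python
from collections import Counter

def tally(responses):
    """Count votes, derive histogram percentages, and rank options by popularity."""
    counts = Counter(responses)
    total = sum(counts.values())
    percentages = {option: round(100 * n / total, 1) for option, n in counts.items()}
    ranking = [option for option, _ in counts.most_common()]  # most-voted first
    return percentages, ranking

percentages, ranking = tally(["A", "C", "A", "B", "A", "C"])
print(percentages)  # {'A': 50.0, 'C': 33.3, 'B': 16.7}
print(ranking)      # ['A', 'C', 'B']
```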

Figure 3. CRSs question type: opinion wall.

Figure 4. CRSs question type: multiple choice.

Figure 5. CRSs question type: multiple choice/True or False.

Figure 6. CRSs question type: item ranking.

At the end of the two-week trial of the web-based CRS formative assessment, a printed survey was given out to collect the students’ feelings about and comments on the technology-enhanced instruction and web-based, classroom-embedded formative assessment. The data were then tabulated and summarized by the researchers to reveal the findings.

4. Findings and Discussions

After the implementation phase, a survey was conducted to collect the students’ feedback on the use of CRSs. Nine questions were given to the students: questions 1 to 6 (Table 2) are Likert-scale items; question 7 (Table 3) is a multiple-choice question about preferences for CRS question and response formats; and questions 8 and 9 are open-ended, aiming to solicit students’ feedback and suggestions for future implementation and modification of the web-based, classroom-embedded formative assessment.

Altogether, the researchers received 45 replies (a 97.8% response rate), 10 from the international class and 35 from the Chinese class. All of the data were recorded, and mean scores were calculated for questions 1 to 6. To check the reliability of the data, the researchers also calculated Cronbach’s alpha, which yielded 0.81, indicating that the internal consistency of the data set is statistically “good.”
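For readers who wish to reproduce the reliability check, a short sketch of the standard Cronbach’s alpha computation follows; the survey responses themselves are not published here, so the score matrix below is a placeholder, not the study’s data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)    # shape: (respondents, items)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of each respondent's total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Placeholder data: 5 respondents x 6 Likert items (not the study's actual responses).
demo = [[5, 4, 5, 4, 5, 4],
        [4, 4, 4, 3, 4, 4],
        [5, 5, 5, 5, 5, 5],
        [3, 3, 4, 3, 3, 3],
        [4, 5, 4, 4, 4, 5]]
print(round(cronbach_alpha(demo), 2))  # values near 1 indicate high internal consistency
```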

The survey shows that the use of web-based CRSs for integrating formative assessment won the students’ favor. The data indicate that the students enjoyed engaging in the web-based formative assessment, where they could respond to questions either individually or through peer collaboration. As the mean scores of questions 1 to 6 in Table 2 show, the students hold highly positive opinions about the technology-enhanced approach: the mean scores of the questions concerning “focused attention,” “understanding,” “communication and collaboration,” “participation,” “comparison against traditional discussion” and “attitudes towards future use of CRS-based formative assessment in class” were all high.

Table 2. Mean scores of questions 1 to 6.

Table 3 ranks the students’ choices of preferred question types and response formats. As shown, the most preferred question type is “word cloud” (71%), followed by “brief answers or comments/opinion polls” (53%) and “item ranking” (49%), while “multiple choice” and “True/False” questions rank at the bottom. The researchers believe that the creative use of the graphic, dynamic and interactive elements of “word cloud” may have contributed to its top ranking, exemplifying the “transformation” level of pedagogy in Puentedura’s (2010) [58] SAMR (Substitution, Augmentation, Modification and Redefinition) model. It is also evident that the students preferred the “open-ended” question types, which give them opportunities to challenge or construct knowledge.

Question 8 is an open-ended item inviting the students to comment freely on what they like about the web-based formative assessment. The researchers aggregated all the responses, fed them into the word-cloud generator at https://www.wordclouds.com/, and created a “word cloud” (Figure 7) highlighting the students’ most condensed remarks. As anticipated, students highly value the features of CRSs that provide “great fun” and “instantaneous feedback,” keep them more “focused,” promote their “participation,” “interaction” and “independent thinking,” and, in particular, give “shy” students an opportunity to “have their voices heard.”

Table 3. Ranking of choices from question 7.

Figure 7. Word cloud featuring students’ responses to question 8.
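The researchers built Figure 7 with the online generator at https://www.wordclouds.com/; for a scriptable alternative, a sketch using the open-source Python wordcloud package (our assumption, not the tool the researchers used) might look like this:

```python
# pip install wordcloud matplotlib
import matplotlib.pyplot as plt
from wordcloud import WordCloud

# Illustrative stand-ins for the students' open-ended replies to question 8.
responses = [
    "great fun", "instantaneous feedback", "focused", "participation",
    "interaction", "independent thinking", "shy students have their voices heard",
]

cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate(" ".join(responses))  # frequency-weighted layout of the combined text

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```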

Question 9 invites the students to comment on the less favorable aspects of CRS use and to suggest modifications and improvements. Most replied “Good,” while some suggested that the teacher should give the students more time to discuss and collaborate before producing answers, and that the hardware installed in the classroom needs upgrading to better support web-based formative assessment and interactive learning.

The findings from the research provide answers to the three research questions:

1) The integration of web-based CRSs has both stimulated the secondary school students’ passion for and promoted their participation in the classroom-embedded assessment;

2) The CRSs question types and response formats provide “great fun” and “instantaneous feedback,” help students “focus,” promote their “participation,” “interaction” and “independent thinking,” and give everyone an opportunity to voice their opinions;

3) The secondary school students exhibit a highly positive attitude towards the web-based and classroom-embedded formative assessment.

5. Conclusions

As can be seen from this research, the advantages of web-based formative assessment echo the findings of previous research (Black & Wiliam, 2018 [4]; Butler, 2019 [9], 2017 [8]; Crooks, 1988 [5]; Earl, 2010 [3]; Fies & Marshall, 2006 [15]; Jiang & Zhang, 2020 [45]; Li & Wong, 2020 [19]; Stroup et al., 2002 [52], 2004 [53], 2006 [54]; Wiliam, 2011 [11]).

For the students, the web-based formative assessment:

1) Enhances peer interaction and collaboration;

2) Helps with individual reflection on learning, in other words, self-regulation of learning behavior and strategies;

3) Significantly lowers the affective filter, enhances motivation and promotes participation and interaction of the whole class;

4) Introduces a graphic, dynamic and interactive element that engages the students;

5) Caters to the learning styles of Generation Z and renders classroom learning more enjoyable than ever before.

For the teachers, the web-based formative assessment:

6) Provides a means for teachers to collect evidence about students’ learning in large classes, which helps to adjust both immediate and long-term teaching;

7) Also informs teachers about plans for future teaching and assessment criteria.

Based on this research and previous studies, it is also important that teachers be aware that the integration of technology and formative assessment must be aligned with pedagogical principles in order to set up an “effective learning environment” (Black, 2016 [59]; Black & Wiliam, 2018 [4]; Brown & Lee, 2015 [2]; Wiliam, 2011 [11]; Wiliam & Thompson, 2007 [60]).

The practice of integrating web-based technology with classroom formative assessment is grounded in the sociocultural theory of mental development (Lantolf, 2009 [36]; Poehner, 2008 [23]; Wiliam, 2014 [25]; Vygotsky, 1978 [57]). It helps to optimize learning outcomes (Butler, 2017) [8] by bridging the gap between the traditional one-way delivery of subject matter (Black & Wiliam, 2018) [4] and the novel approach of interactive, evidence-based formative assessment for instructional purposes. It also helps to close the gap between the difficulties that hinder successful formative assessment in large classes and the ideal practices that cater to both learning and teaching needs.

Some CRS applications have recently introduced a new feature that enables students to access mobile-based assessment tasks at home: students can tackle more complicated questions individually or choose to compete with a “virtual” competitor online.

Despite all the advantages of web-based formative assessment discussed above, the researchers are aware of the limitations of this research. The study is qualitative, lasted only two weeks, and its survey data are indirect, as they were collected from the participants’ subjective choices and personal narration; direct evidence of learning outcomes is still lacking. More informative research would compare pre-test and post-test results, or adopt a quasi-experimental design comparing learning outcomes between a control group and a treatment group.

Regarding the future of technology-enhanced formative assessment, the researchers would like to cite Wiliam (2011) [11]: “there was no simple recipe that could be easily applied in every classroom.” Further research is still needed to verify the validity and reliability of web-based formative assessment in class (Butler, 2017) [8].

Ultimately, it depends on a concerted effort by teachers, administrators, and public stakeholders (WIDA, 2020) [34], who must examine some deeply held notions about testing and assessing, so that future assessments are designed not only to measure but also to empower learning (Earl & Katz, 2006) [38].

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Marzano, R.J. and Costa, A. (1988) Question: Standardized Tests Measure General Cognitive Skills? Answer: No. Educational Leadership: Journal of the Department of Supervision and Curriculum Development, N.E.A., 45, 66-71. https://www.researchgate.net/publication/234671814
[2] Brown, H.D. and Lee, H. (2015) Teaching by Principles: An Interactive Approach to Language Pedagogy. Pearson Education, New York.
[3] Earl, L. (2010) Assessment: A Powerful Lever for Learning. Brock Education Journal, 16, 1-15.
[4] Black, P. and Wiliam, D. (2018) Classroom Assessment and Pedagogy. Assessment in Education: Principles, Policy and Practice, 25, 551-575. https://doi.org/10.1080/0969594X.2018.1441807
[5] Crooks, T.J. (1988) The Impact of Classroom Evaluation Practices on Students. Review of Educational Research, 58, 438-481. https://doi.org/10.3102/00346543058004438
[6] Crooks, T.J., Kane, M.T. and Cohen, A.S. (1996) Threats to the Valid Use of Assessments. Assessment in Education Principles Policy and Practice, 3, 265-286. https://doi.org/10.1080/0969594960030302
[7] Bachman, L.F. and Palmer, A.S. (1996) Language Testing in Practice: Designing and Developing Useful Language Tests. Oxford University Press, Oxford.
[8] Butler, Y.G. (2017) Challenges and Future Directions for Young Learners’ English Language Assessments and Validity Research. In: Wolf, M.K. and Butler, Y.G., Eds., English Language Proficiency Assessments for Young Learners: Innovations in Language Learning and Assessment at ETS, Volume 2, Routledge, New York, 255-273. https://doi.org/10.4324/9781315674391-14
[9] Butler, Y.G. (2019) Assessment of Young English Learners in Instructional Settings. In: Gao, X., Ed., Second Handbook of English Language Teaching, Springer, Berlin, 477-496. https://doi.org/10.1007/978-3-030-02899-2_24
[10] Bennett, R.E. (2009) A Critical Look at the Meaning and Basis of Formative Assessment (ETS Research Memorandum No. RM-09-06). Educational Testing Service, Princeton.
[11] Wiliam, D. (2011) Embedded Formative Assessment. Solution Tree Press, Bloomington.
[12] Black, P. and Wiliam, D. (1998) Assessment and Classroom Learning. Assessment in Education: Principles Policy and Practice, 5, 7-73. https://doi.org/10.1080/0969595980050102
[13] Fuchs, L.S. and Fuchs, D. (1986) Effects of Systematic Formative Evaluation: A Meta-Analysis. Exceptional Children, 53, 199-208. https://doi.org/10.1177/001440298605300301
[14] Black, P., Harrison, C., Lee, C., Marshall, B. and Wiliam, D. (2004) Working inside the Black Box: Assessment for Learning in the Classroom. Phi Delta Kappan, 86, 8-21. https://doi.org/10.1177/003172170408600105
[15] Fies, C. and Marshall, J. (2006) Classroom Response Systems: A Review of the Literature. Journal of Science Education and Technology, 15, 101-109. https://doi.org/10.1007/s10956-006-0360-1
[16] Mei, D. and Wang, Q. (2022) New Development of English Curriculum Standards for Compulsory Education in the New Era: Interpreting the 2022 English Curriculum Standards for Compulsory Education. Elementary Education Curriculum, No. 10, 19-25.
[17] Van Lier, L. (2004) The Ecology and Semiotics of Language Learning: A Sociocultural Perspective. Kluwer Academic Publishers, Boston. https://doi.org/10.1007/1-4020-7912-5
[18] Iwamoto, D. and Hargis, J. (2018) Student Response Systems: A Mindful Approach. In: Harnish, R.J., Bridges, K.R., Sattler, D.N., Signorella, M.L. and Munson, M., Eds., The Use of Technology in Teaching and Learning, The Society for the Teaching of Psychology, 66-73.
[19] Li, K.C. and Wong, B.T.-M. (2020) The Use of Student Response Systems with Learning Analytics: A Review of Case Studies (2008-2017). International Journal of Mobile Learning and Organisation, 14, 63-79. https://doi.org/10.1504/IJMLO.2020.103901
[20] Deal, A. (2007) A Teaching with Technology White Paper: Classroom Response Systems. Carnegie Mellon University, Pittsburgh. https://www.cmu.edu/teaching/technology/whitepapers/ClassroomResponse_Nov07.pdf
[21] Black, P. and Atkin, M. (2014) Ch. 38. The Central Role of Assessment in Pedagogy. In: Lederman, N.G. and Abell, S.K., Eds., Handbook on Research in Science Education, Volume 2, Routledge, Abingdon, 775-790.
[22] Mills, K.L. (2014) Effects of Internet Use on the Adolescent Brain: Despite Popular Claims, Experimental Evidence Remains Scarce. Trends in Cognitive Sciences, 18, 385-387. https://doi.org/10.1016/j.tics.2014.04.011
[23] Poehner, M.E. (2008) Dynamic Assessment: A Vygotskian Approach to Understanding and Promoting L2 Development (Vol. 9). Springer Science and Business Media, Berlin.
[24] Black, P. and Wiliam, D. (2009) Developing the Theory of Formative Assessment. Educational Assessment, Evaluation and Accountability, 21, 5-31. https://doi.org/10.1007/s11092-008-9068-5
[25] Wiliam, D. (2014) Formative Assessment and Contingency in the Regulation of Learning Processes. Paper presented at the Annual Meeting of the American Educational Research Association, Philadelphia.
[26] Broadfoot, P.M., Daugherty, R., Gardner, J., Gipps, C.V., Harlen, W., James, M., et al. (1999) Assessment for Learning: Beyond the Black Box. University of Cambridge School of Education, Cambridge.
[27] Cowie, B. and Bell, B. (1999) A Model of Formative Assessment in Science Education. Assessment in Education: Principles, Policy and Practice, 6, 32-42. https://doi.org/10.1080/09695949993026
[28] Looney, J. (2005) Formative Assessment: Improving Learning in Secondary Classrooms. Organisation for Economic Cooperation and Development, Paris.
[29] Kahl, S. (2005) Where in the World Are Formative Tests? Right under Your Nose! Education Week, 25, 38.
[30] Shepard, L.A., Hammerness, K., Darling-Hammond, L., Rust, F., Snowden, J.B., Gordon, E., et al. (2005) Assessment. In: Darling-Hammond, L. and Bransford, J., Eds., Preparing Teachers for a Changing World: What Teachers Should Learn and Be Able to Do, Jossey-Bass, San Francisco, 275-326.
[31] Lantolf, J.P., Thorne, S.L. and Poehner, M.E. (2015) Sociocultural Theory and Second Language Development. In: VanPattern, B. and Williams, J., Eds., Theories in Second Language Acquisition: An Introduction, 2nd Edition, Routledge, New York, 207-226.
[32] Lantolf, J.P., Poehner, M.E. and Thorne, S.L. (2020) Sociocultural Theory and L2 Development. In: Theories in Second Language Acquisition, Routledge, London, 223-247. https://doi.org/10.4324/9780429503986-10
[33] Thorne, S.L. (2008) Mediating Technologies and Second Language Learning. In: Coiro, J., Knobel, M., Lankshear, C. and Leu, D., Eds., Handbook of Research on New Literacies, Lawrence Erlbaum, Mahwah, 417-449.
[34] WIDA (2020) WIDA English Language Development Standards Framework, 2020 Edition: Kindergarten-Grade 12. Board of Regents of the University of Wisconsin System, Madison.
[35] Strange, C.C. and Banning, J.H. (2015) Learning through Mobile Technology. In: Designing for Learning: Creating Campus Environments for Student Success, 2nd Edition, Jossey-Bass, San Francisco, 113-133.
[36] Lantolf, J.P. (2009) Dynamic Assessment: The Dialectic Integration of Instruction and Assessment. Language Teaching, 42, 355-368. https://doi.org/10.1017/S0261444808005569
[37] Birenbaum, M. (1996) Assessment 2000: Towards a Pluralistic Approach to Assessment. In: Birenbaum, M. and Dochy, F.J.R.C., Eds., Alternatives in Assessment of Achievements, Learning Process, and Prior Knowledge, Kluwer Academic Publishers Group, Dordrecht, 3-29. https://doi.org/10.1007/978-94-011-0657-3_1
[38] Earl, L. and Katz, S. (2006) Rethinking Classroom Assessment with Purpose in Mind: Assessment for Learning, Assessment as Learning, Assessment of Learning. Canada: Manitoba Education, Citizenship and Youth, Winnipeg. https://www.edu.gov.mb.ca/k12/assess/wncp/full_doc.pdf
[39] Wiliam, D. (2013) Assessment: The Bridge between Teaching and Learning. Voices from the Middle, 21, 15-20.
[40] Wolf, D., Bixby, J., Glenn III, J. and Gardner, H. (1991) To Use Their Minds Well: Investigating New Forms of Student Assessment. Review of Research in Education, 17, 31-74. https://doi.org/10.2307/1167329
[41] Shulman, L. (2005) The Signature Pedagogies of the Professions of Law, Medicine, Engineering, and the Clergy: Potential Lessons for the Education of Teachers. Talk delivered at the Math Science Partnerships (MSP) Workshop “Teacher Education for Effective Teaching and Learning,” 6-8.
[42] Wiliam, D. (2016) Leadership for Teacher Learning: Creating a Culture Where All Teachers Improve So That All Learners Succeed. Learning Sciences International, West Palm Beach.
[43] Jiang, D., Renandya, W.A. and Zhang, L.J. (2017) Evaluating ELT Multimedia Courseware from the Perspective of Cognitive Theory of Multimedia Learning. Computer Assisted Language Learning, 30, 726-744. https://doi.org/10.1080/09588221.2017.1359187
[44] Hargis, J., Cavanaugh, C., Kamali, T. and Soto, M. (2014) A Federal Higher Education iPad Mobile Learning Initiative: Triangulation of Data to Determine Early Effectiveness. Innovative Higher Education, 39, 45-57. https://doi.org/10.1007/s10755-013-9259-y
[45] Jiang, D. and Zhang, L.J. (2020) Collaborating with “Familiar” Strangers in Mobile-Assisted Environments: The Effect of Socializing Activities on Learning EFL Writing. Computers and Education, 150, Article ID: 103841. https://doi.org/10.1016/j.compedu.2020.103841
[46] Kern, R.G. (1995) Restructuring Classroom Interaction with Networked Computers: Effects on Quantity and Characteristics of Language Production. Modern Language Journal, 79, 457-476. https://doi.org/10.1111/j.1540-4781.1995.tb05445.x
[47] Darhower, M. (2002) Interactional Features of Synchronous Computer-Mediated Communication in the Intermediate L2 Class: A Sociocultural Case Study. CALICo Journal, 19, 249-277. https://doi.org/10.1558/cj.v19i2.249-277
[48] Chun, D.M. (1994) Using Computer Networking to Facilitate the Acquisition of Interactive Competence. System, 22, 17-31. https://doi.org/10.1016/0346-251X(94)90037-X
[49] Salaberry, M.R. (2000) L2 Morphosyntactic Development in Text-Based Computer-Mediated Communication. Computer Assisted Language Learning, 13, 5-27. https://doi.org/10.1076/0958-8221(200002)13:1;1-K;FT005
[50] Payne, J.S. and Ross, B. (2005) Synchronous CMC, Working Memory, and Oral L2 Proficiency Development. Language Learning and Technology, 9, 35-54.
[51] Krathwohl, D.R. (2002) A Revision of Bloom’s Taxonomy: An Overview. Theory into Practice, 41, 212-218. https://doi.org/10.1207/s15430421tip4104_2
[52] Stroup, W.M., Kaput, J.J., Ares, N.M., Wilensky, U., Hegedus, S., Roschelle, J., et al. (2002) The Nature and Future of Classroom Connectivity: The Dialectics of Mathematics in the Social Space. The 24th Annual Meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education, Athens.
[53] Stroup, W.M., Ares, N. and Hurford, A. (2004) A Taxonomy of Generative Activity Design Supported by Next Generation Classroom Networks. Proceedings of the 28th Annual Conference of Psychology in Mathematics Education North America, Bergen, 14-18 July 2004, 1401-1410.
[54] Stroup, W.M., Ares, N. and Lesh, R.A. (2006) Diversity by Design: The What, Why and How of Generativity in Next Generation Classroom Networks. In: Lesh, R.A., Hamilton, E. and Kaput, J.J., Eds., Foundations of the Future: Twenty-First Century Models and Modeling, Lawrence Erlbaum, New York, 367-393.
[55] Melchor-Couto, S. (2018) Virtual World Anonymity and Foreign Language Oral Interaction. ReCALL, 30, 232-249. https://doi.org/10.1017/S0958344017000398
[56] Black, P., Harrison, C., Lee, C., Marshall, B. and Wiliam, D. (2003) Assessment for Learning: Putting It into Practice. Open University Press, Berkshire.
[57] Vygotsky, L. (1978) Mind in Society: The Development of Higher Psychological Processes. Harvard University Press, Cambridge.
[58] Puentedura, R. (2010) SAMR and TPCK: Intro to Advanced Practice. http://hippasus.com/resources/sweden2010/SAMR_TPCK_IntroToAdvancedPractice.pdf
[59] Black, P. (2016) The Role of Assessment in Pedagogy and Why Validity Matters. In: Wyse, D., Hayward, L. and Pandya, J., Eds., Sage Handbook of Curriculum, Pedagogy and Assessment, Vol. 2, Sage, Thousand Oaks, 725-739. https://doi.org/10.4135/9781473921405.n45
[60] Wiliam, D. and Thompson, M. (2007) Integrating Assessment with Instruction: What Will It Take to Make It Work? In: Dwyer, C.A., Ed., The Future of Assessment: Shaping Teaching and Learning, Lawrence Erlbaum Associates, Mahwah, 53-82. https://doi.org/10.4324/9781315086545-3

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.