The Impact of English Writing Instruction Integrated with Automated Evaluation Systems on College English Teachers
—iWrite 2.0 System as an Example

Abstract

To investigate how integrating an automated evaluation system into English writing instruction affects college English teachers, this study designed a semester-long experiment to explore the changes in college English teachers before and after an automated evaluation system was integrated into their writing classrooms. Qualitative and quantitative results were drawn from surveys and interviews. Suggestions were also proposed for college English writing instruction and for teacher development in the “Intelligence Plus” era.

Share and Cite:

Guo, Y. and Xiong, W. (2024) The Impact of English Writing Instruction Integrated with Automated Evaluation Systems on College English Teachers
—iWrite 2.0 System as an Example. Open Journal of Modern Linguistics, 14, 1159-1169. doi: 10.4236/ojml.2024.146061.

1. Introduction

Today, with the rapid development of information technology, artificial intelligence plays a significant role in many fields, and education is no exception. As an important part of university English teaching, English writing instruction has long been constrained by various factors, the most prominent of which is writing evaluation. Teachers are often unable to provide timely feedback because of excessive workload, and students may be unable to revise their compositions after receiving feedback because of deficient knowledge or fear of difficulty, resulting in ineffective evaluation. The introduction of automated evaluation systems offers a solution to these challenges. Built on corpora and employing artificial intelligence technologies such as natural language processing, automated evaluation systems interact with teachers and students to provide personalized teaching support and feedback. They can not only assess students’ compositions in real time but also analyze the problems in students’ writing from multiple dimensions, which helps students overcome fear and reluctance and facilitates personalized guidance. The systems also free English teachers from the burden of evaluation, allowing them to devote more attention to improving the content of students’ writing.

The iWrite English Writing Teaching and Evaluation System (hereinafter the iWrite system, now in version 2.0), jointly launched by the Foreign Language Teaching and Research Press and the China National Foreign Language Education Research Center, is one of the most widely used automated writing evaluation systems. The system is closely integrated with current college English writing instruction and keeps pace with the requirements of major English examinations both at home and abroad. It evaluates compositions along four dimensions: content, text structure, language, and technical specifications, aiming to help students improve grammatical accuracy and enhance the coherence of their compositions.

It should be noted that the key factor in whether information technology can be successfully applied to language teaching is not computer hardware or software but “humanware” (Warschauer & Meskill, 2000: pp. 303-318). In English writing instruction specifically, the English teacher is the key to teaching quality. Teachers therefore urgently need to understand how to better integrate evaluation systems into college English writing instruction to achieve better teaching outcomes. Attention should also be paid to the challenges university teachers face under the impact of artificial intelligence in the new era, and to how they can better adapt to the intelligent model of college English writing courses for long-term career development.

2. Literature Review

Automated writing evaluation systems originated in the United States in the 1960s. Early scoring engines have evolved into automated evaluation systems (such as My Access!, Criterion, Writing Roadmap, Write to Learn, etc.) which can provide personalized feedback, and have been widely applied in classroom formative assessments (Wu & Tang, 2012: pp. 3-4).

2.1. Research on the Application of Automated Writing Evaluation Systems in Teaching

Early research focused on the validity of evaluation systems, that is, the similarity between the scores given by automated evaluation systems and those given by human raters (Bennett, 2006; Wang & Brown, 2007; Rich et al., 2008; McCurry, 2010; Liang, 2011). Subsequent research has concentrated mainly on the application of automated evaluation systems in teaching, in two respects. The first is the impact of automated evaluation systems on students’ writing abilities (Shi, 2012; Wang & Liu, 2012; Tang & Wu, 2012; Hu, 2015; Yang & Dai, 2015). Shi (2012) and Yang and Dai (2015) conducted empirical research, through questionnaires and interviews, on the feedback and scoring that “Juku Pi’gai Wang” gives on student writing. They found that students generally believe automated evaluation systems can improve their writing skills, but that the systems’ feedback focuses mainly on vocabulary and grammar while neglecting content, logic, coherence, and text structure. Tang and Wu (2012) compared students who used an automated writing evaluation system with those who did not, finding that users scored higher in content, structure, grammar, and vocabulary, and also showed greater interest and confidence in writing. Researchers have also constructed writing instruction models suited to college English writing instruction. The second respect is the evaluation of automated writing evaluation systems themselves (Page, 2003; Attali, 2004; Bull & McKenna, 2004; Wang, 2011; Zhou, 2011), focusing mainly on the advantages of automated evaluation over manual evaluation and on how to better utilize automated evaluation systems in teaching.
It is worth noting that some scholars believe that automated evaluation systems should serve as an auxiliary tool in English writing instruction rather than replacing teachers in the teaching process (Burstein & Marcu, 2003; Ware & Warschauer, 2006; Chen & Cheng, 2008).

2.2. Research on the Impact of Automated Writing Evaluation Systems on Teachers

Research on the impact of automated writing evaluation systems on teachers is relatively limited. Warschauer & Grimes (2008) found that teachers were not very clear about the system and its application in teaching. There is also evidence to suggest that teachers’ attitudes affect students’ acceptance and utilization of the system (Chen & Cheng, 2008; Tang & Rich, 2011; Wang, 2008).

2.3. Research on the iWrite 2.0 System

The iWrite system, jointly developed by the Foreign Language Teaching and Research Press and Beijing Foreign Studies University, has received widespread attention from researchers since its launch, and it has now been upgraded to version 2.0. Li and Tian (2018) used consistency and consensus algorithms to compare manual scoring with machine scoring by the iWrite 2.0 system from multiple perspectives, and conducted an empirical study on the reliability of iWrite 2.0 online scoring of English compositions. Liu and Liu (2018) examined the process by which students revised their compositions through the iWrite 2.0 system and found a significant decrease in linguistic errors in lexicon, syntax, technical specifications, and so on. Xu (2020) studied the impact on English writing of iWrite 2.0 system feedback alone and of teacher feedback combined with system feedback, as well as students’ and teachers’ attitudes toward the two feedback modes. The study found that the iWrite 2.0 system can improve students’ writing skills, especially in language and technical specifications, and that combining teacher feedback with system feedback is more effective and brings higher student satisfaction. Li (2021) examined the consistency between iWrite’s automatic scoring and manual scoring in the National Talent Examination and found the consistency unsatisfactory, attributing this mainly to iWrite’s limited ability to judge the relevance of composition content and the standardization of format in different contexts. Liu et al. (2022) constructed a human-computer collaborative teaching model for English writing based on the iWrite 2.0 system, finding that tool-mediated human-computer collaboration fully leveraged the auxiliary tool and improved teaching efficiency. At the same time, it strengthened timely teacher-student interaction, promoting students’ autonomous learning and enabling precise teaching.

The literature review indicates that studies on integrating automated writing evaluation systems into college English writing instruction focus mainly on the systems themselves, on comparisons between automated and human feedback, and on the systems’ impact on improving students’ English writing abilities. Insufficient attention has been paid to the impact on English teachers of integrating automated evaluation systems into college English writing courses. In fact, as noted above, English teachers play a pivotal role in these courses. First, integrating automated writing evaluation systems more effectively into college English writing courses requires English teachers’ effort, and understanding the systems’ impact on teachers can help them integrate the systems better. Second, as artificial intelligence increasingly influences every aspect of education, it is imperative to provide artificial intelligence training for English teachers, and such training should be based on an understanding of the systems’ impact on them. Third, knowledge of this impact helps researchers conduct subsequent studies, offering guidance for integrating these systems into courses comprehensively and providing references for college English teachers’ professional development.

3. Research Content

3.1. Research Objectives

This study aims to explore the impact of automated evaluation systems on teachers’ teaching and on teachers themselves by comparing changes in the participating teachers before and after using such a system. The impact on teaching focuses mainly on changes in students’ autonomous learning and in teacher-student interaction. The impact on teachers themselves focuses on three aspects: the transformation of teachers’ attitudes toward writing instruction, the possible deepening of their understanding of writing pedagogy, and the enhancement of their ability to apply intelligent technology innovatively in teaching practice. By analyzing the research data, this study provides suggestions for better integrating automated evaluation systems into college English writing instruction and offers references for the development of teachers’ teaching abilities.

3.2. Research Subjects

The subjects of this study are 20 English teachers from a provincial first-tier university in China. None of the 20 had used the iWrite 2.0 system before this study. Four of them teach writing to English majors, while the other 16 teach English writing courses to non-English majors. All hold a master’s or doctoral degree and have more than five years of teaching experience, indicating a rich teaching background.

3.3. Research Content

Based on the objectives of this study, the research observes the participating teachers over a semester, primarily examining the changes before and after they use the iWrite system, including alterations in the teaching process and shifts within the teachers themselves. With the rapid advancement of artificial intelligence, teachers’ innovative application ability (Xie, 2020), that is, the ability to use intelligent technology creatively in teaching, is key to integrating automated evaluation systems into college English writing instruction, since such application plays a crucial role in the effectiveness of “intelligence plus” course instruction and is directly related to teachers’ professional development. This research therefore pays particular attention to teachers’ innovative application of intelligent technology while studying the changes within the teachers themselves, analyzing whether the participating teachers’ application ability changed over the course of the experiment. The findings will provide a reference for subsequent research on measures to enhance teachers’ innovative application ability.

3.3.1. Pre-Survey

In this stage, the researcher aims to understand the basic situation of the participating teachers’ English writing classes and to assess their teaching abilities, especially their innovative application ability with intelligent technology. The data are obtained mainly through two methods: a pre-questionnaire and a pre-interview. The design of the pre-survey was inspired by the survey designed by Xie (2020).

1) Pre-Questionnaire (the pre-questionnaire uses a 5-point Likert scale):

Key viewpoints of teachers on basic situation of their writing classes, teaching abilities, and innovative application of intelligent technology in teaching:

Each statement below is rated for cognitive attitude on a five-point scale: Absolutely disagree / Disagree / Neutral / Agree / Absolutely agree.

1) My classroom is conducive to good teacher-student interaction.

2) My students possess strong autonomous learning capabilities.

3) I have strong teaching abilities in writing courses.

4) I have comprehensive knowledge of English writing pedagogy.

5) I am capable of consciously using data for problem analysis and resolution.

6) I can effectively utilize intelligent learning platforms to obtain students’ learning data.

7) I am able to process various types of learning data obtained.

8) I understand the purpose and role of learning data analysis.

9) I am proficient in using some data analysis software.

2) Pre-Interview:

a) Do you have knowledge about AWE (Automated Writing Evaluation)?

b) What is your point of view on the role of artificial intelligence in teaching?

c) What preparations do you think teachers should make to implement “intelligence plus” teaching?

d) How do you evaluate your own ability to innovatively apply intelligent technology in teaching?

e) What aspects do you think are necessary to improve teachers’ innovative application ability to implement intelligent teaching?

3.3.2. Post-Survey

The post-survey stage comprises three parts: a post-questionnaire, a post-interview, and a supplementary survey.

1) Post-Questionnaire: The participating teachers fill out the pre-questionnaire form again after a semester of English writing instruction with the iWrite 2.0 system. The researchers compare the post-survey and pre-survey results to determine whether teacher-student interaction has improved and whether students’ autonomous learning abilities have been enhanced after using the system. In terms of teaching ability, the research examines whether teachers’ understanding of writing instruction and pedagogy has deepened and what changes have occurred in their innovative application ability with intelligent technology.

2) Post-Interview: The content of the post-interview builds upon the pre-interview, mainly guiding teachers to explain in as much detail as possible the changes they have experienced in their teaching process and in their own teaching abilities. The interview questions are as follows:

a) Do you have a better understanding of AWE (Automated Writing Evaluation) now?

b) What is your view of artificial intelligence in teaching now, compared with before the experiment?

c) What preparations do you think teachers should make for implementing “intelligence plus” teaching?

d) How do you evaluate your own ability to innovatively apply intelligent technology in teaching now, and has it improved since the beginning of the semester?

e) What aspects do you think are necessary to improve teachers’ ability to innovatively apply intelligent technology in teaching?

f) What new insights do you have regarding writing instruction and pedagogy?

g) After using iWrite system, has your classroom seen an improvement in students’ autonomous learning abilities? Have teacher-student interactions improved? In what ways have they improved?

3) Supplementary Survey: The supplementary survey mainly includes two aspects: teacher journal writing and online exchanges.

Teacher Journal Writing: Since this study covers an entire semester, the participating teachers complete an open-ended interview at the end of the semester. To prevent ambiguous answers caused by unclear memory, this study asks teachers to keep journals based on an outline provided by the researcher, whose content is consistent with that of the post-interview. In this way, the participating teachers can provide more specific information during the interview, and the researchers can capture turning points in the teachers’ teaching process and capabilities.

Online Exchanges: Online exchanges are part of the teacher support in this study. Teacher support is an essential guarantee for the successful application of automated evaluation systems in English writing instruction, and it can be provided in various ways, such as introducing course objectives, explaining the system’s orientation and usage, and discussing writing pedagogy and teaching issues (Wu & Tang, 2012: p. 5). Before the research begins, this study introduces the course objectives to the participating teachers and teaches them how to use and position the system, which may take a month or so for the teachers to become sufficiently familiar with it. More importantly, during the study, online exchanges are held regularly to facilitate discussion of English writing pedagogy and teaching issues. The researcher records these exchanges, and the records serve as a supplement to the research analysis.
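The pre/post questionnaire comparison at the heart of this design can be sketched in a few lines of Python. The item scores below are invented for illustration only (the study itself analyzes its data in SPSS 22.0):

```python
# Hypothetical sketch: comparing one Likert item's pre- and post-semester
# responses from the 20 participating teachers. All scores are invented
# for illustration; the actual analysis in the study uses SPSS 22.0.
from statistics import mean, median

# 5-point Likert responses (1 = absolutely disagree ... 5 = absolutely agree)
# on item 1, "My classroom is conducive to good teacher-student interaction".
pre  = [3, 2, 3, 4, 3, 2, 3, 3, 4, 2, 3, 3, 2, 4, 3, 3, 2, 3, 4, 3]
post = [4, 3, 4, 4, 4, 3, 4, 3, 5, 3, 4, 4, 3, 4, 4, 4, 3, 4, 4, 4]

# Paired per-teacher differences: a positive value means the teacher rated
# the item higher after the semester with the automated evaluation system.
diffs = [b - a for a, b in zip(pre, post)]
improved = sum(d > 0 for d in diffs)

print(f"pre mean = {mean(pre):.2f}, post mean = {mean(post):.2f}")
print(f"median shift = {median(post) - median(pre):g}")
print(f"{improved} of {len(diffs)} teachers rated the item higher after the semester")
```

Because each teacher answers the same items twice, a paired (rather than independent-samples) comparison of this kind is the natural fit for the design.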

3.4. Research Approach

This study follows a basic approach of “Research Preparation”, “Research Process”, “Results Analysis”, and “Suggestions and Recommendations”, as shown in Figure 1. “Research Preparation” mainly refers to teacher support, which includes helping teachers clarify course objectives, become familiar with the usage and positioning of the iWrite 2.0 system, and review the main writing pedagogies. “Research Process” refers to the entire process of observing the participating teachers’ teaching: at the beginning of the semester the teachers complete the pre-survey and pre-interview, throughout the semester they write teaching logs and attend online exchanges, and by the end of the semester they finish the post-survey and post-interview. “Results Analysis” refers to analyzing the changes in the teachers’ teaching process and teaching abilities based on the survey and interview results. Changes in the teaching process include alterations in teacher-student interaction and in students’ autonomous learning; changes in teaching abilities include shifts in teachers’ understanding of writing instruction, their comprehension of writing pedagogy, and their innovative application ability with intelligent technology. During this phase, SPSS 22.0 is used to perform frequency analysis, central tendency analysis, variance analysis, and factor analysis on the data from the 5-point Likert scale, while supplementary analyses draw on teacher interviews, teaching logs, and records of the online meetings to obtain an authentic picture of the changes in the teachers’ application of the automated evaluation system. “Suggestions and Recommendations” mainly concern how to better integrate automated evaluation systems into college English writing instruction in the “intelligence plus” era, how college English writing teachers can better plan their careers in this era, and potential directions for advancing related research. The research approach is illustrated in Figure 1 below:

1) Research Preparation. It takes about one month. During this stage, the researchers help the participating teachers clarify the course objectives, become familiar with the usage of the iWrite 2.0 system, and discuss English writing pedagogy through teaching seminars. When necessary, the researcher shares representative monographs on English writing pedagogy as references for the teachers.

2) Research Process. It takes about six months and is divided into three parts. In the first part, the participating teachers complete the pre-questionnaire and pre-interview. In the second part, the teachers integrate the iWrite 2.0 system into their English writing classes, with the researcher observing the teaching process throughout; the teachers also write teaching logs regularly and participate in online meetings from time to time. In the third part, the teachers complete the post-survey and post-interview.

3) Results Analysis. It takes about three months. During this stage, researchers combine qualitative and quantitative analyses based on the survey results, so as to check whether there have been changes in the teaching process and teaching abilities of the participating teachers before and after integrating iWrite2.0 system into English writing instruction.

4) Suggestions and Recommendations. It takes about eight months. During this stage, based on the results of the research analysis, researchers will write a research report on “The Impact of English Writing Instruction Integrated with Automated Evaluation Systems on College English Teachers”, and compose related academic papers.

Figure 1. Research approach.
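As a concrete illustration of the quantitative side of “Results Analysis”, the frequency and central-tendency statistics run on each Likert item can be sketched as follows. The responses are invented for illustration; the study itself performs these analyses in SPSS 22.0:

```python
# Hypothetical sketch of the frequency and central-tendency analyses that
# the "Results Analysis" phase runs on each Likert item. The responses are
# invented for illustration; the study performs these analyses in SPSS 22.0.
from collections import Counter
from statistics import mean, median, mode

# One item's 5-point Likert responses from the 20 participating teachers.
responses = [4, 3, 4, 5, 3, 4, 2, 4, 3, 4, 5, 3, 4, 4, 3, 2, 4, 5, 3, 4]

# Frequency analysis: how many teachers chose each scale point.
freq = Counter(responses)
for level in range(1, 6):
    print(f"level {level}: {freq.get(level, 0)} of {len(responses)} teachers")

# Central tendency: mean, median, and mode of the item.
print(f"mean = {mean(responses):.2f}, median = {median(responses):g}, mode = {mode(responses)}")
```

The same per-item summary, repeated for the pre- and post-questionnaires, is what the comparison between the two survey rounds is built on.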

4. Innovations of This Study and Suggestions for Subsequent Research

The innovation of this study lies in two aspects. First, it is innovative in its research subjects. Most studies on integrating automated evaluation systems into college English writing instruction focus on the systems’ impact on college students’ English writing abilities, with less attention to their impact on English teachers. Yet the improvement of students’ writing abilities through automated evaluation systems must be guided by English teachers, so focusing on the systems’ impact on teachers can help the systems better enhance students’ writing skills. Given the rapid development of artificial intelligence, it is also imperative to provide technical training for English teachers, and understanding the systems’ impact on them is a prerequisite for targeted training. Second, the study is innovative in its research methodology. Previous studies have typically been either qualitative or quantitative, but when examining the effects of system usage on the teaching process or on teaching abilities, neither approach alone can meet the research requirements. This study attempts to gain a more comprehensive and clearer picture of how automated evaluation systems affect English teachers by combining quantitative research on quantifiable indicators with qualitative methods such as teacher interviews and teaching logs.

Going forward, the researcher will proceed with the study in an orderly way based on the research objectives and the established approach, aiming to provide suggestions for better integrating automated evaluation systems, represented by the iWrite 2.0 system, into college English writing instruction and to offer references for the development of teachers’ teaching abilities.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Attali, Y. (2004). Exploring the Feedback and Revision Features of Criterion. Paper Presented at the Meeting of the National Council on Measurement in Education (NCME), San Diego.
[2] Bennett, R. E. (2006). Technology and Writing Assessment: Lessons Learned from the US National Assessment of Educational Progress. Paper Presented at the Annual Conference of the International Association for Educational Assessment. Singapore.
[3] Bull, J., & McKenna, C. (2004). A Blueprint for Computer-Assisted Assessment. Routledge.
[4] Burstein, J., & Marcu, D. (2003). Developing Technology for Automated Evaluation of Discourse Structure in Student Essays. In M. D. Shermis, & J. C. Burstein (Eds.), Automated Essay Scoring: A Cross-Disciplinary Perspective. Lawrence Erlbaum.
[5] Chen, E., & Cheng, E. (2008). Beyond the Design of Automated Writing Evaluation: Pedagogical Practices and Perceived Learning Effectiveness in EFL Writing Classes. Language Learning & Technology, 12, 94-112.
[6] Hu, X. (2015). Effects of Online Self-Correction on EFL Students’ Writing Quality. Technology Enhanced Foreign Language Education, No. 163, 45-49.
[7] Li, Y. (2021). A Study of Consistency between iWrite Scoring and Human Scoring: Based on ETIC Email Writing. Beijing Foreign Studies University.
[8] Li, Y., & Tian, X. (2018). An Empirical Research into the Reliability of iWrite 2.0. Modern Educational Technology, No. 2, 75-80.
[9] Liang, M. (2011). The Development of an Automatic Scoring System for Large-Scale English Composition Exams. Higher Education Press.
[10] Liu, Y., & Liu, J. (2018). Effects of Online Automated Writing Evaluation System on EFL Learners’ Writing Revision—An Empirical Study Based on iWrite. Foreign Language Education in China, No. 2, 67-87.
[11] Liu, Y., Liu, S., & Yang, J. (2022). Man-Machine Cooperative Teaching and Its Application from the Sociocultural Activity-Theory Perspective: A Case of iWrite-Assisted English Writing Instruction. China Educational Technology, No. 11, 108-116.
[12] McCurry, D. (2010). Can Machine Scoring Deal with Broad and Open Writing Tests as Well as Human Readers? Assessing Writing, 15, 118-129.
https://doi.org/10.1016/j.asw.2010.04.002
[13] Page, E. (2003). Project Essay Grade: PEG. In M. D. Shermis, & J. Burstein (Eds.), Automated Essay Scoring: A Cross-Disciplinary Perspective. Lawrence Erlbaum Associates.
[14] Rich, C. S., Harrington, H., Kim, J., & West, B. (2008). Automated Essay Scoring in State Formative and Summative Writing Assessment. Paper Presented at AERA New York City.
[15] Shi, X. (2012). A Tentative Study on the Validity of Online Automated Essay Scoring Used in the Teaching of EFL Writing—Exemplified by http://www.pigai.org. Modern Educational Technology, No. 10, 67-71.
[16] Tang, J., & Rich, C. S. (2011). Online Technology—Enhanced English Language Writing Assessment in the Chinese Classroom. Paper Presented at 2011 Annual Conference of American Educational Research Association, New Orleans.
[17] Tang, J., & Wu, Y. (2012). A Study on an Automated Writing Assessment System Used in the College EFL Classroom. Foreign Languages and Their Teaching, No. 4, 53-58.
[18] Wang, J., & Brown, M. S. (2007). Automated Essay Scoring versus Human Scoring: A Comparative Study. Journal of Technology, Learning, and Assessment, 6, 4-28.
[19] Wang, S. (2011). On On-Line English Writing Feedback with Writing Roadmap 2.0 Automated Evaluation System. Modern Educational Technology, No. 3, 76-81.
[20] Wang, Y. (2008). A Study on WRM Used in the College EFL Classroom of the Freshman Year. Paper Presented at the Meeting of the “Teaching of EFL Writing in the College”, Beijing.
[21] Wang, Y., & Liu, Z. (2012). A Study of Teacher Feedback on Accuracy, Fluency, Complexity and Quality of EFL Writing. Foreign Language Education, No. 6, 49-53.
[22] Ware, P. D., & Warschauer, M. (2006). Electronic Feedback and Second Language Writing. In K. Hyland, & F. Hyland (Eds.), Feedback in Second Language Writing (pp. 105-122). Cambridge University Press.
https://doi.org/10.1017/cbo9781139524742.008
[23] Warschauer, M., & Grimes, D. (2008). Automated Writing Assessment in the Classroom. Pedagogies: An International Journal, No. 3, 22-36.
[24] Warschauer, M., & Meskill, C. (2000). Technology and Second Language Learning. In J. Rosenthal (Ed.), Handbook of Undergraduate Second Language Education. Lawrence Erlbaum.
[25] Wu, Y., & Tang, J. (2012). Impact of Integrating an Automated Assessment Tool into English Writing on University Teachers. Technology Enhanced Foreign Language Education, No. 7, 3-10.
[26] Xie, J. (2020). Research on Construction of Teachers Precision Teaching Ability Model. Northeast Normal University.
[27] Xu, H. (2020). An Empirical Study on the Influence of Two Feedback Modes on College English Writing—Based on iWrite2.0 Writing Evaluation System. Jiangxi Normal University.
[28] Yang, X., & Dai, Y. (2015). An Empirical Study on College English Autonomous Writing Teaching Model Based on www.pigai.org. Technology Enhanced Foreign Language Education, No. 3, 17-22.
[29] Zhou, Y. (2011). On the Application of Formative Tool in English Writing—Difficulties and Solutions. Modern Educational Technology, No. 9, 88-93.

Copyright © 2025 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.