
Building a Better Mousetrap: Replacing Subjective Writing Rubrics with More Empirically-Sound Alternatives for EFL Learners

PP. 1320-1325
DOI: 10.4236/ce.2012.38193

ABSTRACT

Although writing rubrics can provide valuable feedback, the criteria they use are often subjective, leaving raters to fall back on their own tacit biases. The purpose of this study is to determine whether discrete, empirically measurable characteristics of texts can be used in lieu of a rubric to objectively assess the writing quality of EFL learners. The academic paragraphs of 38 participants were evaluated according to several empirically calculable criteria related to cohesion, content, and grammar. These values were then compared, via multiple regression, to scores obtained from holistic scoring by multiple raters. The resulting correlation between variables (R = .873) was highly significant, suggesting that more empirical, impartial means of writing evaluation can now be used in conjunction with technology to provide student feedback and teacher training.
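The approach described in the abstract — regressing empirically computed text features onto holistic rater scores and reporting the multiple correlation R — can be sketched in plain Python. The feature names and all numbers below are invented for illustration; the study's actual cohesion, content, and grammar measures are more elaborate than these stand-ins.

```python
import math

def fit_ols(X, y):
    """Ordinary least squares via the normal equations (intercept added)."""
    Xd = [[1.0] + list(row) for row in X]
    n, k = len(Xd), len(Xd[0])
    # Build X^T X and X^T y
    A = [[sum(Xd[i][a] * Xd[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
    b = [sum(Xd[i][a] * y[i] for i in range(n)) for a in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        p = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        b[col], b[p] = b[p], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back-substitution
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

def multiple_R(X, y, beta):
    """Correlation between fitted and observed holistic scores (the study's R)."""
    yhat = [beta[0] + sum(w * x for w, x in zip(beta[1:], row)) for row in X]
    my, mh = sum(y) / len(y), sum(yhat) / len(yhat)
    num = sum((a - my) * (p - mh) for a, p in zip(y, yhat))
    den = (math.sqrt(sum((a - my) ** 2 for a in y))
           * math.sqrt(sum((p - mh) ** 2 for p in yhat)))
    return num / den

# Invented feature rows: [cohesive ties per sentence, type-token ratio,
# errors per 100 words] for six hypothetical paragraphs
features = [[2.1, 0.52, 8.0], [3.4, 0.61, 4.5], [1.8, 0.47, 11.0],
            [4.0, 0.66, 3.0], [2.9, 0.58, 6.2], [3.7, 0.63, 5.1]]
holistic = [2.5, 4.0, 2.0, 4.5, 3.5, 4.0]  # invented mean rater scores (1-5 scale)

beta = fit_ols(features, holistic)
print("R =", round(multiple_R(features, holistic, beta), 3))
```

With real data, the same procedure yields the regression weights for each empirical criterion and the multiple correlation against the holistic scores; the study reports R = .873 for its feature set.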

Conflicts of Interest

The authors declare no conflicts of interest.

Cite this paper

Schenck, A. & Daly, E. (2012). Building a Better Mousetrap: Replacing Subjective Writing Rubrics with More Empirically-Sound Alternatives for EFL Learners. Creative Education, 3, 1320-1325. doi: 10.4236/ce.2012.38193.


  

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.