Using Process Indicators to Facilitate Data-Driven Decision Making in the Era of Accountability
Kyu Tae Kim
Keimyung University, Daegu, Korea.
DOI: 10.4236/ce.2012.36102

Abstract

This paper explores which accountability indicators are likely to reveal the distinct contexts and qualitative characteristics of schools that stimulate and improve authentic pedagogy and accountability. In the era of accountability, data-driven decision making is a new research area for authentic pedagogy, pursued by monitoring student progress and improving school accountability. It has relied on input- and result-oriented indicators such as school demographics, facilities, budgets, standardized test scores, and dropout rates. Such indicators, however, are unlikely to capture the dynamically interactive, qualitative characteristics of school organizations, which operate as loosely coupled systems and are difficult to measure or assess. Process indicators therefore need to complement input and outcome data to provide a valid and vivid description, monitoring, and explanation of why and how school outcomes occur. The author concludes that data-driven decision making (DDDM) based on process indicators strengthens reflective professionalism and provides for the educational welfare of poor and left-behind students.

Share and Cite:

Kim, K. (2012). Using Process Indicators to Facilitate Data-Driven Decision Making in the Era of Accountability. Creative Education, 3, 685-691. doi: 10.4236/ce.2012.36102.

Conflicts of Interest

The author declares no conflicts of interest.

