The Program Assessment and Improvement Cycle Today: A New and Simple Taxonomy of General Types and Levels of Program Evaluation
James Carifio
University of Massachusetts Lowell
DOI: 10.4236/ce.2012.326145

Abstract

Over the last twenty years, higher education institutions have faced strong pressure from nearly every quarter to evaluate and improve their programs. This pressure is exerted by several different stakeholder groups simultaneously and reflects the growing cumulative impact of four somewhat contradictory but powerful evaluation and improvement movements, models, and advocacy groups. Consequently, the program assessment, evaluation, and improvement cycle today is far more complex than it was fifty years ago, or even two decades ago. It has become a highly diversified and confusing landscape for both practitioners and consumers of evaluative and improvement information, who must navigate seemingly different and competing advocacies, standards, foci, findings, and asserted claims. The purpose of this article, therefore, is to present and begin to elucidate a relatively simple general taxonomy that helps practitioners, consumers, and professionals make better sense of today's competing evaluation and improvement models, methodologies, and results. Such a taxonomy should improve communication and understanding and provide a broad, simple, and useful framework or schema to guide more detailed learning.

Share and Cite:

Carifio, J. (2012). The Program Assessment and Improvement Cycle Today: A New and Simple Taxonomy of General Types and Levels of Program Evaluation. Creative Education, 3, 951-958. doi: 10.4236/ce.2012.326145.

Conflicts of Interest

The author declares no conflicts of interest.


Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.