Music Assessment in Higher Education

Abstract

The purpose of this study is to determine the type and level of assessment being done at selected music departments in higher education. A twelve-item questionnaire was developed and distributed to twenty-two universities. Sixteen universities were chosen because they are peer institutions to the author’s campus. The others do not offer a music major but possess other strengths, including several ensembles, a wide selection of courses, and, in many cases, a minor in music. Cover letters and questionnaires were emailed to the Director of each Music Department. The cover letter explained the purpose of the questionnaire and asked that the director forward it to the individual in charge of assessment. Eleven universities responded. Results of this study indicate that assessment is taking place in music in higher education. Although only eleven institutions were involved in the study, every responding university indicated that it was doing some kind of assessment in music. The degree of assessment varied from campus to campus, and assessment training and support were limited. Nevertheless, eleven music departments nationwide feel the need (and responsibility) to examine what and how they are teaching and then to decide how to improve their teaching. Further, they feel that implementing the reviewed assessment techniques will improve students’ learning.


Faculty members who assess students are also part of the results. The faculty members in charge of assessment include the department or division head and the individual in charge of assessment. Music faculty involvement in the assessment process includes those who apply assessment to their teaching of students with various kinds of measurement tools. Other music faculty members can benefit from decisions made by comparing their own students’ learning to the students whose learning was measured during the assessment process.

The twelve questionnaire items, reported in Tables 1-12, were as follows:

Table 1. What is your term for the “measurable” statement of what the student has learned?
Table 2. At what level are measurable statements identified?
Table 3. What tool(s) of the following are used to report?
Table 4. Is this tool required?
Table 5. To whom do you report the data and how often?
Table 6. How is the data reviewed and used by the department?
Table 7. Who uses the data for decisions?
Table 8. How many units do you measure in a year?
Table 9. At what stages are faculty involved in the process?
Table 10. Are faculty and staff in the department provided any training/education in order to do assessment?
Table 11. Are you provided any additional support to engage in the process, such as additional staff or funding?
Table 12. Free response: This last item is voluntary and is in summary style so that you may add any significant items to your university’s assessment process. Thank you for your time.

Table 1 asks for the “measurable” statement of what the student has learned. Six assessment faculty say “outcome,” while three say “all of these.” Outcomes are very specific tasks that are measured to see whether a student has learned successfully. A typical university syllabus lists outcomes under a heading such as “By the end of the term the student will be able to…,” followed by each outcome in a numbered or bulleted list. It is therefore understandable that the majority list “outcome” as their measurable statement.

Results in Table 2 show the level at which measurable statements are identified. Six say “department,” while three say “major/minor.” Here again, these responses align with outcomes: departments are usually more specific in the expectations for their degree programs, and academic majors and minors within a department are more specific still. This facilitates outcome creation and makes the assessment process more measurable.

Reporting the process with the tools used was also straightforward. Six used a department annual planning report, and three used a general rubric that explains the “how to” along with findings. In Table 4, nine faculty stated that the tool was required. A correlation between the two questions is not known. However, the responses demonstrate the importance of assessment routines in documenting the process from year to year, which gives assessment faculty the ability to compare results and to see whether recommendations from previous years were fulfilled.

Regarding to whom the faculty member in charge of assessment reports the data (Table 5), ten faculty members responded. Five report to the Department Head, and three stated that the results were given to the individual in charge of assessment. These two were the most common answers, but two checked “other.” Results indicate that upper-level administrators are aware of assessment processes, review the methodology that faculty are using, and study their results.

Table 6, which asks how the data is reviewed and used by the department, also had ten responses. Two departments said their data is collected by the assessment reviewer, while two stated that an assessment committee collects, processes, and presents the data to the entire faculty. Six campuses selected “other,” so it is not known how they review the data. Still, these responses show that groups, or in some cases the entire faculty, were involved in assessment.

Concerning decisions made with the assessment data, the Department Head option had five responses, the Division Head in charge of assessment had two, and the faculty whose courses were used each semester had one. “Other” was selected by two. This suggests that upper-level administrators are aware of the decisions made on the basis of assessment results, with some faculty input.

Seven faculty indicated that the number of units measured per year was more than two; one marked two, and two said one. The “units” are usually outcomes but could be objectives or goals. These results show that the majority are committed to evaluating three or more department areas (e.g., majors, minors, or specific courses), determining which are satisfactory, and identifying areas that need improvement, along with recommendations.

Faculty involvement differed. Three respondents indicated that the Department Head reviews the data and then passes results back to all faculty; each of the remaining options received two responses. Only nine faculty responded, with no majority. Even so, Table 9 demonstrates that the entire faculty is involved in analyzing assessment results and considering changes to the targeted improvement areas.

Training and support given to the music departments for assessment was not as positive. One university reported that all training is available, four stated that some assessment training is available, and five indicated little, if any, training and education. Feedback about support was even less positive: two remarked that all support for assessment was available, yet eight said no support was available.

The results of this study indicate that assessment is taking place in music in higher education. Although only eleven institutions were involved in the study, every university indicated that it was doing some kind of assessment in music. The degree of assessment varies from campus to campus, and assessment training and support were limited. Nevertheless, according to the Qualtrics survey, eleven music departments nationwide feel the need (and responsibility) to examine what and how they are teaching and then to decide how to improve their teaching. Further, they feel that implementing the reviewed assessment techniques will improve students’ learning.

Topics for Further Study

1) Survey more music departments on assessment, and how they utilize assessment results.

2) Replicate this study with the universities surveyed in five years to compare updates and changes made in assessment.

3) Compare the Music Department’s assessment with that of other departments within each university (arts or non-arts) to see:

a) How much the Music Department is doing compared with other departments.

b) How each university as a whole is doing with assessment practices.

Acknowledgements

The author would like to thank the following for their assistance in preparing this study: Jason De Rousie for the initial review and his expertise in assessment, and J. Mark Scearce and Marcia Toms for subsequent reviews, questions, and comments. Special thanks go to Gary Beckman for his critical reviews and comments on the presentation and manuscript.

Conflicts of Interest

The author declares no conflicts of interest.

