A Note on the Determinants of Recent Pupil Achievement

Frederic L. Pryor

Economics Department, Swarthmore College, Swarthmore, USA.

**DOI:** 10.4236/ce.2014.514143

This analysis focuses on the average state scores in 2013 for reading and mathematics of fourth and eighth grade pupils taking the National Assessment of Educational Progress. Of the 19 possible determinants that are tested, two are most important: the average pupil/teacher ratio in the state and the share of pupils coming from low-income families who are eligible for free or low-cost lunches. Most of the other variables focusing on pupil and school characteristics or on state policies, such as expenditures on education, length of school year, or required attendance at kindergartens, did not have a statistically significant impact.

Pryor, F. (2014) A Note on the Determinants of Recent Pupil Achievement. *Creative Education*, **5**, 1265-1268. doi: 10.4236/ce.2014.514143.

1. Introduction

In 1969, the US Department of Education began to administer the National Assessment of Educational Progress (NAEP) to a nationwide sample of primary and secondary school pupils in public and private schools. The results of these tests, sometimes dubbed “The Nation’s Report Card”, are published by the National Center for Education Statistics.

The discussion below briefly describes the data set and then focuses on the possible determinants of the different average scores by state for 4^{th} and 8^{th} grade achievement in reading and mathematics in 2013. This is the most recent year for which such data were available at the time of writing.

2. The Data

NAEP tests are currently given every two years to a sample of pupils in the 4^{th} and 8^{th} grade in the 50 states and Washington DC (treated as a state in the calculations below). Such tests are also given to those in the 12^{th} grade in selected states and in various subject areas, but the analysis focuses only on the 4^{th} and 8^{th} grades for reading and mathematics, where data for all states are readily available.

The analysis below focuses on scores and explanatory variables at the state level. Although this aggregated approach dramatically reduces the size of the sample, it has several advantages. The results are not heavily influenced by extreme scores, nor are they greatly biased by variables influencing performance in previous years (the “window of time” effect). Further, the aggregation takes into account neighborhood effects that are evident when individual scores are examined. Finally, data for certain possible explanatory variables are available only at the state level. This aggregation means that particular causal variables will have a larger estimated effect than if individual pupil scores were used. All calculations are weighted by the number of students enrolled in each grade in each state, so that small states do not disproportionately influence the final results.
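The enrollment weighting described above can be sketched as follows. The state scores and enrollment figures below are illustrative placeholders, not actual NAEP data, and the helper functions `weighted_mean` and `weighted_std` are introduced here purely for exposition:

```python
# Minimal sketch of enrollment-weighted summary statistics.
# All numbers are hypothetical, not actual NAEP data.

def weighted_mean(values, weights):
    """Mean of values weighted by (e.g.) grade enrollments."""
    total_w = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total_w

def weighted_std(values, weights):
    """Standard deviation around the weighted mean."""
    m = weighted_mean(values, weights)
    total_w = sum(weights)
    var = sum(w * (v - m) ** 2 for v, w in zip(values, weights)) / total_w
    return var ** 0.5

scores = [235.0, 248.0, 241.0]       # hypothetical state average scores
enrollments = [45000, 70000, 52000]  # hypothetical grade enrollments

m = weighted_mean(scores, enrollments)
s = weighted_std(scores, enrollments)
print(round(m, 2), round(s, 2))
```

With this weighting, a populous state pulls the national average toward its own score far more than a small one, which is the point of the procedure.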

For 2013, the average state scores did not vary dramatically, as shown by the relatively small ratio of the standard deviations to the means in Table 1. As shown in Part B of the table, however, differences between minimum and maximum scores were considerable and reveal a consistent pattern: on all four tests, Washington DC scored among the lowest and Massachusetts among the highest. Such results do not tell us, however, whether these differences are due to differences in the pupils, the schools, or the educational policies pursued by the governmental authorities in each state; and so we must turn to the determinants of these raw scores.

3. Determinants of Achievement Scores

Three sets of possible explanatory variables underlying statewide test scores can be readily specified: those relating to characteristics of the students being tested, those relating to the schools they attend, and those relating to the characteristics and governmental policies of the states where the pupils live. Various hypotheses deserve exploration^{1}:

Table 1. Features of the NAEP database for public and private schools in 2013^{1}.

The data are drawn from the National Center for Education Statistics (2014). The weights are student enrollments in the respective grades.

3.1. Pupil Characteristics

According to Grissmer et al. (2000), differences in pupil characteristics, especially characteristics of their families, account for the greatest variation in test scores. A crucial factor is low family income, which should be negatively related to academic performance, especially since poverty and cultural deprivation are related. A good proxy for low family income is the pupil’s eligibility for free or low-cost school lunches. Other possible determinants include income inequality in the state, race (Black, Hispanic, and Asian), and the share of children living in single-parent families; in the statistical tests, however, none of these turned out to play a statistically significant role in influencing average performance once the school-lunch variable is taken into account. Although gender makes a difference for the scores of individual pupils, the variability of gender shares across states is too small to have an impact on performance. In sum, the only family characteristic revealing a statistically significant relationship (0.05 level) to test scores in a state is the share of children living in poverty^{2}.

3.2. School Characteristics

The most likely determinant is the pupil/teacher ratio, which should be negatively related to pupil performance^{3}. Another likely determinant is the share of students taking remedial English (a variable that includes immigrant pupils with poor English), which should be negatively related to performance, particularly in reading. Other possible determinants include average teachers’ salaries, teacher turnover (measured in terms of new teachers hired each year and average teaching experience), the share of teachers with advanced academic degrees, average educational expenditures per pupil, total hours in a school year, and the ratio of average attendance to total enrollment, but none of these turned out to have a significant impact on the final scores.

3.3. State Policies and State Characteristics

Such variables include urbanization, which should be positively related to performance on the assumption that urban schools are better equipped and staffed than rural schools, and the unemployment rate in the state, which should be negatively related to pupil performance because unemployment is associated with such social ills as family stress. Other possible state-level variables include the years of required schooling, whether kindergarten attendance is required, and whether districts must offer kindergarten to preschool children, but none of these latter variables has a significant impact on performance scores.

Given the nineteen possible causal variables of state-level test performance, the analysis uses a modified stepwise regression technique. That is, for each type of score the analysis started with the variable that yielded the highest adjusted coefficient of determination and then, in each succeeding round, added other variables to determine whether they yielded statistically significant coefficients, a process repeated until all significant coefficients were obtained. Table 2 reports the last stage, at which none of the other hypothesized explanatory variables yielded a statistically significant coefficient with the predicted sign.
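The modified stepwise procedure just described can be sketched roughly as follows. This is an illustrative forward-selection routine, not the author’s actual code: for brevity it admits a candidate variable when it improves the adjusted coefficient of determination by a fixed margin, rather than applying the coefficient significance tests the paper uses, and all data are randomly generated placeholders:

```python
# Hedged sketch of a weighted forward-selection (stepwise) regression.
# Selection here is by adjusted R^2 gain, a simplification of the
# significance-based criterion described in the text.
import numpy as np

def adjusted_r2(y, X, w):
    """Fit weighted OLS and return the adjusted coefficient of determination."""
    sw = np.sqrt(w)
    Xw = X * sw[:, None]          # rescale rows so OLS on (Xw, yw) is WLS
    yw = y * sw
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    resid = yw - Xw @ beta
    ybar = np.average(y, weights=w)
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum(w * (y - ybar) ** 2)
    n, k = X.shape
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k)

def forward_select(y, predictors, w, min_gain=0.01):
    """Greedily add the predictor that most improves adjusted R^2."""
    n = len(y)
    chosen = []                   # indices of selected predictors
    X = np.ones((n, 1))           # start from the intercept-only model
    best = adjusted_r2(y, X, w)
    improved = True
    while improved:
        improved = False
        for j in range(predictors.shape[1]):
            if j in chosen:
                continue
            Xj = np.column_stack([X, predictors[:, j]])
            r2 = adjusted_r2(y, Xj, w)
            if r2 > best + min_gain:
                best, best_j, improved = r2, j, True
        if improved:
            chosen.append(best_j)
            X = np.column_stack([X, predictors[:, best_j]])
    return chosen, best

# Synthetic demonstration: 51 "states", 5 candidate variables, where only
# the first variable actually drives the score.
rng = np.random.default_rng(0)
states = 51
X = rng.normal(size=(states, 5))
y = 3.0 * X[:, 0] + 0.2 * rng.normal(size=states)
w = rng.uniform(1.0, 2.0, size=states)   # stand-in enrollment weights
chosen, r2 = forward_select(y, X, w)
print(chosen, round(r2, 3))
```

The routine stops, as the paper’s procedure does, once no remaining candidate improves the fit, so only the genuinely informative variable survives in this toy example.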

Table 2 presents the final OLS regression results. An important conclusion is that relatively few (two to four) hypothesized causal variables are associated with over half to almost three quarters of the variation in average academic performance scores among the US states. Three of the four scores show the predicted relation between test scores and both the share of students eligible for free school lunches and the average pupil/teacher ratio. The only other variables that have a statistically significant effect on the test scores are unemployment (negative, only for 4^{th} grade mathematics), the share of students taking remedial English (negative, only for 8^{th} grade reading), and urbanization (positive, only for 8^{th} grade reading).

The difference in performance scores between Washington DC and Massachusetts can now be examined more closely. Looking just at the share of pupils eligible for free or reduced-price lunch, the gap between achievement scores of these two geographical units narrows by 40 to 50 percent. However, since Washington

Table 2. Weighted regression results for achievement scores^{1}.

^{1}These are OLS regression results. The data come from the National Center for Education Statistics (2014). An asterisk designates statistical significance at the 0.05 level. In none of the four sets of regressions did adding other possible independent variables yield statistically significant results with the predicted sign. The regressions are weighted by student enrollments in the respective grade.

DC also has a lower average pupil/teacher ratio than Massachusetts, this roughly counterbalances the effect of the share of pupils eligible for free or reduced-price lunch, so that the large differences in average scores between these two governmental units are still left unexplained^{4}. Most likely, the differences in average scores must be attributed to differences in the educational policies and procedures followed in each governmental unit, for which comparable data are not available.

4. Brief Conclusion

The regression results move us closer to an understanding of some underlying influences on pupil performance in the NAEP achievement tests. Of the 19 possible influences on the state averages of these scores, only two are shown to have a consistent influence on the results, namely the poverty of the pupil’s family, as proxied by the pupil’s eligibility for free or low-cost lunch, and the pupil/teacher ratio. The comparisons between high- and low-scoring states suggest that academic performance is also influenced by factors more subtle than the available data can capture, so that educational policy must remain a matter of debate.

NOTES

^{1}A subtle problem of bias in the statistical calculations arises because no single student receives the entire test, but the plausible scores for each student are calculated using a set of conditioning variables consisting mostly of student and teacher characteristics (Schofield et al., 2014). Such bias can be offset by employing explanatory variables for the regressions drawn from the same set of conditioning variables (National Center for Education Statistics, 2014), a practice followed in most cases.

^{2}A variable with a possible influence on average state scores is the percentage of children in a given school who did not take the NAEP tests, e.g., those with particular handicaps. Since the schools are not judged on their average test scores, school administrators have no incentive to prevent students whose performance is likely to be poor from taking these tests and lowering the school’s average. Unfortunately, data on the share of students who did not take the NAEP tests in 2013 did not seem to be available, but data from earlier years show that the percentage is quite low, so that their overall effect on average scores should be limited.

^{3}A variable related to the ratio of pupils to teachers is average classroom size, but the latter variable performed more poorly in the regressions explaining the scores of all four tests. This suggests that teachers supplementing those standing in front of a class all day have an important impact on achievement scores of pupils.

^{4}The slightly higher percentage of 8^{th} graders taking remedial English has a negligible impact on the comparisons between Washington DC and Massachusetts.

Conflicts of Interest

The author declares no conflicts of interest.

[1] National Center for Education Statistics (2014). Digest of Education Statistics. http://nces.ed.gov/programs/digest

[2] Grissmer, D. et al. (2000). Improving Student Achievement. Santa Monica, CA: RAND.

[3] National Center for Education Statistics (2014). Conditioning Variables and Contrast Coding. http://nces.ed.gov/nationsreportcard/pdf/main1998/2001509f.pdf

[4] Schofield, L. S., Junker, B. J., Taylor, L., & Black, D. (2014). Predictive Inference Using Latent Variables with Covariates. Psychometrika (forthcoming).

