Early research finds no consistent relationship between spending and student outcomes. Many of these studies were conducted when education data were more limited and methods for separating correlation from causation were less advanced, so their results are likely contaminated by bias. Most were “observational,” relying on relatively simple comparisons across states/districts and over time; estimates derived from such comparisons are likely influenced by changes in other factors that are correlated with education spending, such as demographics, socioeconomic conditions, and other education policies. For example, if additional education funding is provided to groups that tend to have worse education outcomes, then observational methods might wrongly indicate that higher funding harms outcomes. Nevertheless, the early research reflects the complexity of the relationship between spending and outcomes: more funding alone does not guarantee meaningfully better performance.
Newer research uses higher-quality data and methods, typically quasi-experimental designs or natural experiments that move beyond observational analyses and provide cleaner estimates of the causal effect of a change in school spending. Nearly all recent studies find positive and statistically significant effects, and average effects across studies are likewise positive and significant. However, many individual effects are small by conventional standards. Based on guidelines developed from past research, these average effects range from small to medium for spending increases comparable to other moderate- to high-cost interventions.
The recent school funding literature analyzes spending changes from the 1960s through the 2010s using local, state, and national comparisons. Studies reflect impacts across time, contexts, demographics, and policy mechanisms. Nearly all recent studies investigate differences across multiple dimensions, finding varying impacts and emphasizing aspects of the context and/or policies that may be important to the results. For example, because many low-income students do not attend low-income schools and/or districts, policies that target district and/or school conditions have limited ability to affect student-level socioeconomic gaps. Other studies have highlighted the impact of union strength and accountability measures. Effects also vary across studies due to factors including differences in context, intervention, and data and methods. While researchers acknowledge effect variation, there is no consensus on its practical and/or policy implications. More research is needed to better understand how and why effects differ and what can be done to maximize the efficiency of future spending.
Much of the school spending activity over the past 50 years has targeted economically disadvantaged students and/or districts, and recent research finds that positive results seem most concentrated among these groups. For example, a recent meta-analysis estimated that test score effects are about twice as large for low- vs. non-low-income students and attainment effects are three times as large. Another recent meta-analysis estimates larger but insignificant test score effects and attainment effects that are three times as large for low-income students. These results suggest that targeting additional spending toward these groups could yield larger impacts.
Generally, studies of the relationship between education spending and student outcomes consider relatively few outcomes, often elementary and middle school test scores and measures of educational attainment. However, school funding may affect a wider set of outcomes, all of which may be important but only some of which research is able to measure. A few studies provide evidence of positive impacts on longer-run outcomes, such as criminality, civic participation, adult earnings, and intergenerational mobility. Nevertheless, we still have a limited understanding of the full scope of the impacts of school funding and how they might vary across funding policies.
Most of the recent literature investigates high-level changes and offers less direct evidence on the spending decisions and structures that may best improve outcomes. Additionally, most evaluations of educational interventions do not include cost information. These two factors make it challenging to provide practical guidance regarding how to effectively allocate additional resources to meaningfully affect student outcomes. Additional research is needed to connect impacts to how money is spent, on whom, and in what circumstances.
It seems obvious that “money matters” in education. Schools need buildings in which to educate students; educators are salaried employees with benefits; and schools use instructional materials to deliver content. All require money, and thus different school funding levels can enable or constrain schools’ ability to provide opportunities for students to learn and grow. However, researchers, policymakers, and practitioners have long debated the productivity of additional school spending, with historically little consensus on whether and under what circumstances increased expenditures improve student outcomes. Although none deny that schools require some baseline of funding to educate students, debates have centered on whether additional funding would lead to meaningful improvements in student outcomes. In general, researchers, policymakers, and the courts continue to grapple with (1) whether additional unrestricted spending increases are likely to meaningfully impact student outcomes and (2) how political factors, funding policy design, resource targeting, and spending composition influence the efficacy and efficiency of spending policy.
The debate over the impact of additional education expenditures has been informed by both simple analyses of the time trends of expenditures and outcomes as well as more careful research. After adjusting for inflation, education spending on year-to-year operations (i.e., excluding capital spending) per student roughly doubled between 1970 and 2000. Since then, spending per student has continued to grow but at a slower pace, increasing by more than 30% over the subsequent two decades.1
However, average student test scores in mathematics and reading have not experienced similar growth. Scores on the National Assessment of Educational Progress (NAEP) increased for some grade levels and subjects through the 1990s and early 2000s, but these increases are inconsistent and generally do not match the magnitude of the spending increases over this period. This has led many to question the impact of additional spending on average outcomes.
More recently, NAEP scores were flat across grades and subjects in the 2010s and have declined substantially post-pandemic. Mathematics scores have maintained some of their earlier growth but are now at similar levels to scores around the turn of the millennium. Reading scores, on the other hand, are at the same level as or below average student performance in the early 1990s.
Consistent with this simple trend analysis, the early research, which was largely correlational, did not provide consistent evidence of a meaningful relationship between additional spending and student outcomes.
However, more recent quasi-experimental studies using better methods and data contrast with this early research and add new evidence to the debate. In general, this research finds positive impacts of school spending on student outcomes—although the interpretation, magnitude, and key mediating factors are nuanced and subject to ongoing research and debate. We summarize some of the key takeaways from the existing and emerging scholarship below.
Policy debates over school finance are often contextualized by broad spending and outcome trends. There can also be a disconnect between research and policy questions. Research tends to focus on what happened because of past policy changes, while policymakers are interested in forecasting the impact of future changes given the current spending levels/pattern and educational context.
While rigorous retrospective studies can provide compelling effect estimates, external validity remains a challenge when projecting future policy impacts. For this reason, understanding and properly contextualizing the trends in spending and outcomes is fundamental to improving the link between research and policy. In addition to the key findings from the literature, there are several influential policy themes related to these issues.
Long-term spending and outcome trends
As mentioned above, current education spending (i.e., excluding capital spending) per student roughly doubled between 1970 and 2000 and has continued to grow at a slower pace in the past two decades.14 The fast-paced spending growth in the latter portion of the 20th century was encouraged by state court rulings related to equity in and adequacy of school funding.15 More recently, state courts have been relatively less likely to find in favor of plaintiffs or call for funding increases, coinciding with slower per student spending growth.16
Additionally, test scores on the National Assessment of Educational Progress (NAEP) have shown inconsistent growth across time periods, ages, and subjects. Table 1 below shows the standardized changes in various subjects and grades.17 In general, math has shown more consistent growth than reading, and end-of-high-school scores (age 17) have shown very little growth in recent decades. When scaled in dollar terms to be more comparable to the meta-analytic averages from recent studies,18 per dollar impacts on math (state NAEP, not the long-term trend exam) are similar to those averages of 0.06 standard deviations (s.d.) per 10% spending increase and 0.03 s.d. per $1,000 over four years, respectively. However, for reading and the NAEP long-term trend exams, the per dollar estimates from the literature do not map to the overall trends.
Table 1. Changes in NAEP scores, overall and per 10% increase in per-pupil spending

| Exam | Start year | End year | Δ score (SDs) | Δ score (SDs) per 10% spend inc. |
| --- | --- | --- | --- | --- |
| Long term reading | | | | |
| Age 9 | 1971 | 2012 | 0.3134 | 0.0266 |
| Age 13 | 1971 | 2012 | 0.2135 | 0.0181 |
| Age 17 | 1971 | 2012 | 0.0373 | 0.0032 |
| Long term math | | | | |
| Age 9 | | 2012 | 0.7049 | 0.0985 |
| Age 13 | | 2012 | 0.5354 | 0.0748 |
| Age 17 | 1978 | 2012 | 0.1705 | 0.0238 |
| Reading | | | | |
| Grade 4 | 1992 | 2019 | 0.1050 | 0.0247 |
| Grade 8 | 1992 | 2019 | 0.0867 | 0.0204 |
| Math | | | | |
| Grade 4 | 1990 | 2019 | 0.8639 | 0.2028 |
| Grade 8 | 1990 | 2019 | 0.5399 | 0.1268 |
Footnotes
Sources: Nation's Report Card (https://www.nationsreportcard.gov/) for main NAEP data and Long Term Trend NAEP data; NCES 2021 digest (https://nces.ed.gov/programs/digest/2021menu_tables.asp), table 236.55 for expenditure data
Notes: Δ score (SDs) reports the change in test scores on each respective exam over the period from Start year to End year, in terms of the individual standard deviation of the exam in Start year. The final column reports this value for each 10% increase in national per-pupil expenditure (from the base level in Start year).
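One way to read the table’s scaling is sketched below. This is an assumption about how the per-10% column might be constructed (counting compounding 10% increments of real per-pupil spending); the function name and the input figures are hypothetical, not taken from the table’s underlying data.

```python
import math

def score_change_per_10pct(delta_score_sd, spend_start, spend_end):
    """Scale a total score change (in start-year SDs) by the number of
    compounding 10% real spending increases over the same period.
    One plausible reading of the table's scaling, not the exact formula."""
    n_increments = math.log(spend_end / spend_start) / math.log(1.10)
    return delta_score_sd / n_increments

# Illustrative figures only: real per-pupil spending triples
# while scores rise 0.31 SDs over the period.
effect = score_change_per_10pct(0.31, 5000, 15000)
```

Under these made-up inputs, a tripling of spending corresponds to roughly 11.5 compounding 10% increments, so the total score change is spread across that many increments.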
Of course, as we detail above, the causal effect of school spending need not be equivalent to the average correlation between spending and outcomes at the national level. There are several reasons that this might be the case, including: complementary policy developments (e.g., accountability); upward wage pressure from productivity increases in other domains (i.e., “cost disease”); and countervailing economic forces (e.g., demographic changes, inequality, poverty). Moreover, we might not expect all spending increases to produce equal academic impacts. For example, teacher salary schedules might not be explicitly tied to performance; funding could be poorly targeted to populations with lower marginal returns; there might be diminishing returns at higher spending levels; competing cost pressures (e.g., pensions and special education) could be at work; and the composition of spending might have been shifted toward non-classroom expenditures, such as support staff and pensions.
Targeting, transparency, and accountability
Research on school spending generally considers spending increases in dollar terms, with little differentiation being made between spending mechanisms, incentives, and restrictions. However, policymakers seeking to improve school finance policy have many practical questions that research could answer. Are there approaches to school funding policy that are likely to increase return on investment? How do states promote transparency and accountability around spending decisions? What supporting policies need to be in place to better ensure or enhance the impact of school spending (e.g., test-based accountability)?
Targeting: A key consideration in school finance policy is how to allocate dollars such that they target the districts and students who would benefit most from additional investment. Most school finance reforms and state funding formulas seek to direct more state funding to less-affluent communities and students, often attempting to offset inequalities in local property tax bases. Research suggests that this sort of targeted spending offers a higher return and that school finance reforms have made meaningful progress closing funding gaps.19
Within-district allocations also matter for efficacy and equity. Insofar as districts have schools of varying racial and socioeconomic composition—which is the case for most of a state’s medium and large districts—within-district targeting can act as either a complement or an impediment to the state’s intended spending distribution.
Transparency: Despite robust spending data, we have a limited sense of how districts spend and on whom, especially for non-educator expenditures. This limits our ability to research the impact of specific spending decisions. This is a particular challenge for within-district spending allocations; although some data detail spending levels across schools in a given district,20 these data are often limited in scope, only cover recent years, and/or may reflect different accounting standards across districts and states.
Accountability: Research has considered two forms of accountability: accountability for student outcomes and accountability for spending decisions. The former has long been an area of policy and research action, although student outcome-based accountability has experienced waning support in recent years.21 Nevertheless, prior evidence indicated that it improved outcomes,22 and recent evidence suggests that it may also interact with spending changes in state-level reforms.23
The latter varies by type of spending and revenue source, and there is often little accountability on how spending occurs within districts. Nationally, school-level spending data suggest that districts do not spend regressively on average.24 However, data from specific state funding formulas also suggest that marginal dollars are allocated across schools in ways that somewhat undo the progressivity inherent in many state funding formulas.25
Cost pressures and the “real” increase in school resources
In recent decades, schools have faced a variety of cost pressures that have reduced their purchasing power, meaning that each dollar of funding buys fewer current educational resources. Nationally, health care costs have risen faster than inflation in most years, putting pressure on school budgets even with no changes in staffing levels or salaries.
Pension costs have also risen precipitously—more than tripling since 2004 on a per student basis.26 Pension contributions now represent more than 12% of education expenditures, which is up from less than 5% in 2004. Pension costs have risen because of underfunded benefit promises made to past teachers, and these rising costs are now crowding out spending on current students and educators.27
In addition to growing benefit costs, other cost pressures include increased construction and maintenance costs and upward wage pressure due to cost-of-living demands and increased wage competition from other sectors.28
These pressures can increase costs without increasing the resources available to educate students and, as a result, can lead to increased spending that may not have much or any impact on student outcomes. In other words, when policymakers contextualize spending increases from recent years, they should consider the higher costs that schools face to provide the same amount and quality of educational resources, which ultimately influences schools’ ability to improve student outcomes.
The academic debate on school spending impacts predates recent trends in spending and the availability of standardized test score data. The seminal 1966 report on the “Equality of Educational Opportunity”—often dubbed the “Coleman report”—collected data on roughly 600,000 students and 3,000 schools and concluded that, “… one implication stands above all: That schools bring little influence to bear on a child’s achievement that is independent of his background and general social context”.29 In the decades that followed, researchers continued to study the relationship between school funding, resources, and student performance.
Still, the consensus was largely similar three decades later: “Simply providing more funding or a different distribution of funding is unlikely to improve student achievement (even though it may affect the tax burdens of school financing across the citizens of a state)”.30 A series of influential literature reviews31 summarized research on the level of school inputs and student performance, finding no systematic relationship. Other literature reviews contested these conclusions32 using different meta-analytic techniques, including different weighting of individual results vs. individual papers. Nevertheless, there was little consensus, and research findings on school spending impacts varied widely.
Importantly, this early wave of research was limited in both approach and data. While some studies were experimental33 or “quasi-experimental”,34 most relied on observational data and empirical methods that are no longer considered convincing ways to obtain policy-relevant estimates of causal effects.35
The key challenge in any empirical assessment of the relationship between school spending and student outcomes is that school spending is not random and depends on a multitude of factors that codetermine student success. For example, spending may increase with economic improvements in an area, but this also affects the resources that households are able to provide for their children, potentially leading a researcher to find effects that are biased upwards if they fail to account for changes occurring outside of school.
On the other hand, school funding is often compensatory, targeting student socio-economic disadvantage or poor academic performance, which could lead a researcher to estimate effects that are biased downwards, perhaps even negative. Failure to account for such confounds in observational designs—which can vary across the specific context of any study—means that estimates of the relationship between spending and outcomes may not reflect the true causal relationship between the two.
Finally, the data available to earlier studies limited researchers’ ability to measure meaningful outcomes and to employ stronger research designs. Comprehensive, standardized testing data were uncommon before many states—and eventually the federal government—enacted test-based accountability in the 1990s and early 2000s.36 District finance data have also improved and were not consistently available at an annual level prior to the 1990s. This lack of comprehensive public data on spending and outcomes meant that many studies examining the impacts of school resources were more limited in the outcomes they could measure and/or examined specific changes in inputs (e.g., changes in class size) rather than more general spending increases.
The more recent literature studying school spending and student outcomes relies on better data and stronger research designs. Thus, recent research has been able to more convincingly account for potential confounds that may obscure any true relationship between the spending and outcomes. Unlike prior research using observational approaches, these studies have used “quasi-experimental” designs, which attempt to isolate causal effects by relying on comparison groups similar to what would occur under an experiment with random assignment. Many examine state-level school finance reform efforts, often mandated by court order. Others rely on quirks in funding formulas, close elections for higher local funding, or trends in funding that researchers have tried to isolate from other economic forces, such as housing markets and recessions.
Most studies have found a positive and significant relationship between spending and outcomes.37 However, most effects are less than 0.05 s.d. per $1,000 in per pupil spending for four years or are not significant. By conventional standards in the education literature, effects below 0.05 s.d. are often considered small.38 Based on this framework, cross-study average effects estimated using meta-analytic techniques range from small to the lower end of medium for spending increases that are comparable to interventions of moderate to high cost.39 However, there is disagreement about whether these meta-analytic estimates represent the impact of generalized spending increases.40
School finance reforms are arguably the most prominent source of variation in these studies. Most school finance reform studies use statewide and/or national data to examine educational trajectories and later outcomes of students who were exposed to different levels of school spending due to the exact timing of reforms to a state’s school finance system. The intuition of their approach is as follows: court-ordered and/or legislative reforms often impose discrete and significant changes in a state’s school finance system that differentially affect students in different districts within a state and/or who were born in different years. In effect, this method relies on these reforms to provide variation in spending that is less likely to be biased by confounding factors or trends. Other research has questioned the exogeneity of reform timing and whether post-reform outcome effects can be interpreted as isolating spending impacts specifically—not a broader package of financial, accountability, and curricular changes that are often contemporaneous.41
Research on school finance reforms using national samples has found evidence of positive effects on test scores,42 graduation rates,43 adult earnings,44 intergenerational economic mobility,45 and crime.46 For example, one influential study examined reforms that occurred from 1972 to 2010 and found that a 10% increase in school spending for 12 years led to 7.7% higher wages and a 9.8% increase in family income in adulthood.47 Gains were concentrated among students in low-income households. The authors estimated that these effects imply an internal rate of return of roughly 10%, which is similar to studies of preschool programs48 but somewhat smaller than estimates of class size reductions.49
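The internal-rate-of-return framing can be made concrete with a short sketch. The cash-flow figures below are purely illustrative (they are not the study’s data), and `irr` is a generic bisection solver, not the authors’ method.

```python
def npv(rate, cashflows):
    """Net present value of a cash-flow stream indexed from year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-9):
    """Find the discount rate where NPV crosses zero by bisection.
    Assumes NPV is positive at `lo` and negative at `hi`."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical stream: 12 years of an extra $1,000 per pupil in school,
# a 7-year gap, then 40 years of modestly higher adult earnings.
flows = [-1000] * 12 + [0] * 7 + [900] * 40
rate = irr(flows)
```

Because the costs come first and the earnings gains arrive decades later, even lifetime benefits that far exceed total spending can translate into moderate annual rates of return.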
Other studies have examined school finance reforms in specific states, relying on differences within the state in spending trends due to the reform. Studies of Michigan’s 1994 reform provided evidence that spending improved student academic performance, graduation, and college attendance.50 California’s recent “Local Control Funding Formula” reform created a discontinuous “kink” in the funding allocated to high-need districts; studies have relied on this quirk to estimate positive effects of higher spending targeted to the state’s neediest districts.51
School finance reforms are not the only source of variation in recent studies. For example, research has used discontinuous quirks in existing funding formulas;52 the passage or failure of school funding referenda in close elections;53 and the interaction between housing market dynamics and school funding formulas.54
On the other hand, studies specifically examining capital spending have been more mixed, with some finding positive effects55 and others finding no or imprecise effects.56 Pooling estimates across studies in a meta-analysis57 yields significant positive average effects, although effect sizes are smaller per dollar than for operational spending. A recent study investigating capital spending effects across 29 states found small average effects of capital spending but notable heterogeneity across district demographics, spending type, and prior capital spending levels.58 Taken together, these varied findings in the literature on capital spending provide evidence that impacts depend on the context of spending and on what and who spending goes toward.
Recent school spending studies examine various spending changes across many jurisdictions and time periods. K–12 schooling and the context in which schools operate vary significantly across and within states and have evolved considerably over the period covered by the recent literature (i.e., the 1960s to the early 2010s). It is, therefore, unsurprising that estimated effects are heterogeneous both within and across recent studies, encompassing both sampling and true effect variation.
Most individual studies engage with and investigate the implications of various forms of heterogeneity, often finding important differences across multiple dimensions and emphasizing aspects of the context that may be important to the results.
Heterogeneous effects have been found across student demographics. For example, a study of spending changes associated with Title I implementation in southern states during the 1960s found they reduced dropout rates for White students but not for Black students.59
Other studies have found important differences associated with the policy and political context in which schools operate. For example, one study found that union strength was associated with higher spending effects, likely in part because of reduced crowding-out of local tax spending in response to higher state aid in these contexts,60 emphasizing how context can influence post-reform dynamics in ways that meaningfully affect results. Other studies have highlighted the impacts of accountability measures,61 which often accompany school finance reforms and may interact with spending increases and lead to differential student outcome effects.62
An important but less discussed consideration in the school spending literature is that effects often vary across grades, subjects, and years within studies, for reasons that are not fully understood and often cannot be directly examined. These differences may reflect both sampling variation and real underlying heterogeneity. For example, a study of capital spending found small, positive, but insignificant impacts on 3rd through 8th grade test scores in the sixth year, while point estimates are negative in years 4 and 5 and for math high-school exit exam scores.63 Another study found a positive and significant impact on 4th grade test scores but a negative, insignificant impact on 7th grade scores.64 In contrast with other findings, one study found positive attainment results that were concentrated in districts that were lower poverty and higher achieving at baseline.65
While there is substantial effect heterogeneity within these studies (e.g., for different student groups or years), most of the variation is between studies, representing different contexts and spending interventions. There is more than a one s.d. difference between the largest and smallest individual study estimates included in one recent meta-analysis of school spending,66 and a difference of 90% of an s.d. between the largest and smallest estimates included in another.67 The former estimates that 51% of the effect variation is between studies,68 and the latter finds that the true effect s.d. is two-thirds as large as the average effect.69
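Shares of between-study variation like those cited here are typically estimated with random-effects meta-analysis. The sketch below implements the standard DerSimonian–Laird heterogeneity estimator on made-up effect sizes and sampling variances (illustrative inputs, not the meta-analyses’ data).

```python
def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects heterogeneity estimates.
    Returns (tau2, i2): the between-study variance of true effects and
    the share of total variation attributable to true effect differences."""
    k = len(effects)
    w = [1 / v for v in variances]          # inverse-variance weights
    sw = sum(w)
    mean = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - mean) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    return tau2, i2

# Illustrative per-$1,000 effect sizes (in SDs) with equal sampling variances.
tau2, i2 = dersimonian_laird([0.01, 0.03, 0.08, 0.12], [0.0001] * 4)
```

With these invented inputs, nearly all of the observed spread is attributed to true effect differences rather than sampling noise, mirroring the kind of decomposition the cited meta-analyses report.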
Past school spending changes have occurred across states, varying district demographics, prior spending levels, and spending types and have incorporated varying levels of restriction and/or accountability. Estimates of the overall average impact are useful to characterize the magnitude and range of impacts one may expect based on experience across policies and context, but that does not necessarily make them representative of potential future impacts. If the effect of school spending fundamentally varies along many dimensions, understanding that variability is key to using funds to improve outcomes. Abstracting too far from the context and details can be limiting and at worst counterproductive, leaving policymakers and education leaders with incomplete or inaccurate guidance regarding how to effectively use additional resources.
Many of the papers in the recent school funding literature investigate school funding reforms that were legislatively and/or court ordered. In general, these reforms tended to direct additional state funding toward low-income students and low-wealth districts. Over time, these policy changes have lessened and even reversed differences in funding levels by income, wealth, and race, both overall and across many states.70
Because school finance reforms tend to allocate more money to low-income students and/or low-wealth districts, several studies have used this variation to estimate differential impacts across measures of income.71 Nine of the 12 study–outcome combinations from a recent meta-analysis that allow a direct comparison yield larger effects for low-income students than for non-low-income students.72 Overall, test score effects are about twice as large for low-income vs. non-low-income students, attainment effects are three times as large, and both differences are significant. Similarly, the other recent meta-analysis includes 11 studies that report both effects; it reports larger effects for low-income populations that are comparable in magnitude to those from the above-mentioned meta-analysis, although the test score effect difference is not significant.73
Generally, many of the reforms studied in the literature were directed toward low-income, low-wealth, and/or low-spending populations.74 Although there is incomplete evidence to make strong prescriptions based on within-study differences in targeted populations, the fact that most spending policy reforms and changes have directed marginal dollars toward less-advantaged populations suggests that the evidence may be more applicable to these settings. Conversely, existing research may be less applicable to spending increases in high-income and/or high-spending districts, where concerns over diminishing returns may be greater.
Most recent school funding research examines test scores in elementary and middle school grades, often only in math and reading/English Language Arts. This is mainly due to data limitations: there are only so many quantifiable outcomes available in most datasets that researchers have access to. Additionally, the long lag required to tie school spending in childhood to later-life outcomes means that many recent policies cannot yet be studied for their impact on long-run outcomes. Conversely, our evidence on long-run outcomes comes almost entirely from spending changes in the 1990s and earlier. However, this limitation may be diminishing: the rise of statewide longitudinal databases that connect educational and adult outcomes and the aging of cohorts affected by recent school finance policy changes may broaden scope for future research on long-term outcomes.
Some studies consider impacts on educational attainment, including high school completion, college attendance, and postsecondary degree completion.75 When scaled in s.d. terms, a recent meta-analysis finds that attainment outcome effects are larger than test score effects,76 although the other meta-analysis finds the opposite.77
A smaller number of studies examine effects on longer-run non-educational outcomes, such as earnings,78 intergenerational mobility,79 and adult crime.80 Effects are positive across these studies and imply large benefits relative to costs when the spending is compared to its eventual effects on earnings (or adult crime).
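The cost–benefit logic behind such comparisons can be made concrete with a back-of-the-envelope present-value calculation. All numbers below are illustrative assumptions, not estimates from any study:

```python
def present_value(annual_amount, years, rate, start_year=0):
    """Discounted sum of a constant annual flow beginning at start_year."""
    return sum(annual_amount / (1 + rate) ** (start_year + t) for t in range(years))

rate = 0.03  # real discount rate (assumption)

# Cost: an extra $1,000 per pupil per year over 12 years of schooling
cost = present_value(1_000, years=12, rate=rate)

# Benefit: a hypothetical 5% earnings gain on a $50,000/yr salary over a
# 40-year career that begins 13 years after the spending starts
benefit = present_value(0.05 * 50_000, years=40, rate=rate, start_year=13)

print(f"PV cost = ${cost:,.0f}, PV benefit = ${benefit:,.0f}, "
      f"benefit/cost = {benefit / cost:.1f}")
```

Because the earnings gain accrues over a full career while the cost is incurred only during the school years, even modest earnings effects can yield benefit-to-cost ratios well above one, despite discounting the far-off benefits.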
Because student success is multifaceted, a narrow focus on any single outcome or set of outcomes risks missing some component of the true “effect”. Depending on how it is used, increased school funding may have different impacts on different outcomes, all of which one may care about but only some of which research is able to measure. Hiring a new science teacher might improve learning in science but have less impact on reading or math skills. Alternatively, hiring an additional counselor may have no impact on test scores but could improve graduation and college attendance rates.
Researchers have attempted to overcome this by examining how school improvements and increases in school spending affect local house prices, which can act as a sort of “index” of how much parents and the local community value specific changes. In standard economic models, this can be interpreted as the capitalization of local demand for school quality, inclusive of changes broader than what test score and/or attainment impacts alone can measure. Of course, this relies on the assumption that the true value of any improvements in children’s outcomes is “correctly” valued by residents and the real estate market, which may be unrealistic in most settings.
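In its simplest form, capitalization is measured with a hedonic regression of log house prices on log school spending, so the coefficient is the price elasticity with respect to spending. The simulation below (all numbers hypothetical) only illustrates that interpretation; actual studies rely on quasi-experimental variation such as district boundaries or close elections rather than plain OLS:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Simulated district-level data (purely illustrative)
log_spend = rng.normal(9.2, 0.2, n)   # log per-pupil spending
amenity = rng.normal(0, 1, n)          # an observed control (e.g., local income)
true_elasticity = 0.5
log_price = 7.0 + true_elasticity * log_spend + 0.3 * amenity \
            + rng.normal(0, 0.1, n)

# OLS via least squares: log price on log spending and the control
X = np.column_stack([np.ones(n), log_spend, amenity])
coef, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print(f"estimated price elasticity w.r.t. spending: {coef[1]:.2f}")
```

In real data the spending coefficient would be biased if unobserved amenities correlate with both spending and prices, which is exactly why the literature leans on quasi-experimental designs.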
Some studies have also found a positive relationship between changes in school spending and local house prices.81 Recent evidence suggests there may be a mismatch between the types of spending that improve test scores and the types of spending that are valued in the housing market. For example, Biasi et al. (2024) find that spending on HVAC has large impacts on test scores but no impact on house prices, while the opposite is true for spending on athletic facilities.
These limitations diminish the ability of research to inform policy. For example, based on current evidence, spending increases that target unmeasured outcomes may appear ineffective and/or inefficient if we consider only test scores. This may be especially true at the secondary school level, where coursework interventions are more varied and testing is less consistent and less aligned with the actual curriculum. Policymakers looking to learn from current research—and researchers looking to advance the literature to better inform policy—need to carefully consider the context of the spending in order to generalize impacts beyond test scores: who is affected by the spending, what it is being spent on, and which outcomes are measured and unmeasured. These contexts vary considerably across studies, and the data available in any single study are often limited in their ability to address them broadly.
Most recent school funding literature investigates high-level changes and is limited in its ability to connect specific policies or spending decisions to outcomes. Furthermore, the delay between policymaking and rigorous evidence is often long enough to reduce the relevance of the results. In part, these challenges reflect a tradeoff between the strength of the evidence and contemporary policy relevance. Studies that prioritize causal identification often require a narrower focus on settings and samples where quasi-experimental variation is available. Such studies may not examine the most promising interventions, cover representative contexts, or evaluate optimal spending decisions. In other words, while the recent focus on quasi-experimental research designs has improved the quality of the evidence (i.e., internal validity), whether this evidence is generally applicable across settings or interventions (i.e., external validity) remains an open policy and research question.
The representativeness of the sample, context, and interventions plays a critical role in determining whether research findings can be applied to new settings. However, a few states’ reforms are overrepresented in the recent literature, in part because these states have high-quality, accessible data and clear opportunities for causal identification. For example, several studies focus on Michigan, Ohio, and Wisconsin, which together account for only 8% of U.S. public school enrollment. Another limitation is that many recent studies focus on test scores, which typically cover only certain grade levels. As a result, there is less research on how spending affects early-grade and high-school students’ outcomes, leaving gaps in our understanding of effects across the full grade range.
Similarly, examining close bond elections has been a common empirical strategy for estimating causal effects of capital spending, but there are limitations to the external validity of these results. Estimated effects are specific to districts right at the margin of passage or failure, and we know less from these studies about districts where educational spending proposals generate substantial support or opposition.
Additionally, most evaluations of educational interventions do not include cost information.82 For example, schools spend most of their money on people (approximately 80%), yet because evaluations of personnel policies often omit cost information, most quasi-experimental and experimental evaluations of interventions involving additional spending on personnel are not incorporated into the school funding literature. The high-quality evidence from such evaluations, which often involve spending increases, underscores how differences in policy design and context can lead to divergent results.83
Challenges with external validity, limited data and evidence on specific spending categories, and the scarcity of cost data in other evaluations all limit the practical guidance research can directly offer policymakers. Ultimately, policymakers seek strong and relevant evidence on how to efficiently allocate and utilize additional resources to improve student outcomes. As school finance research evolves and progresses beyond estimating broad average impacts, more research is needed to better connect spending impacts with how the money is used, who benefits from it, and under what circumstances it proves most effective.
U.S. Department of Education, National Center for Education Statistics. 2024. Digest of Education Statistics: 2022. ↩︎
See, e.g., Hanushek, Eric A. 2003. The failure of input-based schooling policies. The Economic Journal, 113(485), F64–F98.↩︎
Jackson, C. Kirabo. 2020. Does school spending matter? The new literature on an old question. American Psychological Association.↩︎
See Jackson, C. Kirabo, and Claire L. Mackevicius. 2024. What Impacts Can We Expect from School Spending Policy? Evidence from Evaluations in the United States. American Economic Journal: Applied Economics, 16 (1): 412–46.; Handel, D.V., and Hanushek, E.A. 2024. Contexts of convenience: Generalizing from published evaluations of school finance policies. Evaluation Review, 48(3), 461-494.; Handel, D.V., and E.A. Hanushek. 2023. US school finance: Resources and outcomes. In Handbook of the Economics of Education (Vol. 7, pp. 143–226). Elsevier.↩︎
McGee, J.B. 2023a. Yes, money matters, but the details can make all the difference. Journal of Policy Analysis and Management, 42(4): 1125–1132.; Kraft, M.A. 2020. Interpreting effect sizes of education interventions. Educational Researcher, 49(4): 241–253.↩︎
Kraft (2020).↩︎
Jackson and Mackevicius (2024) and Handel and Hanushek (2024).↩︎
Lafortune, Julien, Jesse Rothstein, and Diane Whitmore Schanzenbach. 2018. School finance reform and the distribution of student achievement. American Economic Journal: Applied Economics 10(2): 1–26.↩︎
Brunner, E., J. Hyman, and A. Ju. 2020. School finance reforms, teachers’ unions, and the allocation of school resources. The Review of Economics and Statistics, 102(3): 473–489.; Buerger, C., S.H. Lee, and J.D. Singleton. 2021. Test-based accountability and the effectiveness of school finance reforms. AEA Papers and Proceedings, 111: 455–459.; Dee, T.S., and B. Jacob. 2011. The impact of No Child Left Behind on student achievement. Journal of Policy Analysis and Management, 30(3): 418–446.↩︎
McGee (2023a); Kraft (2020); McGee, J.B. 2023b. Researchers should be cautious when generalizing findings. Journal of Policy Analysis and Management, 42(4): 1136–1139.; Jackson, C.K., and C. Persico. 2023. Point column on school spending: Money matters. Journal of Policy Analysis and Management, 42(4): 1118–1124.↩︎
Jackson and Mackevicius (2024).↩︎
Handel and Hanushek (2023).↩︎
Belfield, C. R., and A.B. Bowden. 2018. Using resource and cost considerations to support educational evaluation: Six domains. Educational Researcher, 48(2), 120–127.↩︎
U.S. Department of Education, National Center for Education Statistics (n.d.).↩︎
Hanushek, E.A., and M. Joyce-Wirtz. 2023. Incidence and outcomes of school finance litigation: 1968–2021. Public Finance Review, 51(6): 748–781.↩︎
Hanushek and Joyce-Wirtz (2023).↩︎
Reproduced from Table 2 in Handel and Hanushek (2023).↩︎
Jackson and Mackevicius (2024) and Handel and Hanushek (2024).↩︎
Lee, H., K. Shores, and E. Williams. 2022. The distribution of school resources in the United States: A comparative analysis across levels of governance, student subgroups, and educational resources. Peabody Journal of Education, 97(4): 395-411.↩︎
See, e.g., NERD$, via the Edunomics Lab at Georgetown, a compilation of ESSA-mandated spending reports.↩︎
Polikoff, M.S., Jay P. Greene, and Kevin Huffman. 2017. Is Test-Based Accountability Dead? Education Next talks with Morgan S. Polikoff, Jay P. Greene, and Kevin Huffman. Education Next, 17(3): 50–58.; Peterson, P.E. 2024. Accountability’s Demise. Education Next.↩︎
Dee and Jacob (2011).↩︎
Buerger, Lee, and Singleton (2021).↩︎
Lee, Shores, and Williams (2022); Blagg, K., J. Lafortune, and T. Monarrez. 2022. Measuring Differences in School-Level Spending for Various Student Groups. Research Report. Urban Institute.↩︎
Lafortune, Julien, Joseph Herrera, Niu Gao, and Stephanie Barton. 2023. Examining the Reach of Targeted School Funding. Policy Brief. Public Policy Institute of California.↩︎
Costrell, Robert M., and Josh B. McGee. 2022. Recent Research on Teacher Pension Funding, Benefits, and Policy Debates. In Recent Advancements in Education Finance and Policy, Downes, T.A., K.M. Killeen (Eds.), Information Age Publishing.↩︎
McGee, J.B. 2016. Feeling the Squeeze: Pension Costs Are Crowding Out Education Spending, Report No. 22, Manhattan Institute.↩︎
Hanushek, E.A., and S.G. Rifkin. 1997. Understanding the twentieth-century growth in US school spending. Journal of Human Resources, 32(1): 35–68.; Baumol, W.J. 2012. The cost disease: Why computers get cheaper and health care doesn't. Yale University Press.↩︎
Coleman, J.S. 1966. Equality of educational opportunity. U.S. Department of Health, Education, and Welfare.↩︎
See p. 153 in Hanushek, Eric A. 1997. Assessing the effects of school resources on student performance: An update. Educational Evaluation and Policy Analysis 19(2): 141–164.↩︎
Hanushek, Eric A. 1986. The economics of schooling: Production and efficiency in public schools. Journal of Economic Literature 24(3): 1141–1177.; Hanushek, Eric A. 1989. Expenditures, efficiency, and equity in education: The federal government's role. The American Economic Review 79(2): 46–51.; Hanushek, Eric A. 1996. A more complete picture of school resource policies. Review of Educational Research 66(3): 397–409.; Hanushek (2003).↩︎
See, e.g., Hedges, Larry V., Richard D. Laine, and Rob Greenwald. 1994. An exchange: Part I: Does money matter? A meta-analysis of studies of the effects of differential school inputs on student outcomes. Educational Researcher 23(3): 5–14.; Krueger, Alan B. 2003. Economic considerations and class size. The Economic Journal 113(485): F34–F63.↩︎
See, e.g., the STAR experiment evaluated in Krueger, Alan B. 1999. Experimental estimates of education production functions. The Quarterly Journal of Economics 114(2): 497–532.↩︎
See, e.g., Card, David, and Alan B. Krueger. 1992. Does school quality matter? Returns to education and the characteristics of public schools in the United States. Journal of Political Economy, 100(1): 1–40.; Card, David, and A. Abigail Payne. 2002. School finance reform, the distribution of school spending, and the distribution of student test scores. Journal of Public Economics 83(1): 49–82.↩︎
Angrist, Joshua D., and Jörn-Steffen Pischke. 2010. The credibility revolution in empirical economics: How better research design is taking the con out of econometrics. Journal of Economic Perspectives 24(2): 3–30.↩︎
Hanushek, Eric A., and Margaret E. Raymond. 2005. Does school accountability lead to improved student performance? Journal of Policy Analysis and Management, 24(2): 297–327.; Carnoy, Martin, and Susanna Loeb. 2002. Does external accountability affect student outcomes? A cross-state analysis. Educational Evaluation and Policy Analysis, 24(4): 305–331.↩︎
Jackson and Mackevicius (2024) and Handel and Hanushek (2024).↩︎
Kraft (2020).↩︎
Jackson and Mackevicius (2024) and Handel and Hanushek (2024).↩︎
Handel and Hanushek (2024); McGee (2023b).↩︎
McGee (2023a); Handel and Hanushek (2024).↩︎
Card and Payne (2002); Lafortune, Rothstein, and Schanzenbach (2018); Brunner, Hyman, and Ju (2020).↩︎
Jackson, C.K., R.C. Johnson, and C. Persico. 2016. The effects of school spending on educational and economic outcomes: Evidence from school finance reforms. The Quarterly Journal of Economics, 131(1): 157–218.; Candelaria, C. A., and K.A. Shores. 2019. Court-ordered finance reforms in the adequacy era: Heterogeneous causal effects and sensitivity. Education Finance and Policy, 14(1): 31–60. ↩︎
Jackson, Johnson, and Persico (2016). ↩︎
Biasi, B. (2023). School finance equalization increases intergenerational mobility. Journal of Labor Economics, 41(1): 1–38. ↩︎
Baron, E. Jason, Joshua Hyman, and Brittany Vasquez. 2024. Public school funding, school quality, and adult crime. Review of Economics and Statistics: 1-46.↩︎
Jackson, Johnson, and Persico (2016). ↩︎
Deming, David. 2009. Early childhood intervention and life-cycle skill development: Evidence from Head Start. American Economic Journal: Applied Economics 1(3): 111-134.↩︎
Fredriksson, Peter, Björn Öckert, and Hessel Oosterbeek. 2013. Long-term effects of class size. The Quarterly Journal of Economics 128(1): 249–285. ↩︎
Papke, L.E. (2008). The effects of changes in Michigan's school finance system. Public Finance Review, 36(4): 456–474.; Roy, J. (2011). Impact of school finance reform on resource equalization and academic performance: Evidence from Michigan. Education Finance and Policy, 6(2): 137–167.; Hyman, J. 2017. Does money matter in the long run? Effects of school spending on educational attainment. American Economic Journal: Economic Policy, 9(4): 256–280.↩︎
Lafortune, Julien. 2021. Targeted K-12 Funding and Student Outcomes: Evaluating the Local Control Funding Formula. Public Policy Institute of California; Johnson, Rucker C. 2023. School Funding Effectiveness: Evidence from California's Local Control Funding Formula. Learning Policy Institute.↩︎
See, e.g., Gigliotti, P., and L.C. Sorensen. 2018. Educational resources and student achievement: Evidence from the Save Harmless provision in New York State. Economics of Education Review, 66: 167–182.; Kreisman, Daniel, and Matthew P. Steinberg. 2019. The effect of increased funding on student achievement: Evidence from Texas's small district adjustment. Journal of Public Economics, 176: 118-141.; Johnson (2023); Lafortune, Herrera, and Gao (2023).↩︎
See, e.g., Abott, Carolyn, Vladimir Kogan, Stéphane Lavertu, and Zachary Peskowitz. 2020. School district operational spending and student outcomes: Evidence from tax elections in seven states. Journal of Public Economics 183: 104142.; Baron, E. Jason. 2022. School spending and student outcomes: Evidence from revenue limit elections in Wisconsin. American Economic Journal: Economic Policy 14(1): 1-39.↩︎
Miller, Corbin L. 2018. The Effect of Education Spending on Student Achievement: Evidence from Property Values and School Finance Rules. Proceedings. Annual Conference on Taxation and Minutes of the Annual Meeting of the National Tax Association 111: 1–121.↩︎
Neilson, Christopher A., and Seth D. Zimmerman. 2014. The effect of school construction on test scores, school enrollment, and home prices. Journal of Public Economics 120: 18–31.; Conlin, Michael, and Paul N. Thompson. 2017. Impacts of new school facility construction: An analysis of a state-financed capital subsidy program in Ohio. Economics of Education Review, 59: 13–28.; Lafortune, Julien, and David Schönholzer. 2022. The impact of school facility investments on students and homeowners: Evidence from Los Angeles. American Economic Journal: Applied Economics 14(3): 254–289.↩︎
Cellini, Stephanie Riegg, Fernando Ferreira, and Jesse Rothstein. 2010. The value of school facility investments: Evidence from a dynamic regression discontinuity design. The Quarterly Journal of Economics 125(1): 215–261.; Martorell, Paco, Kevin Stange, and Isaac McFarlin Jr. 2016. Investing in schools: capital spending, facility conditions, and student achievement. Journal of Public Economics 140: 13–29.; Baron (2022); Brunner, Eric, Ben Hoen, and Joshua Hyman. 2022. School district revenue shocks, resource allocations, and student achievement: Evidence from the universe of US wind energy installations. Journal of Public Economics 206: 104586.↩︎
Jackson and Mackevicius (2024).↩︎
Biasi, Barbara, Julien M. Lafortune, and David Schönholzer. 2024. What Works and for Whom? Effectiveness and Efficiency of School Capital Investments across the US. No. w32040. National Bureau of Economic Research.↩︎
Cascio, E.U., N. Gordon, and S. Reber. 2013. Local responses to federal grants: Evidence from the Introduction of Title I in the South. American Economic Journal: Economic Policy, 5(3): 126–159.↩︎
Brunner, Hyman, and Ju (2020). ↩︎
See, e.g., Dee and Jacob (2011). ↩︎
Buerger, Lee, and Singleton (2021). ↩︎
Martorell, Stange, and McFarlin (2016).↩︎
Chaudhary, L. (2009). Education inputs, student performance and school finance reform in Michigan. Economics of Education Review, 28(1): 90–98.↩︎
Hyman (2017).↩︎
Handel and Hanushek (2024); McGee (2023b).↩︎
Jackson and Mackevicius (2024).↩︎
Handel and Hanushek (2024); McGee (2023b).↩︎
Jackson and Mackevicius (2024).↩︎
Lafortune, Rothstein, and Schanzenbach (2018); Brunner, Hyman, and Ju (2020); Lee, Shores, and Williams (2022). ↩︎
See, e.g., Jackson, Johnson, and Persico (2016); Lafortune, Rothstein, and Schanzenbach (2018); and Candelaria and Shores (2019). ↩︎
Jackson and Mackevicius (2024).↩︎
Handel and Hanushek (2024).↩︎
See, e.g., Guryan, J. 2001. Does money matter? Regression-discontinuity estimates from education finance reform in Massachusetts. National Bureau of Economic Research, Working Paper No. 8269; Papke (2008); Roy (2011); Biasi (2023); and Lafortune, Herrera, and Gao (2023).↩︎
See, e.g., Cascio, Gordon, and Reber (2013); Jackson, Johnson, and Persico (2016); Hyman (2017); Candelaria and Shores (2019); and Baron (2022).↩︎
Jackson and Mackevicius (2024).↩︎
Handel and Hanushek (2024).↩︎
Jackson, Johnson, and Persico (2016).↩︎
Biasi (2023). ↩︎
Baron, Hyman, and Vasquez (2024).↩︎
Cellini, Ferreira, and Rothstein (2010); Neilson and Zimmerman (2014); Bayer, Patrick J., Peter Q. Blair, and Kenneth Whaley. 2020. A national study of public school spending and house prices. NBER Working Paper 28255.; Lafortune and Schönholzer (2022); Biasi, Lafortune, and Schönholzer (2024).↩︎
Belfield and Bowden (2018).↩︎
See, e.g., Dee, T.S., and J. Wyckoff. 2015. Incentives, selection, and teacher performance: Evidence from IMPACT. Journal of Policy Analysis and Management, 34(2): 267–297.; Fryer, R.G. 2013. Teacher incentives and student achievement: Evidence from New York City public schools. Journal of Labor Economics, 31(2): 373–407. ↩︎
McGee, Josh and Julien LaFortune (2025). "School Funding and Outcomes," in Live Handbook of Education Policy Research, in Douglas Harris (ed.), Association for Education Finance and Policy, viewed 04/11/2025, https://livehandbook.org/k-12-education/school-resources/school-funding-effects/.