However, many studies rely on self-reports of research evidence use, although some have triangulated this data source through public testimony, official documentation, or methodological tools such as social network analysis (SNA). Such studies indicate greater connection between research and policy/practice than previously contended but lack nuance with regard to what research evidence is being used and the impact of that use.
While studies provide empirical support for conceptual use, measuring this type of use is more challenging. Most research to date involves self-reports through interviews or surveys. Furthermore, this research does not examine the ways in which conceptual use may precede or be embedded in instrumental or symbolic use of research evidence, for example, when research findings are used to justify an argument or to support a preexisting idea.
The importance of trust has been documented in studies with different designs, indicating an important and often overlooked relational aspect of use of research evidence. However, most of this work has focused on the K-12 level, including practitioners and local leaders or policymakers, and on individual connections or relationships with individual representatives of organizations. We know less about the role of trust in other educational contexts and about the degree of trust that people have in specific agencies or organizations as sources of information.
Brokers are often undervalued, but they play a critical role in connecting otherwise disconnected groups to research evidence and in sustaining those connections.2 However, these individuals or organizations are not always neutral disseminators of research. They may package research in ways that support a particular program or policy, or they may serve in a gatekeeping role, sharing only particular types of research evidence. In addition, high staff turnover in broker roles can disrupt use of research evidence.
Though the importance of infrastructure and culture to use of research evidence has been found across studies, contexts, and methods, our understanding of this area remains somewhat superficial. We have limited understanding of the conditions, resources, or leadership required to develop the infrastructure and culture that support research evidence use, or of whether having these conditions in place leads to better-quality use of research evidence or use of better-quality research evidence. We know more about what limits the use of research evidence than about what supports it or how to scale it up.
Research–practice partnerships (RPPs) allow for collaborative work, including the co-construction of meaning from research findings, and they involve rituals and structures for meaningful and authentic joint work.3 While RPPs have the potential to result in higher-quality and more effective use of research evidence, this has not yet been systematically studied.
For several decades, policy, practice, and research communities have called for increased use of research evidence in educational decision-making. Research evidence is the knowledge generated through systematic empirical studies. It may result from qualitative, quantitative, or mixed-methods study designs. It may be reported in books, reports, articles, research summaries, training courses, expert testimony, or other outlets and formats. Use of research evidence refers to the process of actively engaging with and drawing on research evidence to inform, change, and improve decision-making or practice. Importantly, research evidence is only one type of evidence used by policymakers and practitioners; other types, such as anecdotes, stories, or “best practices” shared by colleagues, are used strategically, alone or in combination, at different points of the policymaking or decision-making process.1
The implicit idea is that increasing research evidence use in education will point policymakers and practitioners toward interventions and practices that are more likely to yield positive outcomes. To that end, the federal government and some key funders have prioritized building evidence about the impact of education interventions to support policymakers and practitioners in implementing “evidence-based” programs.
The theory of action for many attempts to increase research evidence use relies on a simple, linear model, where researchers produce knowledge and policymakers and practitioners simply need to be told what research says to use it well. This theory implicitly assumes that high-quality research evidence can be productively and thoughtfully used to improve system outcomes and that any challenges or failures in the use of research evidence are caused by a disconnect between research and practice/policy. In recent years, however, scholars have learned much more about how research is used and by whom, with many studies indicating that policymakers draw on evidence in more dynamic and nuanced ways than previously understood by the research community. These findings correct prior assertions and assumptions about research evidence use that were oversimplified and not empirically grounded. A better understanding of what research evidence use looks like and in what contexts it can improve education systems will help us know how policymakers and practitioners can incorporate more research and what support they need to do so well. Additionally, the broader movement to conduct more transformative and impactful research through partnerships between researchers and policymakers/practitioners will likely result in new conceptualizations and methodologies that deepen our understanding of the complex processes involved and allow us to move beyond the research-centered, linear perspective of research evidence use.
Understudied topics. The initial work in this area helped us to broadly understand whether research evidence was being used at all. The next step is to consider the quality of use of research evidence, as we do not yet know enough about either the quality of the research evidence that practitioners and policymakers are using or the quality of use. Additionally, to date, much research has focused on K-12 classrooms, schools, and districts, and we know much less about other points in the educational trajectory, such as early childhood, higher education, or adult education. Research evidence use by practitioners has also been a more common focus than has research evidence use by policymakers. Power and political influence, and how they affect use of research evidence, warrant greater attention, as do issues of equity in how research is used and the conditions that support such use. Finally, there is very little research on how researchers adjust their own designs and collaborative practices to strengthen use of research evidence and whether such adjustments have led to more effective research training or changes in researchers’ knowledge and practices. In addition to these understudied areas, research on use of research evidence could be strengthened by stronger theoretical grounding and more rigorous measurement tools. Beyond that, what we know from the current body of work can inform more complex, large-scale, mixed-methods, and longitudinal designs.
Policy considerations. We need to learn much more about how policy mandates for evidence use play out in practice. For example, the Every Student Succeeds Act (ESSA) requires that states, districts, and schools identify and implement policies and practices based on different tiers of evidence. However, we do not know whether this requirement has resulted in more effective work aligned with the current body of research in specific areas, e.g., instructional approaches or multilingual programs. We also need to better understand what training, resources, structures, and cultural changes are needed for researchers and policymakers/practitioners to promote widespread and effective use of research evidence for system improvement. Finally, research evidence use in policy (and practice) depends on one’s values and philosophies regarding education (Loeb chapter link here).
Attention to use of research evidence derives from earlier conceptual work in the 1970s and 1980s by Carol Weiss and Nathan Caplan, among others. This work provided much of the theoretical grounding for both the broader focus on connecting research and practice/policy and the more specific ways in which use of evidence varies, such as instrumental, conceptual, symbolic, or tactical use.4 Design-based approaches that emerged in the 1980s in the field of the learning sciences as a way to strengthen connections between research and practice led to a parallel movement toward RPPs. These two broad areas—use of research evidence and RPPs—align and intersect with broader knowledge utilization and knowledge mobilization efforts over this same period of time. In essence, the resurgence of attention to research evidence use was driven by these parallel scholarly and practical efforts, which have recently become more integrated.
While the passage of the No Child Left Behind Act (NCLB) and the creation of the U.S. Department of Education’s Institute of Education Sciences (IES) ushered in an era of evidence-based improvement in the early 2000s, the explosion of research on use of research evidence occurred in the last decade as part of capacity building in the field. Funding opportunities for research on use of research evidence grew substantially, with more than 80 funding sources, including governmental and philanthropic entities in the U.S. and internationally, supporting this research.5 At the same time, both the IES and foundations such as the William T. Grant Foundation and the Spencer Foundation supported substantial work on RPPs. In addition, the William T. Grant Foundation developed a line of inquiry specifically addressing research on use of research evidence in policies and practices that affect young people.6 More recent studies, including those focused not only on use but also on ways to improve use of research evidence, as well as studies of use of research evidence outside of education, have been supported by the William T. Grant Foundation and featured on its website. Finally, the IES invested in two knowledge utilization centers, namely, the National Center for Research in Policy and Practice and the Center for Research Use in Education, driving substantial research on use of research evidence at the school and district levels. Parallel to these investments, ESSA, the national education law that replaced NCLB, ushered in a new way of thinking about research evidence by identifying different levels or “tiers” of evidence based on research design, results, context, and setting.
Interventions and researchers alike began to attend to users’ ability to evaluate and discern the quality of evidence, as well as to the conditions that supported use of research evidence. Scholars documented the growth and influence of nongovernmental actors on educational politics and policymaking, and these groups became an important component of use of research evidence.7 Some scholars, advocates, and funders began to argue for greater attention to the democratization of the research agenda, broadening the understanding of the roles of users. In essence, the field began growing in two important ways. First, scholarship on use of research evidence began considering the larger ecosystem and broadening the understanding of the roles of producers and users. Second, the field began to more widely understand use of research evidence not as the linear transmission of something, e.g., translating results into a format that would be used by a policymaker, but as part of a process, e.g., a process of learning with regard to a particular area of practice or a process of policy development and implementation. This new way of understanding use of research evidence built on earlier calls for scholars to understand research evidence use in a more complex and iterative way.8 Thus, use of research evidence became better understood as involving a complex web of users (and nonusers) and as a process, which led to greater attention to the structures, relationships, politics, and conditions that facilitate or hinder use in practice and policy.
This review focuses on the empirical work to date on use of research evidence. Readers are also encouraged to read some of the conceptual underpinnings related to this body of work and how it has evolved over time.9
Although several earlier studies found that research evidence plays a limited role in the decision-making of central office staff, local school boards, and principals,10 a national study of K-12 leaders expanded the knowledge base on instrumental use of research evidence.11 This descriptive study involved a representative sample of districts across the U.S. and found that the majority of leaders reported that they used research evidence frequently in instrumental ways, e.g., for choosing curricula, allocating resources, adopting or eliminating programs, and designing professional development. Research evidence was reported to influence the decisions that school and district leaders make, e.g., decisions about full-day kindergarten, teacher professional development, and how to address student tardiness. Research evidence also supported decisions at the classroom level, e.g., decisions on student groupings and progress monitoring, although research evidence was one of several types of evidence that was brought to bear on these decisions.12
However, it matters which leaders had access to research evidence or were using it in instrumental ways. For example, I have conducted research involving SNA of the underlying networks of district leaders (including central office leaders and principals), finding that principals of underperforming schools were on the periphery (or outer areas) of district advice networks compared with principals of higher-performing schools. Additionally, principals of low-performing schools became disconnected from research evidence as a result of leadership churn.13 Some additional studies found instrumental use at the federal and state policy levels. These studies traced federal dollar allocations across different tiers of research evidence, identified specific studies and syntheses that were used to develop standards, and identified the ways in which state education agencies used research evidence to develop frameworks.14 More recently, research conducted by a national center found that state curriculum supervisors used research evidence to determine whether to purchase an intervention or to design professional development for teachers or administrators.15 These studies used a combination of self-reports, document analysis, and budgetary allocations to identify instrumental use of research evidence in policy decisions.
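For readers less familiar with SNA, the following minimal Python sketch (using the networkx library) illustrates the kind of positional analysis described above, i.e., identifying which leaders sit on the periphery of an advice network. The network, node names, and cutoff are invented for illustration; this is not the data or procedure from the cited studies.

```python
import networkx as nx

# Hypothetical advice network: an edge A -> B means leader A seeks
# research-related advice from leader B. All names are illustrative.
advice_ties = [
    ("principal_1", "central_office_1"),
    ("principal_2", "central_office_1"),
    ("principal_2", "principal_1"),
    ("central_office_1", "central_office_2"),
    ("central_office_2", "central_office_1"),
    ("principal_3", "principal_2"),
]
G = nx.DiGraph(advice_ties)

# In-degree counts how often a leader is sought out for advice; leaders
# whom few or no colleagues turn to sit on the periphery of the network.
in_degree = dict(G.in_degree())
peripheral = [n for n, d in in_degree.items() if d <= 1]  # illustrative cutoff
print("Advice sought (in-degree):", in_degree)
print("Peripheral leaders (in-degree <= 1):", peripheral)

# A k-core decomposition of the undirected network offers another view:
# lower core numbers indicate actors in the outer layers of the network.
core = nx.core_number(G.to_undirected())
print("Core numbers (lower = more peripheral):", core)
```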
Particularly at early points in advocacy for a position at the local, state, or federal level, research evidence is used strategically to gain buy-in or support. In essence, even if the research evidence is limited or inconclusive, it is used as part of arguments for supporting or removing a particular policy or practice. Research evidence has been used in this way to gain legitimacy; that is, one may be better able to persuade others of the importance of a particular approach because it has been sanctioned or legitimized by an external “expert” source. Research evidence is frequently used in this way to demonstrate credibility, to symbolize shared ideals or beliefs, to justify a particular position or prior decision, or as a proxy for values. Several studies have found symbolic use of research evidence at the school board, central office, and school levels to convince others of a particular perspective on an issue or to support a particular decision.16 The national study of district leaders mentioned earlier found that more than two-thirds of leaders reported frequent symbolic or strategic use. While research evidence could also be used strategically to discredit others or to convince others to dismantle a program, such use was much less common.17
At the policy level, strategic use of research evidence has been identified as a key approach for policymakers to gain support or counter opposing views.18 Sometimes, rather than policymakers, policy entrepreneurs, i.e., people who are considered “champions” of a certain policy agenda and/or solution and who bring innovative ideas to policy problems, are the ones who use research evidence in strategic or symbolic ways throughout different policy stages (depending on availability and political context). For example, research on the development of the Common Core State Standards (CCSS) found that policy entrepreneurs, including former governors, advocacy organizations, and groups representing state and local officials, used research evidence during the problem definition stage of the policy process, primarily to define a set of problems for which they already had a solution (i.e., a symbolic use). During policy design, research evidence was used symbolically to persuade stakeholders about various aspects of the policy, and during policy enactment, research evidence was used strategically to communicate with and persuade specific audiences about the usefulness and importance of specific standards by linking the standards to the research evidence on which they were based.19 While some studies of symbolic use of research evidence have involved self-reports, others have used testimonies and formal documents to analyze the discourse of persuasion, including one study that focused on how policy actors used research evidence in their amicus briefs to sway the U.S. Supreme Court in Fisher v. University of Texas at Austin.20
The emerging work on the instrumental and symbolic uses of research evidence provides us with a greater understanding of the more strategic or tactical approaches to research evidence use, in ways that align with rational understandings of decision-making in specific moments when research evidence is used to persuade or inform. Scholars who study use of research evidence have argued that a great deal of research evidence use is conceptual, shaping in slow and diffuse ways how people think about problems and how to solve them.21 In fact, self-reports of conceptual use indicate high levels of this type of research evidence use. For example, 93% of leaders said that research evidence had “brought attention to a new issue.” In terms of frequency, leaders were most likely to say that they frequently or all of the time encountered research that had expanded their understanding of an issue (71%), while approximately one-third of leaders reported that research had frequently brought a new issue to their attention (36%) or changed the way that they looked at a problem (35%).22 Other empirical studies found a similar frequency of conceptual use.23 Whether the findings point to a greater incidence of conceptual use than of instrumental use appears to be closely linked to the research design, with some qualitative studies that involved observations showing little evidence of instrumental use but some quantitative studies suggesting that instrumental use was more common than conceptual use.24 Conceptual use of research evidence likely has farther-reaching and longer-term consequences than do instrumental, tactical/political, and other types of use because conceptual use changes what people attend to or how they think about a particular issue, policy, or program.
However, conceptual use of research evidence has proven difficult to measure and track over time, and to date, most studies of conceptual use have relied on self-reports. Conceptual use may also be embedded in other uses as people access and interpret research evidence, both individually and collectively within organizations. For example, as educators seek out specific evidence to inform a decision about a program, they may also engage in conceptual use to better understand the larger issues involved or to develop a new perspective on the issue at hand. However, most research has treated this as an individual, rather than a collective, learning process. Having structured or formal opportunities to discuss research evidence is strongly associated with instrumental and symbolic research evidence use, which suggests that conceptual frameworks and methodological approaches need to account for differences in the social context for each type of use and for how conceptual use may be embedded in these other uses.25 In essence, conceptual use of research evidence may be more likely in collaborative research environments and in places that have structures and norms related to research evidence use; however, most studies have not yet accounted for these varied contexts.26
Contrary to conventional wisdom, research evidence is not inaccessible or devalued across the board. Indeed, educational leaders report that they value research evidence and use it quite widely.27 One key finding across contexts is that the use of research evidence occurs in a robust network of interconnected relationships. This means that at any particular level, including the school, district, state, or federal government, there is a set of interwoven relationships, and there is also a set of connections across these units. Several studies involving case studies and network analysis have shown that trust plays a role in use of research evidence, finding that the same evidence elicits different responses depending on whether it is brought by a trustworthy or untrustworthy source, i.e., whether the evidence resonates with the person or is met with skepticism. For example, one study with my collaborator Alan Daly that involved surveys and network analysis of school leaders found that trust was associated with reciprocal relationships between urban school leaders around research-based practices.28 We studied longitudinal social networks of leaders in several districts and found that leaders who had higher perceptions of organizational trust were more likely to share research-based practices and that they shared these practices with people with whom they had close or reciprocal relationships. We also examined who was considered a high-end research evidence user—identified by colleagues as someone who regularly brought new ideas from research (defined as empirical studies)—to better understand how research evidence was diffused throughout the school district. We found that who counted as a “high-end research evidence user” in the system was more dynamic than expected. In other words, as the networks themselves changed, so too did the relationships around research evidence, suggesting an understanding of diffusion in this case that differed from that suggested by diffusion theories.29 In addition, another team of researchers used a different approach but also documented the importance of trust for use of research evidence. Their work examined the deliberative practices of school board members and their use of research evidence in these public deliberations. The team attended 160 school board and committee meetings and coded the type of evidence brought forward in discussions and whether or not that evidence was based on research. The researchers then interviewed board members and school leaders, learning that trust and trustworthiness were important for whether and how they used evidence, particularly research evidence.30 While these studies indicate that trusting relationships are important for use of research evidence, we do not yet have empirical studies indicating that trust results in better use of research evidence or use of better-quality research evidence. Furthermore, trust could just as easily result in the use of weak evidence; hence, more work is necessary.
Prior research suggests that educators turn to people to access research evidence and that they prefer evidence curated by colleagues to inform their decisions.31 Brokers play a critical role in the flow of ideas and practices because they filter what is known about research evidence. Professional associations, advocacy organizations, and philanthropic organizations have all been found to be brokers of research evidence for teachers, leaders, and others in the education system.32 My own research, which used SNA, found that key individuals within school districts served as brokers of research evidence, but we also found that high levels of churn in these leadership roles meant that the ties related to research evidence were constantly being disrupted.33 In a similar vein, another team of researchers used SNA to study the structure of connections that bridged the worlds of practice and research. They found that school staff often played brokering roles, e.g., gatekeeping roles, but that these roles did not always succeed in bridging the research–practice gap: when educators sought out information, the information chain was often quite long, and researchers were rarely involved.34 In my own work, we found that central office staff often served as brokers between university researchers and teachers or school leaders and that they frequently served as gatekeepers between people who were identified as research evidence users in different parts of the system.35 The team of researchers mentioned above also found that, apart from people internal to the research or practice communities being connected, county-, state-, or federal-level agencies sometimes served as liaisons between these two groups.
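Brokerage and gatekeeping of the kind described here are commonly operationalized in SNA through betweenness centrality, which counts how often an actor lies on the shortest paths connecting others. The sketch below, again in Python with networkx and invented data, shows that general measure; it is an illustration of the technique, not a reconstruction of any cited study's analysis.

```python
import networkx as nx

# Hypothetical network spanning research and practice communities.
# Edges are information-sharing ties; all names are illustrative.
G = nx.Graph([
    ("researcher_1", "researcher_2"),
    ("researcher_2", "district_staffer"),  # candidate broker
    ("district_staffer", "teacher_1"),
    ("district_staffer", "teacher_2"),
    ("teacher_1", "teacher_2"),
])

# Betweenness centrality: actors with high scores sit on many shortest
# paths between otherwise disconnected parts of the network, a common
# operationalization of brokerage.
betweenness = nx.betweenness_centrality(G)
brokers = sorted(betweenness, key=betweenness.get, reverse=True)
print("Most broker-like actors:", brokers[:2])

# A broker is also a potential gatekeeper: removing the top broker can
# split the network, cutting practice-side actors off from research.
G.remove_node(brokers[0])
print("Network still connected without broker?", nx.is_connected(G))
```

Run on this toy network, the district staffer scores highest, and removing that node disconnects the teachers from the researchers, which is one way the churn and gatekeeping findings above can be made concrete.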
The push for greater use of research evidence, combined with an often disjointed relationship between research producers and research evidence users, has allowed new groups to emerge or position themselves as “interpreters” of evidence.36 For example, one group of researchers used qualitative methods to study use of research evidence in several different cities, finding that intermediary organizations were active in promoting, participating in, or opposing certain more controversial educational policies such as charter schools, vouchers, “parent trigger” laws, and merit-pay systems for teachers. These intermediary groups also played the role of local distributors of federal funds, hence aligning resources with their agendas and the research evidence that they elevated or shared with decision makers.37
Several studies have found that use of research evidence requires an infrastructure to embed research evidence in routines, norms, and activities.38 The Center for Research Use in Education found that professional learning communities, instructional leadership teams, and instructional coaches were the structures that teachers most commonly reported as facilitating the connection between research and practice.39 Another study uncovered how multiple, interrelated routines were important for understanding use of research evidence in school districts. While district leaders did not explicitly discuss use of research evidence as part of these routines, it was often embedded in artifacts that were part of these routines; hence, it shaped decision-making processes.40 For example, literacy leaders embedded research in professional development slide decks, which in turn were used by teachers or leaders for curricular or instructional decisions. In addition to structures, a strong culture of research evidence use and a climate of trust, learning, and risk-taking support use of research evidence.41 However, as discussed in my own work, use of research evidence is often limited by pressures to choose improvement strategies based on the monetary resources that are (or are not) available, and political pressures can reduce use of research evidence.42 In fact, conducting SNA, we found that leaders needed people who could facilitate access to evidence, interpretation of evidence, and development of action steps based on evidence, and who were not also in positions of power, i.e., people who were not supervising them, given the high level of distrust and political vulnerability that such supervision entails. While systems and a research-focused culture can increase use of research evidence, whether they have resulted in “effective” or “high-quality” use of research evidence has not been sufficiently studied and requires a closer examination of the values, norms, and training of both researchers and research evidence users.
RPPs allow for collaborative work, including the co-construction of meaning from research findings, and they involve rituals and structures for meaningful and authentic joint work.43 RPPs are designed to develop longer-term collaborations, hence providing the opportunity for greater trust and organizational routines with respect to research evidence. While many RPPs likely incorporate use of research evidence by engaging researchers, we have much more to learn about how RPPs may enable greater and more effective research evidence use.44
Here, I note important areas for future research, building upon the understudied areas noted above. I also suggest some ways to contribute to the field based on particular research designs, foci, and contexts.
Quality.
Despite advances in our understanding of use of research evidence, we do not yet know enough about the quality of the research evidence that practitioners and policymakers are using or the quality of use.45 Published and disseminated research can vary a great deal in terms of methodological rigor, the quality of data collection and analysis, and the appropriateness of the interpretation of results. The initial work in this area helped us to broadly understand whether research evidence was being used at all, but the next step is to consider the quality of use of research evidence. Studies on this topic should consider how research evidence is being used and the impact of that use, particularly whether the research evidence that influences decisions or sways stakeholders is strong or weak. Given the prominence of symbolic uses and the misuse of research evidence, this is an important cautionary note, as various actors strategically use research evidence that may be weak or inconclusive to shape educational debates and decisions.
Measurement.
Both decision-making processes and cognitive processes are complex and messy. Studies have begun to uncover how research evidence is used in these processes, but the reality is that we do not have adequate measures. Additional work is necessary to develop measures that help us to determine whether practitioners or policymakers have developed a deeper understanding of particular issues that is aligned with rigorous research. A tool that measures conceptual use (and the related learning processes) could also help us understand whether conceptual use is a necessary precursor to higher-quality use of research evidence, whether research evidence is part of an iterative or cyclical learning process, and whether conceptual use is part of, or takes different forms in, the “softening up” stage of a policy process, as well as other important and related questions. Valid and reliable measures of use of research evidence will be important for the field, and these measures must take into account both individual and collective use. In addition, the open sharing of codebooks and observation tools will allow researchers to conduct studies across settings and contexts via similar methods to strengthen the work in this area.46 Some work from other related fields, such as child welfare, may help inform this area.47
Power, politics, and learning.
Few studies of use of research evidence have adopted critical theories to frame research questions, methods, and interpretation of findings.48 The use of these theoretical frameworks could make research on use of research evidence more useful for the communities that the research itself is trying to impact, help disrupt power or interrupt policies that harm marginalized groups, and alter the way in which we think about and measure research production, interpretation, and use. Research designs that incorporate critical lenses or theories in examining the use of research evidence might build on the literature showing how intermediaries (as described above) facilitate relationships between some research producers and some research evidence users and not others. Understanding the politics and racialized impact of use of research evidence is crucial for advancing our understanding of this field of work. The adoption of critical theory leads to important research questions that, to date, have not been addressed. Additionally, the use of critical methodological approaches, such as critical policy analysis, critical network analysis, and QuantCrit,49 could bring new understandings to this body of scholarship. For example, studies of use of research evidence could ask whether the composition of the research team, the lenses and methods used by researchers, or the democratization of the research agenda influences use of research evidence by practitioners or policymakers (especially whether it increases the quality of that use). In other words, the use of these lenses and methods could expand not only the research that is produced but also our understanding of when, how, and why it is used (or not) by practitioners and policymakers. Studies could also examine whether use of research evidence is related to individuals’ own racial/ethnic backgrounds or to engagement with and understanding of communities that are marginalized not only in policy and practice but also in research, in terms of how issues are framed, questions are developed, and data are interpreted. Such an examination is important because the field of research evidence/use of research evidence has overlooked key communities, lenses, and perspectives.
We also need to pay greater attention to what it means to use research evidence as part of policy and decision-making cycles. Doing so requires theoretical or conceptual frameworks that build upon the relational aspects of use of research evidence and the political processes involved. Additional theoretical work that accounts for the iterative cycle of learning around research evidence is needed, and the research in this area to date can also help develop more appropriate theoretical lenses for this complex area of study. For example, one recent project of mine found that diffusion theories can be helpful in exploring use of research evidence in general. However, these theories do not fully account for the social and political processes in schools and school systems or the findings that emphasize the cognitive and relational aspects of research evidence use.
Research designs.
Over the last decade, our understanding of use of research evidence has grown, with many studies drawing on self-reports of research evidence use by practitioners and policymakers. Some studies have been able to triangulate these self-reports, e.g., by tracing use of research evidence through analysis of documents, testimonies, legislation, observational studies, or social networks. The research evidence on use of research evidence has several strengths. First, a variety of methods have been used, including rigorous qualitative studies involving case studies, document analysis, discourse analysis, and interviews. These studies have involved rich qualitative data, and their analyses have provided insights into more symbolic and conceptual uses. Some of these studies have also involved mixed-methods designs, and others have included longitudinal designs, allowing for both triangulation and a deeper understanding of evolution or adaptation over time.
Second, several studies of use of research evidence have included surveys of different groups of educators, including school and district leaders and teachers. Some of these works have been large-scale studies conducted by national centers, with larger and representative samples. Although the surveys were developed specifically for these purposes, they went through multiple stages of development to increase their validity. They have been especially useful for exploratory studies of use of research evidence. Third, several studies of use of research evidence, including my own work, have included SNA, with some of these studies including longitudinal network designs. SNA is especially important for this area of work because of the relational nature of use.50 The SNA studies discussed above increase the reliability of the data because data are gathered from multiple individuals in combination, allowing for a triangulation of responses that is not available in more typical surveys.
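One reason network data allow this kind of triangulation is that each reported tie can be checked against the other party's report. The following sketch (Python/networkx, with hypothetical nominations rather than any instrument used in the studies above) shows how reciprocated nominations provide a built-in cross-check on self-reports.

```python
import networkx as nx

# Hypothetical survey nominations: an edge A -> B means respondent A
# named B as someone they turn to for research evidence.
G = nx.DiGraph([
    ("leader_a", "leader_b"),
    ("leader_b", "leader_a"),  # tie confirmed by both parties
    ("leader_c", "leader_a"),  # tie reported by one side only
    ("leader_d", "leader_b"),
])

# Reciprocity: the share of reported ties that are confirmed by both
# parties. Mutually confirmed ties offer a cross-check on self-reports
# that single-respondent surveys cannot provide.
print("Overall reciprocity:", nx.reciprocity(G))

# List the mutually confirmed ties explicitly.
mutual = [(u, v) for u, v in G.edges() if G.has_edge(v, u) and u < v]
print("Ties confirmed by both respondents:", mutual)
```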
Overall, the evidence to date is stronger in terms of providing rich case studies and descriptive results that help uncover the conditions that support or hinder use of research evidence and the different types of use than in terms of predicting use. These studies have been particularly useful as the field has emerged and evolved in terms of understanding what research evidence is and how it is used. They also serve as a form of triangulation by providing stronger evidence of findings across multiple studies via different methods (e.g., the findings on the importance of trust for use of research evidence across several studies with different designs). Future research would benefit from longitudinal quantitative designs that move beyond self-reports, as well as from studies that involve stronger theoretical grounding.
Research focus and context.
A few studies have focused on what is being used and the types of use (e.g., through Weiss’s lens), particularly at the level of K-12 school systems. As mentioned above, the two national centers have also focused directly on schools and districts, hence growing scholarship on use of research evidence at this system level. Another gap relates to the greater attention to use of research evidence in the K-12 arena than in other parts of the system, namely, higher education, policy, and coalition groups. The extent to which higher education administrators use research evidence in instrumental or conceptual ways has not gained as much attention in recent years. Beyond the level or context, more policy analysis research related to use of research evidence is needed. While some studies, as mentioned above, have focused on how use of research evidence enters the policy process, a great deal more work could be done, given the different levels and stages of policy. Furthermore, as a field, we must better understand how the political and policy processes work separately and together to influence what research evidence is used and how. The final gap pertains to understanding how research evidence is diffused in ways that increase the scale of use of research evidence or that impact use at both the practitioner and policy levels. However, to understand diffusion as part of a learning process, new theoretical lenses are needed. Research on coalition groups, networks, professional associations, and other intermediary groups continues to be a ripe area of study. Understanding the political economy of knowledge uptake in the policy process is important for understanding use of research evidence (or its lack), particularly in national policy debates. Attention to how or whether researchers, practitioners, or policymakers facilitate use of research evidence when a policy window opens, and to the alignment between the kind of “softening up” process that is often considered in political studies of policy adoption and conceptual use of research evidence, could help deepen our understanding of the complex process of learning, persuasion, and decision-making involved in research evidence use.51
Finnigan, Kara S., and Alan J. Daly. 2014. Using Research Evidence in Education: From the Schoolhouse Door to Capitol Hill. Policy Implications of Research in Education. Springer US.↩︎
Finnigan, K. S., A. J. Daly, A. Caduff, and C. C. Leal. 2021. Broken Bridges: The Role of Brokers in Connecting Educational Leaders around Research Evidence. In Networks, Knowledge Brokers, and the Public Policymaking Process. Edited by M. Weber and I. Yanovitzky. Palgrave Macmillan. 129–514; Wentworth, L., P. Arce-Trigatti, C. Conaway, and S. Shewchuk. 2023. Brokering in Education Research–Practice Partnerships: A Guide for Education Professionals and Researchers. Routledge. https://doi.org/10.4324/9781003334385.↩︎
Coburn, C. E., and W. R. Penuel. 2016. Research–Practice Partnerships in Education: Outcomes, Dynamics, and Open Questions. Educational Researcher 45: 48–54; Farley-Ripple, E., H. May, A. Karpyn, K. Tilley, and K. McDonough. 2018. Rethinking Connections between Research and Practice in Education: A Conceptual Framework. Educational Researcher 47(4): 235–245; Yamashiro, K., L. Wentworth, and M. Kim. 2022. Politics at the Boundary: Exploring Politics in Education Research–Practice Partnerships. Educational Policy 37(1): 3–30. https://doi.org/10.1177/08959048221134916.↩︎
For details regarding the evolution of our understanding of use of research evidence and the contributions of various scholars during this period of time, see Neal, Z. P., J. Lawlor, J. W. Neal, K. Mills, and K. McAlindon. 2019. Just Google It: Measuring Schools’ Use of Research Evidence with Internet Search Results. Evidence and Policy 15: 103–123. https://doi.org/10.1332/174426418X15172392413087.↩︎
Farley-Ripple, E. N., K. Oliver, and A. Boaz. 2020. Mapping the Community: Use of Research Evidence in Policy and Practice. Humanities and Social Sciences Communications 7(83). https://doi.org/10.1057/s41599-020-00571-2.↩︎
Initial studies funded by the William T. Grant Foundation related to use of research evidence in education are documented in Finnigan and Daly (2014).↩︎
For example, Reckhow, S. and M. Tompkins-Stange. 2018. Financing the Education Policy Discourse: Funders as Catalysts in Policy Networks. Interest Groups and Advocacy 7(3): 258–288; Scott, J., H. Jabbar, P. Lalonde, E. DeBray, and C. Lubienski. 2015. Evidence Use and Advocacy Coalitions: Intermediary Organizations and Philanthropies in Denver, Colorado. Education Policy Analysis Archives 23(124). http://epaa.asu.edu/ojs/article/view/2079; McDonnell, L. M., and M. S. Weatherford. 2014. Evidence Use and the Common Core State Standards Movement: From Problem Definition to Policy Adoption. American Journal of Education 120(1): 1–25.↩︎
According to Tseng and Nutley, “[R]esearch use is contingent, interactive, and iterative. It involves people individually and collectively engaging with research over time, bringing their own and their organization’s goals, motivations, routines, and political contexts with them” (p. 165). Tseng, V., and S. Nutley. 2014. Building the Infrastructure to Improve the Use and Usefulness of Research in Education. In Using Research Evidence in Education: From the Schoolhouse Door to Capitol Hill. Edited by K. S. Finnigan and A. J. Daly. Springer International Publishing. 163–175.↩︎
Doucet, Fabienne. 2021. Centering the Margins: (Re)Defining Useful Research Evidence through Critical Perspectives. William T. Grant Foundation. February 20. https://wtgrantfoundation.org/digest/centering-the-margins-redefining-useful-research-evidence-through-critical-perspectives/fabienne-doucet-2019-wtg-digest-2; DuMont, K. 2019. Reframing Evidence-Based Policy to Align with the Evidence. William T. Grant Foundation. http://wtgrantfoundation.org/digest/reframing-evidence-based-policy-to-align-with-the-evidence; Finnigan, K. S. 2023. The Political and Social Contexts of Research Evidence Use in Partnerships. Educational Policy; Tseng, V. 2021. Toward a New Agenda for Education Research. Phi Delta Kappan 102(5): 52–53; Tseng, V., and C. E. Coburn. 2019. Using Evidence in the U.S. In What Works Now: Evidence Informed Policy and Practice. Edited by A. Boaz, H. Davies, A. Fraser, and S. Nutley. Policy Press. 351–368; Weiss, C. 1977. Research for Policy’s Sake: The Enlightenment Function of Social Research. Policy Analysis 3(4): 531–545.↩︎
Asen, R., D. Gurke, P. Conners, R. Solomon, and E. Gumm. 2013. Research Evidence and School Board Deliberations: Lessons from Three Wisconsin School Districts. Educational Policy 27(1): 33–63; Asen, R., D. Gurke, R. Solomon, P. Conners, and E. Gumm. 2011. “The Research Says”: Definitions and Uses of a Key Policy Term in Federal Law and Local School-Board Deliberations. Argumentation and Advocacy 47: 195–213; Farley-Ripple, E. N. 2012. Research Use in Central Office Decision-Making: A Case Study. Educational Management Administration & Leadership 40(6): 784–804; Finnigan et al. (2021); Finnigan, K. S., A. J. Daly, and J. Che. 2013. Systemwide Reform in Districts under Pressure: The Role of Social Networks in Defining, Acquiring, and Diffusing Research Evidence. Journal of Educational Administration 51(4): 476–497.↩︎
Penuel, W. R., D. C. Briggs, K. L. Davidson, C. Herlihy, D. Sherer, H. C. Hill, C. Farrell, and A.-R. Allen. 2017. How School and District Leaders Access, Perceive, and Use Research. AERA Open. https://doi.org/10.1177/2332858417705370.↩︎
Blackman, H., H. May, E. Farley-Ripple, C. Farrell, and W. R. Penuel. 2018. Using Research at the Classroom, School, District, & State Levels: Results from the Knowledge Utilizations R&D Centers. In Presentation at the IES Annual Principal Investigators Meeting. Arlington, Virginia.↩︎
Finnigan, Daly, and Che (2013); Finnigan et al. (2021).↩︎
Haskins, R., and G. Margolis. 2014. Show Me the Evidence. Washington, DC: Brookings Institution Press; Massell, D., M. E. Goertz, and C. A. Barnes. 2012. State Education Agencies’ Acquisition and Use of Research Knowledge for School Improvement. Peabody Journal of Education 87(5): 609–626; McDonnell, L. M., and M. S. Weatherford. 2020. Evidence, Politics, and Education Policy. Harvard Education Press; McDonnell and Weatherford (2014).↩︎
Blackman et al. (2018).↩︎
Asen et al. (2013); Asen et al. (2011); Farley-Ripple (2012); Penuel et al. (2017).↩︎
See note 11.↩︎
Bogenschneider, K., E. Day, and E. Parrott. 2019. Revisiting Theory on Research Use: Turning to Policymakers for Fresh Insights. American Psychologist 74(7): 778–793. https://doi.org/10.1037/amp0000460.↩︎
McDonnell and Weatherford (2020); McDonnell and Weatherford (2014).↩︎
Horn, C. L., P. Marin, L. M. Garces, K. Miksch, and J. T. Yun. 2020. Shaping Educational Policy through the Courts: The Use of Social Science Research in Amicus Briefs in Fisher I. Educational Policy 34(3): 449–476. https://doi.org/10.1177/0895904818773902.↩︎
Tseng, V. 2012. The Uses of Research in Policy and Practice. SRCD Social Policy Report 26(2): 1–24; Tseng (2021).↩︎
Penuel et al. (2017).↩︎
Farley-Ripple, Oliver, and Boaz (2020).↩︎
Farley-Ripple (2012); Farley-Ripple, Oliver, and Boaz (2020); Gitomer, D. H., and K. Crouse. 2019. Studying the Use of Research Evidence: A Review of Methods. William T. Grant Foundation. http://wtgrantfoundation.org/library/uploads/2019/02/A-Review-of-Methods-FINAL003.pdf; Penuel et al. (2017).↩︎
Penuel et al. (2017).↩︎
Finnigan and Daly (2014).↩︎
Penuel et al. (2017).↩︎
Daly, A., and K. Finnigan. 2012. Exploring the Space Between: Social Networks, Trust, and Urban School District Leaders. Journal of School Leadership 22(3): 493–530.↩︎
Leal, Luengo-Aravena, Caduff, Finnigan, and Daly. Under review.↩︎
Asen et al. (2013); Asen et al. (2011).↩︎
Finnigan, Daly, and Che (2013); Penuel et al. (2017).↩︎
Hopkins, Megan, Kathryn E. Wiley, William R. Penuel, and Caitlin C. Farrell. 2018. Brokering Research in Science Education Policy Implementation: The Case of a Professional Association. Evidence & Policy 14(3): 459–476; Malin, Joel R., and Christopher Lubienski. 2015. Educational Expertise, Advocacy, and Media Influence. Education Policy Analysis Archives 23(6). https://doi.org/10.14507/epaa.v23.1706; Penuel et al. (2017); Reckhow and Tompkins-Stange (2018); Scott et al. (2015).↩︎
Daly, Alan J., Kara S. Finnigan, Stuart Jordan, Nienke M. Moolenaar, and Jing Che. 2014. Misalignment and Perverse Incentives. Educational Policy 28(2): 145–174. https://doi.org/10.1177/0895904813513149; Finnigan et al. (2021).↩︎
Neal, J. W., Z. P. Neal, M. Kornbluh, K. Mills, and J. Lawlor. 2015. Brokering the Research–Practice Gap: A Typology. American Journal of Community Psychology 56(3–4): 422–435.↩︎
Finnigan and Daly (2014); Finnigan et al. (2021).↩︎
DeBray, E., J. Scott, C. Lubienski, and H. Jabbar. 2014. Intermediary Organizations in Charter School Policy Coalitions: Evidence from New Orleans. Educational Policy 28(2): 175–206. https://doi.org/10.1177/0895904813514132; Scott, J., and H. Jabbar. 2014. The Hub and the Spokes: Foundations, Intermediary Organizations, Incentivist Reforms, and the Politics of Research Evidence. Educational Policy 28(2): 233–257. https://doi.org/10.1177/0895904813515327.↩︎
DeBray et al. (2014); Scott and Jabbar (2014); Scott et al. (2015).↩︎
Farrell, C. C., C. E. Coburn, and S. Chong. 2018. Under What Conditions Do School Districts Learn from External Partners? The Role of Absorptive Capacity. American Educational Research Journal 56: 955–994; Honig, M. I., N. Venkateswaran, and P. McNeil. 2017. Research Use as Learning: The Case of Fundamental Change in School District Central Offices. American Educational Research Journal 54(5): 938–971.↩︎
Center for Research Use in Education. University of Delaware. 2018. https://www.research4schools.org/wp-content/uploads/2018/09/Structures-for-Research-Use-in-Schools-18.09.pdf.↩︎
Coburn, C. E., J. P. Spillane, A. X. Bohannon, A.-R. Allen, R. Ceperich, A. Beneke, and L.-S. Wong. 2020. The Role of Organizational Routines in Research Use in Four Large Urban School Districts (Technical Report). National Center for Research in Policy and Practice.↩︎
Daly and Finnigan (2012); Finnigan and Daly (2014); Penuel et al. (2017).↩︎
Daly et al. (2014); Finnigan, Daly, and Che (2013).↩︎
Farley-Ripple et al. (2018); Penuel, W. R., C. C. Farrell, A. R. Allen, Y. Toyama, and C. E. Coburn. 2016. What Research District Leaders Find Useful. Educational Policy 32(4): 540–568; Wentworth et al. (2023); Yamashiro, Wentworth, and Kim (2022).↩︎
Finnigan (2023).↩︎
See Gitomer and Crouse (2019).↩︎
Ibid.↩︎
Aarons, Gregory A., Danielle L. Fettes, Michael S. Hurlburt, Lawrence A. Palinkas, Lara Gunderson, Cathleen E. Willging, and Mark J. Chaffin. 2014. Collaboration, Negotiation, and Coalescence for Interagency-Collaborative Teams to Scale-Up Evidence-Based Practice. Journal of Clinical Child & Adolescent Psychology 43(6): 915–928. doi:10.1080/15374416.2013.876642.↩︎
For more details, see the virtual panel discussion “Critical Race Perspectives on the Use of Research Evidence” hosted by the William T. Grant Foundation and including Vivian Tseng, Jamila Michener, Janelle Scott, and Fabienne Doucet. http://wtgrantfoundation.org/panel-discussion-critical-race-perspectives-on-the-use-of-research-evidence.↩︎
Young, Michelle D., and Sarah Diem. 2017. Critical Approaches to Education Policy Analysis: Moving beyond Tradition. Cham: Springer International Publishing; Finnigan, K. S., and H. Jabbar. 2023. Critical Networks in Critical Times. In Handbook on Critical Approaches to Education Research. Edited by M. D. Young and S. Diem. New York, NY: Routledge; Garcia, Nichole M., Nancy López, and Verónica N. Vélez. 2017. QuantCrit: Rectifying Quantitative Methods through Critical Race Theory. Race Ethnicity and Education 21(2): 149–157. doi:10.1080/13613324.2017.1377675.↩︎
For more details on how to apply SNA in education research, see Finnigan, K. S., D. Luengo-Aravena, and K. Garrison. 2018. Social Network Analysis Methods in Educational Policy Research. In Complementary Research Methods for Educational Leadership and Policy Studies. Edited by C. Lochmiller. New York, NY: Palgrave Macmillan. 231–252.↩︎
See Kingdon, J. W. 1984. Agendas, Alternatives, and Public Policies.↩︎
Finnigan, Kara (2025). "Research Use," in Live Handbook of Education Policy Research, in Douglas Harris (ed.), Association for Education Finance and Policy, viewed 04/13/2025, https://livehandbook.org/miscellaneous/research-use/.