
Advancements in artificial intelligence are shaping nearly every facet of society, including education. Over the past few years, especially with the availability of large language models like ChatGPT, there’s been an explosion of AI-powered edtech. Some of these tools genuinely help students; many do not. For educational leaders seeking to leverage the best of AI while mitigating its harms, it’s a lot to navigate.
That’s why the organization I lead, the Advanced Education Research and Development Fund, collaborated with the Alliance for Learning Innovation (ALI) and Education First to write Proof Before Hype: Using R&D for Coherent AI in K-12 Education. I sat down with my coauthors, Melissa Moritz, an ALI senior advisor, and Ila Deshmukh Towery, an Education First partner, to discuss how schools can adopt innovative, responsible, and effective AI tools.
Q: Melissa, what concerns you about the current wave of AI edtech tools, and what would you change to ensure these tools benefit students?
Melissa: Too often, AI-powered edtech is developed without grounding in research or educators’ input. The result is tools that may seem innovative but solve the wrong problems, lack evidence of effectiveness, ignore workflow realities, or exacerbate inequities.
What we need is a fundamental shift in education research and development so that educators are included in defining problems and developing classroom solutions from the start. Deep collaboration across educators, researchers, and product developers is critical. Let’s create infrastructure and incentives that make it easier for them to work together toward shared goals.
AI tool development must also prioritize learning science and evidence. Practitioners, researchers, and developers must continuously learn and iterate to give students the most effective tools for their needs and contexts.
Q: Ila, what is the AI x Coherence Academy and what did Education First learn about AI adoption from the K-12 leaders who participated in it?
Ila: The AI x Coherence Academy helps cross-functional school district teams do the work that makes AI useful: Define the problem, align with instructional goals, and then choose (or adapt) tools that fit system priorities. It’s a multi-district initiative that helps school systems integrate AI in ways that strengthen, rather than disrupt, core instructional priorities so that adoption isn’t a series of disconnected pilots.
We’re learning three things through this work. First, coherence beats novelty. Districts prefer customizable AI solutions that integrate with their existing tech infrastructure rather than one-off products. Second, use cases come before tools. A clear use case, one that articulates the problem and names the outcomes to track, quickly filters out the noise. Third, trust is a prerequisite. In a world increasingly skeptical of tech in schools, buy-in is more likely when educators, students, and community members help define the problem and shape how the technology helps solve it.
Leaders are telling us they want tools that reinforce the teaching and learning goals already underway, have clear use cases, and offer feedback loops for continuous improvement.
Q: Melissa and Ila, what types of guardrails need to be in place for the responsible and effective integration of AI in classrooms?
Ila: For AI to be a force for good in education, we need several guardrails. Let’s start with coherence and equity. For coherence, AI adoption must explicitly align with systemwide teaching and learning goals, data systems, and workflows. To minimize bias and accessibility issues, product developers should publish bias and accessibility checks, and school systems should track relevant data, such as whether tools support (versus disrupt) learning and development, and the tools’ efficacy and impact on academic achievement. These guardrails need to be co-designed with educators and families, not imposed by technologists or policymakers.
The districts making real progress through our AI x Coherence Academy are not AI maximalists. They are disciplined about how new tools connect to educational goals, in partnership with the people they hope will use them. In a low-trust environment, co-designed guardrails and definitions are the ones that will actually hold.
Melissa: We also need guardrails around safety, privacy, and evidence. School systems should promote safety and protect student data by informing families about the AI tools being used and giving them clear opt-out paths. As for product developers, building on Ila’s points, they need to be transparent about how their products leverage AI. Developers also have a responsibility to provide clear guidance on how their product should and shouldn’t be used, as well as to disclose evidence of the tool’s efficacy. And of course, state and district leaders and regulators should hold edtech providers accountable.
Q: Melissa and Ila, what gives you hope as we enter this rapidly changing AI age?
Melissa: Increasingly, we are starting to have the right conversations about AI and education. More leaders and funders are calling for evidence, and for a paradigm shift in how we think about teaching and learning in the AI age. Through my work at ALI, I’m hearing from federal policymakers, as well as state and district leaders, that there is a genuine desire for evidence-based AI tools that meet students’ and teachers’ needs. I’m hopeful that together, we’ll navigate this new landscape with a focus on AI innovations that are both responsible and effective.
Ila: What gives me hope is that district leaders are getting smarter about AI adoption. They’re recognizing that adding more tools isn’t the answer—coherence is. The districts making real progress aren’t the ones with the most AI pilots; they’re the ones who are disciplined about how new tools connect to their existing goals, systems, and relationships. They’re asking: Does this reinforce what we’re already trying to do well, or does it pull us in a new direction? And they’re bringing a range of voices into defining use cases and testing solutions to center, rather than erode, trust. That kind of strategic clarity is what we need right now. When AI adoption is coherent rather than chaotic, it can strengthen teaching and learning rather than fragment it.
Auditi Chakravarty is CEO of the Advanced Education Research and Development Fund.