Who Is Funding Research on Whether AI Actually Helps Students? Spencer Foundation Bets Big on Equity-First Answers.

March 24, 2026 · 7 min read

David Almeida

There is no shortage of money flowing toward artificial intelligence in education. The Department of Education's FIPSE program committed $169 million to AI-powered learning tools. Stephen Schwarzman's $48 billion foundation has made AI-driven education one of its core pillars. Technology companies are deploying AI tutoring products into classrooms faster than school districts can evaluate them.

What there is a shortage of is money flowing toward research that asks whether any of this works — and for whom. The Spencer Foundation, a Chicago-based philanthropy that has funded education research for more than 60 years, is making a deliberate bet that the answer matters more than the technology. Its new Initiative on AI and Education, launched after a Spring 2024 convening of researchers, practitioners, and technologists, channels dedicated funding through multiple grant programs toward a specific question: how does AI interact with educational equity? (Granted News)

The initiative is not the largest pot of money in the AI-education space. But it may be the most consequential, because it is funding the research that will determine whether the much larger investments produce equitable results or deepen existing divides.

What Spencer Is Actually Funding

The initiative does not operate as a standalone grant program. Instead, Spencer designated additional AI-focused funds within two existing programs — the Vision Grant Program and the Racial Equity Program — while also welcoming AI-related proposals across its full portfolio of research grants.

This structural choice matters. Rather than creating a separate silo for "AI research," Spencer embedded the funding within programs that already have established review criteria, community networks, and equity expectations. Researchers applying to the Racial Equity Program with an AI-focused proposal are evaluated against the same standards as all racial equity research. The AI component is treated as a context, not a category.

The four research priority areas Spencer identified reflect this integrated approach:

AI and Learning covers the development and evaluation of culturally relevant AI tools aligned with learning science — across PK-12, higher education, and lifelong learning contexts. The emphasis is on tools that work within existing pedagogical frameworks rather than replacing them, and on research that measures learning outcomes across different student populations rather than in aggregate.

AI Policy funds analysis of how local, state, federal, and international governments are implementing AI policies in education. This includes procurement practices — how school districts decide which AI products to buy — and regulatory frameworks that may or may not protect student data, algorithmic transparency, and teacher autonomy.

AI Ethics and Justice examines fairness, bias, data sovereignty, privacy, representation, and impacts on underserved communities. This is where the equity lens is most explicit: Spencer wants research on whether AI systems trained on data from predominantly white, English-speaking student populations produce equitable results when deployed in diverse classrooms.

AI's Impact on Educational Research addresses a meta-question that most funders ignore: how is AI changing the practice of education research itself? This includes methodological implications of using AI tools in data analysis, standards for responsible AI use in research, and the epistemic questions raised when AI-generated content becomes part of the educational environment being studied.

The Funding Mechanisms

For researchers, the practical question is how to access this funding. Spencer operates several grant programs with different scales and timelines:

Large Research Grants on Education support projects with budgets from $125,000 to $500,000 over one to five years. In 2026, Spencer moved to a single annual cycle for large grants while increasing the number of proposals it funds. The full proposal deadline is July 7, 2026. AI-focused proposals are welcome and evaluated alongside the full portfolio.

Small Research Grants fund projects up to $50,000 for one to two years. These are well-suited for pilot studies, exploratory research, and projects testing methodological approaches before scaling to larger investigations. Small grants operate on a rolling basis with multiple deadlines per year.

Vision Grants support bold, forward-looking projects that may not fit traditional research categories. Spencer expects to award approximately 10 Vision Grants per cycle, plus additional grants specifically focused on AI and education. The Vision program is designed for proposals that cross disciplinary boundaries or challenge conventional approaches — exactly the kind of work that the AI-education intersection demands.

Research-Practice Partnership Grants fund collaborations between researchers and educational organizations (school districts, community colleges, nonprofit education providers) working on problems of practice. AI-focused partnerships — where a researcher collaborates with a school district implementing AI tools, for instance — fit naturally within this program.

Where Spencer Sits in the Broader Landscape

The AI-education research funding landscape in 2026 is fragmented across agencies and foundations with different priorities, review criteria, and expectations.

NSF funds AI-education research primarily through its Directorate for STEM Education, including programs on AI and learning sciences, the Future of Computational Research (CoRe), and Secure and Trustworthy Cyberspace Education (SaTC-EDU). NSF's emphasis is on foundational research — the kind of work that advances scientific understanding of learning processes and computational systems. NSF grants tend to be larger than Spencer's, often reaching into the millions of dollars, and require the full apparatus of NSF proposal writing, including broader impacts and facilities sections.

FIPSE (Fund for the Improvement of Postsecondary Education) at the Department of Education committed $169 million to AI-powered educational tools, but with an implementation focus rather than a research focus. FIPSE is funding the deployment of AI in classrooms, not the evaluation of whether that deployment achieves equitable outcomes. The gap between FIPSE's implementation funding and Spencer's research funding is precisely where the evidence base needs to be built.

IES (Institute of Education Sciences) funds rigorous evaluation research through its National Center for Education Research, including studies of educational technology. IES grants carry the strictest methodological requirements in the education research landscape — randomized controlled trials, quasi-experimental designs, and carefully specified outcome measures. Researchers with strong quantitative designs and existing partnerships with schools should consider IES alongside Spencer.

Private foundations beyond Spencer are also active, though with varying levels of commitment to equity-centered research. The Gates Foundation funds education technology research with a focus on low-income students and students of color. The Walton Family Foundation supports school choice and technology-enabled personalization. The Chan Zuckerberg Initiative funds personalized learning research with an engineering orientation. Each brings different assumptions about what constitutes good education and good technology.

Why the Equity Lens Is Not Optional

The urgency behind Spencer's initiative is grounded in early evidence about how AI tools perform across different student populations. Natural language processing systems trained on standard American English produce lower accuracy rates for students who speak African American Vernacular English, regional dialects, or English as a second language. Recommendation algorithms trained on historical achievement data can reinforce existing tracking patterns, steering students from disadvantaged backgrounds toward less rigorous courses. Automated writing feedback tools calibrated to mainstream rhetorical conventions may penalize culturally specific communication styles.

These are not hypothetical risks. They are documented patterns in deployed systems. The research Spencer is funding asks whether these patterns can be identified early, mitigated through design changes, and monitored through ongoing evaluation. Without this research, the billions of dollars flowing into AI-education deployment will produce tools optimized for the students who need the least help and potentially harmful to those who need it most.
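Detecting these patterns starts with a simple methodological habit the article keeps returning to: report outcomes per student subgroup, not in aggregate. The sketch below is a minimal, illustrative example of that disaggregated evaluation — the data, group labels, and numbers are invented for illustration, not drawn from any real system.

```python
# Minimal sketch of disaggregated evaluation: rather than one aggregate
# accuracy number, compute accuracy per student subgroup so that gaps
# like those described above become visible. All data is illustrative.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (subgroup, predicted, actual) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical feedback-tool outputs for two dialect groups.
records = [
    ("SAE", 1, 1), ("SAE", 1, 1), ("SAE", 0, 0), ("SAE", 1, 0),
    ("AAVE", 1, 0), ("AAVE", 0, 1), ("AAVE", 1, 1), ("AAVE", 0, 0),
]
print(subgroup_accuracy(records))  # {'SAE': 0.75, 'AAVE': 0.5}
```

An aggregate accuracy here (0.625) would hide the 25-point gap between groups — which is exactly the kind of finding the research Spencer funds is designed to surface.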

The timing is critical. School districts are making procurement decisions now. State education agencies are writing AI policies now. Federal funding is shaping which tools get built and deployed now. The research funded today will either inform those decisions or arrive too late to matter.

How to Position for This Funding

Lead with the equity question, not the technology. Spencer reviewers are education researchers, not computer scientists. A proposal that begins with a research question about educational equity and uses AI as the context will resonate more than one that begins with an AI system and then looks for educational applications.

Partner with schools and communities. The Research-Practice Partnership program exists specifically for this. But even in the Large and Small grant programs, proposals that demonstrate community engagement — co-design with teachers, input from parents, partnerships with community organizations serving underserved populations — will be evaluated more favorably than those that treat schools as data collection sites.

Address the methodological challenges directly. AI-education research raises genuine methodological questions: how do you measure the impact of an adaptive system that provides different experiences to different students? How do you control for the technology's evolution during a multi-year study? How do you ensure that student data collected for research purposes does not become training data for commercial products? Spencer expects applicants to wrestle with these questions, not avoid them.

Think about the policy implications. Spencer's AI Policy priority area signals that the foundation wants research that speaks to decision-makers, not just other researchers. Proposals that connect findings to actionable policy recommendations — procurement guidelines, data privacy standards, teacher preparation requirements — will have an advantage in a landscape where Spencer is explicitly trying to bridge the research-practice divide.

The window for shaping how AI enters American education is open but narrowing. Every month that passes without rigorous, equity-centered research is a month in which deployment outpaces understanding. Spencer Foundation's initiative is not the only funding source, but it is the one most explicitly designed to ensure that the question of who benefits is not an afterthought. Researchers positioned to ask that question should be preparing proposals now — and platforms like Granted can help identify complementary funding across NSF, IES, and other agencies to build a research portfolio that matches the scale of the problem.
