Granted

Broader Impacts for AI Proposals: Going Beyond 'We'll Create Jobs'

February 25, 2026 · 6 min read

Jared Klein

Every NSF program officer I've spoken with says the same thing: AI proposals have the weakest broader impacts sections of any field they review. The intellectual merit is often dazzling -- novel architectures, clever training regimes, benchmark-beating results. Then the PI gets to broader impacts and writes three paragraphs about job creation, workforce development, and making code available on GitHub. No partners named. No timeline. No metrics. Just promises that could have been copy-pasted from any proposal in any discipline.

This matters more now than it did a year ago. NSF's updated merit review guidelines, rolled out across directorates in 2025, explicitly prioritize economic competitiveness, national security, and workforce development as broader impact categories. AI proposals should be natural fits for these priorities. Instead, most squander the opportunity with boilerplate that gives reviewers nothing concrete to score.

Browse current AI funding opportunities on our AI Grants page, and read on for what actually convinces panels that your broader impacts plan is real.

The Three Sins Reviewers See in Every AI Proposal Stack

Review panelists evaluate broader impacts independently from intellectual merit. A proposal with an Excellent rating on merit and a Good on impacts will lose to a proposal rated Excellent on both -- and panelists know this. The most common failures in AI submissions fall into predictable patterns.

Sin one: the phantom workforce pipeline. "This research will train the next generation of AI researchers" appears in thousands of proposals. It means nothing without specifics. How many students? At what level? What skills will they gain that they wouldn't get from coursework alone? A reviewer reading this has no basis to score it above Fair.

Sin two: the open-source alibi. Releasing code on GitHub is not a broader impact. It's a minimum expectation for reproducibility. Reviewers want to know who will use that code, how you'll support adoption, what documentation and community infrastructure you'll build, and how the release connects to a population that wouldn't otherwise have access. NSF's Pathways to Enable Secure Open-Source Ecosystems (PESOSE) program exists specifically because the foundation recognizes that dumping code in a public repository is not the same as building a sustainable ecosystem.

Sin three: the tacked-on outreach event. One Saturday workshop at a local school does not constitute a broader impacts plan. It constitutes a photo opportunity. Reviewers look for sustained engagement with defined partners, not isolated events that end when the PI's graduate student defends.

Community Partnerships That Reviewers Actually Score Highly

The strongest broader impacts sections in funded AI proposals share a structural feature: they name specific partners with signed commitment. Not "we will seek partnerships with local schools" but "we have a letter of collaboration from Springfield Unified School District's STEM coordinator, Dr. Maria Sandoval, committing to six classroom sessions per semester over two years."

NSF's $100 million investment in National AI Research Institutes in 2025 illustrates the scale of partnership the foundation expects. The Institute for Student AI-Teaming (iSAT), led by the University of Colorado, doesn't just talk about K-12 impact -- it deploys AI tools directly in classrooms and measures learning outcomes over multi-year cycles. The AI Research Institute on Interaction for AI Assistants (ARIA), led by Brown University, partners with mental health practitioners and civil society organizations, not just other computer science departments.

You don't need to be running a $20 million institute to borrow this model. What reviewers reward is the same principle at any scale: a named partner organization, a defined population being served, a specific activity with a timeline, and a mechanism to assess whether it worked. A CAREER proposal that partners with a regional community college's IT program to co-develop an applied ML module -- with enrollment targets, pre/post assessments, and a plan to share the curriculum through an existing network -- will outscore a proposal three times its budget that merely promises to "develop educational materials."

K-12 AI Literacy: The Funding Signal You Shouldn't Ignore

NSF is spending real money to push AI education into K-12 classrooms, and proposals that align with this push have a structural advantage in broader impacts scoring. The Expanding K-12 Resources for AI Education initiative offers supplemental awards up to $300,000 -- 20% of the original grant budget -- for projects that build K-12 AI education components. Separately, the EducateAI initiative targets preK-12 through undergraduate AI workforce preparation.

The signal here is unmistakable: NSF wants AI researchers connected to schools. A broader impacts section that proposes a structured K-12 engagement plan isn't just checking a box -- it's positioning the proposal for supplemental funding that makes the project more attractive to the program officer managing the portfolio.

What works in practice: a PI developing a computer vision system for agricultural monitoring partners with a rural school district to create a semester-long module where high school students use simplified versions of the model to classify crop health from drone imagery they collect themselves. Students learn about neural networks through direct application, the PI gets field-tested edge cases, and the school gains curriculum it can sustain after the grant period because the tools run on commodity hardware.

What doesn't work: "The PI will visit local schools to give talks about AI." Talks are not impacts. They are marketing.

Open-Source Ecosystems That Go Beyond the Repository

If your research produces software, reviewers will expect it to be open-source. That baseline earns you nothing. What earns you points is a plan for the ecosystem around the code.

Funded proposals distinguish themselves by specifying: documentation standards (API docs, tutorials, worked examples for domain scientists who aren't ML engineers), a maintenance commitment beyond the grant period, integration with existing community infrastructure (contributing to established frameworks rather than launching yet another standalone tool), and accessibility features that lower the barrier for institutions with limited compute resources.

NSF's emphasis on this is structural, not rhetorical. The PESOSE program (NSF 26-506) funds proposals specifically to build governance, security, and sustainability around open-source research software. If your broader impacts section describes a genuine open-source ecosystem strategy -- with user personas, adoption metrics, and community governance -- you're speaking the language that program officers in CISE want to hear.

One approach that works well for AI proposals: commit to releasing not just code but pre-trained models, synthetic training data, and containerized environments that let researchers at primarily undergraduate institutions reproduce your results without GPU clusters. That's a broader impact with a defined beneficiary, a measurable outcome, and a direct connection to NSF's goal of democratizing AI research capacity.
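One lightweight way to make that kind of release verifiable is to ship a checksum manifest alongside the artifacts, so a researcher at a primarily undergraduate institution can confirm they have byte-for-byte the weights and data the paper used before spending any compute. The sketch below is illustrative only, not an NSF requirement or any particular project's tooling; the artifact names (`model.pt`, `data.csv`) and the `MANIFEST.json` filename are assumptions.

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def write_manifest(release_dir: Path, artifacts: list) -> Path:
    """Record a checksum for every release artifact (weights, data,
    container image tarballs) so downstream users can confirm they
    have exactly the files the published results used."""
    manifest = {name: sha256_of(release_dir / name) for name in artifacts}
    out = release_dir / "MANIFEST.json"
    out.write_text(json.dumps(manifest, indent=2))
    return out


def verify_release(release_dir: Path) -> list:
    """Return the names of artifacts whose checksums no longer match
    the manifest (an empty list means the release is intact)."""
    manifest = json.loads((release_dir / "MANIFEST.json").read_text())
    return [name for name, digest in manifest.items()
            if sha256_of(release_dir / name) != digest]
```

A maintainer would run `write_manifest` once at release time; anyone downloading the artifacts runs `verify_release` before reproducing results. The point for a broader impacts section is not the ten lines of code but the commitment it represents: a defined verification step a named beneficiary can actually execute.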

Workforce Development That Means Something Specific

"This project will contribute to the AI workforce" is not a plan. Here's what a plan looks like.

The strongest workforce development sections in funded AI proposals specify the pipeline stage they're targeting (high school, undergraduate, graduate, postdoc, or mid-career reskilling), the number of participants, the skills those participants will gain, and how those skills map to documented workforce gaps. A proposal that will train four PhD students in reinforcement learning is fine but unremarkable. A proposal that will train four PhD students in reinforcement learning, place them in structured industry residencies with named partner companies, and create a graduate certificate curriculum that three partner universities have committed to piloting -- that's a broader impacts section that earns Excellent.

NSF's AI workforce development page lays out the foundation's priorities plainly: they want researchers who can work across disciplines, who understand the ethical and societal dimensions of the systems they build, and who come from institutions and backgrounds that are currently underrepresented in AI. Proposals that address these priorities with specifics -- not aspirations -- score highest.

The ExpandAI program, which funds AI capacity building at minority-serving institutions through partnerships with existing AI Institutes, is another concrete signal. If your proposal includes a meaningful collaboration with an MSI -- not a token subaward but a genuine co-PI arrangement with shared research goals -- you're addressing multiple broader impact categories simultaneously: workforce development, institutional capacity building, and broadening participation.

For researchers building AI proposals this cycle, the broader impacts section is not the place to relax after the hard work of the research plan. It's where you prove that your science connects to the world outside your lab, with evidence that a skeptical reviewer can verify. Granted can help you identify the right funding programs and build a proposal that treats broader impacts as seriously as the algorithms.
