What Grant Reviewers Actually Look for (And How to Find Out Before You Submit)
February 19, 2026 · 11 min read
Jared Klein
Most grant writers operate with a fundamental information asymmetry. They know what the NOFO says. They can read the review criteria. They understand the scoring rubric, at least in the abstract. But they do not know what reviewers actually prioritize when they sit down with a stack of 30 proposals and four hours to read them.
Reviewers have limited time and limited patience. They develop heuristics -- shortcuts for identifying strong proposals and weeding out weak ones. They gravitate toward specific signals that indicate whether a PI or organization can actually execute what they are promising. They notice patterns that applicants rarely think about: the gap between the aims and the methods, the budget line that contradicts the narrative, the broader impacts section that reads like it was written at 2 AM.
Understanding these signals is the difference between a proposal that gets funded and one that gets a polite rejection letter and a summary statement full of critiques you could have anticipated. What follows is a breakdown of what reviewers at the NIH, NSF, and major foundations actually focus on when they score proposals -- and practical strategies for identifying weaknesses before you submit.
What NIH Reviewers Prioritize
NIH peer review uses five scored criteria: Significance, Investigator(s), Innovation, Approach, and Environment. Every applicant knows these categories exist. Far fewer understand how reviewers actually weight them in practice.
Significance
Significance is not a test of whether your topic is important. Cancer is important. Alzheimer's is important. Reviewers know that. What they are looking for is whether you have made a compelling case for a specific gap in knowledge -- and whether filling that gap will change clinical practice, shift a research paradigm, or open new directions that other investigators can build on. A proposal that says "this disease affects millions of people" without articulating precisely what we do not yet understand, and why that specific gap matters now, will score poorly on significance regardless of the disease burden.
The strongest significance sections frame the problem as a bottleneck. They show reviewers that progress in a field is stalled because of a specific unanswered question, and that the proposed work will remove that obstacle. Reviewers respond to this framing because it makes the urgency concrete rather than abstract.
Investigator(s)
Reviewers assess whether the PI and the assembled team have the expertise and track record to execute the proposed work. For established investigators, this means publications in the relevant area, prior funded grants, and preliminary data that demonstrates technical capability. For early-stage investigators, reviewers look harder at the mentoring plan, institutional support, and whether the research environment compensates for a thinner CV.
Team composition matters more than many applicants realize. If the proposal involves computational modeling and the PI is a bench scientist with no computational collaborators, reviewers will flag it. If the project spans two institutions and there is no evidence that the collaborators have actually worked together before, that is a concern. Letters of support that read like templates -- generic enthusiasm without specific commitments of time, resources, or expertise -- do not help.
Innovation
NIH reviewers distinguish between incremental innovation and transformative innovation, but they also recognize a third category that applicants often overlook: innovation in application. Applying a well-established method from one field to a new problem in another field counts as innovation, provided you can articulate why that cross-pollination is likely to produce new insights. What does not count is simply claiming your work is innovative without explaining what specifically is new about it. Reviewers see this constantly and it reliably produces low innovation scores.
The key is specificity. Do not write "our approach is innovative because no one has studied this population." Instead, explain what about your method, framework, or conceptual model differs from prior work and why that difference matters for the outcomes you expect to observe.
Approach
This is where most proposals fail. Reviewers report that Approach is the criterion with the widest range of scores and the most substantive critiques. The reason is straightforward: the Approach section is where vagueness becomes impossible to hide.
Reviewers look for clear hypotheses tied to each specific aim, methods that are appropriate and sufficiently detailed for each experiment, power calculations and sample size justifications, explicit alternative approaches if primary methods fail, and a timeline that is realistic given the scope of work. A common failure mode is proposing three aims that each require a full R01's worth of effort, compressed into a single five-year project with no acknowledgment of the resource constraints.
Vague methods produce low scores. "We will use standard molecular biology techniques to assess protein expression" tells the reviewer nothing about whether you have thought carefully about the actual experimental design. "We will quantify protein X expression in tissue samples from our biobank (n=150) using quantitative Western blotting with validated antibodies, with three biological replicates per condition and densitometric analysis normalized to beta-actin loading controls" tells the reviewer you know what you are doing.
Environment
The Environment criterion asks whether the institutional setting provides the resources, infrastructure, and intellectual community needed for the project to succeed. Reviewers look for access to core facilities, relevant patient populations or data sources, institutional commitment (startup packages, protected time), and a track record of successful projects in the same domain.
A weak Environment score signals execution risk -- the reviewer worries that even a good investigator with a good plan may struggle to deliver because the institution cannot support the work. This is particularly important for early-stage investigators at smaller institutions. If you are at a primarily undergraduate institution proposing work that requires a BSL-3 facility, you need a clear explanation of how you will access that facility, not just a vague reference to "regional collaborations."
What NSF Reviewers Prioritize
NSF uses two primary review criteria: Intellectual Merit and Broader Impacts. Unlike NIH's five-criterion system, NSF panels weigh these two categories with roughly equal importance -- and reviewers who serve on NSF panels consistently report that proposals that treat Broader Impacts as an afterthought receive lower overall scores, even when the Intellectual Merit is strong.
Intellectual Merit
NSF reviewers assess the potential of the proposed work to advance knowledge within and across fields. They want to see that you understand the current state of the field -- not a textbook summary, but a precise articulation of where the frontier is and what the open questions are. They then want to see how your proposed work pushes that frontier forward in a meaningful way.
The most effective Intellectual Merit sections do not just describe what the PI will do. They explain what the field will be able to do afterward that it cannot do now. Reviewers are looking for the delta -- the difference between the current state of knowledge and the state of knowledge after your project is complete. If you cannot articulate that delta clearly and specifically, the Intellectual Merit score will reflect it.
Broader Impacts
This is the criterion that most frequently separates competitive proposals from funded ones. Broader Impacts is not a box to check. NSF reviewers evaluate the quality and feasibility of your broader impacts plan with the same rigor they apply to the science. A single sentence about mentoring undergraduates does not constitute a plan.
Strong Broader Impacts sections identify a specific audience, describe concrete activities, explain how those activities connect to the research, and include a plan for assessing whether the activities achieved their goals. Partnerships with schools, museums, community organizations, or industry -- if they are real and specific rather than hypothetical -- strengthen this section significantly. Reviewers can tell the difference between a partnership that exists on paper and one where you have already had planning conversations with a named contact at a named organization.
Feasibility
NSF reviewers are also assessing a question that does not appear as a formal criterion but pervades the review: can this PI, at this institution, with these resources, actually execute this plan in the proposed timeframe? Proposals that are overly ambitious relative to the budget, timeline, or team capacity get flagged. Reviewers have seen enough projects to know what is realistic for a three-year, $300,000 award versus a five-year, $1.5 million award. If your proposal reads like the latter but requests the former, expect skepticism.
What Foundation Reviewers Prioritize
Foundation review operates under a different logic than federal review. Understanding that logic is essential if you are applying to private funders.
Mission Alignment
Mission alignment is not a formality -- it is the primary filter. Foundations fund what they care about, and what they care about is defined by their strategic priorities, their board's interests, and their theory of change. A proposal can be scientifically excellent and still receive no funding because it does not align with what the foundation is trying to accomplish. Before you write a single word, study the foundation's recent grantmaking. Read their annual reports. Look at who they have funded and for what. If your project requires the foundation to stretch its mission to see the connection, you are probably applying to the wrong funder.
Sustainability and Organizational Capacity
Foundations think about what happens after their money runs out. They want to see a credible plan for sustaining the work -- whether through other funding sources, institutional adoption, revenue generation, or policy change. A proposal that depends entirely on continued foundation support for its long-term viability is a red flag.
Foundations also evaluate organizational capacity with a directness that federal reviewers rarely match. They want to know whether your organization has the financial health, governance structures, and management expertise to handle the grant effectively. For community-based organizations applying to larger foundations, this means demonstrating not just programmatic expertise but also administrative capability -- financial controls, board oversight, and staff capacity. Community engagement matters deeply here, especially for foundations focused on equity and justice. Reviewers want to see evidence that the people affected by the problem are involved in designing and implementing the solution, not just receiving services.
The Patterns Across All Reviewers
Regardless of whether a proposal goes to NIH, NSF, or the Gates Foundation, certain red flags consistently produce low scores.
Budget-narrative mismatch. If your narrative describes a complex multi-site intervention but your budget shows one postdoc and a half-time coordinator, reviewers will question whether you have thought seriously about what the project actually requires. The budget should be a financial translation of the narrative -- every major activity in the research plan should have a corresponding budget line.
Missing letters of support. If you claim a partnership with a hospital system, a school district, or a community organization, reviewers expect a letter from that partner confirming the collaboration and specifying their contribution. Generic letters that could apply to any project do not count. Letters that specify what the partner will provide -- data access, patient recruitment, classroom time, in-kind staff support -- are evidence. Letters that express enthusiasm without commitment are decoration.
Vague evaluation plans. Reviewers see this constantly: a proposal with ambitious goals and no credible plan for measuring whether those goals were achieved. Stating that you will "track outcomes and report results" is not an evaluation plan. Specify what you will measure, how, when, and what constitutes success. If your evaluation section is shorter than a single page, it is probably too thin.
Internal inconsistencies. Reviewers read proposals linearly but evaluate them holistically. If your specific aims mention four objectives but your methods section only addresses three, that gets noticed. If your timeline shows data collection ending in Month 24 but your analysis plan requires 30 months of data, that gets noticed. If your budget requests a biostatistician but your methods section describes the PI doing all the analysis, that gets noticed.
Formatting violations. It seems trivial, but exceeding page limits, using incorrect margins, or ignoring formatting requirements signals carelessness. Reviewers interpret it as a preview of how you will manage the grant.
The "so what" problem. The single most common weakness across all funders is the failure to explain, clearly and compellingly, why anyone should care about this work. Technical excellence is necessary but not sufficient. Reviewers need to understand the stakes.
How to Identify Weaknesses Before Submission
Knowing what reviewers look for is only half the equation. The other half is finding the weaknesses in your own proposal before reviewers do.
Fresh Eyes and Scoring Checklists
The most effective pre-submission strategy is simple in concept and difficult in execution: have someone unfamiliar with your project read the proposal and tell you what they do not understand. Not a collaborator. Not your co-PI. Someone who does not already know what you mean -- because reviewers do not already know what you mean, either. Ask them to identify every place where they had to guess at your reasoning or re-read a passage to understand it.
Pair this with the funder's actual scoring criteria. Print out the NIH review criteria or the NSF merit review principles and score your own proposal on each dimension. Be honest. If you give yourself a 3 on Approach, you have work to do -- and it is better to discover that now than in the summary statement.
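For writers who prefer to make the self-scoring step concrete, the rubric can be kept as a small checklist script. This is a hypothetical sketch for personal use, not any funder's tooling; the criteria names mirror NIH's five scored criteria, and the 1-9 scale (1 = exceptional, 9 = poor) and the cutoff of 3 are assumptions you should adjust to your own calibration.

```python
# Hypothetical self-scoring aid using the NIH 1-9 scale (1 = exceptional, 9 = poor).
# The scores are your own honest inputs; the script only surfaces weak spots.
NIH_CRITERIA = ["Significance", "Investigator(s)", "Innovation", "Approach", "Environment"]


def flag_weaknesses(scores: dict, threshold: int = 3) -> list:
    """Return criteria whose self-assigned score is worse (higher) than threshold."""
    missing = [c for c in NIH_CRITERIA if c not in scores]
    if missing:
        raise ValueError(f"Unscored criteria: {missing}")
    return [c for c in NIH_CRITERIA if scores[c] > threshold]


if __name__ == "__main__":
    my_scores = {
        "Significance": 2,
        "Investigator(s)": 3,
        "Innovation": 4,
        "Approach": 5,
        "Environment": 2,
    }
    for criterion in flag_weaknesses(my_scores):
        print(f"Needs revision before submission: {criterion}")
```

The value is not in the code but in the discipline it enforces: every criterion gets a number before you submit, and anything you cannot honestly score well becomes a revision task rather than a surprise in the summary statement.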
Calibration Through Funded Examples
Reading funded proposals is one of the most underused preparation strategies. NIH RePORTER lets you search funded grants by keyword, institute, and mechanism -- and many records include the funded abstract and specific aims. NSF's award database provides similar calibration through its published award abstracts. Reading 10 funded proposals in your area gives you a benchmark for scope, ambition level, and writing quality. You start to see what a fundable proposal actually looks like, as opposed to what you imagine it looks like.
The Time Problem
Here is the reality most grant writers face: the strategies above work, but they require time. Finding a qualified reviewer outside your project takes days. Scheduling their review takes weeks. Their feedback often arrives too close to the deadline to meaningfully act on. The best proposals go through multiple rounds of review and revision, but many applicants are working against a deadline that does not accommodate that process.
This is where structured review tools earn their value. Granted's AI Committee Review provides independent, multi-perspective feedback in roughly 15 minutes -- six AI reviewers assess your proposal against the funder's actual criteria and produce consensus-ranked findings that tell you what a review panel would flag. It is not a replacement for human review, but it identifies the obvious weaknesses before you invest a colleague's time or a consultant's fee, and it does it on a timeline that works even when the deadline is next week.
Closing
The best grant writers do not just write well. They anticipate what reviewers will criticize and address it proactively in the proposal itself. They read their own work through the eyes of a skeptical reviewer with limited time and high standards. Understanding what reviewers actually look for -- across NIH, NSF, and foundation panels -- is the first step toward writing proposals that survive scrutiny rather than succumb to it.
Keep Reading
- Common Mistakes in NIH Proposals
- What NIH Reviewers Wish You Knew
- NSF Broader Impacts: Examples That Work
- Grant Evaluation Plan: Writing Measurable Outcomes Funders Trust
Ready to find and win your next grant? Granted AI searches 85,000+ opportunities, analyzes your RFP, coaches you through each section, and runs AI committee review before you submit. Start free -- no credit card required.
