Why Grants Get Rejected: 12 Common Reasons and How to Fix Them
March 20, 2026 · 15 min read
Claire Cummings
The Uncomfortable Truth About Grant Rejection
Most grant proposals are rejected. NIH funds approximately 20% of R01 applications. NSF funds 25% to 30% depending on the directorate. Competitive foundation programs fund 5% to 15% of submissions. If you have been rejected, you are in the statistical majority, and a rejection does not mean your project lacks merit.
What rejection usually means is that your proposal — the written document — failed to convince reviewers. The distinction matters. Reviewers do not evaluate your idea in the abstract. They evaluate how you presented it: the clarity of the problem statement, the logic of the methodology, the alignment with funder priorities, the credibility of the budget, and the quality of the writing. Brilliant projects wrapped in weak proposals lose to competent projects presented with precision.
After reviewing hundreds of proposals across federal study sections and foundation review panels, I have identified twelve rejection reasons that appear consistently. Each one is fixable. This guide explains what goes wrong, shows you what reviewer critiques actually look like, and provides specific remedies for each problem.
1. Misalignment With Funder Priorities
This is the most common and most preventable reason for rejection. The applicant finds a funding opportunity, sees a dollar amount that works, and retrofits their project to fit the funder's language. Reviewers see through this immediately. The proposal reads like a square peg forced into a round hole — the technical work does not genuinely address the funder's stated priorities, and the connection feels strained.
What reviewers write: "While the proposed work has scientific merit, it does not address the priorities outlined in the funding opportunity announcement. The connection between the applicant's approach and the program's emphasis on community-based implementation is unclear."
How to fix it. Before writing a single word, extract the funder's stated priorities from the solicitation, program website, strategic plan, and any published review criteria. Map each of your project's objectives to a specific funder priority. If you cannot draw a direct, honest line from your work to what the funder has said it wants to fund, the opportunity is not the right fit. Move on. Applying to a misaligned program wastes your time and the reviewers' time.
For federal grants, read the Funding Opportunity Announcement (FOA) word by word. Pay particular attention to the "Program Description" and "Review Criteria" sections — they tell you exactly what the agency values. For foundations, review the most recent 990-PF filings and published grant lists to understand what they actually fund, not just what their mission statement says.
2. Weak or Missing Specific Aims / Problem Statement
For NIH proposals, the Specific Aims page is the most consequential page in the application. For foundation proposals, the opening problem statement serves the same function. If this section fails, the rest of the proposal is dead on arrival — reviewers form their first impression here, and that impression colors their reading of everything that follows.
Common failures include aims that are too vague ("understand the mechanisms underlying disease X"), too numerous (four or five aims in a two-year project), or interdependent (Aim 3 depends on Aim 2 succeeding, which depends on Aim 1). Equally damaging is a problem statement that describes a topic rather than a specific gap — "diabetes is a growing problem" does not tell a reviewer what you are going to do about it.
What reviewers write: "The specific aims are overly broad and do not articulate testable hypotheses. Aim 2 is contingent on results from Aim 1, creating risk to the overall project if initial experiments are unsuccessful."
How to fix it. Each aim should be independently achievable — if Aim 1 fails, Aims 2 and 3 should still produce meaningful results. Frame each aim around a specific, testable hypothesis or a concrete deliverable. The aims page must answer four questions in one page: What is the problem? Why does it matter? What will you do? What will be different when you succeed? If a reader cannot answer all four after reading your aims page, rewrite it.
3. Budget That Does Not Match Scope
Reviewers evaluate budgets for credibility, not just cost. A $250,000 budget for a project that clearly requires $500,000 raises feasibility concerns — the reviewer wonders what corners you plan to cut. A $500,000 budget for work that could be done for $200,000 raises efficiency concerns — the reviewer wonders if you are padding the request.
The most common variant is a personnel budget that does not match the proposed work plan. If your narrative describes a postdoc running experiments full-time for two years but your budget shows that postdoc at 25% effort, the proposal contradicts itself. Reviewers catch these inconsistencies.
What reviewers write: "The budget appears insufficient for the proposed scope of work. Three full-time personnel are described in the research plan, but the budget supports only 1.5 FTEs. It is unclear how the remaining work will be accomplished."
How to fix it. Build your budget from the work plan, not the other way around. List every task, estimate the hours required, assign personnel, and calculate the cost. Then compare the total to the funder's budget range. If the honest budget exceeds the available funding, reduce the scope — do not simply cut budget lines while leaving the narrative unchanged. Every dollar in your budget should be traceable to a specific activity in your work plan, and every activity in your work plan should have corresponding budget support.
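The build-up described above can be sketched as a simple calculation. Everything below — task names, hours, rates, and the budget cap — is a hypothetical illustration, not guidance on actual figures:

```python
# Illustrative sketch: derive a personnel budget from a work plan.
# All tasks, hours, and rates here are hypothetical examples.

tasks = [
    {"task": "Participant recruitment", "person": "Coordinator", "hours": 400,  "rate": 35.00},
    {"task": "Data collection",         "person": "Postdoc",     "hours": 1800, "rate": 32.00},
    {"task": "Statistical analysis",    "person": "PI",          "hours": 200,  "rate": 75.00},
]

budget_cap = 120_000  # the funder's stated maximum (hypothetical)

# Every dollar traces to a specific task in the work plan.
total = sum(t["hours"] * t["rate"] for t in tasks)
print(f"Personnel total: ${total:,.2f}")

# If the honest total exceeds the cap, cut scope (remove tasks),
# not just budget lines — the narrative and budget must stay in sync.
if total > budget_cap:
    print("Over cap: reduce scope in both the narrative and the budget.")
```

The point of building the spreadsheet this way is that a reviewer can audit it in the same direction you built it: from activity to cost.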
4. No Preliminary Data or Evidence of Feasibility
For R01 applications, reviewers expect to see substantial preliminary data demonstrating that your methods work, your model system behaves as expected, and your approach is technically feasible. For foundation proposals, reviewers expect evidence that your organization has delivered similar programs successfully or that your team has the relevant experience and infrastructure.
"Trust me, it will work" is not a feasibility argument. Neither is citing someone else's published results without demonstrating that you can replicate them in your own context.
What reviewers write: "The applicant provides no preliminary data supporting the proposed methodology. While the cited literature suggests the approach is theoretically sound, there is no evidence that the applicant's laboratory can execute the proposed experiments."
How to fix it. For research proposals, include pilot data — even if the sample size is small. A figure showing that your assay detects the target, that your model produces the expected phenotype, or that your recruitment strategy yields participants is worth more than pages of literature review. For program proposals, include outcome data from prior work: evaluation reports, participant counts, follow-up statistics, or testimonials with measurable results. If you genuinely have no preliminary data, consider applying for an R21 exploratory grant or a foundation pilot grant to generate it.
5. Unclear Methodology
Reviewers need enough methodological detail to evaluate whether your approach can answer the questions you are asking. Proposals that describe methods at a conceptual level — "we will use statistical analysis to evaluate outcomes" or "interviews will be conducted with stakeholders" — do not provide enough information for this evaluation.
This problem is especially acute in interdisciplinary proposals where reviewers may not share your methodological background. Methods that seem obvious to you may be unfamiliar to a reviewer from an adjacent field.
What reviewers write: "The experimental design lacks sufficient detail. Sample size justification is absent, the analytical plan is vague, and the timeline for data collection appears unrealistic given the described recruitment strategy."
How to fix it. For every method you propose, specify the procedure, the sample or data source, the sample size with a power analysis or rationale, the controls, the analytical approach, and the criteria for success or failure. Name the specific statistical tests you will use. Describe your qualitative coding framework if applicable. Include a timeline that shows when each methodological step occurs. If space permits, add a figure showing the experimental design or data flow. A reviewer who can trace the path from your research question through your methods to your expected results will score Approach favorably.
6. Missing or Weak Evaluation Plan
Federal agencies and foundations increasingly require robust evaluation plans that go beyond counting outputs (number of workshops held, number of patients enrolled). They want to know how you will measure outcomes (behavior change, clinical improvement, policy adoption) and how you will determine whether your intervention caused those outcomes.
A proposal that describes activities without explaining how you will know if they worked suggests that the applicant has not thought carefully about impact.
What reviewers write: "The evaluation plan consists entirely of process measures. There is no plan for measuring participant outcomes, no comparison or control group, and no discussion of how the applicant will attribute observed changes to the proposed intervention."
How to fix it. Structure your evaluation around a logic model or theory of change that links inputs to activities to outputs to outcomes. For each outcome, specify the indicator, the data source, the collection method, the timeline, and the benchmark for success. If a randomized controlled design is not feasible, describe your quasi-experimental approach — pre-post comparison, matched controls, interrupted time series, or dose-response analysis. For qualitative evaluation, describe your sampling strategy, interview protocols, and analytical framework. Consider including an external evaluator if the budget allows — reviewers view independent evaluation favorably.
7. Eligibility Issues
Getting rejected for eligibility is the most frustrating outcome because it is entirely avoidable. Common eligibility errors include applying as the wrong organization type (for-profit applying to a nonprofits-only program), not meeting geographic requirements (applying to a state program from outside the state), exceeding revenue or employee thresholds for small business programs, or failing to register in SAM.gov before a federal submission deadline.
What reviewers write: Nothing — the application is administratively withdrawn before review. You receive a form letter stating the application did not meet eligibility requirements.
How to fix it. Read the eligibility section of the funding opportunity announcement before anything else. Check every criterion: organization type, tax status, geographic location, organizational budget size, employee count, prior grant history, registration requirements, and any special conditions. If anything is ambiguous, contact the program officer and get a definitive answer before investing time in the application. For federal grants, verify that your SAM.gov registration is active (registrations expire annually) and that your Unique Entity Identifier (UEI) is current.
8. Late or Incomplete Submission
Federal agencies enforce deadlines with no exceptions. Deadlines close at a precise minute — many Grants.gov opportunities close at 11:59 PM Eastern on the due date, while NIH uses 5:00 PM local time of the applicant organization — and a submission that arrives one minute late is rejected. Equally damaging is a submission that arrives on time but is missing required components — a biosketch, a facilities description, a data management plan, a letter of support, or a required certification.
What reviewers write: Again, nothing — the application is administratively rejected. For Grants.gov submissions, you receive an automated rejection notice identifying the error.
How to fix it. Submit 48 to 72 hours before the deadline. Grants.gov processing can take 24 to 48 hours, and if the system identifies errors, you need time to correct and resubmit. Create a checklist of every required document from the FOA's "Application and Submission Information" section. Have a colleague verify the checklist against your submission package before you upload. For foundation submissions with email or portal-based deadlines, submit at least 24 hours early and confirm receipt.
9. Poor Writing Quality and Jargon Overload
Grant reviewers are experienced professionals, but they are not necessarily experts in your specific subfield. NIH study sections assemble researchers from across a discipline. NSF panels include specialists from adjacent areas. Foundation review committees may include practitioners, policymakers, and community representatives alongside researchers.
Proposals written for a narrow specialist audience — dense with acronyms, subfield jargon, and assumed knowledge — alienate reviewers who cannot follow the argument. This is not a matter of dumbing down the science. It is a matter of communicating clearly enough that a knowledgeable professional outside your exact niche can evaluate your proposal.
What reviewers write: "The proposal is written for a highly specialized audience and does not provide sufficient context for reviewers outside the applicant's immediate subfield. Key terminology is undefined, and the rationale for the chosen methodology is difficult to follow."
How to fix it. Define every acronym on first use. Explain technical concepts in one sentence before using them as building blocks for your argument. Use headers, subheaders, and white space to guide the reader. Bold key sentences that carry your main arguments. Read your proposal aloud — if a sentence requires a second read to parse, rewrite it. Most importantly, have someone outside your immediate field read the proposal and tell you what they understood. If they cannot summarize your project in three sentences, the writing is not doing its job.
10. No Sustainability Plan
Funders want to know what happens when the grant ends. A project that depends entirely on continued grant funding — with no plan for revenue generation, institutional absorption, or alternative funding — raises concerns about long-term impact. Why invest in a program that disappears the moment funding stops?
This is particularly critical for foundation grants and federal community-based programs. Funders are investing in change, not in temporary activities.
What reviewers write: "There is no discussion of how the program will be sustained after the grant period. The applicant appears to assume continued external funding without identifying specific sources or describing how the program could be maintained through institutional support or revenue."
How to fix it. Describe a realistic sustainability path. Options include institutional absorption (the university or organization commits to funding the position or program after the grant), fee-for-service revenue (the program generates income from clients or partner organizations), follow-on funding (you have identified specific grants or funders for subsequent support), or integration into existing infrastructure (the grant funds a pilot that becomes part of standard operations). Do not promise sustainability through vague references to "diversified funding." Identify specific mechanisms, and if possible, secure preliminary commitments — a letter from your organization's leadership committing to continued support is far more convincing than a sentence in the narrative.
11. Ignoring Reviewer Feedback on Resubmission
When a federal agency permits resubmission — NIH allows one A1 resubmission, NSF allows revised proposals for many programs — the reviewer critiques from the first round are a roadmap. Ignoring them is the single most damaging mistake you can make in a resubmission, because the same reviewers (or their colleagues who read the summary statement) will notice.
The opposite error is equally common: resubmissions that argue with the reviewers rather than addressing their concerns. Even when you believe a reviewer misunderstood your proposal, the productive response is to acknowledge the misunderstanding and make the relevant section clearer — not to explain why the reviewer was wrong.
What reviewers write: "The resubmission introduction indicates that the applicant disagrees with the prior critique regarding sample size adequacy but has not modified the experimental design. The original concern remains unaddressed."
How to fix it. Structure your resubmission introduction around each critique from the summary statement. For each concern: summarize the critique in one sentence, describe how you addressed it (with page references to the revised proposal), and explain the reasoning behind your changes. If you disagree with a critique, frame your response as clarification rather than rebuttal: "We have revised Section X to more clearly explain our rationale for..." not "The reviewer incorrectly stated that..." Revised proposals that demonstrate responsiveness to feedback consistently outscore those that do not.
12. Not Reading the RFP Carefully Enough
This is the meta-mistake that enables many of the others. The Request for Proposals (or Funding Opportunity Announcement, Notice of Funding Opportunity, or Program Solicitation — the terminology varies by funder) contains every piece of information you need to write a competitive application: the funder's priorities, eligibility requirements, format specifications, page limits, required sections, review criteria and their relative weights, submission procedures, and deadlines.
Applicants who skim the RFP miss critical details. They submit 12-page research plans when the limit is 10. They omit the required data management plan. They use a font size that violates formatting requirements. They address three of five review criteria and ignore the other two. Every one of these errors is self-inflicted, and every one reduces the reviewer's confidence in the applicant's attention to detail.
What reviewers write: "The proposal exceeds the stated page limit for the research plan. The required evaluation framework described in Section IV.B of the solicitation is not addressed."
How to fix it. Read the entire solicitation before you start writing. Read it again after you finish your first draft, using it as a checklist. Create a compliance matrix: list every requirement from the solicitation in one column, and note where your proposal addresses it in the second column. If any cell in the second column is empty, you have a gap. Share the compliance matrix with a colleague who has not read your proposal and ask them to verify that each requirement is genuinely addressed — not just mentioned — in the referenced section.
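The compliance matrix above is just a two-column mapping, and the gap check is mechanical. A minimal sketch, with hypothetical requirement names standing in for whatever your solicitation actually lists:

```python
# Illustrative compliance matrix: each solicitation requirement mapped to
# the proposal section that addresses it. Requirement names are hypothetical;
# None marks a requirement not yet addressed anywhere.

matrix = {
    "10-page research plan limit":      "Research Plan (pp. 1-10)",
    "Data management plan":             "Appendix B",
    "Evaluation framework (Sec. IV.B)": None,
    "Letters of support":               "Appendix C",
}

# Any empty cell in the second column is a gap to close before submission.
gaps = [req for req, location in matrix.items() if location is None]
for req in gaps:
    print(f"Unaddressed requirement: {req}")
```

A spreadsheet serves the same purpose; the only rule is that no requirement row may be left blank when you submit.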
What to Do After Rejection
Rejection is not the end. It is information.
Request and Study Reviewer Feedback
For federal grants, summary statements with detailed reviewer critiques are typically available four to eight weeks after the review meeting. For foundations, some provide written feedback and some do not — if feedback is not automatically provided, ask for it. Program officers are often willing to share general observations about why a proposal was not competitive.
Determine Whether to Resubmit or Redirect
If the critiques are primarily about presentation — unclear writing, insufficient detail, budget inconsistencies — a revised proposal to the same program is likely worth the effort. If the critiques suggest fundamental misalignment with the funder's priorities, or if they question the core premise of your approach, consider whether a different funder or a redesigned project is the better path.
Build What Was Missing
If reviewers identified specific gaps — no preliminary data, no evaluation plan, no sustainability strategy, insufficient partnerships — invest the time to fill those gaps before resubmitting. A resubmission that addresses critiques superficially (adding a paragraph about sustainability without any new commitments) will not change the outcome. A resubmission that demonstrates genuine progress (new pilot data, a signed MOU with a partner organization, a letter of support from institutional leadership) tells reviewers that you took their feedback seriously.
Use Rejection Strategically
Track your rejection reasons across multiple submissions. If the same critique appears repeatedly — weak methodology, unclear significance, budget issues — that is a systemic problem in your proposal-writing process, not bad luck. Address the pattern, not just the individual instance.
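Tracking the pattern can be as simple as tallying critique themes across reviews. A sketch with hypothetical submissions and themes:

```python
from collections import Counter

# Illustrative sketch: tally critique themes across past reviews to surface
# systemic patterns. Submission names and themes below are hypothetical.

critiques_by_submission = {
    "2024 R01 A0":      ["methodology detail", "sample size", "significance"],
    "2024 R01 A1":      ["sample size", "budget justification"],
    "2025 foundation":  ["sample size", "sustainability"],
}

counts = Counter(
    theme
    for critiques in critiques_by_submission.values()
    for theme in critiques
)

# A theme that recurs across submissions is a process problem, not bad luck.
for theme, n in counts.most_common():
    if n > 1:
        print(f"Recurring critique ({n}x): {theme}")
```

In this hypothetical tally, "sample size" surfaces in all three reviews — the kind of systemic signal worth fixing at the process level rather than submission by submission.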
Frequently Asked Questions
How do I know if my proposal was rejected because of the idea or the writing?
Read the reviewer feedback carefully. Critiques about "clarity," "detail," "justification," "feasibility," and "organization" are writing and presentation problems — the reviewers could not evaluate the idea because the proposal did not communicate it effectively. Critiques about "significance," "innovation," "impact," and "alignment" suggest the idea itself did not resonate with the panel. Most rejected proposals receive both types of critique, but writing issues almost always appear first because they affect how reviewers perceive everything else.
Is it worth contacting the program officer after a rejection?
Yes, almost always. Federal program officers can explain the review context, clarify confusing critiques, and advise you on whether a resubmission is appropriate for their program. Foundation program officers can often provide candid feedback about fit and competitiveness that is more specific than written reviewer comments. Be professional, concise, and genuinely interested in their guidance — do not argue with the decision or ask them to reconsider. The goal is to gather information that strengthens your next proposal.
How many times should I resubmit before moving on?
For NIH, you get one formal resubmission (A1). If the A1 is not funded, you can submit the concept as a new application (new A0) — effectively unlimited resubmissions, though each new A0 must include substantive revisions. For other federal agencies and foundations, policies vary. The practical answer is: resubmit as long as the reviewer feedback suggests the proposal is getting closer to fundable. If your scores or feedback are not improving between submissions, the problem may be fundamental alignment rather than incremental revision.
Do grant writing consultants improve success rates?
A skilled consultant who understands your funder, your field, and the review process can significantly improve a proposal's competitiveness — particularly for first-time applicants who are unfamiliar with review culture and proposal conventions. However, consultants cannot fix a fundamentally weak project or substitute for the applicant's domain expertise. The best use of a consultant is to review a near-final draft with fresh eyes, identify the gaps that reviewers will flag, and help you revise the framing and structure. Hiring a consultant to write the entire proposal from scratch rarely produces a competitive result because the proposal lacks the specificity and conviction that comes from deep familiarity with the work.
What is the single most impactful thing I can do to improve my next proposal?
Have someone outside your field read it and tell you what they understood. Not what they thought of it — what they understood. If they cannot articulate your central problem, your approach, and your expected outcome in two to three sentences, the writing is not doing its job. This single exercise catches most of the communication failures that reviewers penalize. Do this with enough time to make substantive revisions, not the night before the deadline.
Every rejected proposal contains information about what to do differently next time — Granted helps you apply those lessons by finding aligned opportunities and building stronger applications from the start.