Inside the SBIR Review Panel: How Proposals Are Actually Scored

March 4, 2026 · 5 min read

David Almeida

Reviewers do not read your proposal the way you wrote it. They are not starting at page one and following your narrative arc to a satisfying conclusion. They are scanning for specific information, under time pressure, with a scoring rubric open on their second monitor. Understanding this reality — what actually happens inside an SBIR review panel — is the single most useful thing you can do before writing a proposal.

The process is not mysterious. It is structured, predictable, and documented. But most applicants have never seen it from the other side.

How the Review Process Works

Each proposal is assigned to two or three primary reviewers — subject-matter experts recruited for their domain knowledge, not agency staff. Before the panel meeting, each reviewer writes an independent critique and assigns preliminary scores. During the panel meeting, primaries present their critiques to the full panel for discussion.

After discussion, all panel members who read the proposal assign final scores. The agency uses the aggregate scores — along with portfolio balance and programmatic priorities — to make funding decisions.

The critical point: your proposal must work for a reader who is evaluating it against a rubric, not for a reader who is casually interested in your technology. Every section of your proposal should make the reviewer's job easier by directly addressing the criteria they are scoring. For a broader view of how funding rates differ across agencies, the SBIR success rate data provides useful context for setting expectations.

NIH Scoring: Five Criteria, One Number

NIH uses the most formalized scoring system in the SBIR ecosystem. Each proposal is evaluated on five criteria, each scored from 1 (exceptional) to 9 (poor):

Significance. Does the project address an important problem? Will it improve scientific knowledge, technical capability, or clinical practice? Reviewers want to see a clear articulation of the unmet need and quantified impact.

Investigator(s). Does the team have the expertise and track record to execute the project? For SBIR, this includes both the PI's technical credentials and the company's ability to manage a federal award. Preliminary data and prior SBIR experience are weighted heavily here.

Innovation. Does the project employ novel concepts, approaches, or technologies? Reviewers distinguish between incremental improvements and genuine innovation. "We do what others do, but faster" is incremental. "We use a fundamentally different mechanism that enables capabilities no existing approach can achieve" is innovative.

Approach. Is the research strategy well-designed and feasible? Are potential problems identified and alternative strategies considered? This is where reviewers scrutinize your work plan against your budget and timeline. Overambitious scope relative to Phase I funding is the most common weakness.

Environment. Does the company have the facilities, equipment, and resources to complete the work? For early-stage companies, this often means demonstrating access to shared facilities, incubator space, or partner labs.

The five criterion scores feed into an Overall Impact Score, which ranges from 10 (best) to 90 (worst). Funding decisions are based on percentile rankings within each NIH institute. Typical paylines fall between the 15th and 25th percentile, meaning only the top 15-25% of scored proposals receive awards. The SBIR Complete Application Guide maps each of these criteria to the specific proposal sections where they are evaluated.
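The arithmetic behind the Overall Impact Score is simple: each panel member assigns an overall impact score from 1 to 9, and the final score is the mean of those scores multiplied by 10, rounded to a whole number. A minimal sketch:

```python
def overall_impact(panel_scores):
    """NIH-style overall impact score: the mean of panel members'
    1-9 impact scores, multiplied by 10 and rounded.
    Lower is better (10 = best possible, 90 = worst)."""
    return round(sum(panel_scores) / len(panel_scores) * 10)

# A proposal scored 2, 3, 2, 3, 3 by five panel members:
print(overall_impact([2, 3, 2, 3, 3]))  # 26
```

Note that a single outlier reviewer can move the final score meaningfully on a small panel, which is one reason the pre-meeting discussion of divergent critiques matters.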

DoD Scoring: Technical Merit Dominates

Department of Defense SBIR reviews operate differently. Technical panels are composed of military and civilian evaluators with operational domain expertise. The scoring criteria typically weight three areas:

Technical merit receives the most weight — usually 40-50% of the total score. DoD reviewers are especially attuned to feasibility: can this actually be built and tested within Phase I?

Qualifications of key personnel account for roughly 25-30%. Panels look for demonstrated experience with similar technologies and prior SBIR/STTR performance.

Commercialization potential makes up the remaining 20-30%, including both military transition potential and commercial dual-use applications. A letter from a DoD program manager is the single strongest piece of evidence for this criterion.

Unlike NIH, DoD does not use a standardized 1-9 scale. Scoring rubrics vary by agency and topic — read the solicitation carefully.
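Because DoD rubrics vary by agency and topic, any specific weights are illustrative only. As a sketch, here is how a weighted total works, using hypothetical weights drawn from the ranges above:

```python
# Hypothetical weights within the ranges described above; actual DoD
# rubrics differ by agency and topic -- always check the solicitation.
WEIGHTS = {
    "technical_merit": 0.45,
    "key_personnel": 0.30,
    "commercialization": 0.25,
}  # weights sum to 1.0

def weighted_total(ratings):
    """Combine per-criterion ratings (0-100 scale) into one weighted score."""
    return sum(WEIGHTS[criterion] * ratings[criterion] for criterion in WEIGHTS)

score = weighted_total({
    "technical_merit": 90,
    "key_personnel": 70,
    "commercialization": 60,
})
print(score)  # 76.5
```

The practical takeaway: under a technical-merit-dominant weighting, a strong technical score can carry a middling commercialization story, but the reverse is rarely true.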

NSF: The Two-Panel System

NSF SBIR/STTR proposals go through two separate reviews. A technical panel evaluates the scientific and engineering merit. A separate commercialization review — often conducted by a different set of evaluators with business and industry backgrounds — assesses the market opportunity, business model, and team's ability to execute commercially.

This dual-panel structure means that a proposal can score well on technical merit and still be declined if the commercialization case is weak. NSF is explicit about this: they fund technologies that will become products, not technologies that will become papers. The commercialization review looks for evidence of customer discovery, a realistic go-to-market strategy, and a team that includes business expertise alongside technical depth.

NSF also weighs broader impacts — how the technology will benefit society beyond the immediate commercial application. Proposals that articulate workforce development, environmental benefits, or impact on underserved communities score higher on this dimension.

The SBIR/STTR program page includes links to current NSF solicitations with their specific evaluation criteria and review processes.

What Separates Funded Proposals From Rejections

After reviewing hundreds of proposals, patterns emerge in what distinguishes top-scoring submissions from the rest.

Specificity over ambition. Funded proposals define a narrow, achievable scope with concrete milestones. Rejected proposals promise transformative results without a credible path to achieve them in Phase I. A reviewer who sees "we will develop a complete prototype, conduct three clinical trials, and secure FDA clearance" in a Phase I proposal knows the team does not understand the program.

Feasibility framing. The strongest proposals acknowledge technical risks explicitly and present mitigation strategies. This is counterintuitive — many applicants think admitting risk makes them look weak. The opposite is true. A team that identifies the three most likely failure modes and has a plan for each demonstrates maturity that reviewers reward.

Preliminary data. Not every agency requires it, but proposals with preliminary data almost always score higher. Even limited feasibility data — a proof-of-concept experiment, a computational model, a benchtop prototype — gives reviewers concrete evidence that the approach works.

Budget-plan alignment. Reviewers cross-reference your work plan against your budget. If your budget allocates 60% to personnel but your work plan is equipment-intensive, that disconnect will be flagged.

Writing clarity. Reviewers read dozens of proposals in a sitting. Short paragraphs, active voice, and clear topic sentences make it easier for a reviewer to find the information they need to score favorably.


The proposals that score best are the ones that respect the reviewer's time and make the scoring decision obvious — and that is a skill that Granted is built to help you develop, from structuring your aims to aligning your budget with your technical plan.