
Grant Evaluation Plan: Writing Measurable Outcomes Funders Trust

August 30, 2025 · 11 min read

Tomas Kowalski


The evaluation plan is where many grant proposals reveal a fundamental gap between ambition and accountability. Applicants describe transformative programs -- reducing youth violence, improving literacy outcomes, restoring wetland ecosystems, training healthcare workers in underserved communities -- and then propose to evaluate those programs with a paragraph about "collecting data" and "measuring success." Funders have learned to read evaluation plans as a proxy for organizational seriousness. A vague evaluation section signals that you have not thought carefully about whether your program will actually work.

This guide covers the mechanics of writing evaluation plans that funders trust: how to design measurable outcomes, align them with a logic model, select appropriate data collection methods, build an evaluation timeline, choose an external evaluator, and budget for the work. The examples span education, health, environmental, and community development programs, because evaluation principles are universal even when the specifics differ.

Why Evaluation Plans Matter to Funders

Funders care about evaluation for three interconnected reasons.

Accountability. Public and philanthropic dollars carry an obligation to demonstrate impact. Federal agencies are required by the Government Performance and Results Act (GPRA) to report on the effectiveness of funded programs. Private foundations face pressure from their boards and the public to show that grants produce results. Your evaluation plan tells the funder how you will hold yourself accountable.

Learning. Good evaluation generates knowledge about what works, what does not, and why. Funders want to invest in programs that produce generalizable insights, not just local outcomes. An evaluation plan that includes qualitative methods -- understanding the mechanisms behind outcomes, not just whether outcomes occurred -- demonstrates that you view evaluation as a learning tool, not just a compliance exercise.

Sustainability. Programs with strong evaluation data are easier to sustain beyond the grant period. If you can demonstrate measurable outcomes, you have evidence to support future funding applications, legislative support, and organizational investment. Funders think about this. They prefer to invest in programs that will generate the evidence needed to continue.

Formative vs. Summative Evaluation

Every evaluation plan should include both formative and summative components. These serve different purposes and use different methods.

Formative Evaluation

Formative evaluation happens during program implementation. Its purpose is to monitor progress, identify problems early, and make real-time adjustments. Think of it as the feedback loop that keeps your program on track.

Formative evaluation activities include tracking attendance and participation against targets, monitoring fidelity to the program model, collecting participant feedback through brief surveys or check-ins, and holding regular staff debriefs to surface implementation problems early.

Formative evaluation tells you whether the program is being implemented in a way that gives it a fair chance of achieving its goals. Funders value it because it demonstrates adaptiveness.

Summative Evaluation

Summative evaluation happens at the end of the program (or at defined intervals) and assesses whether the program achieved its intended outcomes. This is the "did it work?" question.

Summative evaluation activities include pre/post comparisons of outcome measures, analysis against comparison groups or established benchmarks, assessment of whether outcome targets were met, and a final evaluation report that synthesizes findings.

Strong evaluation plans give approximately equal attention to both formative and summative components. Plans that focus only on summative outcomes miss the operational intelligence that formative evaluation provides. Plans that focus only on formative process measures fail to demonstrate ultimate impact.

Writing SMART Outcomes

Measurable outcomes are the foundation of any credible evaluation plan. The SMART framework -- Specific, Measurable, Achievable, Relevant, Time-bound -- is widely used and for good reason. It forces precision.

How SMART Outcomes Work in Practice

Weak outcome: "Participants will improve their health literacy."

SMART outcome: "By Month 18, 70% of program participants (n=200) will demonstrate a statistically significant improvement in health literacy as measured by a validated pre/post assessment using the Health Literacy Questionnaire (HLQ), with an effect size of at least 0.3."

The SMART version tells the funder exactly what you are measuring, how, when, for how many people, and what threshold constitutes success. It is evaluable. The weak version is not.
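To make that evaluability concrete, here is a minimal sketch of how an evaluator might check the example outcome against paired pre/post scores. It is illustrative only: the simulated data, the variable names, and the assumption that an individual "improvement" means any positive score change are not part of the outcome statement itself.

```python
"""Minimal sketch: checking the example SMART outcome against paired
pre/post Health Literacy Questionnaire (HLQ) scores.

Assumptions (illustrative, not from the article): scores are two NumPy
arrays of equal length, one value per participant, and an individual
"improvement" means any positive change in score.
"""
import numpy as np
from scipy import stats

def summarize_pre_post(pre: np.ndarray, post: np.ndarray) -> dict:
    diff = post - pre
    # Paired t-test addresses the group-level "statistically significant" clause.
    result = stats.ttest_rel(post, pre)
    # Cohen's d for paired samples: mean change / SD of the change scores.
    effect_size = diff.mean() / diff.std(ddof=1)
    # Share of participants whose HLQ score improved at all.
    pct_improved = (diff > 0).mean()
    return {
        "n": len(pre),
        "p_value": result.pvalue,
        "effect_size_d": effect_size,
        "pct_improved": pct_improved,
        "meets_outcome": (result.pvalue < 0.05
                          and effect_size >= 0.3
                          and pct_improved >= 0.70),
    }

# Simulated data purely to show the call pattern.
rng = np.random.default_rng(42)
pre_scores = rng.normal(3.0, 0.5, size=200)
post_scores = pre_scores + rng.normal(0.2, 0.4, size=200)
print(summarize_pre_post(pre_scores, post_scores))
```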

Outcome Hierarchies: Short-Term, Intermediate, Long-Term

Most programs produce outcomes at multiple time horizons. A workforce development program might produce short-term knowledge gains, intermediate behavior changes, and long-term economic outcomes. Your evaluation plan should identify outcomes at each level.

Example for a community health worker training program:

Short-term outcomes (by Month 6): Trained community health workers demonstrate increased knowledge of chronic disease management and patient outreach protocols, measured by pre/post training assessments.

Intermediate outcomes (by Month 12): Community health workers conduct regular home visits and connect patients to primary care and social services, documented through visit logs and referral records.

Long-term outcomes (by Month 24): Patients served by trained community health workers show improved chronic disease management and fewer avoidable emergency department visits, measured through partner clinic records.

This hierarchy shows the funder that you understand how change happens -- it is sequential, building from knowledge to behavior to system-level outcomes.

Logic Model Alignment

A logic model is a visual representation of how your program is expected to work. It maps the relationships between resources (inputs), activities, outputs, and outcomes. Your evaluation plan should map directly onto your logic model.

Logic Model Components

Inputs: The resources you invest in the program -- staff, funding, facilities, partnerships, materials.

Activities: What you do with those resources -- training sessions, outreach events, clinical services, environmental restoration work.

Outputs: The direct products of your activities -- number of people trained, number of events held, acres restored, patients served. Outputs are countable and controllable.

Short-term outcomes: The immediate changes resulting from your outputs -- increased knowledge, changed attitudes, new skills.

Intermediate outcomes: Behavioral or systemic changes that result from the short-term outcomes -- changed practices, new policies adopted, sustained behavioral changes.

Long-term outcomes (impact): The ultimate changes your program contributes to -- improved health, reduced poverty, restored ecosystems, stronger communities.

Mapping Evaluation to the Logic Model

For each element in your logic model, your evaluation plan should specify the indicator you will measure, the data source, the data collection method, the timing and frequency of collection, and who is responsible for collecting and analyzing the data.

If your logic model includes an output of "500 youth complete the after-school STEM program," your evaluation plan should specify how completion is defined, how attendance is tracked, and what the minimum attendance threshold is for "completion."

If your logic model includes an intermediate outcome of "participants apply computational thinking skills in academic coursework," your evaluation plan should specify how that application is measured -- teacher surveys, analysis of student work products, classroom observations, or some combination.

The logic model and the evaluation plan should be perfectly aligned. Every outcome in the logic model should appear in the evaluation plan, and every measure in the evaluation plan should connect to the logic model. If there is a mismatch, either the logic model or the evaluation plan needs revision.
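One lightweight way to enforce that alignment is to keep the mapping in a structured form and check it programmatically. The sketch below is purely illustrative: the element names, data sources, and the 75% attendance threshold are hypothetical stand-ins, not requirements from any funder.

```python
"""Illustrative sketch: keeping the logic model and evaluation plan in one
structure so mismatches are caught early. All names and thresholds are
hypothetical examples."""

logic_model_outcomes = {
    "output_stem_completion",
    "intermediate_computational_thinking",
    "long_term_stem_course_enrollment",
}

evaluation_plan = {
    "output_stem_completion": {
        "indicator": "youth attending >= 75% of sessions (completion threshold)",
        "data_source": "program attendance logs",
        "timing": "tracked weekly, reported quarterly",
    },
    "intermediate_computational_thinking": {
        "indicator": "application of computational thinking in coursework",
        "data_source": "teacher surveys + analysis of student work products",
        "timing": "end of each semester",
    },
    "long_term_stem_course_enrollment": {
        "indicator": "enrollment in elective STEM courses",
        "data_source": "school district administrative records",
        "timing": "annually",
    },
}

# Every outcome in the logic model should appear in the evaluation plan,
# and every measure in the plan should trace back to the logic model.
missing_measures = logic_model_outcomes - evaluation_plan.keys()
orphan_measures = evaluation_plan.keys() - logic_model_outcomes
if missing_measures or orphan_measures:
    raise ValueError(f"Misalignment: {missing_measures=} {orphan_measures=}")
print("Logic model and evaluation plan are aligned.")
```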

Data Collection Methods

The credibility of your evaluation depends on the quality of your data. Your evaluation plan should describe specific data collection methods for each outcome measure.

Quantitative Methods

Surveys and assessments. Use validated instruments when they exist. If you are measuring depression, use the PHQ-9. If you are measuring self-efficacy, use a validated self-efficacy scale for your domain. If no validated instrument exists, describe how you will develop and pilot-test a custom instrument. Funders are skeptical of custom measures that have not been tested for reliability and validity.

Administrative data. Program records, attendance logs, enrollment databases, clinical records, school records, and government databases. Administrative data is valuable because it is collected regardless of the evaluation and therefore does not introduce response burden. Specify which administrative data systems you will access and any data-sharing agreements required.

Pre/post testing. Measuring the same outcome before and after the intervention. Simple and intuitive, but limited by the absence of a comparison group. Pre/post designs cannot distinguish program effects from maturation, historical events, or regression to the mean. Use them for short-term outcomes where these threats are minimal.

Comparison group designs. If feasible, compare outcomes between program participants and a similar group that did not participate. Randomized controlled trials are the gold standard but are often impractical in community-based programs. Quasi-experimental designs using matched comparison groups or regression discontinuity are rigorous alternatives that funders respect.
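For teams sketching a matched-comparison design, the core mechanics of propensity score matching are compact. The example below is a simplified illustration (nearest-neighbor matching on a logistic-regression propensity score); a real evaluation would add calipers, balance diagnostics, and sensitivity analyses, and the column names here are assumptions.

```python
"""Simplified sketch of propensity score matching for a quasi-experimental
design. Column names are illustrative assumptions; a full analysis would
include calipers, balance checks, and sensitivity analyses."""
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_and_compare(df: pd.DataFrame, covariates: list[str],
                      treatment_col: str, outcome_col: str) -> float:
    # 1. Estimate each unit's probability of being in the program.
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df[treatment_col])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    treated = df[df[treatment_col] == 1]
    control = df[df[treatment_col] == 0]

    # 2. Match each treated unit to the control with the closest propensity score.
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched_controls = control.iloc[idx.ravel()]

    # 3. Estimated effect: mean outcome difference across matched pairs.
    return treated[outcome_col].mean() - matched_controls[outcome_col].mean()

# Usage with hypothetical columns:
# effect = match_and_compare(df, ["age", "baseline_score"], "in_program", "post_score")
```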

Qualitative Methods

Quantitative methods tell you what changed. Qualitative methods tell you why and how. The key qualitative approaches for evaluation plans are semi-structured interviews with participants and staff, focus groups, direct observation of program activities, document review, and case studies of individual participants or sites.

Mixed Methods

The strongest evaluation plans combine quantitative and qualitative approaches so each compensates for the other's limitations. Specify how you will integrate findings -- sequentially (quantitative first, then qualitative to explain), concurrently (collect both simultaneously and triangulate), or embedded (qualitative within a primarily quantitative design).

Evaluation Timeline

Your evaluation plan should include a timeline showing when each data collection activity occurs relative to program implementation. For a three-year program, the typical structure includes: instrument finalization and IRB approval in Months 1-2, baseline data collection in Month 3, continuous formative monitoring throughout with semi-annual reports, interim outcome assessment at Month 15, second-round qualitative data collection at Month 20, final quantitative and qualitative data collection at Months 30-32, and the final evaluation report at Month 36. Build this timeline into your proposal's Gantt chart so reviewers see that evaluation is integrated with program delivery, not appended to it.
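If it helps to draft the schedule before building the Gantt chart, the milestones can be kept as simple data and rendered as a rough text chart. The month values below simply restate the three-year schedule described above; nothing else is implied.

```python
"""Rough sketch: evaluation milestones as data, rendered as a text Gantt.
Month values restate the three-year schedule described above."""

milestones = [
    ("Instrument finalization & IRB approval", 1, 2),
    ("Baseline data collection", 3, 3),
    ("Formative monitoring (semi-annual reports)", 1, 36),
    ("Interim outcome assessment", 15, 15),
    ("Second-round qualitative data collection", 20, 20),
    ("Final quantitative & qualitative data collection", 30, 32),
    ("Final evaluation report", 36, 36),
]

TOTAL_MONTHS = 36
for name, start, end in milestones:
    # One character per month: '#' while the activity is underway, '.' otherwise.
    bar = "".join("#" if start <= m <= end else "." for m in range(1, TOTAL_MONTHS + 1))
    print(f"{name:<50} {bar}")
```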

External Evaluator Selection

Many funders require or strongly prefer an external evaluator -- someone outside your organization who brings objectivity and methodological expertise. Even when not required, external evaluation strengthens your proposal's credibility.

Look for evaluators with methodological expertise matching your design, substantive knowledge of your program area, experience with your funder, and cultural competence relevant to your target population. Include a letter of commitment from your proposed evaluator describing their qualifications, responsibilities, time commitment, and independence from program implementation. The evaluator should contribute to writing the evaluation section -- their expertise strengthens the methodology, and their name on the application signals credibility to reviewers.

Evaluation Budgets

A credible evaluation requires real resources. The general guideline is that evaluation should constitute 5-10% of the total project budget, though this varies by funder expectations and program complexity.

Budget Components

Key line items include: external evaluator compensation ($100-250/hour or fixed fee; $25,000-50,000 for a $500,000 project), data collection costs (survey platforms at $500-2,000/year, transcription at $1-3/minute, participant incentives at $25-75 per session), data management and storage, analysis software, reporting, and IRB fees ($500-2,500).
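As a rough worked example of how those line items add up against the 5-10% guideline, the figures below are illustrative picks from the ranges above for a hypothetical $500,000, three-year project, not a recommended budget.

```python
"""Illustrative arithmetic: do the evaluation line items land within the
5-10% guideline? Figures are example picks from the ranges above for a
hypothetical $500,000, three-year project."""

total_project_budget = 500_000

evaluation_line_items = {
    "external evaluator (fixed fee)": 30_000,
    "survey platform (3 yrs x $1,000)": 3_000,
    "transcription (600 min x $2/min)": 1_200,
    "participant incentives (100 x $50)": 5_000,
    "data management & analysis software": 2_000,
    "IRB fees": 1_500,
    "reporting & dissemination": 2_000,
}

evaluation_total = sum(evaluation_line_items.values())
share = evaluation_total / total_project_budget
print(f"Evaluation budget: ${evaluation_total:,} ({share:.1%} of total)")
assert 0.05 <= share <= 0.10, "Outside the typical 5-10% guideline"
```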

Sample Evaluation Budgets

As a reference: a $300,000 education program typically allocates $28,000-30,000 for evaluation (roughly 10%), covering the external evaluator ($6,000/year), survey platforms, transcription, participant incentives, and reporting. A $750,000 community health program might budget $75,000 (10%), with higher costs for clinical data management, statistical analysis, and compliance. Environmental projects often run higher (11-12%) due to monitoring equipment and laboratory analysis costs.

Examples by Program Type

Education Program Example

Program: After-school STEM enrichment for middle school students in underserved urban schools.

SMART Outcomes: Short-term: statistically significant improvement in STEM attitudes, measured by a validated pre/post survey, by the end of each program year. Intermediate: increased enrollment in elective STEM courses relative to matched comparison students by Month 24. Long-term: increase in participants reporting intent to pursue STEM coursework or careers, measured by annual intention surveys, by Month 36.

Data Collection: Pre/post STEM attitudes survey (validated instrument), course enrollment records from school district, annual student intention surveys, quarterly attendance tracking, semi-annual teacher interviews (n=12), two participant focus groups per year (n=8-10 per group).

Evaluation Design: Quasi-experimental with matched comparison group drawn from similar schools not receiving the program. Propensity score matching on demographics, prior academic performance, and school characteristics.

Health Program Example

Program: Diabetes self-management education for rural adults with Type 2 diabetes.

SMART Outcomes: Short-term: improved diabetes self-management behaviors (medication adherence, glucose monitoring, diet) by Month 6, documented in participant logs and quarterly surveys. Intermediate: clinically meaningful reduction in HbA1c from baseline by Month 12. Long-term: reduction in diabetes-related emergency department visits relative to each participant's pre-enrollment baseline by Month 24.

Data Collection: Clinical records (HbA1c, blood pressure, BMI at baseline, 6, 12, and 24 months), emergency department visit records from partnering health system, participant self-management logs, quarterly participant surveys on self-management behaviors, annual interviews with 15 participants and 5 healthcare providers.

Evaluation Design: Single-group pre/post with historical comparison. Each participant serves as their own control using 12 months of pre-enrollment health records.

Environmental Program Example

Program: Coastal wetland restoration to reduce flooding in two vulnerable communities.

SMART Outcomes: Short-term: 15 acres restored with 60% native vegetation cover by Year 1. Intermediate: 25% increase in water retention capacity by Year 3. Long-term: measurable reduction in flood event duration by Year 4.

Data Collection: Vegetation surveys (quarterly transects), water level monitoring (continuous automated gauges), soil infiltration testing (biannual), NOAA precipitation records, community flood impact surveys (annual), stormwater manager interviews (biannual).

Evaluation Design: Before/after with comparison sites (restored vs. similar degraded wetlands), controlling for precipitation and tidal influence.
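The analytic core of that before/after-with-comparison-sites design is a difference-in-differences calculation. A bare-bones version looks like the sketch below; the water-retention figures are hypothetical, and a full model would also include the precipitation and tidal covariates named above.

```python
"""Bare-bones difference-in-differences sketch for the before/after
comparison-site design. Retention figures are hypothetical; a full
analysis would model precipitation and tidal covariates."""
import numpy as np

# Mean water retention capacity (hypothetical units) at restored vs.
# comparison (degraded) wetland sites, before and after restoration.
restored_before, restored_after = np.array([1.8, 2.1, 1.9]), np.array([2.6, 2.9, 2.7])
degraded_before, degraded_after = np.array([1.7, 2.0, 1.8]), np.array([1.8, 2.1, 1.9])

# Change at restored sites minus change at comparison sites isolates the
# program effect from region-wide trends (e.g., an unusually wet year).
did_estimate = ((restored_after.mean() - restored_before.mean())
                - (degraded_after.mean() - degraded_before.mean()))
print(f"Difference-in-differences estimate: {did_estimate:.2f}")
```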

Community Development Program Example

Program: Small business technical assistance and microloans for immigrant entrepreneurs.

SMART Outcomes: Short-term: 90% course completion and viable business plan submission by Month 6. Intermediate: 60% of microloan recipients reporting revenue covering operating expenses by Month 18. Long-term: 80% two-year business survival rate (vs. 50% national average) with an average of 2.3 jobs created per business by Month 36.

Data Collection: Course completion records, rubric-based business plan scores, quarterly financial reports, annual business census, semi-annual interviews (n=20), annual focus groups, and five longitudinal case studies.

Evaluation Design: Mixed methods with longitudinal tracking over 36 months.

Final Principles

An evaluation plan is not an afterthought. It is a demonstration to the funder that you are serious about learning, accountability, and evidence. The evaluation plan should be woven into the program design from the beginning -- not added at the end of the proposal writing process.

Write your evaluation plan as if the funder will read it first. In many review processes, they do. A proposal with a rigorous, specific, well-budgeted evaluation plan signals that you are an organization worth investing in, regardless of the specific program you are proposing.
