Logic Models for Grant Proposals: Examples and Free Templates

October 1, 2025 · 13 min read

Ana Estrada

A logic model is one of the most powerful tools in grant writing, and one of the most frequently misunderstood. When done well, it demonstrates that you have thought rigorously about how your project will create change. When done poorly -- or when it is missing altogether -- it signals to reviewers that your program design lacks coherence.

This guide explains what a logic model is, walks through the components in detail, provides three fully worked examples from different program areas, covers how reviewers evaluate logic models, and identifies the most common mistakes that weaken proposals.

What Is a Logic Model?

A logic model is a visual diagram that maps the theory of change behind your project. It shows the logical chain from the resources you invest to the activities you perform to the results you expect to achieve. It answers the question every reviewer asks: if you do what you say you will do, why should we believe the intended outcomes will follow?

The standard logic model framework has five components, arranged from left to right:

Inputs --> Activities --> Outputs --> Outcomes --> Impact

Some funders use slightly different terminology (resources instead of inputs, strategies instead of activities, short-term and long-term outcomes instead of outcomes and impact), but the underlying structure is consistent across virtually all federal agencies and most foundations.
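If you prefer to rough out the pieces in a structured file before drawing the diagram, the sketch below shows one way to hold the five components in a small Python structure. The class and the placeholder entries are illustrative only; they echo examples used later in this guide and are not a funder-required format.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """One column per component, read left to right: inputs -> impact."""
    inputs: list[str] = field(default_factory=list)      # resources you invest
    activities: list[str] = field(default_factory=list)  # actions you carry out
    outputs: list[str] = field(default_factory=list)     # countable products of the activities
    outcomes: list[str] = field(default_factory=list)    # changes that result, by timeframe
    impact: list[str] = field(default_factory=list)      # broad change the project contributes to

# Placeholder entries that show the shape of the structure, not a real program design.
model = LogicModel(
    inputs=["2.0 FTE project staff", "signed partner MOUs"],
    activities=["Deliver 8 two-day training workshops to 50 community health workers"],
    outputs=["50 community health workers trained"],
    outcomes=["80% of trainees demonstrate increased knowledge on a post-workshop assessment"],
    impact=["Reduced environmental health disparities in the target community"],
)
```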

Inputs (Resources)

Inputs are the resources you will invest in the project, such as staff time, funding, facilities, equipment, data, and partner organizations.

Inputs are not what you do -- they are what you have available to work with. They are the raw materials of your project.

Activities

Activities are the specific actions your project will carry out using the inputs, and they should be concrete and verifiable: delivering training workshops, conducting home assessments, recruiting participants, developing curriculum materials.

Activities are what your staff and partners do on a daily, weekly, and monthly basis to implement the project.

Outputs

Outputs are the direct, countable products of your activities: the number of workshops delivered, participants trained, assessments completed, or materials produced. They measure the volume of work completed.

Outputs are not changes -- they are evidence that you did the work you said you would do. They are necessary but not sufficient for demonstrating project success.

Outcomes

Outcomes are the changes that result from your activities and outputs. This is where the logic model demonstrates impact. Outcomes are typically organized by timeframe:

Short-term outcomes (during the project period): Changes in knowledge, attitudes, skills, or awareness. For example: 80% of workshop participants demonstrate increased knowledge of lead exposure risks on a post-workshop assessment.

Medium-term outcomes (by end of project or shortly after): Changes in behavior, practice, or decision-making. For example: 60% of trained community health workers are actively conducting home environmental assessments 12 months after training.

Long-term outcomes (1-5 years post-project): Changes in conditions, systems, or population-level indicators. For example: 25% reduction in childhood blood lead levels in the target census tracts within three years.

Impact

Impact represents the ultimate change your project contributes to -- the broad, long-term difference in the lives of the people or communities you serve. Impact is usually beyond the scope of any single project to achieve alone; your project advances it rather than completing it.

Impact connects your project to the funder's larger mission. It shows that your work is not an isolated effort but part of a broader trajectory of change.

The Critical Distinction: Outputs vs. Outcomes

The single most common error in logic models is confusing outputs with outcomes. This confusion undermines the entire model and signals to reviewers that the applicant does not understand program evaluation.

Here is the test: an output is something you produce or deliver. An outcome is something that changes as a result.

| Output | Outcome |
| --- | --- |
| 200 students receive tutoring | 65% of tutored students improve math scores by one grade level |
| 12 workshops conducted | 75% of participants report increased confidence managing their chronic condition |
| 500 home environmental assessments completed | 40% of assessed households implement recommended remediation actions within 6 months |
| 1 prototype biosensor developed and tested | Biosensor achieves 95% sensitivity and 90% specificity for target pathogen detection |

If every item in your outcomes column could also be an item on a to-do list, you have outputs disguised as outcomes. Outcomes describe changes in people, communities, systems, or conditions.

Worked Example 1: Community Health Program

Project: Reducing childhood asthma hospitalizations in an urban environmental justice community through community health worker-led home environmental assessments and remediation.

Funder: EPA Environmental Justice Collaborative Problem-Solving (EJCPS) grant, $300,000 over 3 years.

Logic Model

Inputs --> Activities --> Outputs --> Short-Term Outcomes (Year 1) --> Medium-Term Outcomes (Years 2-3) --> Long-Term Outcomes (3-5 years) --> Impact

Worked Example 2: STEM Education Initiative

Project: After-school STEM enrichment program for middle school students in rural Title I schools, integrating hands-on engineering projects with mentoring by local industry professionals.

Funder: NSF Advancing Informal STEM Learning (AISL) program, $500,000 over 3 years.

Logic Model

Inputs --> Activities --> Outputs --> Short-Term Outcomes (Year 1) --> Medium-Term Outcomes (Years 2-3) --> Long-Term Outcomes (3-5 years) --> Impact

Worked Example 3: Environmental Conservation Project

Project: Restoring tidal wetland habitat along a degraded estuary through invasive species removal, native plant restoration, and community-based water quality monitoring.

Funder: NOAA Habitat Restoration Grant, $750,000 over 4 years.

Logic Model

Inputs --> Activities --> Outputs --> Short-Term Outcomes (Years 1-2) --> Medium-Term Outcomes (Years 3-4) --> Long-Term Outcomes (5-10 years) --> Impact

How Reviewers Evaluate Logic Models

Reviewers look for five things when they examine a logic model:

1. Logical Coherence

Does each column flow naturally from the one before it? If you remove any single activity, do the downstream outputs and outcomes still make sense? Reviewers mentally trace the causal chain from left to right, asking at each step: does this follow?

A logic model where the activities do not plausibly lead to the stated outcomes will undermine the entire proposal, no matter how well-written the narrative is.

2. Specificity

Vague logic models are useless. "Provide training" is not an activity -- "Deliver 8 two-day training workshops on motivational interviewing to 50 community health workers" is an activity. "Improve community health" is not an outcome -- "Reduce pediatric asthma ED visits by 20% in target census tracts" is an outcome.

Reviewers reward specificity because it demonstrates that you have thought through the details of implementation and evaluation.

3. Measurability

Every outcome should be measurable with a defined metric, data source, and target value. If a reviewer reads your outcome and cannot immediately envision how you would measure it, the outcome is too vague.

Strong outcomes include: the metric (what you are measuring), the baseline (where you are starting), the target (where you expect to be), the timeframe (when you expect to reach it), and the data source (how you will know).
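One way to self-check a draft is to treat each outcome statement as a record with exactly those five fields and flag whatever is still blank. The sketch below is illustrative; the field names are mine rather than a funder's, and the example outcome is a hypothetical variation on the asthma target mentioned above.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class Outcome:
    metric: Optional[str] = None       # what you are measuring
    baseline: Optional[str] = None     # where you are starting
    target: Optional[str] = None       # where you expect to be
    timeframe: Optional[str] = None    # when you expect to reach it
    data_source: Optional[str] = None  # how you will know

def missing_fields(outcome: Outcome) -> list[str]:
    """Return the names of the measurability fields that are still unfilled."""
    return [f.name for f in fields(outcome) if getattr(outcome, f.name) is None]

# Hypothetical draft outcome, loosely based on the asthma example quoted earlier.
draft = Outcome(
    metric="pediatric asthma ED visits in target census tracts",
    target="20% reduction",
    timeframe="by the end of Year 3",
)
print(missing_fields(draft))  # ['baseline', 'data_source'] -- still too vague to score well
```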

4. Proportionality

Are the proposed outcomes proportional to the proposed inputs and activities? A $100,000 grant funding one part-time staff member should not claim system-level transformation as its outcome. Conversely, a $2,000,000 grant with 10 staff members should aim higher than training 50 people.

Reviewers flag logic models that are either too ambitious (inputs do not support the claimed outcomes) or too modest (outcomes do not justify the requested investment).

5. Evidence Base

The connections in your logic model should be supported by evidence. If your model claims that community health worker home visits lead to reduced asthma hospitalizations, cite the published research that supports this causal link. If your model claims that after-school STEM programming increases career interest, reference the literature on informal STEM learning outcomes.

You do not need to prove the causal chain from scratch -- you need to show that your model is grounded in existing evidence about what works.

Common Logic Model Mistakes

Confusing outputs with outcomes. This was covered above, but it bears repeating because it is the single most common error. Read down your outcomes column: if every item is something you produce or deliver rather than a change that results, you have outputs, not outcomes.

Missing the causal links. A logic model is not a list of things you will do and things you hope will happen. There must be a plausible causal relationship between activities and outcomes. If the connection is not obvious, your narrative should explain the mechanism of change.

Overpromising on impact. Long-term impact should be aspirational but plausible. Claiming that a three-year, $300,000 project will "eliminate health disparities" is not credible. Claiming it will "contribute to the evidence base for community-based environmental health interventions that reduce disparities" is both honest and appropriate.

Leaving out assumptions. Every logic model contains implicit assumptions: participants will be recruited successfully, partners will fulfill their commitments, external conditions will remain stable. Acknowledging key assumptions (ideally in a footnote or accompanying narrative) demonstrates intellectual rigor.

Making it too complicated. A logic model with 30 activities, 40 outputs, and 15 outcomes is not more impressive than one with 5 activities, 8 outputs, and 6 outcomes. It is less readable and harder for reviewers to evaluate. Focus on the primary causal pathways and keep the model clean.

Not aligning the logic model with the narrative. The logic model and the project narrative must tell the same story. If your narrative describes an activity that does not appear in the logic model, or if your logic model includes an outcome that the narrative does not address, reviewers will notice the inconsistency.

Building Your Own Logic Model

Start with the outcomes. What changes do you want to see, and on what timeline? Then work backward: what activities will produce those changes? What resources do you need to carry out those activities? What outputs will you track to confirm the activities are happening?

This backward design approach -- starting with the end state and working back to the inputs -- produces stronger logic models than starting with activities and trying to figure out what they might accomplish.
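If it helps to make that ordering explicit, the short sketch below lists the backward-design questions in the order you would answer them and then rearranges the answers into the left-to-right order reviewers expect. It is an informal planning aid, not part of any official template.

```python
# Backward-design order: settle the outcomes first, then work back through
# activities and inputs, and only then define the outputs you will track.
BACKWARD_DESIGN_QUESTIONS = [
    ("outcomes", "What changes do you want to see, and on what timeline?"),
    ("activities", "What activities will plausibly produce those changes?"),
    ("inputs", "What resources do you need to carry out those activities?"),
    ("outputs", "What outputs will you track to confirm the activities are happening?"),
]

def to_presentation_order(answers: dict[str, list[str]]) -> dict[str, list[str]]:
    """Rearrange backward-design answers into the left-to-right order of the final model."""
    return {key: answers.get(key, []) for key in ["inputs", "activities", "outputs", "outcomes"]}
```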

Use a table format for the proposal document and a flowchart format for presentations and internal planning. Most funders accept either, but check the NOFO for any formatting requirements.

If you are building a grant proposal and want help structuring the logic model alongside the rest of the narrative, Granted AI walks you through the program design process and ensures your logic model aligns with the evaluation plan, budget, and project narrative.
