Logic Models for Grant Proposals: Examples and Free Templates
October 1, 2025 · 13 min read
Ana Estrada

A logic model is one of the most powerful tools in grant writing, and one of the most frequently misunderstood. When done well, it demonstrates that you have thought rigorously about how your project will create change. When done poorly -- or when it is missing altogether -- it signals to reviewers that your program design lacks coherence.
This guide explains what a logic model is, walks through the components in detail, provides three fully worked examples from different program areas, covers how reviewers evaluate logic models, and identifies the most common mistakes that weaken proposals.
What Is a Logic Model?
A logic model is a visual diagram that maps the theory of change behind your project. It shows the logical chain from the resources you invest to the activities you perform to the results you expect to achieve. It answers the question every reviewer asks: if you do what you say you will do, why should we believe the intended outcomes will follow?
The standard logic model framework has five components, arranged from left to right:
Inputs --> Activities --> Outputs --> Outcomes --> Impact
Some funders use slightly different terminology (resources instead of inputs, strategies instead of activities, short-term and long-term outcomes instead of outcomes and impact), but the underlying structure is consistent across virtually all federal agencies and most foundations.
Inputs (Resources)
Inputs are the resources you will invest in the project. These include:
- Funding (the grant amount plus any matching funds or cost share)
- Staff time (number of FTEs, hours, or percent effort)
- Partnerships and collaborations (what partner organizations contribute)
- Facilities and equipment (lab space, community centers, vehicles, technology)
- Existing data, curricula, or tools you will build upon
- Volunteer time
Inputs are not what you do -- they are what you have available to work with. They are the raw materials of your project.
Activities
Activities are the specific actions your project will carry out using the inputs. Activities should be concrete and verifiable:
- Conduct 12 community workshops on lead exposure prevention
- Train 50 community health workers in motivational interviewing techniques
- Deploy 200 air quality sensors across a 15-square-mile monitoring grid
- Provide 500 hours of individual tutoring to middle school students
- Develop and test a prototype biosensor for agricultural pathogen detection
Activities are what your staff and partners do on a daily, weekly, and monthly basis to implement the project.
Outputs
Outputs are the direct, countable products of your activities. They measure the volume of work completed:
- Number of workshops held
- Number of people trained
- Number of sensors deployed and hours of data collected
- Number of tutoring sessions delivered
- Number of prototypes developed and tests completed
Outputs are not changes -- they are evidence that you did the work you said you would do. They are necessary but not sufficient for demonstrating project success.
Outcomes
Outcomes are the changes that result from your activities and outputs. This is where the logic model demonstrates impact. Outcomes are typically organized by timeframe:
Short-term outcomes (during the project period): Changes in knowledge, attitudes, skills, or awareness. For example: 80% of workshop participants demonstrate increased knowledge of lead exposure risks on a post-workshop assessment.
Medium-term outcomes (by end of project or shortly after): Changes in behavior, practice, or decision-making. For example: 60% of trained community health workers are actively conducting home environmental assessments 12 months after training.
Long-term outcomes (1-5 years post-project): Changes in conditions, systems, or population-level indicators. For example: 25% reduction in childhood blood lead levels in the target census tracts within three years.
Impact
Impact represents the ultimate change your project contributes to -- the broad, long-term difference in the lives of the people or communities you serve. Impact is usually beyond what any single project can achieve on its own, but your project should demonstrably contribute to it:
- Elimination of environmental health disparities in underserved communities
- A scientifically literate workforce prepared for STEM careers
- Restoration of coastal ecosystems to pre-degradation ecological function
Impact connects your project to the funder's larger mission. It shows that your work is not an isolated effort but part of a broader trajectory of change.
The Critical Distinction: Outputs vs. Outcomes
The single most common error in logic models is confusing outputs with outcomes. This confusion undermines the entire model and signals to reviewers that the applicant does not understand program evaluation.
Here is the test: an output is something you produce or deliver. An outcome is something that changes as a result.
| Output | Outcome |
|---|---|
| 200 students receive tutoring | 65% of tutored students improve math scores by one grade level |
| 12 workshops conducted | 75% of participants report increased confidence managing their chronic condition |
| 500 home environmental assessments completed | 40% of assessed households implement recommended remediation actions within 6 months |
| 1 prototype biosensor developed and tested | Biosensor achieves 95% sensitivity and 90% specificity for target pathogen detection |
If every item in your outcomes column could also be an item on a to-do list, you have outputs disguised as outcomes. Outcomes describe changes in people, communities, systems, or conditions.
Worked Example 1: Community Health Program
Project: Reducing childhood asthma hospitalizations in an urban environmental justice community through community health worker-led home environmental assessments and remediation.
Funder: EPA Environmental Justice Collaborative Problem-Solving (EJCPS) grant, $300,000 over 3 years.
Logic Model
Inputs:
- $300,000 EPA EJCPS grant funding
- 2 FTE community health workers (CHWs)
- 0.5 FTE program coordinator
- Partnership with county health department (referral data, epidemiological support)
- Partnership with local hospital (ER visit data, patient referrals)
- Partnership with university school of public health (evaluation expertise)
- Existing EPA-approved indoor air quality assessment protocol
Activities:
- Recruit and train community health workers in environmental health assessment, motivational interviewing, and community outreach
- Conduct door-to-door outreach in target census tracts (4012, 4013, 4015) to identify households with children under 12 experiencing asthma symptoms
- Perform in-home environmental health assessments using the standardized EPA protocol (mold, dust mites, cockroach allergens, secondhand smoke, volatile organic compounds)
- Provide individualized remediation plans and connect households with resources for implementing changes (HEPA filters, mattress encasements, integrated pest management, smoking cessation referrals)
- Conduct follow-up assessments at 6 and 12 months post-intervention
- Convene quarterly community advisory board meetings to review progress and gather feedback
- Present findings to county board of health and state environmental agency
Outputs:
- 2 CHWs trained and certified in environmental health assessment
- 600 households contacted through outreach
- 400 in-home environmental assessments completed
- 400 individualized remediation plans delivered
- 300 households receive follow-up assessments at 6 months
- 250 households receive follow-up assessments at 12 months
- 12 quarterly advisory board meetings convened
- 2 presentations to policymakers delivered
Short-Term Outcomes (Year 1):
- 80% of assessed households can identify at least one actionable environmental trigger in their home
- 70% of households report increased knowledge of asthma triggers and management strategies on post-assessment survey
Medium-Term Outcomes (Years 2-3):
- 50% of households implement at least 2 recommended remediation actions within 6 months
- 30% reduction in self-reported asthma symptom days among children in participating households
Long-Term Outcomes (3-5 years):
- 20% reduction in pediatric asthma-related emergency department visits in target census tracts
- County health department adopts CHW-led environmental assessment as standard practice for high-risk neighborhoods
Impact:
- Elimination of disproportionate environmental health burden on low-income communities of color in the county
Worked Example 2: STEM Education Initiative
Project: After-school STEM enrichment program for middle school students in rural Title I schools, integrating hands-on engineering projects with mentoring by local industry professionals.
Funder: NSF Advancing Informal STEM Learning (AISL) program, $500,000 over 3 years.
Logic Model
Inputs:
- $500,000 NSF AISL grant
- 1 FTE program director
- 1 FTE STEM educator
- 0.5 FTE data coordinator
- Partnerships with 6 rural middle schools (facilities, student recruitment, teacher engagement)
- Partnership with regional manufacturing association (industry mentors, facility tours)
- Partnership with state university College of Engineering (curriculum design, graduate student assistants)
- Existing validated STEM interest and self-efficacy assessment instruments
Activities:
- Design a 36-week after-school curriculum (12 weeks per year for 3 years) centered on engineering design challenges relevant to rural industries (agriculture technology, water systems, renewable energy)
- Recruit 150 students (50 per cohort across 6 schools) with emphasis on students from underrepresented groups in STEM
- Deliver twice-weekly 90-minute after-school sessions combining hands-on engineering projects with team-based problem solving
- Match each student team with an industry mentor who provides monthly guidance and hosts one workplace visit per semester
- Train 12 classroom teachers (2 per school) in project-based STEM instruction through a 40-hour summer institute
- Administer pre- and post-program assessments of STEM interest, self-efficacy, and content knowledge each program year
- Produce and disseminate a replicable curriculum guide and implementation manual
Outputs:
- 36-week after-school curriculum developed and refined over 3 years
- 150 students participate (50 per annual cohort)
- 432 after-school sessions delivered (144 per year across 6 sites)
- 18 industry mentors recruited and trained
- 12 teachers complete 40-hour professional development institute
- 900 pre-post assessments administered (6 per student over 3 years)
- 1 curriculum guide and implementation manual published
Short-Term Outcomes (Year 1):
- 85% of participants attend at least 75% of sessions (retention measure)
- 70% of participants demonstrate gains in STEM self-efficacy on the validated assessment instrument, with cohort-level gains reaching statistical significance
- 60% of participants demonstrate measurable gains in engineering design thinking as assessed by project rubrics
Medium-Term Outcomes (Years 2-3):
- 50% of Year 1 participants voluntarily re-enroll in Year 2 programming (sustained engagement)
- 40% of participants report increased interest in pursuing a STEM-related career on annual survey
- 75% of trained teachers integrate at least 2 project-based STEM activities into their regular classroom instruction
Long-Term Outcomes (3-5 years):
- Participating students enroll in high school STEM coursework (physics, calculus, computer science) at rates 20% above baseline for their schools
- 3 of 6 partner schools sustain after-school STEM programming beyond the grant period using the published curriculum
Impact:
- Increased STEM participation and career readiness among rural and underrepresented students
Worked Example 3: Environmental Conservation Project
Project: Restoring tidal wetland habitat along a degraded estuary through invasive species removal, native plant restoration, and community-based water quality monitoring.
Funder: NOAA Habitat Restoration Grant, $750,000 over 4 years.
Logic Model
Inputs:
- $750,000 NOAA grant
- $200,000 in-kind match from state Department of Natural Resources (heavy equipment, staff time for invasive species management)
- 1 FTE restoration ecologist (project lead)
- 1 FTE field technician
- 0.5 FTE community engagement coordinator
- Partnership with university marine science program (water quality analysis, ecological monitoring)
- Partnership with local watershed council (volunteer recruitment, long-term stewardship)
- 300 acres of degraded tidal wetland with willing landowner access agreements
Activities:
- Conduct a baseline ecological assessment of the 300-acre restoration site (vegetation surveys, water quality sampling, benthic invertebrate surveys, avian counts)
- Remove invasive Phragmites australis from 200 acres using an integrated management approach (herbicide application, mechanical removal, prescribed burns)
- Plant 50,000 native salt marsh plants (Spartina alterniflora, Juncus roemerianus, Distichlis spicata) across 150 acres of treated area
- Install and maintain 8 permanent water quality monitoring stations measuring dissolved oxygen, salinity, turbidity, pH, and nutrient concentrations
- Recruit and train 40 community volunteers as certified water quality monitors through a 16-hour training program
- Conduct quarterly ecological monitoring surveys over 4 years to track restoration progress
- Host 8 public community science events (2 per year) combining monitoring with environmental education
Outputs:
- 1 comprehensive baseline ecological assessment report
- 200 acres of invasive Phragmites treated and managed
- 50,000 native plants installed across 150 acres
- 8 monitoring stations installed and operational
- 40 community volunteers trained and certified
- 16 quarterly monitoring surveys completed
- 8 community science events held
Short-Term Outcomes (Years 1-2):
- 80% reduction in Phragmites cover across treated acres
- 60% survival rate of planted native species at 12 months post-installation
- Measurable improvement in dissolved oxygen levels at monitoring stations nearest to restored areas (target: increase from 3.2 mg/L baseline to above 5.0 mg/L)
Medium-Term Outcomes (Years 3-4):
- Native salt marsh vegetation establishes self-sustaining cover on at least 100 acres (defined as 70% ground cover by native species)
- 50% increase in benthic invertebrate species diversity in restored areas compared to baseline
- Return of juvenile fish species (mummichog, Atlantic silverside, killifish) to restored tidal channels as documented by seine net surveys
- 30 of 40 trained volunteers actively participating in ongoing monitoring at project end
Long-Term Outcomes (5-10 years):
- Full ecological function restored across 150 acres (tidal exchange, nursery habitat, flood attenuation, carbon sequestration)
- Watershed council assumes long-term stewardship of monitoring program and site maintenance
- Restored wetland provides measurable flood attenuation benefit to adjacent community (estimated at $2.1M in avoided flood damage per 50-year flood event, based on FEMA benefit-cost methodology)
Impact:
- Resilient coastal ecosystem that supports biodiversity, mitigates flooding, and improves water quality for downstream communities
How Reviewers Evaluate Logic Models
Reviewers look for five things when they examine a logic model:
1. Logical Coherence
Does each column flow naturally from the one before it? If you remove any single activity, do the downstream outputs and outcomes still make sense? Reviewers mentally trace the causal chain from left to right, asking at each step: does this follow?
A logic model where the activities do not plausibly lead to the stated outcomes will undermine the entire proposal, no matter how well-written the narrative is.
2. Specificity
Vague logic models are useless. "Provide training" is not an activity -- "Deliver 8 two-day training workshops on motivational interviewing to 50 community health workers" is an activity. "Improve community health" is not an outcome -- "Reduce pediatric asthma ED visits by 20% in target census tracts" is an outcome.
Reviewers reward specificity because it demonstrates that you have thought through the details of implementation and evaluation.
3. Measurability
Every outcome should be measurable with a defined metric, data source, and target value. If a reviewer reads your outcome and cannot immediately envision how you would measure it, the outcome is too vague.
Strong outcomes include: the metric (what you are measuring), the baseline (where you are starting), the target (where you expect to be), the timeframe (when you expect to reach it), and the data source (how you will know).
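Here is one way to assemble all five elements into a single outcome statement, borrowing from the asthma example above (the baseline is left generic because you would fill in your own figure): "Pediatric asthma-related emergency department visits in census tracts 4012, 4013, and 4015 (metric) will decline 20% from the pre-project baseline rate (target) within three years of project completion (timeframe), as measured by ER visit data from the partner hospital (data source)."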
4. Proportionality
Are the proposed outcomes proportional to the proposed inputs and activities? A $100,000 grant funding one part-time staff member should not claim system-level transformation as its outcome. Conversely, a $2,000,000 grant with 10 staff members should aim higher than training 50 people.
Reviewers flag logic models that are either too ambitious (inputs do not support the claimed outcomes) or too modest (outcomes do not justify the requested investment).
5. Evidence Base
The connections in your logic model should be supported by evidence. If your model claims that community health worker home visits lead to reduced asthma hospitalizations, cite the published research that supports this causal link. If your model claims that after-school STEM programming increases career interest, reference the literature on informal STEM learning outcomes.
You do not need to prove the causal chain from scratch -- you need to show that your model is grounded in existing evidence about what works.
Common Logic Model Mistakes
Confusing outputs with outcomes. Already discussed above, but it bears repeating because it is the most common error. Count the items in your outcomes column. If they are all things you produce or deliver rather than changes that result, you have outputs, not outcomes.
Missing the causal links. A logic model is not a list of things you will do and things you hope will happen. There must be a plausible causal relationship between activities and outcomes. If the connection is not obvious, your narrative should explain the mechanism of change.
Overpromising on impact. Long-term impact should be aspirational but plausible. Claiming that a three-year, $300,000 project will "eliminate health disparities" is not credible. Claiming it will "contribute to the evidence base for community-based environmental health interventions that reduce disparities" is both honest and appropriate.
Leaving out assumptions. Every logic model contains implicit assumptions: participants will be recruited successfully, partners will fulfill their commitments, external conditions will remain stable. Acknowledging key assumptions (ideally in a footnote or accompanying narrative) demonstrates intellectual rigor.
Making it too complicated. A logic model with 30 activities, 40 outputs, and 15 outcomes is not more impressive than one with 5 activities, 8 outputs, and 6 outcomes. It is less readable and harder for reviewers to evaluate. Focus on the primary causal pathways and keep the model clean.
Not aligning the logic model with the narrative. The logic model and the project narrative must tell the same story. If your narrative describes an activity that does not appear in the logic model, or if your logic model includes an outcome that the narrative does not address, reviewers will notice the inconsistency.
Building Your Own Logic Model
Start with the outcomes. What changes do you want to see, and on what timeline? Then work backward: what activities will produce those changes? What resources do you need to carry out those activities? What outputs will you track to confirm the activities are happening?
This backward design approach -- starting with the end state and working back to the inputs -- produces stronger logic models than starting with activities and trying to figure out what they might accomplish.
Use a table format for the proposal document and a flowchart format for presentations and internal planning. Most funders accept either, but check the NOFO (notice of funding opportunity) for any formatting requirements.
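As a starting template, the two-column layout below mirrors the component definitions in this guide. Replace each description with your project's specifics, and add rows or columns as your funder's format requires:

| Component | What to enter |
|---|---|
| Inputs | Grant funds, staff FTEs, partnerships, facilities and equipment, existing tools or data |
| Activities | Concrete, verifiable actions with counts and timeframes |
| Outputs | Countable products of the activities: sessions held, people trained, units deployed |
| Short-term outcomes | Changes in knowledge, attitudes, skills, or awareness, each with a metric and target |
| Medium-term outcomes | Changes in behavior, practice, or decision-making, each with a metric, target, and timeframe |
| Long-term outcomes | Changes in conditions, systems, or population-level indicators |
| Impact | The broad, long-term change your project contributes to, tied to the funder's mission |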
If you are building a grant proposal and want help structuring the logic model alongside the rest of the narrative, Granted AI walks you through the program design process and ensures your logic model aligns with the evaluation plan, budget, and project narrative.
Keep Reading
- Grant Writing for Nonprofits: The Complete Playbook
- Grant Budget Justification: Template, Examples, and Common Mistakes
- How to Write a Grant Proposal: Complete Step-by-Step Guide
- See all Granted AI features
Ready to write your next proposal? Granted AI analyzes your RFP, coaches you through the requirements, and drafts every section. Start your 7-day free trial today.
