What is conservation monitoring and evaluation?
Quick answer
Monitoring and Evaluation for Wildlife Conservation is a structured approach to measuring and reporting conservation impact by tracking changes in biodiversity, threats, behaviours, and influences. Unlike activity-based reporting, it focuses on measurable results—did threats decrease? Did behaviours change? Did biodiversity improve?
The WildTeam framework uses 4 steps (define success → plan monitoring → plan evaluation → report success) guided by 3 principles to help teams prove their conservation work achieved real impact, learn from both success and failure, and demonstrate accountability to donors.
Source: WildTeam. (2026). Monitoring and Evaluation for Wildlife Conservation v2. WildTeam UK, Cumbria, UK.
All WildTeam best practices are grounded in an extensive review of the relevant scientific and professional literature and are peer-reviewed by conservation experts from across the sector to ensure accuracy, practicality, and global applicability.
Access this best practice as part of the Monitoring and Evaluation for Wildlife Conservation course.
UNLOCK OUR FULL BEST PRACTICES AND GET CERTIFIED CONSERVATION SKILLS
Ready to go deeper?
Build practical skills for wildlife conservation by exploring our expert-led courses designed to help you apply what you’ve learned in real-world contexts. From career development to technical conservation tools, our training is built to support your next step.
Contents
- Why conservation projects cannot prove their impact
- 3 principles that make conservation M&E effective and ethical
- The 4 conservation M&E steps explained
- Monitoring vs evaluating: understanding the difference
- Common conservation monitoring and evaluation pitfalls
- FAQ
Why conservation projects cannot prove their impact
Many conservation projects cannot demonstrate measurable biodiversity results. Conservation funding often supports projects without verified impact, leaving donors frustrated and conservationists unable to prove their work made a difference.
This isn't because conservation work doesn't work—it's because teams don't measure the right things in the right way. Many projects track activities (workshops delivered, rangers trained, patrols conducted) rather than results (threats reduced, behaviours changed, biodiversity improved). When a donor asks "did your project succeed?" teams respond with "we completed all our activities on time" rather than "we reduced poaching by 40% and tiger populations increased 25%."
The challenge is unique to conservation: proving causation in complex ecological and social systems. Did your project cause the observed changes, or would they have happened anyway due to rainfall patterns, government policy, or another organisation's work? Without structured monitoring and evaluation, you cannot answer this question—and donors increasingly require this evidence.
The Monitoring and Evaluation for Wildlife Conservation approach addresses three critical needs:
- Making informed management decisions about whether to continue, adapt, or stop current work
- Learning from success and failure so you and others can create stronger strategies for future work
- Being accountable for the activities you carry out and the funds you spend
3 principles that make conservation M&E effective and ethical
Three principles guide effective and ethical M&E planning and implementation:
Match the need: Gather only information essential for making management decisions, reporting to stakeholders, and improving conservation work in other situations. Collecting large amounts of unnecessary information wastes time and money. The Match the need principle encourages teams to focus M&E activities on gathering only what is essential for understanding and reporting impact.
Face up to failure: Identify, document, and share project failure as much as project success. Teams may feel pressure from donors and colleagues to only report successes, but identifying and learning from failures helps ensure you don't waste further funds on activities that don't lead to desired impact. By sharing failures, you help all other teams working in similar situations avoid wasting time and funds.
Protect participants: Identify and minimise any harm to wildlife or people that could result from monitoring and evaluation activities. In some cases, M&E could cause unintended harm—for example, asking local villagers to identify poachers may expose them to threats and violence. The Protect participants principle encourages teams to proactively identify and plan ways to mitigate such harm.
The 4 conservation M&E steps explained
The Monitoring and Evaluation for Wildlife Conservation best practice provides a 4-step process for planning and reporting on your M&E activities:
Step 1: Defining project success: Before you can measure success, you must define it. This step involves selecting which results to monitor (you cannot monitor everything), setting objectives for direct results, establishing tolerance limits that define acceptable achievement ranges, choosing indicators that measure each result, and setting planned indicator values across your project timeframe. This step happens during the Plan phase of your project.
Step 2: Planning monitoring activities: Monitoring tracks changes in your selected results over time. This step involves selecting specific monitoring methods to collect data for each indicator and scheduling when monitoring will occur based on when you need information for management decisions and donor reporting. You document your monitoring plan in the monitoring and evaluation section of your Project plan.
Step 3: Planning evaluation activities: Evaluation goes beyond monitoring by determining what caused the observed changes and how much of that change was due to your project specifically. This step involves selecting which results need evaluation (typically indirect results affected by multiple factors), choosing evaluation methods to assess causation and attribution, and scheduling evaluation activities. Most evaluation occurs near project end, though some may be needed during implementation for critical management decisions.
Step 4: Reporting project success: At project end, you document your impact in a Project-end report. This involves creating actual change diagrams that show what caused changes in your results, calculating impact by comparing predicted change (what would have happened without your project) against actual change (what did happen), and determining implementation success, strategic success, and overall project success ratings. This provides defendable, quantitative assessment of your conservation impact.
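The impact calculation in Step 4 can be sketched in a few lines. This is an illustrative example only, not WildTeam's official formula: the function and the poaching numbers below are hypothetical, chosen to match the examples used elsewhere in this article.

```python
# Illustrative sketch: impact is the gap between the change that actually
# happened and the change predicted to happen without the project.

def project_impact(baseline: float, actual_end: float, predicted_end: float) -> float:
    """Return the portion of observed change attributable to the project.

    baseline      -- indicator value before the project started
    actual_end    -- indicator value measured at project end
    predicted_end -- estimated end value had the project not run
    """
    actual_change = actual_end - baseline
    predicted_change = predicted_end - baseline
    return actual_change - predicted_change

# Hypothetical example: poaching incidents fell from 200 to 120, but the
# background trend suggests they would have fallen to 180 anyway.
attributable = project_impact(baseline=200, actual_end=120, predicted_end=180)
print(attributable)  # -60: the project accounts for 60 fewer incidents
```

Note that without the predicted-change estimate, the project would appear to have cut 80 incidents; accounting for the background trend gives the smaller, defendable figure of 60.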
Monitoring vs evaluating: understanding the difference
Monitoring and evaluation serve different but complementary purposes in demonstrating conservation impact:
Monitoring tracks change: Monitoring involves measuring changes in selected results over time. It answers the question: "What happened?" For example, monitoring might show that poaching decreased 40% during your project. Monitoring activities collect data to generate indicator values at scheduled points throughout your project. You need monitoring for all selected results—both direct and indirect. Monitoring provides the evidence base for assessing whether your project achieved its objectives and whether broader strategic results changed as intended.
Evaluation determines causation and attribution: Evaluation goes beyond monitoring to answer two deeper questions:
What caused the change? (Did your work packages lead to the observed results through the causal pathways you planned?)
How much was due to your project? (What would have happened without your project, and how much of the difference can you claim as impact?) Evaluation is primarily needed for indirect results because these are influenced by factors outside your project. For example, if tiger populations increased during your anti-poaching project, evaluation helps determine whether your patrols caused the increase or whether it resulted from rainfall improving prey populations, or whether both factors contributed.
Direct results don't usually need evaluation for causation (you did the work, so the change is due to you), but you still need to monitor them to verify the change occurred within your tolerance limits.
Common conservation monitoring and evaluation pitfalls
Several predictable failure modes undermine conservation M&E effectiveness:
Monitoring everything: Teams often try to monitor every species, threat, behaviour, and influence in their conservation strategy. This wastes resources collecting data that's never used for decisions or reporting. Instead, strategically select only the results essential for management decisions, stakeholder reporting, and global learning.
Only reporting success: Pressure to satisfy donors leads teams to emphasise successful results while hiding or minimising failures. However, documenting failure prevents waste—if your education campaign didn't change attitudes, sharing that finding prevents other organisations from repeating the mistake. Honest failure reporting builds donor trust more than exaggerated success claims.
Measuring activities not results: The activity trap is monitoring what you did (workshops held, rangers trained) rather than what changed (knowledge increased, poaching decreased). Activity completion doesn't equal conservation success. Your objectives should focus on biodiversity results, threat reduction, and behaviour change—not activity delivery.
Missing baseline data: Starting work before measuring the initial status of results makes impact demonstration impossible. You cannot prove your project reduced poaching from 200 to 120 incidents if you never measured the starting level. Always conduct baseline monitoring before implementation begins.
No control or comparison: Observing change during your project doesn't prove you caused it. Without estimating what would have happened anyway (predicted change), you cannot isolate your project's specific contribution from background trends or other organisations' work. Plan evaluation activities to address attribution from the start.
FAQ
What's the difference between monitoring and evaluation?
Monitoring measures changes in results over time (what happened), while evaluation determines what caused those changes and how much change was due to your project specifically. Monitoring is needed for all selected results; evaluation is primarily needed for indirect results influenced by multiple factors.
Do I need to monitor everything in my conservation strategy?
No. Apply the Match the need principle by selecting only results essential for management decisions, stakeholder reporting, or shared learning. Monitoring everything wastes resources on data you'll never use.
How do I know if my conservation project caused the observed changes?
You need evaluation activities that estimate predicted change (what would have happened without your project). Compare predicted change against actual change to isolate your project's contribution. Methods include using pre-project trends, monitoring control areas, or modelling.
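One simple way to estimate predicted change from pre-project trends (one of the methods mentioned above; control areas and modelling are alternatives) is to fit a line to pre-project monitoring data and extrapolate it to project end. The sketch below is hypothetical, with made-up poaching figures, and assumes a linear background trend:

```python
# Hedged sketch: extrapolate the pre-project trend by ordinary least
# squares, then compare the projection against the value actually observed.

def linear_trend(years, values):
    """Fit y = slope*x + intercept by ordinary least squares."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values)) \
        / sum((x - mean_x) ** 2 for x in years)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical pre-project monitoring: poaching incidents per year.
pre_years = [2019, 2020, 2021, 2022]
pre_incidents = [230, 220, 210, 200]

slope, intercept = linear_trend(pre_years, pre_incidents)
predicted_2026 = slope * 2026 + intercept  # what the trend alone implies
actual_2026 = 120                          # what end-of-project monitoring measured

print(predicted_2026)                # 160.0 incidents predicted without the project
print(actual_2026 - predicted_2026)  # -40.0 incidents attributable, under this model
```

A linear extrapolation is the crudest option; whichever method you choose, the point is the same: attribution requires an explicit counterfactual, not just a before-and-after comparison.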
How many indicators should I have per result?
Start with 1-2 indicators per result. Add more only if single indicators don't adequately capture important aspects of the result or if you need multiple lines of evidence for high-stakes decisions. More indicators mean more monitoring effort and cost.
Should I set objectives for all results I'm monitoring?
No. Set objectives only for direct results (those caused only by your project work). Do not set objectives for indirect results because changes in those results are influenced by factors outside your project control. You still monitor indirect results, but don't set specific objectives for them. See Direct vs indirect results: how to classify results and determine monitoring needs.
What's the difference between direct and indirect results?
Direct results are changes caused only by your project work (e.g., reduced hunting by trained rangers). Indirect results are changes caused entirely or partly by other factors beyond your project (e.g., increased wildlife populations influenced by rainfall, other projects, and your work). This distinction determines evaluation needs. See Direct vs indirect results: how to classify results and determine monitoring needs.
Related articles
- What is conservation project planning?
WILDTEAM is a registered charity in England and Wales. Number 1149465. © 2026 by WildTeam

