Measuring Our Work: Internal and External Evaluations

From its inception in 1977, The Hunger Project was designed as a learning organization — one that continually assesses the landscape of development, identifies what is missing, and redesigns its programs to achieve the highest-leverage impact based on what it has learned. As an organization deeply grounded in grassroots advocacy and development from the bottom up, we know that understanding the extent of our interventions’ impact at the community level is paramount for our community partners, our dedicated global staff, our investors, and policymakers considering adopting our approach. For these reasons, we are a data-driven organization. We measure what matters in order to deliver on our organizational mission to end chronic hunger and poverty.

We continuously collect data to monitor and evaluate our programs in every community in which we work. Monitoring allows us to confirm that community-owned program activities are taking place; evaluation allows us and our community partners to assess whether expected results are occurring. This process is carried out through our highly developed Monitoring and Evaluation System (M&E System).

Below is a list of our current community-led, internal, and external evaluations.

Community-Led Internal Evaluations

MDG Report: THP-Ghana Report on Attainment of the Millennium Development Goals – Views of partner communities in Ghana (here); Conducted by M&E Animators in 36 epicenters in Ghana

Internal

Chokwe Epicenter Evaluation in Mozambique [2016] (highlights here); Conducted by THP M&E staff and enumerators

Outcome Evaluation Report for Six Epicenters in Ghana [2015] (here); Conducted by THP M&E staff and enumerators

Kiboga Epicenter Outcome Evaluation in Uganda [2014] (here; highlights here); Conducted by THP M&E staff and enumerators

Outcome Evaluation: Highlights from the Boulkon Epicenter in Burkina Faso [2013] (here); Conducted by THP M&E staff and enumerators

Kiruhura Epicenter Outcome Evaluation in Uganda [2013] (here; highlights here); Conducted by THP M&E staff and enumerators

Majete I Baseline Evaluation in Malawi [2012] (here); Conducted by THP M&E staff and enumerators

Outcome Evaluation Pilot Project: Measuring Outcomes of THP’s Epicenter strategy in Malawi and Ghana [2012] (here); Conducted by THP M&E staff and enumerators


External

Evaluating the Invaluable: A Rapid Assessment of Gender Equality and Women’s Empowerment in Bangladesh [2012]; Conducted by the School of International and Public Affairs at Columbia University (here)

Joint Terminal Evaluation Report of Debre Libanos Epicenter in Ethiopia [2012]; Conducted by the Government of Ethiopia in partnership with THP (here)

Evaluation Report: Strengthening the Leadership of Women in Local Democracy – Gram Panchayats in 14 blocks in Rajasthan, India [2012]; Conducted by the UN Democracy Fund (UNDEF) (here)

Participatory Evaluation of THP-Malawi’s HIV/AIDS Program [2011] (here); Conducted by Watipasta Consulting

Qualitative Evaluation of The Hunger Project in Ghana’s Eastern Region [2009] in the Kyeremase and Supriso epicenters; Conducted by the University of Ghana’s Institute of Statistical, Social, and Economic Research (ISSER) (here)

A Change to Believe in: THP Uganda’s Impact [2009] (here); Conducted by senior external consultants

Evaluations on THP’s M&E System

Using Innovative Communications to Improve Monitoring & Evaluation at THP [2013]; Conducted by the School of International and Public Affairs at Columbia University (here)

Evaluating the Invaluable: A Rapid Assessment of Gender Equality and Women’s Empowerment in Bangladesh [2012]; Conducted by the School of International and Public Affairs at Columbia University (here) [starting on page 53]


Descriptions of Studies

Baseline Data Collection | Routine Outcome Evaluations

At the beginning of a new program site, a baseline study is conducted to establish the benchmark from which impact is measured. The baseline provides a basis for comparison in subsequent outcome evaluations. The same data collection tools used in the baseline survey are used in subsequent studies to track progress against those questions over time.

Our policy of conducting baseline studies began in 2012. Since some of our programs were well underway by that time, research was compiled from secondary sources to estimate baseline values for key performance indicators wherever possible. Follow-up studies clearly indicate when a secondary baseline value is used as the comparison point.

Midterm Data Collection | Routine Outcome Evaluations

All The Hunger Project programs include midterm studies to track performance against key indicators at the project’s midpoint. At a minimum, a midterm evaluation is conducted two years before the program site’s expected graduation date, allowing sufficient time to adjust program design and implementation before exiting that community. When funding and resources allow, more than one midterm evaluation is conducted. Multiple midterm evaluations are especially important when the program strategy exceeds eight years, as important changes are likely to occur at various intervals.

Endline Data Collection | Routine Outcome Evaluations

Demonstrating progress from baseline is crucial for graduating program sites and participants. Thus, a final outcome evaluation is required to assess the progress of communities and individuals toward meeting their goals, as well as the key performance metrics identified by The Hunger Project.

External Evaluations

External evaluations are performed by entities outside of the program being evaluated. As a rule, an external evaluation of a project, program, or subprogram is conducted by entities free of control or influence by those responsible for the design and implementation of that project or program.

Ex-Post Evaluations

In some program sites, three to five years after graduation, The Hunger Project conducts a follow-up (ex-post) evaluation to understand the effectiveness and sustainability of its approach. These studies aim to derive lessons and recommendations for improving future program design and planning.