Evaluation

A programme’s evaluation system is designed to assess the impact of the programme in relation to its context. An evaluation yields lessons learnt and recommendations for future programming. In addition to supporting internal learning, the results of an evaluation can be used to account to the programme’s donor.

To achieve this, an evaluation system needs to be in place, which can contain one or more of the following elements:

  1. Baseline study to establish what a programme’s context looks like before interventions start (see section 2.1).
  2. Evaluation
    • Mid-term evaluation to assess the programme’s progress towards its objectives, halfway through the programme, with the intention of adjusting the programme for more impact;
    • End-term evaluation to evaluate the degree to which the programme has achieved its objectives in relation to the baseline study, and what lessons learnt can be drawn from this.
  3. Outcome harvesting to qualitatively verify your programme’s contribution to outcomes observed throughout the programme period.

Programmes can be evaluated by their own organisations or by an independent external expert (or team of experts). An external evaluation is often a donor requirement (as stated in a contractual agreement), but is desirable even when it is not mandatory.

For larger-scale programmes, such as Next Generation and REA, RNW Media contracts independent external evaluators to establish the programme’s impact. For smaller-scale programmes and projects, for example Ideas42, evaluations are conducted internally.

International evaluation standards – DAC

If you have been involved in an evaluation, for example by carrying out an evaluation yourself, or by defining the questions or Terms of Reference for an external evaluation, you might have come across the “OECD Development Assistance Committee (DAC) Evaluation Criteria”.

The DAC Evaluation Criteria are widely used in the development sector and are intended to guide evaluations. These criteria have been defined by the Network on Development Evaluation, a subsidiary body of the OECD, which aims to increase the effectiveness of international development programmes by supporting robust, informed and independent evaluation.

OECD DAC evaluation criteria

The criteria play a normative role. Together they describe the desired attributes of interventions: all interventions should be relevant to the context, coherent with other interventions, achieve their objectives, deliver results in an efficient way, and have positive impacts that last.

There are two principles that guide the use of the criteria:

  • The criteria should be applied thoughtfully to support high quality, useful evaluation. They should be contextualized – understood in the context of the individual evaluation, the intervention being evaluated, and the stakeholders involved.
  • Use of the criteria depends on the purpose of the evaluation. The criteria should not be applied mechanistically. Instead, they should be covered according to the needs of the relevant stakeholders and the context of the evaluation.
  1. RELEVANCE: IS THE INTERVENTION DOING THE RIGHT THINGS?
    • The extent to which the intervention objectives and design respond to beneficiaries’, global, country, and partner/institution needs, policies, and priorities, and continue to do so if circumstances change.
  2. COHERENCE: HOW WELL DOES THE INTERVENTION FIT?
    • The compatibility of the intervention with other interventions in a country, sector or institution.
  3. EFFECTIVENESS: IS THE INTERVENTION ACHIEVING ITS OBJECTIVES?
    • The extent to which the intervention achieved, or is expected to achieve, its objectives, and its results, including any differential results across groups.
  4. EFFICIENCY: HOW WELL ARE RESOURCES BEING USED?
    • The extent to which the intervention delivers, or is likely to deliver, results in an economic and timely way.
  5. IMPACT: WHAT DIFFERENCE DOES THE INTERVENTION MAKE?
    • The extent to which the intervention has generated or is expected to generate significant positive or negative, intended or unintended, higher-level effects.
  6. SUSTAINABILITY: WILL THE BENEFITS LAST?
    • The extent to which the net benefits of the intervention continue or are likely to continue.

If you are interested in more information on these criteria or other resources on evaluation, please see http://www.oecd.org/dac/evaluation/.

An excerpt from the Terms of Reference for an evaluation assignment for RNW Media programmes illustrates how these OECD DAC criteria have been addressed in the evaluation questions.

Terms of reference for RNW Media programme evaluation
RNW Media wishes to conduct a mid-term evaluation for the CV and LM programmes to assess programme performance to date against three OECD DAC criteria: effectiveness, relevance and impact. Moreover, the goal of the evaluation is to capture the lessons learned with the aim of improving the implementation of the programmes.

Effectiveness:
› To what extent have our Theory of Change (ToC) objectives been achieved?
› What effects do our platforms have on the (knowledge and attitudes and behaviour of) platform users and which factors were involved in achieving these?
› To what extent do we contribute to (building an evidence base for) advocacy?
› How do we contribute to an inclusive media landscape in our project countries?

Relevance:
› What is the relevance of the ToCs and results achieved in relation to the evolving context of the respective countries and needs of young people? Do we adapt fast enough to the rapidly changing (digital) environment?
› How can we optimise our approach to achieve increased knowledge, changing attitudes and any other relevant outcomes?

Impact:
› To what extent do we contribute to social change (systems, norms, values)? Which factors contributed to this?
› To what extent have the programmes contributed to young people’s ability to make informed decisions?

Programmatic successes and lessons learned to date:
› Which successes and lessons learned have been encountered by both programmes, if any? Why did they occur?
› What is the feasibility of replicating the programmatic successes and lessons learned in future programmes of RNW Media?