The Strategy Unit is increasingly commissioned to undertake evaluations on the premise that the findings will inform programme delivery. As our evaluation blog series continues, Abeda Mulla sets out why this intent to share lessons learnt from evaluations is realised in frustratingly few instances, and outlines our collective responsibility to share findings more widely.
Why does the Evaluation Team exist? No, I am not having a professional existential crisis, but mulling over why the demand for evaluations has grown over the last decade (certainly in the time since I joined the Strategy Unit).
My take is that many programme leads and managers (our clients) recognise the benefit of a robust and objective evaluation. They see that evaluation can underpin their responsibility to maintain a cycle of iterative programme delivery and learning. They also see that lessons learnt from one evaluation can be adopted and applied to future programmes. And they see that, as specialist evaluators who are also NHS employees, the Strategy Unit’s Evaluation Team can meet their needs.
And yet, whilst learning depends on sharing, our website does not reflect a significant proportion of the evaluation work we do. This is because many evaluation clients support sharing in principle at the outset of an evaluation, but are not always able to follow through. Reasons for not sharing include:
- The evaluation findings are deemed to be unhelpful because they cannot demonstrate impact (see Mike Woodall’s blog in this series)
- The findings are perceived to be negative. Sharing them is then viewed as a risk to delivery of the programme, or as calling into question the investment made in design and set-up costs
- The recommendations (which are based on the findings) are incompatible with how the programme leads or wider stakeholders of the intervention wish to proceed (see David Callaghan’s blog in this series)
- The organisational sign-off process for publication (independent of the medium) is onerous and subject to lengthy delay.
As evaluators, we naturally find it disappointing when we cannot close the loop on our evaluations – when we are not allowed to share and discuss our work. It means we are doomed to repeat ourselves in our outputs, as we are unable to reference and share our own work.
More disappointing still is when we are asked to change findings or recommendations as a prerequisite to publication (we always say no to this request, unless a second/third/fourth look at the data and analysis can justify an amendment). On occasion it is even more frustrating: we sometimes come across our evaluation findings being shared externally, without credit, after they have been cherry-picked for the more positive results.
Strip all of this away and some fundamentals remain. As public servants in the NHS, we have an ethical duty to ensure that evaluation findings are shared transparently. Public funding necessitates accountability, and suppressing findings ultimately hinders the progress of publicly funded initiatives.
Collectively then, as clients and evaluators, we have a responsibility to do better.
But how?
Improving the dissemination of evaluation findings requires collaboration between clients and evaluators from the outset, building a shared understanding of why sharing the outputs matters. My suggestions are therefore very practical:
- Be brave – If programmes commit to sharing evaluation outputs including no impact/negative findings from the outset, then we will enthusiastically support you to develop and deliver a dissemination plan for the evaluation.
- Manage expectations – If programmes can identify the sensitivities of sharing outputs, we can work with you to prepare evaluation stakeholders for difficult messages as soon as they emerge.
- Engage early – If programmes can commit to, and honour, sharing outputs in a timely way with the professionals and public who have participated in the evaluation (ahead of wider public dissemination), then evaluators can use this commitment to canvass for improved engagement with the evaluation’s activities.
- Timely sign-off – If programmes can inform themselves of their organisational process for sign-off and involve Comms colleagues in evaluation discussions at an early stage, then we can match that support by ensuring outputs meet requirements including style, tone, audience and timelines for sign-off.
By collectively committing to these practical steps, we can build stronger evaluation practices, foster learning across programmes, and avoid duplication of work.