Why have we become obsessed with meaningful measurement?

Jeremy Spafford, Director of the Old Fire Station, spoke at a Meaningful Measurement Inquiry:


I have spent a lot of time raising funds and, over time, have become frustrated by the gap between genuine learning and how we measure and report on impact. Over the years I've felt under pressure to overclaim when applying for funds and then to overclaim again when reporting on activity.

I've also found myself entering numbers into forms that I don't believe. Not because they are lies, but because it is not always clear whether we are all using the same definitions when collecting them: how do we define homeless? What if we don't describe the people we work with as service users? How much does someone need to participate to be counted as a participant? And even when I do believe a number, I often don't think it tells us anything worth knowing, yet it is submitted as if it does. For example: we might have committed to working with 100 homeless people and can report, accurately, that we worked with 105, which looks like success. But I know that 80 of them touched the project only superficially while 25 had a transformational experience, and that all of them had a completely different experience of homelessness and of the project. The funder uses this report as a measure of success, but it tells them little about what actually happened.

This is not just my experience. Through this inquiry we've found it to be a common one, and providers often, though not always fairly, locate the problem with funders.

We arrived at storytelling as a methodology because evaluation had become an annoying afterthought for us: feedback forms, lots of counting, and a great deal of wasted time. Our first attempt, in 2017, produced so much learning about our impact that we rewrote our mission statement, and we are now supporting organizations across the country to use the approach.

To be clear... 

We do not think storytelling or Signal is the answer to everything.

We do think it is important to count the right things in the right way. We are not opposed to quantitative measures.   

We are acutely aware that it is relatively easy for us, as a smallish charity, to explore these ideas, because we are not wrestling with the hugely complex process of managing public services and, let's face it, if we don't get an exhibition up on time, nobody dies.

We do not have the answer, but we do think it is important to ask the question. 

Provocation 

A few suggestions: 

Learning should not be an afterthought. It should be central to the work. Service providers should be required by commissioners and funders to explain how they will learn and share that learning. To facilitate this, commissioners and funders should not impose reporting requirements that distract providers from doing the work and learning from it.

Genuine learning has to include the possibility of failure. Grantees need to know they won’t be punished for failing to hit targets if they can show what has been learned. 

The experiences of those most closely involved should be centred, to maximize learning for everyone. This means adapting processes away from serving the convenience of the funder and towards hearing the experiences of those most affected.

Those responsible for distributing funds on behalf of others (central government, philanthropists, Foundation Trustees) should protect grantees from unhelpful processes imposed from above instead of simply passing them on. 

This is complex and there is no quick fix. As we all know, culture eats strategy for breakfast. 

We need a shift in culture that encourages everyone to be brave about pushing back if they are being asked to answer meaningless questions or collect pointless data. 

Everybody accepts that all activity has to have due regard to safeguarding and financial competence (for example). Can we get to a point where everyone accepts that all activity should build in reflection and learning?