Measuring social returns: how much is enough?


Few seem to doubt that companies are better at making profit than charities are at having impact. Personally, having started my career advising multinationals, I question the assumption that business always knows best. Business is not perfectly efficient, and failing companies often secure repeat rounds of investment. The key difference I see is transparency. Sooner or later it becomes absolutely clear whether a business has succeeded in making a profit, and if not, it eventually runs out of money and can’t raise more. There has not previously been an equivalent mechanism for charities or social enterprises.

It is tempting for social investors to see it as part of their role to bring tools from the business world to improve this, ideally even putting reporting of social returns on a par with financial returns. I will put to one side the perils of boiling down complex outcomes into one figure. Even before that, the broader implication is that we should measure every drop of impact, just as we account for every penny of profit. Taken too far, this can be expensive, burdensome for staff and intrusive for users. Are we faced with an impossible dichotomy between ‘just trust us’ and ‘measure everything’?

For me the breakthrough was reading the Realising Ambition programme insights. Realising Ambition is a Big Lottery Fund programme to replicate 25 evidence-based initiatives aimed at preventing children and young people from entering the criminal justice system. Unusually, it included sufficient funding for deeper evaluation and learning. I had two key takeaways which I hope could be helpful for social investors.

Investment in learning should be proportionate to what we intend to do with it

The first, which may seem blindingly obvious, is that the level of data measurement should be driven by what you intend to do with the data once you’ve got it. Good doesn’t mean a randomised controlled trial (RCT) every time. There are three reasons you might be measuring something:

i. Learning: for all involved, to add to the evidence base around the effectiveness of a methodology in principle

ii. Managing: for providers, to identify and investigate when implementation is achieving worse – or better – than expected outcomes in practice

iii. Holding to account: for investors and funders, to reallocate resources to the providers best able to deliver positive outcomes in order to maximise their social return.

Learning is hugely important, and there is far too little good quality evaluation. However, this needs to be proportionate both to the existing evidence base and to the potential level of replication. The standard of evidence matters hugely if resources are diverted at a policy level to roll out a ‘proven’ methodology, only to find that the results of the original trial were distorted by selection bias. For social investors, this may be an important priority for social impact bonds, particularly if they use innovative methodologies which could go on to be widely adopted. This is the one area that may justify RCTs, as in our Project Crewe pilot of intensive, solution-focused support for families of children in need. But randomisation matters much less for methodologies like Family Focused Therapy, which have a strong evidence base, have already been tested for bias, and come with a good sense of expected outcomes. In this case, a proportionate approach might be simply to compare outcomes for users with different characteristics within the same service. If this suggests a major breakthrough in our understanding which could drive future decisions, we can then plan an RCT.

It’s OK for learning not to be on the agenda every time

It is equally important to recognise that learning should not always be a priority. There is just no point eating into limited funds to learn about a methodology in small-scale delivery that is unlikely to be replicated. For social investors, this could be relevant to different extents when they back social enterprises. For example, in our social enterprise garage Auto22 we are very proud that young people who were at risk of being ‘NEET’ (not in employment, education or training) have been able to move into good careers. However, we haven’t used control groups, and we have lost contact with a few of the young people. We have sufficient confidence in our impact because we know how the young people were selected, we know what almost all of them are doing, and there is an existing body of evidence about the impact of being NEET early in life. We may be able to use qualitative learnings to increase social returns, e.g. on effective placement support. But unless we want to roll this out across the country, formal control groups feel like an expensive distraction.

There should be at least as much focus on performance improvement

The Realising Ambition team argues that there should usually be less focus on learning (‘proving’) than on managing (‘improving’). Even if we build a strong evidence base for a methodology, replicating the same outcomes can be tremendously difficult. Perhaps it wasn’t quite the same kind of cohort, or perhaps the ‘core’ elements that made the original intervention tick were wrongly identified or weren’t replicated with fidelity. But more fundamentally, the provider’s quality of implementation matters at least as much as the design. Can we definitely say that social enterprise garages work, or did we just get lucky recruiting inspirational staff? This is not a minor point: for example, in comparisons of psychotherapy treatments, the quality of the therapist makes eight times more difference than the treatment used. In a different context, ‘proven’ interventions in Kenyan education no longer worked when rolled out in the public system.

This needs a common-sense, context-specific approach

For an investor managing the social impact of their investments, this doesn’t need the same standard of evidence as is required to prove a methodology. It is enough to track outcomes either at an aggregate level or for a sample selected without obvious bias; to have a reasonable idea of what outcomes to expect; and to take a closer look where outcomes are out of whack with expectations or where there is variation within a service. This could lead to extra support for struggling teams, initiatives to spread behaviours of high performing teams – or perhaps it could turn out to be random. Rough and ready data simply shows us where to target our resources. Using Auto22 as an example again, many of the young people were referred from our Study Programme, so a reasonable starting point might be: what do other people on the Study Programme end up achieving? Were the young people placed in Auto22 facing more or fewer barriers than the rest of that cohort? How many young people from Jamie’s Fifteen end up in employment? But again, context matters: in this case, as it happens, almost every young person placed at Auto22 has achieved their desired employment outcome, so there is a little less to gain from these comparisons.

The culture of the investee

My second takeaway is much briefer: the single most important factor is the culture of the investee. We do need organisations to add to the evidence base for – and against – methodologies. But even more, we need organisations focused on genuinely trying to improve their impact every day, using whatever data they can get hold of. Social investors can do a huge amount to influence this, simply by asking the right questions, and continuing to ask them.

Conclusion

Coming back to where we started, social investors can indeed bring tools from business to change the sector for the better. Commercial managers rarely disaggregate profit performance to the nth degree. They will usually have a good idea of how well equivalent products are selling elsewhere, and if their sales are performing much below or above the market, they will look into why. In business as well, good managers know that teams succeed because of culture, not just product design. If we are going to learn from business, let’s be as pragmatic about increasing impact as they are about making profit.

This article first appeared in Philanthropy Impact Magazine issue 13.
