"Entrepreneurial philanthropy for all"
Director General for Regulation, Office for Statistics Regulation
I joined the Board of Pro Bono Economics (PBE) a couple of years ago, with a reasonable background in data (in my day job as head of the Office for Statistics Regulation) and a less strong background in civil society. What struck me was the general pessimism in the civil society community about data. I heard things like “Of course we don’t use data like we should” and “We’re not good at measures of impact”. It’s an echo of the awkward humanities student who says, “I don’t do numbers”.
In this essay I want to unpick that pessimism – not from any great expertise in the world of voluntary organisations, but instead drawing on my role at the Office for Statistics Regulation (OSR). I want to look at data in the sector in the same way as I would for any sector – like adult social care (where we’ve recently published a critical report about the availability of official statistics) or mental health (again, a recent area of focus for the OSR).
My starting point is to distinguish three different aspects of data that are often conflated: data about; data for; and data from.
Let’s look at each of these in turn.
On 28 October 2020, my team at the Office for Statistics Regulation published a report on business demography statistics. These statistics describe the birth and death of companies – how many new companies are formed and where, and how many companies close. They provide a barometer of the vibrancy of the market economy. They are used to identify which sectors are growing; to highlight the sources of productivity, innovation and entrepreneurial activity and to assess the effectiveness of Government policy initiatives.
There are problems with these statistics, to be sure. Yet by comparison to what’s available in the third sector and civil society, even these flawed statistics represent an embarrassment of riches.
For civil society, there are certainly some good sources – to pick just two, the National Council for Voluntary Organisations (NCVO) UK Civil Society Almanac (the Almanac) is a great source to describe the size and shape of the voluntary sector; and 360Giving is developing an excellent resource on grant-making.
But none of the available sources provides as rich and detailed a picture of this sector as we have of the market sector. As a result, the third sector is often invisible in debates about economic policy. As Andy Haldane has said:
“Any discussion of the future role of the third sector needs to start from an understanding of its current contribution to the economy and society. That sounds simple. Yet measuring that contribution in practice is far from easy. Indeed, much of that contribution currently goes unmeasured.”
Why is this? One obvious answer is that it’s harder to define. Indeed, in writing this piece, I’m finding myself drifting imprecisely between the terms ‘charities’, ‘third sector’ and ‘voluntary sector’. Civil society is diffuse, and multiple definitions are available. As the Almanac says:
“The term ‘voluntary sector’ is not widely understood, and neither are alternatives such as ‘third sector’ or ‘social sector’. This is largely because of the sector’s huge diversity and the increasingly blurred boundaries between the public, private and voluntary sectors.”
A second reason is that it is difficult to ascribe value to what the sector does. In most aggregate statistics about the economy, value is defined in money terms: GDP is measured in pounds sterling. For the third sector, this valuation is harder – what a charity does is usually not sold in a market. As a result, there is at present no single overarching framework that can link the inputs, outputs and outcomes of this non-market sector – hence the sector’s relative absence from statistics, like the Office for National Statistics’ (ONS) National Accounts, that depend on a coherent, market-based framework.
I suspect, though, that there is a third, more insidious factor: the perceived importance of the sector. Sure, the sector is hard to define and hard to quantify. But these problems of definition have been addressed elsewhere (e.g. by the Department for Digital, Culture, Media & Sport (DCMS) for “creative industries”, an inherently diffuse and imprecise sector; and by ONS for public sector productivity, inherently hard to quantify).
In our work at the Office for Statistics Regulation we often find that missing data reflects a lack of official policy recognition. Where something is seen to be important, it will be captured in official statistics. This matters: if it’s not measured properly, it tends not to be very visible in policy discussions, whether in considering how Government spending decisions affect society or in understanding how change affects people and organisations.
Our recent report on adult social care found a gap in official statistics, which simply do not reflect the diversity and performance of the social care sector. I wrote that we wouldn’t have parity of policy between social care and the NHS unless we get parity of measurement. The argument may run the same way for civil society: the sector won’t get sufficient weight in policy discussions until we see parity of measurement.
It’s one thing to have better national-level statistics to raise the profile and the perceived importance of the sector. But in a practical, day-to-day sense, equally important is really good access to data for analysis and decisions – not data about the sector, but data for the sector.
There is a lot of interest in growing the sector’s digital skills. For example, this was emphasised in the Office for Civil Society’s Civil Society Strategy and has been supported through the National Lottery’s Digital Fund. This interest seeks to harness digital tools to improve services provided by civil society.
I want to focus on a subset of digital skills – use of datasets, particularly datasets held by the public sector. These datasets should enable civil society organisations to understand more about the problems they’re tackling, the people they’re helping and the impact they’re having. In many areas the most useful datasets are the public sector’s administrative data. As Dan Corry says in another essay in this collection: “there are reams of administrative data locked up within Whitehall departments that charities could use to understand their impact.”
Here, there is a patchy picture. There are some amazing resources available, like the Ministry of Justice’s Justice Data Lab, which looks at the effectiveness of charitable programmes to reduce reoffending.
But there seem to be barriers here. When the Justice Data Lab was initiated, there was an expectation that it would be replicated in other parts of Government, like welfare. But the model hasn’t taken off. Some parts of the public sector are notoriously poor for data access – in particular NHS data in England (though there may be better access for outside researchers to NHS data in other parts of the UK).
The main attempt to create an accessible hub for administrative data has been Administrative Data Research UK, which makes research-ready datasets available for research and analysis.
The Administrative Data Research UK datasets ought to be a treasure trove for third sector organisations. Yet they aren’t. Perhaps because access requires accredited researcher status, applications are biased towards academic institutions. An attraction of the data lab model is that charities need relatively little expertise to benefit from public datasets.
In short, there is only limited access to and use of administrative data resources by civil society. This means there are only limited examples for the sector to build on, and little building of data capability. This isn’t going to be addressed by one data project at a time – it needs a sustained investment of time and resources in building knowledge, partnerships and capability. In the absence of this effort, data for the sector will remain an area of untapped potential.
If I’ve given the impression that the sector is a data wasteland, then that’s a false one. The sector is rich in data. It produces a lot of its own research, often highlighting the lives and experiences of people who are bypassed by policy and academic research, and much of it is excellent.
Take the area of homelessness as an example. Lots of data is available from the sector: the Centre for Homelessness Impact; St Mungo’s, which manages the Combined Homelessness and Information Network (CHAIN); Centrepoint’s Youth Homeless Databank; Homeless Link, which supports the annual rough sleeping snapshot in England; and the Homelessness Monitor produced by Crisis.
The variety of data from the homelessness sector is not unique. It is replicated in many other domains: I could write here about really great work that comes across my desk, like the Femicide Census and Future Care Capital. Yet it is here, on data from civil society, that I have some of my biggest concerns.
Why? First, there is untapped potential to make much better use of the sector’s data. For example, as the 360Giving Strategy argues, grant-making flows are opaque, and for as long as data remains in organisational silos, the insight that comes from bringing datasets together will be lost. In the homelessness sector, the variety of data sources still may not amount to a coherent overall picture of homelessness. When PBE has looked at the data, we have found it does not support a really robust analysis of impact.
Second, while there is a wealth of research published by the sector on the clients and communities it serves, or on the problems it aims to address, there is less consistent reporting of impacts. The work of PBE is often about helping a charity understand and measure its impacts, but there is a lot more work to do here.
Underlying all of this is a further systemic factor. For both research and impact evidence, it seems to me that there are only limited standards for civil society organisations on how best to collect, present and disseminate data. The public sector has the Green Book (for impacts) and the Code of Practice for Statistics. The principles of this Code are what guide the ONS and have underpinned the widespread confidence and support that the ONS’ data has enjoyed during the pandemic. As one former senior civil servant said to me: “whenever I hear something from the ONS, I know that I can trust it”.
At the heart of the Code is the notion of trustworthiness – which means that an organisation can assure the public that their data and analysis represent more than just their own vested interests.
There is no similar overarching framework for data published by civil society. One solution is for civil society organisations to adopt the Code of Practice on a voluntary basis – indeed, this is what some non-Government bodies have started to do, notably the Social Metrics Commission.
In the absence of this common set of principles there is a risk that data from the sector may not be seen as trustworthy. Metrics driven by funding can easily tip over into measuring the wrong thing, simply because it’s quantifiable, skewing priorities and focus. At worst, data from the sector may be seen as reflecting little more than the vested interests of the organisations that publish it.
So how can we address the issues I have identified?
We need to rethink data about, for and from the sector:
We need better measurement of the sector: we will not have parity of esteem unless we get parity of measurement. This is work for the UK’s statistical authorities – the ONS and the statisticians within Government departments. It is important and will have the support of the whole sector.
We need better access to administrative data: this depends on the owners of data opening it up, both individually, through more examples like the Justice Data Lab and through greater access to linked data via Administrative Data Research UK. It won’t be enough just to increase data availability: building partnerships between public bodies, academic researchers and the sector is necessary to address capacity issues.
We need the sector to adopt common standards for the statistics that it publishes: these standards would protect against excess advocacy and build trustworthiness by demonstrating that the data published by an organisation do not simply reflect their interests. Voluntary adoption of the Code of Practice for Statistics would be a good place to start!
If these three things are done, we can lose the awkward self-deprecation (“we don’t do data well”), and instead see a sector that is self-confident and capable in its data use.
I always challenge people who, knowing that I work in statistics, sheepishly confess that they are ‘no good at maths’. It’s not that they are no good at maths; it’s just that they haven’t had enough opportunity to practise. I feel the same way about the civil society pessimism. There’s nothing inherent in the sector that makes it bad at data. The sector just needs more opportunities.
Adult Social Care Statistics: a way forward for Great Britain, Office for Statistics Regulation, March 2020
It’s worth adding that the main source of innovation data, the Community Innovation Survey run by the Department for Business, Energy and Industrial Strategy, focuses on business as a source of innovation – there is no recognition of the role of social innovation.
A Haldane, The Third Sector and the Fourth Industrial Revolution, Pro Bono Economics Annual Lecture, 22 May 2019
E Humpherson, “The Office for Statistics Regulation calls for better adult social care data for England to understand the experience of individuals getting social care”, Office for Statistics Regulation, January 2020
Civil Society Strategy: Building a Future that Works for Everyone, Office for Civil Society, August 2018
D Corry, Structural improvements: ideas for supporting a thriving civil society, Civil Society, Unleashed, December 2020