Britain is set for another dodgy data scandal. In last Friday’s Reality Check newsletter I picked up on reporting from the Times which called into question the income data used to calculate Britain’s child poverty metrics. Now, the BBC reports that those figures are going to be revised. The result: half a million children who the government previously claimed were in poverty were in fact not.
This is obviously good news. But let’s be clear: it’s also a total and utter scandal.
The way we measure child poverty has always been a bit of nonsense. It uses ‘relative poverty’, which sets the breadline at 60 per cent of median income. The problems with using relative income measures are obvious. If median earnings were to drop slightly, then hundreds of thousands would apparently be lifted out of poverty despite their material circumstances not having changed a jot.
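To make the quirk concrete, here is a minimal sketch with invented weekly household incomes (nothing to do with the real FRS sample): the breadline is 60 per cent of the median, so when incomes in the middle fall, a household at the bottom is ‘lifted out’ of poverty without its circumstances changing at all.

```python
# Illustrative only: made-up weekly household incomes, not FRS data.
from statistics import median

def poverty_line(incomes):
    # Relative breadline: 60 per cent of median income
    return 0.6 * median(incomes)

def count_in_poverty(incomes):
    line = poverty_line(incomes)
    return sum(1 for x in incomes if x < line)

before = [200, 250, 300, 480, 500, 520, 700, 900]
print(poverty_line(before), count_in_poverty(before))  # 294.0, 2 households below the line

# The middle earners take a pay cut; the poorest households are untouched.
after = [200, 250, 300, 400, 420, 520, 700, 900]
print(poverty_line(after), count_in_poverty(after))    # 246.0, only 1 household below the line

# The same mechanism works in reverse: when middling incomes rise, the line is
# dragged up and measured poverty 'worsens' even though no one is poorer.
```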
But worse still, they bake in stagnation and destroy a nation’s ambition. So long as we use those metrics, policy alarm bells will ring as soon as anyone starts to do well, because the poverty line is pegged to the median and gets dragged up with rising incomes – worsening the statistics even though nobody is actually poorer.
But fine, it’s still an internationally recognised and fairly standard methodology, so use it if you must. However, if you’re going to do so, at least make sure the income figures you feed into it are accurate. In Britain, they have been anything but.
To get those income figures, statisticians use the Family Resources Survey (FRS), which simply asks a sample of around 19,000 families about their income, housing and living conditions. The trouble is that respondents have been under-reporting their benefit income. In the 2023 survey, the BBC points out, households reported £190 billion in income from welfare payments. Yet the DWP actually paid out some £234 billion. In other words, £44 billion of income went unreported.
When the DWP switches to using administrative data next month – i.e. the actual payments it made – measured incomes will rise and so measured poverty is expected to fall.
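As a rough back-of-the-envelope sketch (invented figures, not the DWP’s actual data-linkage method), here is why filling in the under-reported benefit income pushes measured poverty down: the correction is concentrated among low-income households, so even though the median – and with it the breadline – rises slightly, households near the line are lifted above it.

```python
# Hypothetical households: (earnings, benefits reported to the survey, benefits actually paid).
# Figures are invented for illustration; this is not the DWP's methodology.
from statistics import median

households = [
    (0,    90, 150),   # benefit-reliant households under-report the most
    (50,  100, 160),
    (150,  80, 120),
    (400,  20,  30),
    (500,   0,   0),
    (650,   0,   0),
    (800,   0,   0),
]

def poverty_stats(incomes):
    line = 0.6 * median(incomes)                 # relative breadline
    return sum(1 for x in incomes if x < line), line

survey_incomes = [earn + reported for earn, reported, _ in households]
admin_incomes  = [earn + paid for earn, _, paid in households]

print(poverty_stats(survey_incomes))  # (3, 252.0): three households counted as poor
print(poverty_stats(admin_incomes))   # (2, 258.0): the line rises a little, but one
                                      # household still moves above it
```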
Think about what this means. Almost every contentious public spending debate over the last few years has come unstuck because of an impact assessment which somewhere references these poverty statistics. The two-child benefit cap was lifted because of what models said it would do to child poverty; Liz Kendall’s £5 billion cuts to sickness benefits didn’t happen, at least in part, because of poverty metrics. They were cited in the winter fuel debates too. It may even turn out that Tony Blair actually did hit the poverty-reduction targets these metrics infamously suggest he missed.
This story will go largely ignored. It’s about one spreadsheet being replaced with another. But given how so many of our political decisions now seem to be driven by these dodgy metrics, it deserves serious scrutiny. It’s great that we’ll be using more accurate figures going forward. But they should never have been so far off in the first place.