[Note: This post was updated in August 2011 and can be downloaded from here
(pdf). The updated article contains better statistics and is focussed more on the role of inspection to prevent abuse and neglect, rather than human rights per se. I’ve left the original post up though for interested parties.]
I will be devoting an entire chapter of my thesis to the potential and challenges of a regulatory approach to human rights protection in social care, and there is much to discuss therein: whether the founding principles of the Care Quality Commission (CQC) should have been more explicitly human rights oriented, and whether it should have been able to take up individual complaints, as the Joint Committee on Human Rights had advocated; whether the CQC’s approach to monitoring the deprivation of liberty safeguards is adequate and compliant with the UN Optional Protocol to the Convention Against Torture (see this blog post and this for more discussion); and whether the inspection methodology itself is ideally suited to human rights protection and promotion, and whether the care inspecting arms of the CQC could perhaps learn something from the former Mental Health Act Commissioners’ ‘visiting’ methodology.
In today’s post, however, I will be limiting myself to considering a single issue: the case for a shift to a risk-based approach to inspection, justifying a significant overall reduction in inspection frequency. I will not be discussing in any great detail specific human rights issues, but it is the potential for CQC to act as a human rights inspectorate that leads me to consider this in the first place. I am very well aware that inspection is not a magic wand for human rights issues; it offers no cast iron guarantees against violations. I can tell you, having worked for many care providers, that there are a huge range of things that get tidied away before the CQC’s announced visits. There are behaviours that magically disappear when CQC inspectors are on site. However, inspection can – and does – pick up on the more glaring problems with a care provider, and can also have an important preventive function where there is a strong likelihood of a visit from an inspector.
The creation of the Care Quality Commission
To understand the current challenges facing the CQC, it’s important to understand how it came into being. The CQC came into being in April 2009, created by the Health and Social Care Act 2008
from the merger of its three predecessor bodies: the Commission for Social Care Inspection (CSCI), the Mental Health Act Commission (MHAC), and the Healthcare Commission (HC). The three predecessor organisations had somewhat different roles and functions. For instance, whereas both the CSCI and the HC had inspection and regulatory functions, only the HC – and not the CSCI – could review individual complaints. The MHAC was not a regulator or an ‘inspector’, but it was the most explicitly committed to promoting the rights of service users – specifically, the rights of people detained under the Mental Health Act 1983.
The CQC has assumed the central responsibility for regulation of (some
) social care providers, hospitals, ambulances, community healthcare teams, and new responsibilities for regulation of dental surgeries and GP surgeries. The CQC are doing a bigger job, on a tighter budget, than their predecessors. In their first annual report, they stated:
‘Achieving [the right organisational structure] involved centralising our business services and introducing home working. By the end of October 2009, we had reduced our estates from 23 offices in 13 locations to 8 regional offices and a corporate office in London, and by early 2010 had reduced our workforce from 2,900 to 2,100… As a result of these changes, we delivered the same level of assurance about the safety and quality of care as our three predecessors, but with an annual budget of £164.4 million compared to our predecessors’ combined budget of £240 million in 2005/06.’
The CQC effectively promised to deliver the same level of assurance about care safety and quality as its predecessors in the face of 28% staffing cuts and 32% budget cuts, in addition to increased responsibilities. Part of the way it hoped to do this was by changing the way in which it regulated care.
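Those percentages follow directly from the figures quoted in the annual report; a quick sketch of the arithmetic, using only the numbers cited above:

```python
# Checking the cut percentages implied by the annual report's figures.
staff_before, staff_after = 2900, 2100       # workforce, per the annual report
budget_before, budget_after = 240.0, 164.4   # £ million, 2005/06 vs CQC budget

staff_cut = (staff_before - staff_after) / staff_before      # ~28%
budget_cut = (budget_before - budget_after) / budget_before  # ~32%

print(f"staffing cut: {staff_cut:.1%}, budget cut: {budget_cut:.1%}")
```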
Risk responsive regulation and inspection
In its early years, the CSCI was obliged by regulations (2004)
to inspect a care home twice yearly, but by 2007 new regulations were issued
that reduced this frequency to a minimum of once every three years. This minimum frequency continued into the newly formed CQC, although I am not sure what the current statutory basis for this is; s61 Health and Social Care Act 2008
states that regulations may set out the statutory frequency of inspections, but I can’t find any such regulations (see here
for my search results). In any case, as the charted results of a Freedom of Information request I made
show, the effect of the statutory reduction on inspection frequency in 2007 was striking:
Concern at the drop-off in inspection frequency over the last year was recently reported by Community Care
and the Financial Times
, but this pattern goes back much further. In 2007, when the statutory frequency of inspections was reduced six-fold, the CSCI moved towards ‘risk-based’ assessment. The idea was that services that performed better and had a higher ‘star rating’ would receive fewer visits, freeing up resources to visit care providers where there were greater concerns about the risks to service users. At the time CSCI’s staff expressed concern
that the new framework would not be robust enough (and see also this article). The CQC has since developed this approach into ‘Quality and Risk Profiles’ (QRPs), which it describes as:
…drawing in data from a number of sources which we analyse to identify areas of potential non compliance within a provider. We do this by producing a set of ‘risk estimates’ of non- compliance, one for each of the 16 essential standards… We can then respond with front-line regulatory action such as scheduling inspections or making targeted enquiries. It is important to recognise that QRPs do not produce judgements about the extent of a provider’s compliance. Inspectors make these judgements, and they use the QRPs as a starting point to their enquiries.
So QRPs will potentially trigger an inspection, which is the basis for assessment of compliance; QRPs are not themselves an assessment of compliance. The data that feeds into the QRP is drawn from a variety of sources, including inspections:
- Other regulatory bodies – for example the National Patient Safety Agency.
- NHS Litigation Authority.
- Routine data collections – for example, Hospital Episode Statistics and estates return information collection.
- Other CQC regulatory activity – for example, monitoring of compliance with the regulation on cleanliness and infection control.
- National clinical audit datasets.
- Information from people using services – for example, NHS Choices and feedback from Local Involvement Networks (LINks).
The QRP approach will be one of the key differences between the new CQC and the CSCI. Baroness Young caused some controversy when she characterised the differences:
She said the HCC had taken a “big brain” approach to regulation, using intelligence systems to identify risks in the NHS and target inspections accordingly. The CSCI approach was more about “running the finger around the toilet bowl”. It had thousands of care homes to regulate and fewer statistical tools to identify where problems might lie. So, inevitably, it relied more on regularly visiting establishments.
At the moment the approach is only used in healthcare, and indicators for adult social care are still in development – CQC have kindly supplied me with some information
on what indicators they are currently considering for social care. With the abolition of star ratings, and with the QRP system for social care still under development, I am unsure what ‘risk responsive’ system is currently being used at the CQC. At a general level, then, I want to discuss the dangers of shifting to a QRP approach for triggering inspections, using healthcare as the example.
Undetected non-compliance in the QRP system
Under the new system, every care provider has to issue a statement as to whether or not they are compliant with these standards. The veracity of this self-assessment can only be confirmed by inspection. Inspections are either triggered by a provider falling into a ‘high risk’ category on the basis of its QRP, or a small proportion of services are inspected at random. The question is: how good are QRPs at detecting pockets of non-compliance? I asked the CQC about the research base for this, and they sent me a link to an article by Spiegelhalter and others called ‘Statistical methods for healthcare regulation: rating, screening and surveillance’
. The majority of this paper is highly technical, but the part that is relevant to my question is actually quite readable – and is given on pages 12-13.
Essentially, the HC developed QRPs for providers. They then inspected a set of providers who fell into a ‘higher risk’ group according to the QRP (10% of the total) and a random sample of the other providers who were not selected for risk-based assessment (10% of the total). Inspectors checked the actual compliance of providers with essential standards against their claimed compliance. They found that in the high risk group 26% of those providers declaring themselves compliant were not actually compliant in relation to one or more essential standard; in contrast, non-compliance was found in only 13% of the other providers inspected who were not characterised as high risk by the QRP.
From a statistical perspective, this is something of a triumph – the findings show that the QRPs were measuring some kind of valid risk. As a device for targeting heavily rationed inspection resources, this is extremely useful. But, as I will argue, as a research base for justifying rationing those resources in the first place, it is dangerously flawed. If you have resources that will only permit you to inspect 20% of services in any given year, using a blend of QRPs and random inspections is quite a good way to target those resources. The problem is that it cannot compare favourably to a system that inspects 100% of services in any given year.
The reason for this is evident in the research itself if we unpick it a bit. My calculations suggest that the system used in the research paper picked up a total of 20 non-compliant providers (13.014 in the high risk group; 7.371 in the low risk group). However, 80% of services went uninspected. If we assume (as statistically we should) that the ‘randomly inspected’ sample is representative of the uninspected group, then we should expect 13% of those providers to also be non-compliant – that is, approximately 59 uninspected non-compliant providers. So of an estimated total of 79 non-compliant providers, only 20 – that’s 25% – are being picked up in this new risk-responsive inspection regime. Of course, if you didn’t target your resources at all using the QRP only 13% would be detected, but that’s not the point here. The point is that in comparison with a system where all services are inspected annually, this targeted rationed system compares very poorly indeed.
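The estimate above can be reproduced with a few lines of arithmetic. The non-compliance rates (26% and 13%) are from the study; the group sizes are back-calculated from the detected counts I quote, on the assumption that the random sample was 10% of all providers:

```python
# Sketch of the arithmetic behind the '20 detected of ~79' estimate.
p_high, p_random = 0.26, 0.13              # observed non-compliance rates
found_high, found_random = 13.014, 7.371   # non-compliant providers detected

n_random = found_random / p_random   # implied size of the random sample (~57)
n_total = n_random / 0.10            # random sample was 10% of all providers
n_uninspected = 0.80 * n_total       # 80% of providers were never inspected

detected = found_high + found_random             # ~20 providers found
missed = p_random * n_uninspected                # ~59 expected but unseen
detection_rate = detected / (detected + missed)  # roughly a quarter

print(f"detected: {detected:.0f}, missed: {missed:.0f}, "
      f"detection rate: {detection_rate:.0%}")
```

The detection rate is insensitive to the exact totals: so long as 80% of services go uninspected and the base rate of non-compliance is around 13%, most non-compliant providers will sit in the uninspected pool.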
It is no argument to say that, of course, inspection itself misses some compliance issues. In this research, the very outcome that is supposed to validate the QRP model is non-compliance detected during inspection. If non-compliance is even more widespread than inspection data suggests, that would undermine the QRP model just as much as it would undermine arguments for 100% annual inspections.
It troubles me, reading that paper, to think of what has been built upon it. In interview after interview in the press, the luminaries of the CQC defend themselves against criticisms about falling inspection frequency on the basis that they don’t need as many inspections, because they are targeted by risk. What they are not addressing is the problem of ‘false negatives’ thrown up by the system: in this study, the 13% of services that the QRP did not flag as high risk but where there was still a problem.
One very illuminating example of such a service was in the news recently: Winterbourne View. Following the scandal I wrote to CQC and asked them what the QRP for Winterbourne View was. They gave my request special consideration, as ordinarily they prefer not to disclose scores in case providers try to ‘game’ their data to reduce the QRP. However, in view of the public interest they disclosed the data
. For 14 of the 16 Essential Standards there was either insufficient data or no data at all to generate a risk score. But there was data for two standards: for the ‘Care and welfare of people who use services’ and for the ‘Safety and suitability of the premises’, Winterbourne View was rated as having a ‘low neutral’ risk. It is entirely possible that, had the CQC possessed a complete data set, Winterbourne View would have fallen into the ‘high risk’ category. However, the fact it scored so well on ‘care and welfare’ should ring alarm bells about the quality of the data the system itself is relying upon.
The resources used for social care inspection
QRPs may be a useful tool in a time of extremely constrained resources, but the QRP model cannot be used as an argument to reduce resources in its own right. I asked the CQC some time ago for data on their annual expenditure on social care inspection; the findings shocked me. Since 2005 the amount that has been spent on social care inspection has been falling every year:
But more than this, the proportion of the overall social care regulation expenditure that is spent on inspections has itself fallen:
It’s not just that the care regulators have been getting less money – it’s also that they are spending less of what money they do have on inspection. In recent years that is almost certainly a result of the high cost of introducing the new registration system, but one would have hoped that these set-up costs would have been built into the financial plan for the new Commission.
In the wake of the Winterbourne View scandal, Jo Williams (Chair of the CQC) gave an interesting interview
to The Guardian
newspaper. In it she stated:
“People have said to me: ‘Why aren’t you making a great fuss about more resources?'” she says. “But any claim for additional resources in the current climate, in any climate, has to be based on hard data and evidence about where a shortfall is and what we need to do to address that shortfall.” Might she make a great fuss if the CQC’s current analysis of its resource needs produces such hard evidence? “We might, absolutely.”
But we could equally turn this back on Williams, and her colleagues and predecessor executives and managers at the CQC and CSCI, and ask: where was the evidence base to cut resources for inspection in the first place? Where was the evidence that, with such a high risk of undetected non-compliance, they could promise ‘the same level of assurance about the safety and quality of care’
? Subsequently the CQC did ask the Department of Health for a 10% increase in their budget (see Williams’ evidence to the Health Select Committee
last month). I sincerely hope they get it, but I remain unconvinced it will be enough when one looks at the overall reductions in budget since 2005 and takes into account rising inflation.
What angers me about the CQC is not so much that they are using risk-based profiling to target their very scarce resources; it is the pretence that this system offers a similar level of protection and guarantee of quality as earlier models which relied upon high levels of inspection. Executives have been selling us all a dream, that by the power of statistics they can do more with less. They can’t. Non-compliant services will be missed. Serious failings in services will be missed. They would be missed anyway, on occasion, under any regulatory framework, but they are so much more likely to go undetected when the regulator cannot inspect all services with sufficient frequency and –additionally – cannot respond to individual complaints.
Furthermore, the low likelihood of inspection may make services themselves complacent. The infrequent visits of CQC officers will also make it hard for inspectors to develop the relationships with care providers that they could enjoy under previous systems; their leadership role in care standards, at an interpersonal level, is diminishing within the care sector. And the timing for all of this couldn’t be worse. Williams herself has warned
that as this economic climate ravages the social care sector, standards in care homes will fall. In some respects I feel sorry for Williams and other CQC managers in all this. I hardly think one could be appointed to the upper ranks of the CQC or its predecessor Commissions singing the unpopular tune that good regulation costs money. They are in danger of becoming scapegoats for a political culture that does not want to pay the price of good care; and so others will pay the price of poorly regulated care.