- A report to the Care Quality Commission's (CQC) Board in September 2015 noted that, of 8,170 services inspected, 34% were rated as requires improvement and 7% as inadequate.
- There is a renewed focus on inspection visits, backed up by intelligent monitoring, as the primary mechanism for verifying the quality and safety of registered services.
- If a focused inspection is carried out more than six months after a comprehensive inspection then a provider's ratings cannot be improved.
- Ratings can be increased (at key question and overall location levels) if a focused inspection takes place within six months of a comprehensive inspection.
- A rating review involves examining whether CQC has followed its published guidance in awarding ratings; it does not involve reconsidering the evidence unless there has been a defect in the process.
- Challenges to the judgements and the evidence they are based on should be made during the factual accuracy stage that follows the issuing of a draft inspection report to a provider.
There can be no doubting the extent of CQC's ambition to transform the way it works following its early, dysfunctional years. A new rating system for adult social care was introduced from October 2014 and new regulations (the fundamental standards) followed in April 2015. CQC describes the new inspection methodology as more robust, and it is apparent that more services are deemed to be failing. A report to CQC's Board in September 2015 noted that, of 8,170 services inspected, 34% were rated as requires improvement and 7% as inadequate. More enforcement action is being taken against providers compared with previous years. At the same time, it is apparent that there are a number of significant weaknesses within the current regulatory system that threaten its effectiveness and integrity.
CQC's initial view that it could somehow rely on 'intelligence' rather than inspections to regulate the sector has now been decisively rejected. There is now a renewed focus on inspection visits as the primary mechanism for verifying the quality and safety of registered services, backed up by intelligence. However, it is clear that the new inspection model is more onerous in terms of inspector time and that serious backlogs are developing. There have been major delays in publishing inspection reports. Equally – if not more – concerning is the failure to meet inspection targets. All 25,500 adult social care locations are due to be inspected by the end of September 2016. However, CQC's projection as at September 2015 was that some 7,742 inspections would be completed by the end of March 2016, compared with a target of 13,194 for 2015–16. While CQC is making efforts to catch up, it is highly unlikely that the whole sector will be rated by the end of September 2016. Indeed, CQC reported to the Board in September 2015 that 'if productivity does not improve soon, this will not be recoverable.'
Providers that were initially rated as 'requires improvement' or 'inadequate' are struggling to move above these ratings in subsequent inspections in spite of significant improvements. Why is this?
One reason for CQC's refusal to increase a rating to 'good' is that it says it needs to see consistency of practice over time. However, it does not state what it considers to be an appropriate amount of time for a provider to demonstrate consistency. This lack of clarity leaves the 'consistency' point open to wide interpretation by CQC inspectors.
In its internal guidance for inspectors, CQC states that if a focused inspection is carried out more than six months after a comprehensive inspection then a provider's ratings cannot be improved upon. The reasoning for this is that CQC 'will not be able to make judgements about all aspects of the service at a reasonably similar time, which [CQC] must be able to do in order to award an overall rating'. However, CQC states that ratings can be improved (at key question and overall location levels) if a focused inspection takes place within six months of a comprehensive inspection. This in turn implies that there should be a period that begins somewhere between zero and six months where providers can be considered to display consistency of practice to enable a rating or ratings to be improved. However, the same internal guidance states, '...it is doubtful that a service would be able to achieve the consistency characteristics required to be "good" within a few weeks or months of being judged "requires improvement"'.
The whole consistency of practice issue is most unsatisfactory and potentially highly unfair. What CQC should do is rate a service as the inspectors find it on the date of the inspection, noting how long it has been since the previous inspection. Instead, what we have is a rating system that continues to rate services that have clearly improved to a 'good' standard as 'inadequate' or 'requires improvement'. This is misleading both to the wider public and to statutory commissioners of services.
A rating review involves examining whether CQC has followed its published guidance in awarding ratings. It is emphasised in the CQC guidance that challenges to the judgements and the evidence these are based on should be made during the factual accuracy check that follows the issuing of a draft inspection report to a provider. A rating review does not involve reconsidering the evidence unless there has been a defect in the process. This may be difficult for a provider to establish given that CQC will not have disclosed: (1) the underlying inspection paperwork; or (2) the internal quality assurance paperwork relating to the production of the report.
As far as providers are concerned, another downside of the rating review process is that a request for a rating review can only be made after the inspection report is published. By then any damage from an unfair rating will have been done. Having a rating changed several months later is unlikely to gain much public attention or provide redress for lost reputation.
Even if a provider submits a request for a rating review it does not mean that an independent reviewer will automatically consider it. It is first 'triaged' by a member of the rating review team within CQC. Many requests are being rejected on the grounds that procedural defects have not been identified. Very often the problem is not that an inspector has failed to follow the CQC guidance but rather that they have applied it too rigidly. In this way, quality services can easily find themselves in the 'requires improvement' bracket. Such cases will not be reopened at the rating review stage even though the services may have been treated unfairly on inspection.
On request, the rating review team will be given the underlying records by the inspection team – the same records the provider will not have seen. There will no doubt also be a discussion with the CQC inspector. A cursory review of these records may suggest that CQC has followed due process at the various stages of the inspection and rating process. This would lead the rating review team to conclude that there is no basis for referring the matter to an independent reviewer. The reality may be very different but if the provider does not have an opportunity to review the internal paperwork and offer comments to the rating review team (both verbally and in writing), there is a risk that the team will simply accept what it is told by the inspection team.
At Ridouts, we are also concerned by the delays in getting responses from the ratings review team. Requests for indicative timescales are ignored and months pass without any updates. The lack of responsiveness is all the more surprising given that the team does not appear to be dealing with that many requests for rating reviews.
The above weaknesses in the rating review process can lead to injustices, particularly where a change to underlying ratings would take a service out of the special measures regime and/or lead to it being treated differently on reinspection. Poor ratings from one inspection also act as a brake on CQC improving the provider's ratings at the next inspection, given CQC's preoccupation with consistency of practice (sometimes called sustainability).
Given the limitations of the rating review process, it is imperative that providers challenge aspects of the inspection process that they are not happy with, both during the inspection and immediately after it. Feedback should be recorded and agreed with the inspectors. Providers should also consider making an immediate request for the underlying inspection notes to help them prepare for the factual accuracy stage. If a provider leaves the request for the inspectors' notes until the draft report is received, it is unlikely that they will be disclosed within the 10 working days that CQC gives providers to return factual accuracy comments.
CQC should be reviewing the whole rating review system. It has indicated that it will do this once all providers have been rated. However, that date has been put back once before and the ratings are unlikely to be concluded by September 2016 due to the current backlog. Until then, CQC should change its reports to make it clear what the compliance position is in the service – that is, are there any breaches of the fundamental standards? If there are no breaches, this should be highlighted for all to see, even if the rating does not change because of the consistency of practice argument noted above. Ultimately, it is irrational to rate a service as inadequate or requires improvement where there are no breaches of regulations. CQC is vulnerable to legal challenge in such circumstances.
Inevitably, changes brought about to solve old problems lead to new ones. However, CQC should be refining its inspection and reporting methodology to iron out some of the more obvious wrinkles. Until that happens, injustices will persist and the image and reputation of CQC will be diminished across the sector.
About the author