13,000 needless deaths?

Mortality

Having written a widely-quoted systematic review on the (lack of) relationship between risk-adjusted mortality rates and quality of care, I think I can contribute a little to this discussion. Whilst it is clearly wise to monitor one’s mortality rate (crude and adjusted), even adjusted hospital mortality rates mean nothing in isolation.

Firstly we need to understand exactly what the HSMR (hospital standardised mortality ratio) and similar statistics actually mean and how they are calculated. Essentially they are the ratio of “observed” (i.e. actual) deaths to “expected” deaths in the hospital. Someone (the Dr Foster company in this case) has made a prediction, for every hospital, of how many deaths it “should” have had if it had admitted patients with the same rates of disease, the same age profile and so on as the national average. They have then compared the number who actually died against the number they think “should” have died, and this presumably has produced a difference, over the past eight years, of 13,000 deaths across the fourteen hospitals reviewed by Sir Bruce Keogh alone.
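To make the arithmetic concrete, here is a minimal sketch of that calculation. The figures are invented purely for illustration and bear no relation to Dr Foster’s actual model or to any real hospital:

```python
# Illustrative only: invented figures, not Dr Foster's model or any real hospital.
observed_deaths = 1150   # deaths actually recorded at the hospital
expected_deaths = 1000   # deaths the risk-adjustment model says "should" have occurred

hsmr = 100 * observed_deaths / expected_deaths      # 115, i.e. 15% "above expected"
excess_deaths = observed_deaths - expected_deaths   # 150 "excess" deaths

print(f"HSMR = {hsmr:.0f}, excess deaths = {excess_deaths}")
```

The headline figure of 13,000 is essentially that subtraction, summed over fourteen hospitals and eight years; nothing in the arithmetic identifies which patients, if any, died avoidably.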

But we have to be extremely cautious about this figure. Quite apart from the distress it must be causing the loved ones of patients who died in these hospitals (“is Sir Brian Jarman really saying their death could somehow have been prevented if they had been admitted elsewhere?”), the frustration it causes the staff who work there, and the loss of confidence in the NHS among the wider public who may end up being admitted to one of these hospitals, there is a fundamental problem with the figure of 13,000.

It is actually nothing more than a statistical construct based on the fact that, according to Dr Foster’s calculations, 13,000 more patients than average died there. They have no way of pinpointing which 13,000 patients died as a result of poor care, short of going through each patient’s casenotes, which they have not done. Consider the equally nonsensical mirror image: statistically, just as many patients must NOT have died at the “best” HSMR-performing hospitals, since overall the HSMRs average themselves out, yet nobody is going around those hospitals trying to find the people who “ought” to have died but didn’t and asking why.

The Keogh review also (in my humble opinion) makes the fundamental error of identifying certain hospitals believed to have problems with quality of care on the basis of their HSMRs (a false assumption), then going in with a fine-tooth comb and, unsurprisingly, finding that mistakes are happening. Logically therefore (according to the world of Dr Foster) this proves HSMRs “work” in correctly identifying the worst hospitals – except of course it doesn’t, unless equally searching investigations had been undertaken in every other hospital, which they weren’t. So a few hospitals are being hyped up and labelled dangerous with no consideration of whether the extent of their quality problems is any worse than in the unvisited hospitals. In fact the evidence suggests that if you apply a poor screening test (HSMR) to identify poor quality hospitals you will actually end up missing most hospitals with genuine problems.

In their defence some of the hospitals under review by Sir Bruce Keogh have pointed out that their most recent mortality ratios are actually now within normal limits. But this forces a rather uncomfortable question for those portraying mortality rates as the way to identify bad hospitals: if their mortality rates are normal at the moment and Sir Bruce’s team are at the same time identifying dangerous problems in them, what does that tell us about the usefulness of the HSMR as a tool for identifying poor quality hospitals? In how many low HSMR hospitals are there quality of care problems?

This leads us on to a set of rather more fundamental problems: accurately measuring either quality of care or hospital mortality is actually close to impossible. And to jump to the apparently intuitive conclusion that high mortality hospitals must have worse care than low mortality hospitals ignores a fundamental observation: that hospital mortality is hardly, if at all, related to quality of care.

I was recently involved in examining a hospital not on the Keogh list but whose “high” Hospital Standardised Mortality Ratio raised question marks even before it strangely plummeted, prompting various conspiracy theories in the media about fudged figures. Having also been involved in palliative care commissioning, I was able to pinpoint that the new hospice the PCT opened reached full capacity at exactly the time the mortality ratio started falling. But from what I have seen of the Keogh review material on the Department of Health website, precious little attention is given to the effect of factors outside the hospital’s control but within the wider health economy. If you haven’t got any hospice beds locally, if the GP out-of-hours service has disintegrated, if there are no district nurses at weekends and social services takes the phone off the hook at 5pm on Friday afternoon, if your A&E is bulging at the seams and the medical wards are full of people who can’t go home because there aren’t any nursing home placements available, then of course frail elderly or terminally ill patients will find themselves being admitted to hospital and of course the hospital’s mortality ratio will be high. That the factors driving the high ratio are largely outside the hospital’s control seems of no concern to those busily slating hospitals with higher than expected mortality ratios as though it were somehow their fault.

In fact it is invariably premature to jump to the conclusion that a high mortality hospital has a poor standard of care. It is essential to realise that hospital mortality statistics are a consequence of a complex mix of factors including:

  • case mix (how severely ill the patients admitted to hospital are)
  • lifestyle choices such as smoking and diet
  • disease coding (how accurately the patient’s presenting conditions are categorised into one or more standardised disease categories, to allow comparison between patients with different conditions)
  • quality of care in primary care and community settings e.g. care homes
  • quality of care in hospital
  • availability of end of life care options e.g. local hospices
  • patient choice of where to die
  • chance variations

Therefore a high mortality rate or ratio in itself does not necessarily imply that there is any reason for concern about the quality of clinical care at the hospital; rather, it is correctly described as a “smoke alarm” that should trigger examination of the pertinent factors, including clinical care, to see whether they can explain the outlying mortality statistic. In fact, often the majority of factors that cause a hospital to have a high mortality statistic are outside the direct control or influence of the hospital. Furthermore, within a hospital there may be areas of exceptional care alongside weaker areas, and in theory a department with a high mortality rate caused by poor care could be masked by stronger performance elsewhere.

A recent very detailed review of a thousand deaths in a range of NHS hospitals in England found that on average around 5% had factors in their clinical care that might have contributed to the patient’s death, which as a headline figure is similar to research studies of avoidable hospital death in other industrialised countries. Mathematical modelling suggests that at this level of “preventable death”, avoidable mortality can only explain 8% of the variation in hospital mortality rates, and that only one in every eleven hospitals identified as having a significantly elevated mortality statistic may have problems with quality of care. The majority of hospitals with high Hospital Standardised Mortality Ratios or Summary Hospital-level Mortality Indicators are unlikely to have more quality of care problems than hospitals with normal or low mortality statistics.
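To see why a rare event makes for a weak screening test, here is a back-of-envelope calculation in the same spirit. Every number in it is an invented assumption for illustration – the prevalence, sensitivity and false-positive rate are not figures from the modelling study:

```python
# Back-of-envelope screening arithmetic.  Every figure below is an invented
# assumption for illustration, not a number from the published modelling study.
n_hospitals = 160          # hypothetical number of acute hospitals
prevalence = 0.10          # assume 10% genuinely have serious quality-of-care problems
sensitivity = 0.30         # assume the mortality "alarm" catches 30% of those
false_positive_rate = 0.20 # assume 20% of sound hospitals still trigger the alarm

true_positives = n_hospitals * prevalence * sensitivity                  # 4.8
false_positives = n_hospitals * (1 - prevalence) * false_positive_rate   # 28.8
ppv = true_positives / (true_positives + false_positives)                # ~0.14
missed = n_hospitals * prevalence * (1 - sensitivity)                    # 11.2 of 16

print(f"Share of flagged hospitals with real problems: {ppv:.0%}")
print(f"Hospitals with real problems that the alarm misses: {missed:.0f} of {n_hospitals * prevalence:.0f}")
```

With assumptions in this range, most flagged hospitals are false alarms and most hospitals with genuine problems are never flagged – which is exactly the point being made above.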

Two research papers illustrate the difficulty of linking poor quality care to high mortality rates.

Firstly, in the systematic review I referred to earlier, covering all studies that attempted to measure both quality of care and mortality rates (after adjustment for patient case-mix) across several hospitals, it was evident that there was a very tenuous link, if any, between quality of care and hospital mortality. For every study that showed that higher mortality rate hospitals delivered poorer care, another found either no association or even a negative association (that is, the better quality hospitals had higher mortality rates).

Secondly, a unique experiment in Massachusetts in 2008 cast grave doubt on the ability of commercially calculated hospital mortality statistics to discriminate between “good” and “bad” hospitals. The researchers provided a complete dataset of all admissions over a period of several years for 83 hospitals and invited four companies, including the British hospital mortality statistics company Dr Foster, to try to identify the “highest” and “lowest” mortality outlier hospitals. Twelve of the 28 hospitals identified by one company as “significantly worse than expected” were simultaneously, and using the same dataset, identified as performing “significantly better than expected” by the other companies! The researchers duly advised that it was not safe to assume that hospitals could be identified as good or bad on the basis of mortality rates alone.

One might well ask at this point why, after risk adjustment, there are still such variations in hospital mortality. The answer comes back to that disarmingly simple formula for calculating hospital mortality ratios: observed deaths divided by expected deaths.

For observed deaths, you can easily dispute exactly who should be included. Should you count only patients who died during their admission (as Dr Foster does)? But that means patients who died as a result of incompetence yet survived to discharge, only to die at home, would not get picked up. So we have other measures of hospital mortality like the SHMI (now used by the Health and Social Care Information Centre in preference to the HSMR), which also counts patients who died within 30 days of discharge. But that is a bit unfair on a hospital that discharges a patient, say, to a nursing home where the care is so poor that the patient dies soon afterwards. (Alternatively, if you are in Scotland you would calculate mortality rates based on deaths within 30 days of admission, not 30 days of discharge. That changes the number of observed deaths yet again.)
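A small sketch makes the point that the “observed” numerator is itself a counting choice. The records below are invented, and the three rules are simplified versions of the in-hospital, 30-days-from-discharge and 30-days-from-admission definitions described above:

```python
from datetime import date

# Invented illustrative records: (admission date, discharge date, date of death or None).
admissions = [
    (date(2013, 1, 1), date(2013, 1, 10), date(2013, 1, 8)),   # died in hospital
    (date(2013, 1, 1), date(2013, 1, 5),  date(2013, 1, 20)),  # died at home, 15 days after discharge
    (date(2013, 1, 1), date(2013, 2, 20), date(2013, 2, 18)),  # died in hospital, 48 days after admission
    (date(2013, 1, 1), date(2013, 1, 3),  None),               # survived
]

# Simplified versions of the three counting rules described above.
in_hospital        = sum(1 for adm, dis, died in admissions if died and died <= dis)
within_30_of_disch = sum(1 for adm, dis, died in admissions if died and (died - dis).days <= 30)
within_30_of_adm   = sum(1 for adm, dis, died in admissions if died and (died - adm).days <= 30)

print(in_hospital, within_30_of_disch, within_30_of_adm)   # 2, 3, 2
```

Three definitions, three different counts – and even where the totals happen to coincide, they are not counting the same patients.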

For expected deaths you are reliant on Dr Foster’s formula, or the HSCIC formula for the SHMI, or some other company’s formula, and they all take different factors into account when predicting how many patients “should” have died in hospital. So for instance, when the SHMI was under development at Sheffield University, the researchers experimented with whether to take deprivation into account, or whether to model emergency admissions separately from elective admissions. They tried several different formulae, and unsurprisingly the position of hospitals in the “league tables” this generated varied from one formula to another. Dr Foster have of course settled on one particular formula, which they use to claim that 13,000 people died avoidably at the 14 hospitals reviewed by Keogh, whilst another formula would find a totally different number of “avoidable deaths”.
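As a sketch of why the denominator is model-dependent, here are two toy risk formulae applied to the same patients. The variables and coefficients are invented; real HSMR and SHMI models are logistic regressions fitted across many diagnosis groups, and nothing below is intended to reproduce them:

```python
# Toy illustration of model dependence.  The variables and coefficients are invented;
# real HSMR/SHMI models are fitted logistic regressions over many diagnosis groups.
patients = [
    {"age": 85, "emergency": True,  "deprived": True},
    {"age": 60, "emergency": False, "deprived": False},
    {"age": 78, "emergency": True,  "deprived": False},
]

def risk_model_a(p):
    # toy formula that ignores deprivation
    return 0.02 + 0.004 * (p["age"] - 60) + (0.05 if p["emergency"] else 0.0)

def risk_model_b(p):
    # toy formula that also weights deprivation
    return 0.02 + 0.003 * (p["age"] - 60) + (0.04 if p["emergency"] else 0.0) \
           + (0.03 if p["deprived"] else 0.0)

expected_a = sum(risk_model_a(p) for p in patients)
expected_b = sum(risk_model_b(p) for p in patients)

# Same hospital, same patients - two different "expected deaths" denominators.
print(round(expected_a, 3), round(expected_b, 3))   # 0.332 vs 0.299
```

Same hospital, same patients, two different “expected deaths” denominators – and therefore two different mortality ratios and two different league-table positions.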

A crucial factor in all this is the accuracy of coding, since the way patients are coded affects their calculated probability of death. Patients in general do not conform to neat “tick boxes” with the exact condition they were admitted with written on their forehead. Let me give a simple example. Imagine three identical hospitals which each admit, in quick succession, ten identical frail old people in a state of confusion, with smelly urine and a fever. Each hospital treats the ten identical patients identically, and in each hospital five of the ten die. The first hospital codes them as being confused, which really shouldn’t lead to anyone dying, so for five of ten patients to die “from confusion” suggests a scandalously high mortality rate. The second codes them as having urinary tract infections, which is correct – and if five of them died, that’s probably what you would expect. The third recognises that they had a fever on admission, which is a sign of potential sepsis, so codes them as having sepsis, and when five of those patients survive, the hospital’s apparently low mortality rate will seem miraculous. The important point is that all thirty patients received identical care and each hospital’s coding was defensible, yet the three hospitals have ended up with radically different mortality ratios. One might then go further and say that in the first hospital several patients died unnecessarily, since its adjusted mortality rate for confused patients was so bad (five out of ten, when you might expect only one out of ten confused patients to die during an admission), but of course you will now realise that is a quite ridiculous leap of deduction to make. Yet it is exactly the same deductive leap that has been applied to arrive at the figure of 13,000 deaths at the Keogh review hospitals.
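Here is that example reduced to numbers. The risks for “confusion” and “urinary tract infection” echo the figures in the paragraph above; the figure for “sepsis” is an assumption added purely to complete the illustration, not a real model coefficient:

```python
# The three-hospital coding example in numbers.  The "confusion" and "urinary tract
# infection" risks echo the paragraph above; the "sepsis" risk is an assumption added
# to complete the illustration, not a real model coefficient.
observed_deaths = 5   # in every hospital, 5 of the 10 identical patients die

expected_risk_per_patient = {
    "confusion":               0.1,  # ~1 in 10 expected to die
    "urinary tract infection": 0.5,  # ~5 in 10 expected to die
    "sepsis":                  0.8,  # assumed ~8 in 10 expected to die
}

for code, risk in expected_risk_per_patient.items():
    expected_deaths = 10 * risk
    hsmr = 100 * observed_deaths / expected_deaths
    print(f"coded as {code}: expected deaths = {expected_deaths:.0f}, HSMR = {hsmr:.0f}")

# Identical patients, identical care, defensible coding - yet HSMRs of about 500, 100 and 62.
```

Identical patients, identical care, defensible coding in each case – and three radically different mortality ratios.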

If measuring hospital mortality is difficult, measuring quality of care is even harder. The best we can go on is a detailed review of the casenotes of patients who died, carried out by an impartial team at least partly independent of the hospital, and “mortality review groups” are an important safety measure being introduced in many hospitals. The sorts of things patients complain about (for example delays in being offered a cup of tea) may not actually represent safety issues, whilst genuine safety concerns can go unrecognised, especially if one is moved to complacency by the fact that one’s HSMR is low or normal.

A further issue is palliative care. Patients who are expected to die and are given a palliative care code are treated as essentially “inevitable” deaths by Dr Foster’s calculations, whereas patients not given a palliative care code are somehow expected to survive, and if they die they count in the observed deaths category. There has been plenty of criticism levelled against certain hospitals that use palliative care codes liberally, though if those same hospitals lack community facilities for dying patients then it is inevitable that many more patients will end up dying in the hospital instead. But even defining who should be given a palliative care code is far from straightforward. Should it only be used for patients who have been seen by a palliative care consultant? (In that case, if a hospital starts to employ a consultant in palliative medicine, it will suddenly be able to increase the number of patients given palliative care codes and its mortality ratio will appear to plummet.) What about a patient who is well known to the palliative care team in the community and is admitted to hospital in a dying state but dies before they can be seen by the hospital palliative care consultant? (I know of a recent case of exactly that – technically the patient should not have been coded as palliative, and technically Dr Foster should have considered their death potentially avoidable!) What if the patient’s condition is discussed on the telephone by the patient’s specialist with a palliative medicine consultant – does that count or not? It is all very ambiguous, yet it has a dramatic effect on HSMRs, and these are being used as screening tools to “identify” poor quality hospitals.

Deprivation is another interesting problem, since patients who live in less affluent areas may be more likely to lead unhealthy lifestyles, such as smoking, and that in turn increases their risk of death. In fact the SHMI doesn’t try to account for deprivation, whilst the HSMR does – but instead of using the Index of Multiple Deprivation, it uses the Carstairs index to allocate a deprivation risk to each patient, which bases deprivation on male unemployment, overcrowding, low social class and lack of car ownership. If it is a puzzle to people that hospitals in central London have in general had well below average mortality ratios, it is worth considering that even many affluent people in central London do not own cars. That raises an interesting question: when these hospitals are calculated to have exceptionally low mortality rates, is it because they are particularly “good” quality hospitals, or is it because their patients tend to come into hospital on the Tube?

There are three key mistakes that can be made when reviewing hospital mortality statistics:

  1. attempting to explain away a high HSMR or SHMI purely through data or coding arguments, on the grounds that these are imperfect indicators of quality of care – for example, changes in coding practice can bring the ratio up or down without any change in the clinical quality of care
  2. being falsely reassured by a low or “within expected limits” mortality statistic and not noticing clinical or organisational failings that are causing harm to patients
  3. focussing attention only on the hospital when attempting to understand hospital mortality statistics, and failing to consider the impact of external factors

In the face of the “13,000 avoidable deaths scandal” being hyped up on the back of the Keogh report, it would be advisable to consider which of these three mistakes have been made before jumping to the conclusion that the NHS has systematic failings in a small number of hospitals.