Anyone who has been at the sharp end long enough in a senior enough position has had the experience of explaining what went wrong to grieving relatives.  You can apologise, and explain as honestly as possible what has happened, and sympathise.  It is true but of no comfort that the NHS is vast and highly complex and that most care is good most of the time; millions of transactions take place successfully every year.

Hospital mortality

Recent research conducted for the Department of Health across ten hospitals (selected at random) found that around 5% of deaths in hospital were avoidable: either something was done which led to the death, or something was not done which could have prevented it.  Other studies suggest a slightly higher level, maybe around 8%.  Studies in other (comparable) countries appear to show similar results.

In the wake of the Francis Inquiry, five Hospital Trusts highlighted by the most recent Summary Hospital Mortality Indicator (SHMI) [1] results are to be the subject of some unspecified kind of investigation, even though some have acceptable Hospital Standardised Mortality Ratios (HSMR) [2], while some other Trusts with a long history of unacceptably high HSMRs are not being looked at.

It is against this background that you can reflect on headlines (yet again) that hospitals are killing hundreds of people a year, based entirely on the “evidence” that their published mortality rates are significantly above average.  Terms like avoidable deaths and excess deaths are bandied about, and wholly bogus figures are repeated.

Many people who attend our hospitals are seriously ill and we have to accept that some level of deaths will occur.  The concern is about variation.  If we looked at two large hospitals (A & B) with (apparently) the same volume and case mix we might find that one has 200 more deaths in a year than the other.  According to some elements in our media, and some who gave evidence to a recent enquiry, this can be interpreted to mean that hospital B has hundreds of excess deaths, is killing 200 patients, or that 200 patients in B died due to poor care.  Some claims are even worse, in that it is suggested that anyone who knew that A had a better record than B and failed to do something about it was responsible for killing hundreds of people. Is it really that simple?
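The arithmetic behind such claims is worth spelling out, because it is simpler than the headlines suggest. A minimal sketch (all figures invented for illustration):

```python
def smr(observed, expected):
    """Standardised mortality ratio, scaled so that 100 means 'as expected'."""
    return 100 * observed / expected

def headline_excess(observed, expected):
    """The crude 'excess deaths' figure the headlines quote: observed minus expected."""
    return observed - expected

# Two hospitals with (apparently) the same volume and case mix (invented figures).
expected = 1000            # the model's expected deaths for this caseload
hospital_a_deaths = 900
hospital_b_deaths = 1100   # 200 more deaths than A

print(smr(hospital_a_deaths, expected))              # 90.0
print(smr(hospital_b_deaths, expected))              # 110.0
print(headline_excess(hospital_b_deaths, expected))  # 100
```

The subtraction produces a number, but nothing in it distinguishes poor care from miscoding, case-mix differences, or chance.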

We know some hospitals have consistently low mortality ratios (whether HSMR or SHMI) whilst others have been consistently higher, and some move about in the table – but the major fluctuations between hospitals, and within one hospital over time, are not explained.

The reality is that it is far from clear why some hospitals appear to do better than others. One factor must be the obvious one – some hospitals are simply better at attracting and keeping better quality staff!  But how much variation is also accounted for by poor data recording and by the issues around adjusting for case mix variations?  We simply don’t know – although journalists and commentators assume we do.

We know there have been many studies which set out how outcomes can be improved with better processes, methods and protocols but how much difference does this make?  We don’t know.  We have guidance from Royal Colleges but how much difference does compliance make?

The only reliable evidence comes from proper case-note-based studies by expert clinicians with access to people as well as paper, as was used in the recent study – Preventable Incidents, Survival and Mortality Study.   If the much-hyped investigations into the five Trusts fail to come up with compelling answers, which actually show how system failures or departures from best practice have led directly to deaths, then it has to be seriously questioned what these league tables of mortality are actually telling us.

There is now a lot of data but no clear information.  The trust of the year (Cambridge University) has low death rates yet is in deep trouble with Monitor and also with the CQC.  A few trusts are good based on HSMR and bad on SHMI, and vice versa.  Most trusts with poor death rates are smaller than average, but some are not.  Some trusts reduced reported mortality rates significantly whilst the trend in their actual level of deaths remained the same!  Some trusts have rising rates of HSMR but a sound trend of falling numbers of deaths.  Our ignorance about why this is possible remains profound.

Helpfully, we do know a little about what makes some organisations better than others in a more general sense.  The three leading predictors of a successful organisation would be long-term stability of the senior management team (Ferguson/Wenger), a high level of staff engagement and satisfaction, and a high proportion of women in senior management posts!

If we just concentrated on having enough well-qualified staff who were engaged and satisfied with their roles – that might be a good start.  And the best form of governance could be through a culture of openness and transparency, with patients having access to their records and good quality information (which HSMR isn’t).


[1] Summary Hospital (level) Mortality Indicator – produced by Health and Social Care Information Centre

[2] Hospital Standardised Mortality Ratio – as provided by Dr Foster

SHMI takes all deaths in hospital and up to 30 days after discharge; HSMR is based on in-hospital deaths only, for a subset of common conditions.  Both (attempt to) adjust for case complexity and case mix as well as some contextual factors.
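The shared principle is indirect standardisation: expected deaths are computed by applying reference (national) death rates, per case-mix group, to the hospital’s own caseload. A hedged sketch of that principle only – the real indicators adjust for many more factors, and all groups, rates and counts here are invented:

```python
# Reference (national) death rate per case-mix group (invented figures).
reference_rates = {"stroke": 0.20, "mi": 0.15, "pneumonia": 0.10}

# One hospital's caseload and its observed deaths (invented figures).
admissions = {"stroke": 200, "mi": 100, "pneumonia": 300}
observed_deaths = 80

# Expected deaths: apply each reference rate to this hospital's own case mix.
expected_deaths = sum(reference_rates[g] * admissions[g] for g in admissions)

ratio = 100 * observed_deaths / expected_deaths
print(f"expected={expected_deaths:.0f}, SMR={ratio:.1f}")  # expected=85, SMR=94.1
```

Everything therefore hinges on the reference rates and on how admissions are coded into groups – which is exactly where “deep coding” and case-mix changes bite.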


2 Comments

  1. Richard Grimes says:

    The First Francis Report (para 50, p367) comments on the remarkable fall of HSMR from 127 to 89.6 (in 08/09). Antony Sumara, the current CEO, says:

    “I think there are four elements in why Dr Foster is different… which I have no evidence for and I can’t give you any detail. One is that the coding is just better now. The second one is we don’t do strokes any more. The third one is we don’t do MIs [myocardial infarctions] any more and the fourth one is actually because we have improved that emergency care pathway, your chances are you will get to see the right doctor quickly if you are medically ill. I think that will make a big difference to outcomes eventually. But I have got no evidence to say that has done the trick. In many ways do I care because all I am interested in is can I get it right every time? It is a bit of reassurance.”

    1) Coding: it’s known that “deep coding” can reduce HSMR.
    2) “we don’t do strokes any more”, surely HSMR is designed to ensure that a hospital’s standardised mortality takes into account whether it does strokes?
    3) “we don’t do MIs any more” (ditto)
    4) “we have improved that emergency care pathway”

    Arguably, only #4 should reduce HSMR because it is an action taken to improve quality of care and hence prevent “unnecessary deaths”, so it is worrying that the other three (changing case mix or coding practices) are important.

    There is a danger that standardised mortality rates have now become another target, with all the problems that working towards a target entails. If a hospital has a value over 100, will the hospital improve quality of treatment and care (which is what we want), or will it simply take action that reduces the standardised mortality without improving quality (like “deep coding” or changing the case mix)?

    An interesting document on standardised mortality was produced by the Public Health Observatories:

    http://www.apho.org.uk/resource/view.aspx?RID=95932

    “Evidence so far suggests that the hospital mortality ratio, as a single indicator of hospital quality is, at best, akin to a smoke alarm; it may signal something serious, but more often than not it will go off for reasons unrelated to quality of care. But, like smoke alarms, Hospital mortality ratios should never be ignored.”

    They say:

    “[excess deaths] is sometimes interpreted by the media as deaths which were avoidable (i.e. that they should not have happened at all), unexpected, or attributable to failings in the quality of care. None of these can be directly inferred from an SMR – it can only signal that further investigation may be required.”

    The APHO says that you should look at the trends: whether the SMR is consistently higher than 100 for six or more consecutive months, or shows six or more increases (regardless of starting value).
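    These trend rules can be sketched as a simple check (a paraphrase for illustration, not the APHO’s published algorithm):

```python
def flag_smr_trend(monthly_smr, run=6):
    """Flag if SMR exceeds 100 for `run` consecutive months,
    or rises for `run` consecutive months regardless of level."""
    above = rising = 0
    prev = None
    for s in monthly_smr:
        above = above + 1 if s > 100 else 0
        rising = rising + 1 if prev is not None and s > prev else 0
        if above >= run or rising >= run:
            return True
        prev = s
    return False

print(flag_smr_trend([101, 102, 103, 104, 105, 106]))  # True: six months over 100
print(flag_smr_trend([90, 91, 92, 93, 94, 95, 96]))    # True: six consecutive rises
print(flag_smr_trend([99, 101, 99, 101, 99, 101]))     # False: no sustained pattern
```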

    Finally, on the often-quoted figure of “400 to 1200 deaths” given in the media, the First Francis Report says (para 74, p23):

    “Taking account of the range of opinion offered to the Inquiry, including a report from two independent experts, it has been concluded that it would be unsafe to infer from the figures that there was any particular number or range of numbers of avoidable or unnecessary deaths at the Trust. However, there is strong evidence to suggest that these figures mandated a serious investigation of the standards of care being delivered rather than reliance on the contention that they had been caused by coding.”

    Basically, do not infer a figure of “unnecessary deaths” from HSMR, but *do* use the figure (with the APHO guidelines) to trigger an investigation.

  2. Martin Rathfelder says:

    Any measure that depends on the difference between what happens and what was expected will always be very sensitive to the calculations of expectation
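    Exactly so, and that sensitivity is easy to demonstrate (invented figures): change the expectation model by 5% and, with identical observed deaths, fifty “excess deaths” appear from nowhere.

```python
# Invented figures: the same 1,000 observed deaths scored against two
# versions of the expectation model. A 5% change in 'expected'
# manufactures 50 "excess deaths" without a single extra death occurring.
observed = 1000
for expected in (1000, 950):
    ratio = 100 * observed / expected
    print(f"expected={expected}: SMR={ratio:.1f}, excess={observed - expected}")
# expected=1000: SMR=100.0, excess=0
# expected=950: SMR=105.3, excess=50
```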
