Category Archives: Mortality

Measures of mortality and their significance

It is surprising that the Guardian gives any credence to the statistics of hospital mortality promoted by Sir Brian Jarman on Channel 4. Hospital death rates, particularly if followed over time, can give useful warning of problems, as Sir Bruce Keogh has stated. However, comparison of such data internationally is fraught with difficulties, as we found when we first published the European Atlas of Avoidable Deaths in 1988. Since total mortality rates in the UK are similar to, or lower than, those in the United States, and a large proportion of the population in both countries die in a hospital, such a difference seems unlikely.

To compare hospital mortality rates between hospitals, whether in one country or several, it is necessary to take into account such factors as length of time in hospital, availability of discharge facilities (e.g. hospices), hospital admission criteria and many other procedural and cultural factors. Comparison of outcomes for individual conditions is even more difficult because of differences in diagnostic and coding procedures, which have been illustrated many times. Unfortunately the data presented by Jarman in both your paper and on Channel 4 are inadequate to determine the validity of the conclusions or the methods used. This is a striking media story but, unfortunately, it has not been subjected to proper scrutiny.

Walter Holland.


Visiting Professor

London School of Economics


My interest in mortality goes back 40 years to when I trained as an actuary and carried out medical underwriting.  I have written before about how early on in my NHS career I suggested applying some of the rigour we used in the insurance industry to mortality studies. I was interested in mortality rates and Hospital Standardised Mortality Ratio; it was familiar territory.

Some years later I have grave concerns about the misuse of the Hospital Standardised Mortality Ratio, based in part on the often-cited academic papers which raise methodological issues.  I am also aware that different methods of deriving a Hospital Standardised Mortality Ratio from the same underlying data can produce different answers.

So I disliked the adoption of one flavour of Hospital Standardised Mortality Ratio as a commercial product, touted by the DH as vital to good management in its role as some kind of business partner.  I disliked the use of ratings for simplistic, league-table-style denunciation of trusts.  But I was warned (in 2008) for speaking publicly against the abuse of mortality rates.

In recent months, from Mid Staffs through to Keogh, we have seen blatant political abuse, with ludicrous claims in various media about excess deaths derived from misuse of Hospital Standardised Mortality Ratios.  So, for example, the only thing we know for certain about the often-quoted Hospital Standardised Mortality Ratio figures from Mid Staffs is that they were wrong – not something you ever hear in the “media”.

None of these many misuses were ever challenged by those who knew better.  The ridiculously incorrect use of inaccurate information was actually given credibility by those who knew better.

But even before that shameful episode my own work had shown me why the way the Hospital Standardised Mortality Ratio is used was flawed.  First, it is easy to use whole-country figures to compare one year with another, and when I did that it suggested a 6-8% improvement in mortality rates across the whole NHS in one year; clearly impossible.  Second, the annual ratings tables showed a number of trusts that improved their Hospital Standardised Mortality Ratio significantly while other information showed that both the number of actual deaths and the case mix did not change; clearly impossible. Something was wrong.  Things have improved and we now have the Summary Hospital-level Mortality Indicator, which is a better indicator – but the issues remain.

All indicators like the Hospital Standardised Mortality Ratio and the Summary Hospital-level Mortality Indicator depend on coding.  I observed coding taking place and compared the process with the rigour used 40 years ago for underwriting, and found it was poor.  Lightly trained clerical staff trying to interpret incomplete information to arrive at a codeable diagnosis was only one of several obvious sources of potential error, yet there were no serious systematic checks, or even spot checks, applied. The level of coding error and its impact on the Hospital Standardised Mortality Ratio could easily be studied, but isn’t.

It is regularly claimed that a high Hospital Standardised Mortality Ratio is an indicator of poor care in the organisation as a whole; that it means more deaths than should be expected given the adjusted case mix.  This may even be true, but no study has ever confirmed it.

If there were another, universally accepted method which indisputably ranked trusts for the quality of their care then we could see how well the Hospital Standardised Mortality Ratio correlated with reality. Such a baseline study would not be impossible, though it would require some investment, but there has never been any kind of attempt to do this (still, one outcome from Keogh is that we are promised one).  Can we honestly claim we know the underlying reasons for variations in mortality rates?

Maybe we would find that higher mortality rates were partly linked to care quality but, as some claim, mostly just correlated with levels of funding.  We don’t know.

Which leads on to something that we do know, derived from proper, clinically led, non-statistical studies into avoidable deaths.  The outcome is not just a figure: it is also a lot of information about why the figure is at that level – what the most common causes of avoidable deaths were.  This is something that is immediately transferable into action. Such studies are clinically intensive and in some cases involve the clinicians responsible for the care and even relatives and carers. They are more reliable because there is no coding error or opportunity for gaming.

What we do know from studies of this kind is that the level of avoidable deaths is around 5-6%.  The statisticians tell us that this renders the use of the Hospital Standardised Mortality Ratio to detect variations in this segment of all deaths unreliable; it cannot get at this 5-6%, which is where the variation due to clinical quality comes from, in any meaningful way.

For those that are responsible for managing clinical quality, we know there are many things which should be done.  You should always do the clinical audit studies, do mortality investigations in an open and transparent environment, make sure your data is accurate, invest in analysis skills, do proper case note investigations, look not just at all deaths but also at near misses.  These are all real investments in quality improvement. We could in a collaborative NHS do far more to share information and best practice.

Why bother with Hospital Standardised Mortality Ratio at all?  Well, I am convinced that we have to find ways to use information better to provide early alerts of things which may be going wrong or to identify where (with perhaps the best intentions) current practice is falling behind best practice.  But until we have much better data, a much more sophisticated set of analysis tools, and competent analysts we will just get the fog generated by misuse of one tool which may actually be valuable if used properly.


The general response to the Keogh review was positive; it showed how inspection and analysis could be used.   As many have already pointed out, it used a methodology similar to that used by the Commission for Health Improvement a decade ago (and found then to be too expensive!).  The absurd pre-publication hype about 13,000 avoidable deaths from the usual media sources was as predictable as it was disappointing.  It is about time some of those who passively condone this type of ridiculous reporting actually spoke out.

Bruce Keogh (Photo credit: Cabinet Office)


The review of 14 acute trusts with a recent history of high comparative mortality rankings (using either HSMR or SHMI) showed that in all of them there were serious issues; issues which had NOT been picked up by either regulators or performance managers.  It was accepted that many of the issues had been identified within the trusts but the rate at which remedial action was being taken was too slow.

Since there are no suitable benchmarks it is possible that such a thorough review would find issues at most trusts; and indeed may find trusts with, for example, weak governance but good mortality rankings.

The issues found in the 14 were a mixed bag – poor managerial and clinical leadership, weak governance processes and some technical failures (e.g. failure to maintain equipment) – but the most obvious linking factor was a lack of suitable staff at the right time in the right place.  The data appeared to show sufficient staff but the reality was that they were not always in place: the most useful conclusion, and the one most likely to be ignored.

But – the review did not say anything about excess or avoidable or unnecessary deaths.  The stories spun by the Tories and reported without critical analysis by some parts of the media were shown to be nonsense.

To further emphasise the point many of us have been making for years, the report recommends that a proper study be done to see whether there is any link between league-table-style mortality ratings and unnecessary deaths.  A proper evidence-based study.

Then we need an evidence-based study which shows why some trusts have high ratings – we still do not know.  As a simple example, one trust of the 14 has had a higher than average crude mortality rate (deaths divided by activity – not affected by coding) for as long as records are available.  During this time it has had a complete change in management, completely different governance structures, completely different regulation and performance management – and it is still higher than average – so WHY?

The review also served as a test of methodology and it largely passed.  If sufficient resources can be mobilised then similar reviews could and should be made at every provider – not just acute trusts and not just NHS; with the results properly published.

So we get a more thorough approach using managerial and clinical expertise from within the NHS and also properly involving the public and patients.  It will not come cheap and you wonder how long such a regime will survive before someone says would it not be cheaper just to get a better quality of management and enough staff?


Guest Article by J Twunt

Jeremy Twunt

I welcome the Keogh Review as a vindication of our policies.  It clearly lays the blame for 13,000 needless deaths between 2005 and 2010 (May) on Labour and its policies.  Before 2005 there were no deaths as Labour had yet to set a target for them, but their policy from then of greater competition and more use of private providers was clearly the cause of this suffering.

The 3246 needless deaths since 2010 (May) are shown by Keogh to be due entirely to the incompetent local management in these rogue trusts. Poor management can never be tolerated so we have decided not to remove any of them if they were appointed after 2010 – which they mostly were.

Keogh has shown how hopeless the NHS has become.  The experts at the Daily Mail have used the Keogh evidence to prove a free at the point of need NHS inevitably kills the better off and depresses house prices.

As the Mail confirms all of these hospitals were in the NHS – none of them were private providers.  Keogh also showed that none were in London because London has far too many hospitals but healthy competition.  More competition and greater use of private providers is the way forward.  Fortunately we have passed an Act to make that not just easy but inevitable.  From 2016 all hospitals will be run by Virgin, Capita or G4S.

To speed the closure of failed hospitals, or to effect turnaround as we call it, we are sending in hit squads from the major consultancy firms.  They are ideally placed to help as they have advised the same trusts and got them to where they are today.

And picking up on the pointless slaughter theme Osborne’s team of forensic accountants have devised a top up charges scheme so patients at the best hospitals pay extra not to be unnecessarily killed.

It is reassuring that Keogh confirms that people die in hospitals as it supports our policy of closing them whenever there is an excuse.  We have to overcome the sentimental attachment communities have to their local services and frighten them into demanding closures.  We are using our new special administration powers to shut hospitals based on teams of consultants showing the work by previous consultants failed to solve problems.  So far we have only been able to apply it to hospitals with good death rates but that can change once we get the hang of it.

I am especially glad that Keogh nailed the lie that staffing levels matter. It is obvious to me that having less doctors, less nurses and less consultants will not affect care – otherwise how can we deliver the cuts that are necessary.  The idea that having less resources can impact on patient care was always nonsense.  Labour poured billions into more staff but only because they are slaves to their trade union masters and they still killed patients in their millions.

What I find sad is the carping from the NHS management bodies, NHS E, PHE, NHS C, Monitor and the rest.  As I told them when I gave them their independence they are not to use it in any way to criticise me, or my government or its policies – I am not sure I made that clear enough when I handed out the jobs.

But I do want to say a major thank you to all the dedicated and hardworking staff in the NHS, some of whom I once remember meeting.  It is so sad they are being let down by the incompetent managers, coasting clinicians and by nurses being absent through redundancy or working in the wrong way in the wrong places because their trade unions refuse to allow progress.

So I welcome the review and its endorsement of our recent successes in improving GP access in and out of house, lowering A&E waits, improving social care, reducing cancelled operations and increasing nursing numbers.

Thanks for your support.


Your chance of dying from a heart attack has about halved over the last ten years.

Mortality from acute myocardial infarction in England, 2002-10


But is this down to prevention or treatment?

Number of heart attacks


Your chance of having a heart attack is down by about 30%

Chance of dying after heart attack

Your chance of dying if you have a heart attack is down by about a quarter

Determinants of the decline in mortality from acute myocardial infarction in England between 2002 and 2010: linked national database study: “Both primary prevention and secondary prevention would have contributed to the decline in the rate of sudden deaths from acute myocardial infarction. In addition to reducing rates of sudden death, coronary prevention can reduce disease severity and therefore may contribute to the decline in case fatality for those who survive long enough to receive hospital care for acute myocardial infarction. Furthermore, changes in fatal outcomes among people admitted to hospital for acute myocardial infarction also reflect, at least in part, the contribution of improvements in acute medical treatment during the study time period.”

What is already known on this topic

Population based mortality rates from coronary heart disease and acute myocardial infarction have been declining in England and other developed countries since the 1970s.

The relative contributions of changes in event rate and case fatality to the decline in total acute myocardial infarction mortality vary by country and are not known for England and many other countries.

What this study adds

In England during 2002-10 the age standardised total mortality rate fell by about half and the age standardised event and case fatality rates both declined by about one third.

The determinants of the declining mortality rates differed by sex, age, and geographical region.

Overall, just over half of the decline in acute myocardial infarction mortality rate can be attributed to a decline in event rate and just less than half to a decline in case fatality.
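The arithmetic behind these findings can be checked, since the total mortality rate is the product of the event rate and the case fatality rate. The figures below are invented purely for illustration (each factor falling by roughly a third, as the study reports), not taken from the study's data; the log decomposition is one standard way of splitting a change in a product between its factors.

```python
import math

# Illustrative figures only (assumed, not the study's actual data):
event_2002, event_2010 = 100.0, 65.0        # AMI events per 100,000 people
fatality_2002, fatality_2010 = 0.40, 0.27   # deaths per event

mortality_2002 = event_2002 * fatality_2002
mortality_2010 = event_2010 * fatality_2010
fall = 1 - mortality_2010 / mortality_2002
print(f"Total mortality fell by {fall:.0%}")  # prints: Total mortality fell by 56%

# A log decomposition splits the fall in the product between its two factors.
log_total = math.log(mortality_2010 / mortality_2002)
share_event = math.log(event_2010 / event_2002) / log_total
share_fatality = math.log(fatality_2010 / fatality_2002) / log_total
# prints: Event rate share: 52%, case fatality share: 48%
print(f"Event rate share: {share_event:.0%}, case fatality share: {share_fatality:.0%}")
```

With both factors falling by about a third, the product falls by just over half, and the slightly larger fall in the event rate gives it the slightly larger share – consistent with the study's headline conclusion.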


The Keogh Review turns out to be one of the best documents produced for our NHS.  Clear, concise, brief and informed by evidence.  It doesn’t look to blame, doesn’t make overt recommendations that can be tracked and ticked off.  It just makes sense.

It does not portray 14 further Mid Staffs, and its forensic approach should probably make us think again about what exactly we learned from a review by lawyers.

It bears no resemblance to the media coverage it received before it was published.  It draws lessons only when justified from the evidence from the 14 trusts that were examined – it makes no links at all to politics or policies.  It is about now, not about years ago.

It actually talks to an NHS before the Lansley reforms!  It makes no allowance for competition and markets; it assumes parts of the NHS will be willing to help out other parts and that commissioners will work with providers, and it even puts a strategic body into play focused on quality.  It talks as if the old levers were still there to pull!  It hardly refers to commissioning or money flows; targets and the drive for FT status don’t get much traction.

It will greatly disappoint the various conspiracy theorists and those who claimed it should show the whole NHS was awful; how long before the accusations of a cover-up or whitewash begin?

It makes clear that the process used – let’s call it peer review – is far more effective than anything regulators can deploy, and that it found things they would not. It sets out that the reviewers found poor management, poor leadership (including clinical leadership) and many examples of issues caused by inappropriate levels of staffing; and it found too many trust-specific issues that needed urgent attention.  It leaves these as problems for local management to resolve with some external help – it does not say the problems were caused by particular policies or particular external forces.  It does not castigate regulators for not being able to find as much.  It does not call for sackings, retribution or apologies.

It puts the use of HSMR and SHMI where it should be, pointing out they give different answers, and refuting in strong terms the stupid and dangerous claims about excess deaths, as did Francis – for all the good it did.  The proposed development of proper measures, based on case notes review, and measuring avoidable deaths is one we have long argued for.

The ambitions set out are not for magic or instant solutions – it will take years to achieve them.  But they sound right.

  • Moving on from statistical arguments (often backed by commercial interests) to actually reducing avoidable deaths.
  • Making sure those who manage can actually understand and analyse the wealth of data that is already produced.
  • Making far better and more creative roles for patients, carers and public – convincing them they are listened to; making them partners.
  • Having a CQC that we can have some confidence in because it involves clinicians and the patients.
  • Reinforcing the coherence of the NHS – stopping the idea of trusts as separate isolated (competing?) entities.
  • Having the right skill mix at all times matching the caseload, not in theory but in place 24/7.
  • Making best use of the small army of junior doctors (one we haven’t advocated!).
  • Valuing the staff and understanding that happy and engaged staff will deliver the best patient outcomes (always our favourite!).

This is good solid sensible stuff.


Despite being told by officials and academics that generalisations from mortality rates are not sensible, our media are at it again, using the information for blatantly party-political purposes.

Mortality rates like HSMR and SHMI are about as useful as all other similar headline measures which attempt to sum up the quality of something as complex as a hospital.  Despite years of use and abuse, what we know about WHY some hospitals have higher rates is very little.  At best we know that some are better than others because they have more skills and resources, and that trusts with higher levels of staff satisfaction provide better care – not really a surprise.

Saying hospitals with higher mortality rates are needlessly killing people is as stupid as saying those with lower rates are heroically saving lives – although all hospitals do indeed save lives.

Here are some facts about mortality measures.

  • The only thing we know about the much-discussed published mortality rates at Mid Staffs is that they were wrong – many suggest that if the data were now correctly analysed the rate would have been below average.
  • They depend on “coding” carried out within the trust, and it is well established that this gives rise to significant errors as well as possible “gaming”.  Even if “coding” were perfect, the methods still cannot adjust for all possible causes of variation that have nothing to do with care quality.
  • According to one leading brand mortality across the whole NHS improved by 8% in a year – nobody believes this.
  • When mortality rates became of great interest a number of trusts dramatically improved their rate – some by up to 30%, but there was no noticeable change in the number of actual deaths.
  • Looking at the most recent data from the two leading mortality measures shows a couple of trusts are significantly above average (scary, dangerous) on one measure and significantly below average (brilliant, exemplary) on the other.
  • In a controlled experiment, exactly the same data on a sample of hospitals was analysed by four organisations using different methods and they got totally different answers.
  • The trust of the year according to the leading “mortality” analysis vendor is in special measures with the regulators and in breach of its terms of authorisation.
  • Two trusts castigated for poor mortality one year got praise the next for dramatic improvements and two years later were back on the naughty chair and under investigation.
  • Such studies as have been carried out show no direct correlation between mortality ratios and poor care as measured in some uncontested way (such as case note review).

Rates of unnecessary deaths can be defined and measured, are not subject to statistical manipulation, and show a rate of around 6% across the NHS; this is incompatible with the claims made about the relevance of crude mortality rates.

By the way – in any year there will always be 14 trusts in the worst-14 category, and while you have 150-plus trusts there will always be trusts whose mortality rates are above average, and some will be significantly above average.
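The point can be illustrated with a small simulation. The figures here are invented for illustration: an assumed null world of 150 trusts with identical true quality, each expecting 1,000 deaths a year, with chance variation approximated as normal (a standard approximation to Poisson counts). Even in that world there is always a "worst 14", and some trusts sit "significantly" above average by chance alone.

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Assumed null: 150 trusts of identical true quality, 1,000 expected deaths each.
N_TRUSTS, EXPECTED = 150, 1000

# Observed deaths vary only by chance (normal approximation to Poisson noise).
observed = [round(random.gauss(EXPECTED, math.sqrt(EXPECTED))) for _ in range(N_TRUSTS)]
smrs = sorted((100 * o / EXPECTED for o in observed), reverse=True)

# There is always a "worst 14", even when no trust is truly worse than any other.
worst_14 = smrs[:14]
print("Worst 14 SMRs:", [f"{s:.0f}" for s in worst_14])

# And some will sit more than two standard deviations above 100 by chance alone.
threshold = 100 * (EXPECTED + 2 * math.sqrt(EXPECTED)) / EXPECTED
flagged = sum(s > threshold for s in smrs)
print(f"Trusts above the 2-SD threshold purely by chance: {flagged}")
```

None of these hypothetical trusts is "needlessly killing people"; the league table exists because ranking 150 noisy numbers always produces a top and a bottom.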

Using crude measures as a signal to trigger investigation is possible, but as the HCC, CQC and others found, using this kind of data to predict where to look for poor care was inadequate and could be misleading; only pervasive, on-site, competent, in-depth investigation is of any real value. It’s time the Royal Colleges, NHS Confederation and all the other bodies spoke out and stopped the nonsense.


The review looked at 14 acute (hospital) trusts, see Table 1, which had a significantly high SHMI[1] or HSMR[2] in both of the last two years.  The use of these measures remains contested.  Given the nature of statistical investigation there will always be a group at the bottom of the distribution.  The Keogh review used clinicians to look beyond the statistics and the exaggerated claims of excess deaths.

The review did not look at other trusts to establish any kind of baseline.  There are trusts not in the Keogh 14 that have high mortality rates, poor finances, weak governance and quality issues.  Table 3 at the end shows all the trusts identified on various measures over the last 4 years.

The review used mortality from the last two years; SHMI was not produced prior to this period.  The 14 trusts with the highest HSMRs for the two years prior to those studied (see Table 3) are a completely different group.  Looking at a five-year period and a wider range of indicators shows trusts move in and out of categories.

It is striking that none of the 14 are London trusts.  The 14 are generally smaller than average, with two (Tameside and George Eliot) being small.   Just over half of all acute trusts are now Foundation Trusts (FTs), but in the Keogh group 9 of the 14 are FTs, which is significantly more than would be expected since they are assumed to be better performers.

Table 1

  • Basildon and Thurrock University Hospitals NHS FT
  • Blackpool Teaching Hospitals NHS FT
  • Buckinghamshire Healthcare NHS Trust
  • Burton Hospitals NHS FT
  • Colchester Hospital University NHS FT
  • East Lancashire Hospitals NHS Trust
  • George Eliot Hospital NHS Trust
  • Medway NHS FT
  • North Cumbria University Hospitals NHS Trust
  • Northern Lincolnshire and Goole Hospitals NHS FT
  • Sherwood Forest Hospitals NHS FT
  • Tameside Hospital NHS FT
  • The Dudley Group NHS FT
  • United Lincolnshire Hospitals NHS Trust

Spells indicates the number of inpatient procedures carried out and turnover (£m) is the income of the trust. The mortality figures are from the Dr Foster Good Hospital Guide 2012.


Research into the 14 trusts was carried out by Prof Sheena Asthana and Alex Gibson at the University of Plymouth and the results published in the Health Service Journal on 4 March.  The work looked at what significant differences could be found between the 14 trusts under investigation and the rest.

The results showed that those in the Keogh group had 56.1 doctors (of all grades) and 19.7 consultants per 100 beds, compared to 67.5 and 24.0 respectively for the rest.  They also had fewer cleaners: 18.0 compared with 23.1.  For nurses there was a difference (136.8 compared with 143.4) but this was not significant at the 99% confidence level used.

Further research by the same team also identified a clear link between the resources available and the mortality rate – confirming statistically the common-sense idea that hospitals with access to a higher level of resources (income) will be able to employ more staff.

This also confirms the established position that there is a clear correlation between staff satisfaction and quality of care.

Private Finance Initiative

6 of the 14 trusts have a PFI (some have two), but in only one case – Sherwood Forest – is there a likelihood that the PFI is having a detrimental effect.  Colchester (uniquely) cancelled a £185m PFI scheme just prior to contract close.  The capital value and current unitary charge are shown in Table 2 alongside turnover.

Table 2 (columns: turnover £m, capital value £m, unitary charge £m)

  • East Lancashire
  • North Cumbria
  • Sherwood Forest
  • Dudley Group
There are trusts that have large PFIs but which have low SHMI and HSMR.  Examples are Leeds with a £265m scheme and SHMI/HSMR of 92/92; Newcastle £299m, 95/97; Portsmouth £256m, 98/99. Central Manchester (£512m), Derby (£313m), Oxford (3 schemes ~ £300m total), Coventry (£379m) and Peterborough (£336m) all have large PFIs but SHMI and HSMR within normal limits.

Variation & Anomalies

The various attempts to assess trusts using crude headline figures, such as mortality rates, throw up many anomalies.

  • Two of the most financially challenged trusts, South London and Barking, Havering & Redbridge (both in London), have SHMI and HSMR below average.
  • Barnet & Chase Farm has been trying to reconfigure for many years and has many challenges but has SHMI/HSMR of 87/91.
  • Peterborough has an unaffordable PFI scheme, is financially unsustainable and is under scrutiny by the PAC, but has good mortality rates.
  • Stafford Hospital (part of Mid Staffs) had very high mortality rates six years ago but now has amongst the lowest.  It has taken on many additional staff and in doing so became financially unsustainable and is being broken up!
  • Cambridge University Hospitals (Addenbrooke’s) is highly regarded and has SHMI/HSMR of 83/81 – amongst the lowest.  Yet it has regulatory issues with finance, governance and quality and is in breach of its terms of authorisation!
  • Medway and George Eliot in the Keogh 14 both had poor HSMRs in 2008 but in 2009 both were congratulated on the steps they had taken to bring the figures down significantly and by 2012 they were back in the worst category!

Foundation Trusts

  • 9 of the trusts are FTs.  Of these, 5 have a RED current rating from the regulator Monitor, 3 have an AMBER/RED rating and one (Blackpool) has an AMBER/GREEN rating, having recently come out of special measures.
  • 5 FTs (Basildon, Burton, Medway, Sherwood Forest and Tameside) are regarded by Monitor as in breach of their terms of authorisation.  In total 19 FTs are in breach.  Burton, Sherwood Forest and Tameside have financial risk ratings indicating serious concerns.
  • Blackpool was in breach until recently but is judged by Monitor to have recovered its position.

NHS Trusts

Of the 5 NHS trusts, East Lancs is currently being considered by Monitor for FT status.  North Cumbria and Medway are in discussions over mergers.  George Eliot is looking for a merger partner or other solution but accepts it cannot stand alone in its current form.  United Lincs has no resolved position on its FT journey.

[1] Summary Hospital (level) Mortality Indicator – produced by Health and Social Care Information Centre

[2] Hospital Standardised Mortality Ratio – as provided by Dr Foster

SHMI covers all deaths in hospital and up to 30 days after discharge; HSMR is based on in-hospital deaths only, for a subset of common conditions. Both (attempt to) adjust for case complexity and case mix as well as some contextual factors.



Having written a widely quoted systematic review on the (lack of) relationship between risk-adjusted mortality rates and quality of care, I think I can contribute a little to this discussion. Whilst it is clearly wise to monitor one’s mortality rate (crude and adjusted), even adjusted hospital mortality rates mean nothing in isolation.

Firstly we need to understand exactly what the HSMR (hospital standardised mortality ratio) and similar statistics actually mean and how they are calculated. Essentially they are the ratio of “observed” (ie actual) deaths to “expected” deaths in the hospital. Someone (the Dr Foster company in this case) has made a prediction, for every hospital, of how many deaths it “should” have had if it had admitted patients with the same rates of disease, same age patterns and so on as the national average. They have then compared the number who actually died against the number who they think “should” have died, and this presumably has led to a difference, over the past eight years, of 13,000 patients in the fourteen hospitals reviewed by Sir Bruce Keogh alone.
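The observed/expected calculation described above is indirect standardisation, and a minimal sketch of the principle looks like this. The condition groups, national death rates and hospital counts below are invented for illustration; real HSMR models adjust for far more (age, sex, comorbidity, admission method and so on) across many diagnosis groups.

```python
# A minimal sketch of indirect standardisation, the principle behind
# HSMR-style figures. All numbers here are invented for illustration.

# National in-hospital death rate per admission, by condition group (assumed).
national_rates = {"AMI": 0.10, "stroke": 0.15, "pneumonia": 0.12}

# One hypothetical hospital's admissions by the same groups, and its deaths.
admissions = {"AMI": 400, "stroke": 300, "pneumonia": 500}
observed_deaths = 130

# "Expected" deaths: national rates applied to this hospital's own case mix.
expected = sum(national_rates[g] * admissions[g] for g in admissions)

# HSMR-style ratio: 100 means exactly as many deaths as "expected".
hsmr = 100 * observed_deaths / expected
print(f"Expected deaths: {expected:.0f}, HSMR: {hsmr:.0f}")
# prints: Expected deaths: 145, HSMR: 90
```

The point the paragraph makes follows directly: "expected" deaths are a modelled quantity, so the gap between observed and expected is a statistical construct, not a list of identifiable patients.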

But we have to be extremely cautious about this figure. Quite apart from the distress it must be causing the loved ones of patients who died in these hospitals (“is Sir Brian Jarman really saying their death could somehow have been prevented if they had been admitted elsewhere?”), the frustration it causes staff who work there and the loss of confidence in the NHS of the wider public who may end up being admitted to one of these hospitals, there is a fundamental problem with the figure of 13,000.

It is actually nothing more than a statistical construct based on the fact that, according to Dr Foster’s calculations, 13,000 more patients than average died there. They have no way of actually pinpointing which 13,000 patients died as a result of poor care, unless they went through each patient’s casenotes, which they haven’t done. Consider the equally nonsensical converse: statistically, just as many patients must NOT have died at the “best” HSMR-performing hospitals, since overall the HSMRs average themselves out, but nobody is going around the “best” hospitals trying to find people who really ought to have died in them and asking why they didn’t.

The Keogh review also (in my humble opinion) makes the fundamental error of identifying certain hospitals believed to have problems with quality of care on the basis of their HSMRs (a false assumption), then going in with a fine-tooth comb and, unsurprisingly, finding mistakes. Logically therefore (according to the world of Dr Foster) this proves HSMRs “work” in correctly identifying the worst hospitals – except of course it doesn’t, unless equally searching investigations had been undertaken in every other hospital, which they weren’t. So a few hospitals are being hyped up and labelled dangerous with no consideration of whether the extent of quality problems in them is any worse than in the unvisited hospitals. In fact the evidence suggests that if you apply a poor screening test (HSMR) to identify poor quality hospitals you will end up missing most hospitals with genuine problems.

In their defence some of the hospitals under review by Sir Bruce Keogh have pointed out that their most recent mortality ratios are actually now within normal limits. But this forces a rather uncomfortable question for those portraying mortality rates as the way to identify bad hospitals: if their mortality rates are normal at the moment and Sir Bruce’s team are at the same time identifying dangerous problems in them, what does that tell us about the usefulness of the HSMR as a tool for identifying poor quality hospitals? In how many low HSMR hospitals are there quality of care problems?

This leads us on to a set of rather more fundamental problems: accurately measuring either quality of care or hospital mortality is actually close to impossible. And to jump to the apparently intuitive conclusion that high mortality hospitals must have worse care than low mortality hospitals ignores a fundamental observation: that hospital mortality is hardly, if at all, related to quality of care.

I was recently involved in examining a hospital not on the Keogh list but whose “high” Hospital Standardised Mortality Ratio raised question-marks even before it strangely plummeted, prompting various conspiracy theories in the media about fudged figures. Having also been involved in palliative care commissioning I was able to pinpoint that the new hospice the PCT opened reached full capacity at exactly the time the mortality rates started falling. But from what I have seen of the Keogh review material on the Department of Health website, precious little attention is given to the effect of factors outside the hospital’s control but within the wider health economy. If you haven’t got any hospice beds locally, if the GP out-of-hours service has disintegrated, if there are no district nurses at weekends and social services takes the phone off the hook at 5pm on Friday afternoon, if your A&E is bulging at the seams and the medical wards are full of people who can’t go home because there aren’t any nursing home placements available, then of course frail elderly or terminally ill patients will find themselves being admitted to hospital and of course the hospital’s mortality ratio will be high. That the factors leading to the hospital’s high mortality ratio are largely outside its control seems of no concern to those busily slating hospitals with higher than expected mortality ratios as though it’s somehow their fault.

In fact it is invariably premature to jump to the conclusion that a high mortality hospital has a poor standard of care. It is essential to realise that hospital mortality statistics are a consequence of a complex mix of factors including:

  •  case mix (how severely ill patients are who are admitted to hospital)
  •  lifestyle choices such as smoking and diet
  •  disease coding (how accurately the patient’s presenting conditions are categorised to one or more standardised disease categories, to allow comparison between patients with different conditions)
  • quality of care in primary care and community settings e.g. care homes
  • quality of care in hospital
  • availability of end of life care options e.g. local hospices
  • patient choice of where to die
  • chance variations

Therefore a high mortality rate or ratio in itself does not necessarily imply that there is any reason for concern about the quality of clinical care at the hospital; rather it is correctly described as a “smoke alarm” that should trigger examination of pertinent factors including clinical care to see whether they can explain the outlying mortality statistic. In fact often the majority of factors that cause a hospital to have a high mortality statistic are outside the direct control or influence of the hospital. Furthermore, within hospitals there may be areas of exceptional care and other less strong areas and in theory if a department had a high mortality rate as a result of poor care, this could be masked by stronger performance in other areas.

A recent very detailed review of a thousand deaths in a range of NHS hospitals in England found that on average around 5% had factors in their clinical care that might have contributed to the patient’s death, which as a headline figure is actually similar to research studies of avoidable hospital death in other industrialised countries. Mathematical modelling suggests that at this level of “preventable death”, avoidable mortality can only explain 8% of the variation in hospital mortality rates and only one in every eleven hospitals identified as having a significantly elevated mortality statistic may have problems with quality of care. The majority of hospitals with high Hospital Standardised Mortality Ratios or Summary Hospital-level Mortality Indicators are unlikely to have more quality of care problems than hospitals with normal or low mortality statistics.

Two research papers illustrate the difficulty of linking poor quality care to high mortality rates.

Firstly, in the systematic review I referred to earlier, of all studies that attempted to measure quality of care and mortality rates (after adjustment for patient case-mix) across several hospitals, it was evident that there was a very tenuous link if any between quality of care and hospital mortality. For every study that showed that higher mortality rate hospitals delivered poorer care, another study found either no association or even a negative association (that is, the better quality hospitals had higher mortality rates).

Secondly, a unique experiment in Massachusetts in 2008 cast grave doubt on the ability of commercially-calculated hospital mortality statistics to discriminate between “good” and “bad” hospitals. The researchers provided a complete dataset of all admissions over a period of several years for 83 hospitals and invited four companies, including the British hospital mortality statistics company Dr Foster, to try to identify the “highest” and “lowest” mortality outlying hospitals. 12 of the 28 hospitals identified by one company as “significantly worse than expected” were simultaneously, and using the same dataset, identified as performing “significantly better than expected” by the other companies! The researchers duly advised that it was not safe to assume that hospitals could be identified as good or bad on the basis of mortality rates alone.

One might well ask at this point, why after risk adjustment, are there still such variations in hospital mortality? The answer comes back to that disarmingly simple formula for calculating hospital mortality ratios, observed deaths divided by expected deaths.

For observed deaths, you can easily dispute exactly who you should include. Should you include only patients who died during their admission (as Dr Foster does)? But that means patients who died as a result of incompetence but who survived to discharge, only to die at home, would not get picked up. So we have other measures of hospital mortality like the SHMI (now used by the Health and Social Care Information Centre in preference to HSMR). That counts patients who died after discharge, if they died within 30 days of discharge. But that’s a bit unfair to hospitals that discharge a patient, say, to a nursing home where the care is so poor that the patient dies soon after discharge. (Alternatively, if you are in Scotland you would calculate mortality rates based on deaths within 30 days of admission, not 30 days of discharge. That will change the number of observed deaths yet again.)
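A toy sketch of how the choice of definition changes the count of "observed" deaths; the admission records below are invented:

```python
from datetime import date, timedelta

# Invented records: (admitted, discharged, died) – died is None if alive.
records = [
    (date(2013, 1, 1), date(2013, 1, 10), date(2013, 1, 8)),   # died in hospital
    (date(2013, 1, 1), date(2013, 1, 5),  date(2013, 1, 20)),  # died 15 days after discharge
    (date(2013, 1, 1), date(2013, 2, 20), date(2013, 2, 18)),  # died in hospital on day 48
    (date(2013, 1, 1), date(2013, 1, 3),  None),               # survived
]

# Dr Foster-style: in-hospital deaths only
in_hospital = sum(1 for a, d, x in records if x and x <= d)
# SHMI-style: deaths in hospital or within 30 days of discharge
post_discharge_30 = sum(1 for a, d, x in records
                        if x and x <= d + timedelta(days=30))
# Scotland-style: deaths within 30 days of admission
post_admission_30 = sum(1 for a, d, x in records
                        if x and x <= a + timedelta(days=30))

print(in_hospital, post_discharge_30, post_admission_30)  # 2 3 2
```

Note that the two definitions giving a count of 2 capture *different* patients: the long-stay in-hospital death on day 48 is missed by the 30-days-from-admission rule, while the post-discharge death is missed by the in-hospital rule.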

For expected deaths you are reliant on Dr Foster’s formula, or the HSCIC formula for SHMI, or some other company’s formula, and they all take different factors into account when predicting how many patients “should” have died in hospital. So for instance when the SHMI was under development at Sheffield University the researchers experimented with whether to take deprivation into account, or whether to factor emergency admissions separately from elective admissions. They tried several different formulae, and unsurprisingly the position of hospitals in the “league tables” this generated varied from one formula to another. Dr Foster have of course settled on one particular formula which they use to claim that 13,000 people died avoidably at the 14 hospitals reviewed by Keogh, whilst another formula would find a totally different number of “avoidable deaths”.
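To illustrate how the choice of risk-adjustment formula reshuffles a league table, here is a hypothetical sketch: two invented "expected deaths" models, weighting patient factors differently, applied to the same three invented hospitals:

```python
# All weights and patient data below are invented; the point is only that
# the league-table order depends on which model is used.
hospitals = {
    # name: (observed_deaths, mean_age, deprivation_score, emergency_fraction)
    "A": (120, 72, 0.3, 0.55),
    "B": (110, 78, 0.6, 0.70),
    "C": (130, 70, 0.8, 0.60),
}

def expected_model1(age, dep, emerg):
    # age-only risk model
    return 100 * (0.8 + 0.004 * age)

def expected_model2(age, dep, emerg):
    # adds deprivation and emergency-admission terms
    return 100 * (0.6 + 0.003 * age + 0.3 * dep + 0.2 * emerg)

rankings = []
for model in (expected_model1, expected_model2):
    order = sorted(hospitals,
                   key=lambda h: hospitals[h][0] / model(*hospitals[h][1:]))
    rankings.append(order)
    print(order)  # best (lowest observed/expected ratio) first
```

With these invented numbers the first model ranks the hospitals B, A, C and the second ranks them B, C, A: identical data, different "worst" hospital.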

A crucial factor in all this is accuracy of coding, since the way patients are coded affects their calculated probability of death. Patients in general do not conform to neat “tick boxes” with the exact condition they were admitted with written on their forehead. Let me give a simple example. Imagine three identical hospitals which each admit in quick succession ten identical frail old people in a state of confusion, with smelly urine and a fever. Each hospital treats the ten identical patients identically, and in each hospital five of the ten die. The first hospital codes them as being confused, which really shouldn’t lead to anyone dying, so for five of ten patients to die from confusion suggests a scandalously high mortality rate. The second codes them as having urinary tract infections, which is correct – and if five of them died, that’s probably what you would expect. The third recognises that they had a fever on admission, which is a sign of potential sepsis, so codes them as having sepsis, and when five of those patients survive the hospital’s apparently low mortality rate will seem miraculous. The important point is that all thirty patients received identical care and the way their diseases were coded was in each case appropriate, yet the three hospitals have ended up with radically different mortality rates. One might then go further and say that in the first hospital several patients died unnecessarily, since their adjusted mortality rate for confused patients was so bad (five out of ten, when you might expect only one out of ten confused patients to die during an admission), but of course you will now realise that is a quite ridiculous leap of deduction to make. Yet it’s exactly the same deductive leap that has been applied to arrive at the figure of 13,000 deaths at the Keogh review hospitals.
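The three-hospital example can be put into numbers. The national death rates per diagnostic code below are invented (except that the text suggests roughly one in ten for "confusion"):

```python
# Assumed national in-hospital death rates per diagnostic code.
# These are illustrative values only, not real coding-table figures.
expected_rate = {"confusion": 0.10, "uti": 0.50, "sepsis": 0.80}

def smr(code, n_patients, n_deaths):
    """Standardised mortality ratio (x100) for one hospital's cohort."""
    expected = n_patients * expected_rate[code]
    return 100 * n_deaths / expected

# Ten identical patients, five deaths, in each hospital – only coding differs.
print(smr("confusion", 10, 5))  # 500.0 – looks scandalous
print(smr("uti", 10, 5))        # 100.0 – exactly as expected
print(smr("sepsis", 10, 5))     # 62.5  – looks miraculous
```

Identical patients, identical care, identical outcomes; only the diagnostic code changes, and the ratio swings from 500 to 62.5.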

If measuring hospital mortality is difficult, measuring quality of care is even harder. The best we can go on is detailed review of the casenotes of patients who died, by an impartial team at least partly independent of the hospital, and “mortality review groups” are an important safety measure being introduced in many hospitals. The sorts of things patients complain about (for example delays in being offered a cup of tea) may actually not represent safety issues, whilst safety concerns can go unrecognised, especially if one is moved to complacency by the fact one’s HSMR is low or normal.

A further issue is palliative care. Patients who are expected to die, and are given a palliative care code, are essentially considered almost “inevitable” deaths by Dr Foster’s calculations, whereas patients not given a palliative care code are somehow expected to survive and, if they die, count in the observed deaths category. There has been plenty of criticism levelled against certain hospitals that apply palliative care codes liberally, though if the same hospitals lack community facilities for dying patients then it is inevitable that many more patients will end up dying in the hospital instead. But even defining who should be given a palliative care code is far from straightforward. Should it only be used for patients who have been seen by a palliative care consultant? (In that case, if a hospital starts to employ a consultant in palliative medicine, it will suddenly be able to increase the number of patients given palliative care codes and its mortality rates will appear to plummet.) What about a patient who is well known to the palliative care team in the community, is admitted to hospital in a dying state but dies before they can be seen by the hospital palliative care consultant? (I know of a case of that happening recently in a hospital – technically the patient should not have been coded as palliative, and technically Dr Foster should have considered their death potentially avoidable!) What if the patient’s condition is discussed on the telephone by the patient’s specialist with a palliative medicine consultant – does that count or not? It’s all very ambiguous, yet it has a dramatic effect on HSMRs, and these are being used as screening tools to “identify” poor quality hospitals.

Deprivation is another interesting problem, since patients who live in less affluent areas may be more likely to live unhealthy lifestyles such as smoking, and that in turn increases their risk of death. In fact the SHMI doesn’t try to account for deprivation, whilst the HSMR does – but instead of using the Index of Multiple Deprivation it uses the Carstairs index to allocate a deprivation risk to each patient, basing deprivation on male unemployment, overcrowding, low social class and lack of car ownership. If it’s a puzzle to people that hospitals in central London in general had well below average mortality ratios, it is worth considering that even many affluent people in central London do not own cars. That begs an interesting question – when these hospitals are calculated to have exceptionally low mortality rates, is it because they are particularly “good” quality hospitals, or is it because their patients tend to come into hospital on the Tube?
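For illustration: the Carstairs index is, in outline, the sum of standardised z-scores of those four census variables. This sketch uses invented area figures to show how the no-car component pulls an otherwise affluent area's score upwards:

```python
from statistics import mean, pstdev

# Invented percentages per area; "Central Lon" is meant to stand for an
# affluent inner-city area where few households own a car.
areas = {
    #               unemp  overcrowd  low_class  no_car
    "Area A":      ( 4.0,    2.0,      15.0,     20.0),
    "Central Lon": ( 6.0,    8.0,      10.0,     60.0),
    "Area C":      (12.0,    9.0,      30.0,     55.0),
}

def z_scores(values):
    """Standardise a list of values to mean 0, sd 1."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

cols = list(zip(*areas.values()))                 # one tuple per variable
zs = list(zip(*(z_scores(c) for c in cols)))      # one z-tuple per area
carstairs = {a: round(sum(z), 2) for a, z in zip(areas, zs)}
print(carstairs)
```

With these invented figures the no-car z-score for "Central Lon" is strongly positive, dragging its overall score well above "Area A" despite its otherwise affluent profile.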

There are three key mistakes that can be made when reviewing hospital mortality statistics:

  1. explaining away a high HSMR or SHMI purely through data arguments, on the grounds that these are imperfect indicators of quality of care – for example, changing coding practice can bring the mortality figure up or down without any change in clinical quality of care
  2. being falsely reassured by a low or “within expected limits” mortality and not noticing clinical or organisational failings that are causing harm to patients
  3. only focussing attention on the hospital when attempting to understand hospital mortality statistics, and failing to consider the impact of external factors

In the face of the “13,000 avoidable deaths scandal” being hyped up on the back of the Keogh report, it would be advisable to consider which of these three mistakes have been made before jumping to the conclusion that the NHS has systematic failings in a small number of hospitals.

Tagged , | 5 Comments

The reasons for the wide differences in mortality rates across English local authorities are a continuing source of controversy. The Black Report, published in 1980, began a lively debate as to the reasons for these inequalities in mortality rates (DHSS, 1980). The causes of inequalities in health and mortality are not yet clear, but one thing is certain: these causes are complex and there is no simple or quick way to equalise mortality rates across England.

Part of the answer is for some individuals to change their habits and eg eat more healthily and take part in regular physical activity. That in itself, though, would not solve the problem. The behaviour of individuals is influenced by the community they live in, their friends, family and colleagues. Regional factors like the availability of stable employment and affordable housing also influence health. At the national level policies on tobacco control, air quality and food standards affect the health of individuals.

Public Health England (PHE) appears to see the situation differently. The new organisation has launched a website, called Longer Lives, aimed at supporting local authorities in their work to reduce mortality rates (Public Health England, 2013). The website states that “every community faces its own challenges. PHE has been created to help communities decide on steps they can take to improve their collective health. The Longer Lives Project gives them the tools to help do this.”

The causes of ill health

The information on the website concentrates on mortality from the four most common causes of mortality in England. These are heart disease and stroke, lung disease, liver disease and cancer. A user can type in a postcode and will find the mortality rate for each of these conditions for the local authority in which that postcode is situated. The same page also names the local authorities with the highest and lowest mortality rates for each illness.

Next to each condition is a list of the common causes of that condition. For example cancer is attributed to smoking, alcohol and a poor diet. This is curious since many cancers which result in premature mortality are not caused by any of these risk factors. Heart disease and stroke are listed as being the result of high blood pressure, smoking and a poor diet.

These lists of causes are so simplistic as to be of little value to policy makers. The causes of these illnesses are seen by PHE as the consequence of inadequate behaviour by individuals. Much of the research into health inequalities has found that the social and economic environment in which an individual lives can increase or decrease the risk of developing one of these fatal conditions. That environment can have a damaging influence on individual choices of behaviour.

Most of the interventions PHE is suggesting to reduce mortality are to change the behaviour of individuals. For example, to reduce high blood pressure local authorities are encouraged to give “Advice to reduce intake of salt and processed food, which is high in salt and is linked with high blood pressure”. Would it not be more efficient and effective for food manufacturers to reduce the amount of salt they add to processed food?

Ironically, the suggested interventions could increase inequalities. A study by Simon Capewell and Hilary Graham found  “there is evidence that cardiovascular disease prevention strategies for screening and treating high-risk individuals may represent a relatively ineffective approach that typically widens social inequalities. In contrast, policy interventions to limit risk-factor exposure across populations appear cheaper and more effective; they could also contribute to levelling health across socioeconomic groups” (Capewell and Graham, 2010).

Seeing the bigger picture

The mortality rates PHE presents on the website relate to the years 2009-11. At the level of a local authority mortality rates can change from year to year, sometimes widely. On the website local authorities are ranked in a league table from Wokingham, with the lowest under 75 mortality rate in 2009-11, to Manchester, with the highest rate. The range of values is from 200 deaths per 100,000 population in Wokingham to 455 in Manchester. However if we consider death rates during 2008-10, using data from the NHS Information Centre website, the picture is slightly different. Manchester still has the highest death rate, with a figure of 469, but the lowest is the London Borough of Kensington and Chelsea, with a figure of 196. A difference of one year gives an indication of a trend which deserves further exploration. Regrettably it is no longer possible to access trend data on mortality rates easily. Until relatively recently the NHS Information Centre website listed mortality rates from 1991 onwards for every local authority. Most of this historic information has gone from the Information Centre website and it is only possible to see figures for the most recent three years for which information is available.

Between 2008-10 and 2009-11 mortality rates in Westminster and in Kensington and Chelsea increased, whereas in most of the rest of the country they declined. The table below is an extract from data provided by the NHS Information Centre and Public Health England (NHS Information Portal, 2013). Between 2008-10 and 2009-11 the death rates in Brent and Newham showed substantial improvement, with a 13% reduction in Brent and a 17% reduction in Newham. The death rate in Salford went down by 8% and that in Liverpool by 7%. If we had a set of trend data available we could see whether this is a blip or part of a long term change. PHE should make trend data available so that policymakers can form strategies based on a wider perspective than a snapshot.

Premature mortality among a selection of local authorities

Under 75 directly standardised mortality rate

Local Authority           2008-10    2009-11    % change
Kensington and Chelsea      196        213        8.5%
Richmond upon Thames        208        202       -2.7%

The questions raised in this note are also discussed in a comment available at


Capewell, S and Graham, H (2010) Will Cardiovascular Disease Prevention Widen Health Inequalities? PLoS Medicine, Vol 7, No 8, August 2010

DHSS (1980) Inequalities in Health (The Black Report)

NHS Information Portal (2013)

Public Health England (2013) Longer Lives

Tagged , | 1 Comment

An assessment of the Public Health England report

Public Health England has launched a website which presents a collection of information on premature mortality among 150 English local authorities (Public Health England, 2013). The information is provided with the aim of giving local authorities an insight into the leading causes of avoidable early death. However the website displays serious shortcomings and is inadequate as a basis for decision making on resource allocation or political action. The data presented in the Public Health England report are, for each local authority in England, the total death rate under the age of 75 and also rates grouped by four major causes – cancer, heart disease and stroke, lung disease and liver disease.

The Longer Lives website provides some background to the data. It states “The indicators in Longer Lives use records of deaths provided for each year by the Office of [sic] National Statistics. The disease considered to be the underlying cause of each death is recorded using the International Classification of Diseases”.  (Why is the Office for National Statistics named incorrectly?)

Where the data comes from

The causes of death which are listed on a death certificate are not objective measures but are the interpretation by a clinician based on the information available (Bingham, 2012). From April 2014 independent medical examiners will certify all deaths in England other than those requiring a coroner’s investigation (Department of Health, 2011).  As a pilot of the change in procedure for classifying causes of death there was a study of 548 deaths between November 2010 and January 2011 in four areas of England (Vickers et  al, 2011). The cause of death given on each of the 548 death certificates was compared with a medical examiner’s assessment of the cause of death. For 406 out of 548 the original certifier and the medical examiner gave the same cause of death. However, the original certifier found 144 deaths caused by cancer while the medical examiner decided there were 150, an increase of 4%. While the certifier found 128 deaths from circulatory disease the medical examiner found 137, an increase of 7%. Both these groups of conditions are in the Longer Lives list of premature deaths. The figures Public Health England is using are therefore not totally reliable, and are subject to an unknown level of variation.
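The quoted increases are simple before/after ratios, which can be checked directly:

```python
# Checking the percentage increases quoted from the certification pilot:
# 144 -> 150 cancer deaths, 128 -> 137 circulatory deaths.
def pct_increase(before, after):
    return 100 * (after - before) / before

print(round(pct_increase(144, 150), 1))  # 4.2 – reported as an increase of 4%
print(round(pct_increase(128, 137), 1))  # 7.0 – reported as an increase of 7%
```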

When analysing data a confidence interval gives an indication of the level of uncertainty around an estimate of eg a mortality rate. The data presented in Longer Lives relate to the years 2009-11. The Longer Lives website includes a link to a spreadsheet listing the overall rate of premature mortality for each local authority. The spreadsheet includes confidence intervals, but these are not mentioned on the web pages. Regrettably there is no data on the four specific causes of death mapped in the report.

Mortality rates at the local authority level vary over time, sometimes by a wide margin. Generally speaking, the smaller the population of an area the wider the fluctuation in its mortality rates can be over time. One death in an area like Bury, with a population of around 185,000, has a greater influence on its mortality rate than one death in Lancashire, with a population of around 1,172,000. Nowhere on the web pages themselves is there any mention of confidence intervals, or of trends over time in mortality rates. Without considering trend data it is not possible to judge whether the mortality rates in an area are increasing or decreasing. This is a major omission and highlights a lack of rigour in the study.
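A rough sketch of why small populations produce volatile rates and wide intervals; it uses a normal approximation to the Poisson interval, the populations quoted above, and invented death counts chosen to give both areas the same underlying rate:

```python
from math import sqrt

def rate_and_ci(deaths, population, per=100_000):
    """Crude rate per 100,000 and approximate 95% half-width."""
    rate = per * deaths / population
    half_width = per * 1.96 * sqrt(deaths) / population  # normal approx.
    return rate, half_width

for name, pop in [("Bury", 185_000), ("Lancashire", 1_172_000)]:
    deaths = round(pop * 350 / 100_000)          # same underlying rate, ~350
    rate, hw = rate_and_ci(deaths, pop)
    one_more, _ = rate_and_ci(deaths + 1, pop)   # effect of a single death
    print(f"{name}: rate {rate:.1f} ± {hw:.1f}; "
          f"one extra death adds {one_more - rate:.2f}")
```

With these figures Bury's interval is roughly two and a half times wider than Lancashire's, and a single extra death moves Bury's rate about six times as far.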

What the colour key indicates

The website states that “The maps and local authority data pages use a red, orange, yellow and green colour key to indicate how the individual premature mortality rates in local authorities compare.” Public Health England uses league tables to compare and classify local authorities. League tables are a very misleading means of comparing performance. Statisticians advise that, because league tables have shortcomings as a means of presenting data on comparative performance, their use should be accompanied by a warning about the need for caution in drawing conclusions from a set of rankings (eg Goldstein and Spiegelhalter, 1996).

A more informative and intuitively appealing method of presenting comparative data is the funnel plot. Figure 1 below shows the under 75 directly standardised mortality rate for males during 2008-10 in English local authorities, presented as a funnel plot. The highest dot on the plot is Manchester, showing how much higher the Manchester male mortality rate is than anywhere else in England. A league table does not give the message as effectively.

Figure 1

Funnel plot of under 75 male all causes mortality in local authorities in England  2008-10

under 75 male all causes mortality in local authorities in England 2008-10

Source: NHS Information Portal
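For readers wanting to reproduce such a plot, the control limits that form the funnel can be sketched roughly as follows (a normal approximation to the exact Poisson limits used in published plots; the national rate here is invented):

```python
from math import sqrt

NATIONAL = 350          # deaths per 100,000 – illustrative only
PER = 100_000

def limits(population, z):
    """Approximate control limits around the national rate for a given
    population size; z = 1.96 for 95%, 3.09 for 99.8%."""
    expected = NATIONAL * population / PER
    half = z * PER * sqrt(expected) / population
    return NATIONAL - half, NATIONAL + half

for pop in (50_000, 200_000, 1_000_000):
    lo95, hi95 = limits(pop, 1.96)
    print(f"pop {pop:>9,}: 95% limits {lo95:.0f}–{hi95:.0f}")
```

Plotting these limits against population size gives the characteristic funnel: the limits narrow as populations grow, so a small area must diverge much further from the national rate before it counts as an outlier.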

Male and female mortality rates differ across areas. It is an unexplained omission by PHE that male and female mortality rates are not presented separately on the website. Figure 2 below shows the under 75 directly standardised mortality rate for females during 2008-10 in English local authorities. Once again the highest dot on the plot is Manchester. It is striking, though, that female mortality rates are considerably lower than male rates. Presenting male and female rates combined into a single figure is not helpful to decision-makers.

Figure 2

Funnel plot of under 75 female all causes mortality in local authorities in England  2008-10

under 75 female all causes mortality in local authorities in England 2008-10

Source: NHS Information Portal

Figure 3 is a funnel plot of the data provided on the Public Health England website. The graph is a more effective means than a league table of showing the data. The two dots at the top of the chart represent Blackpool and Manchester.

Figure 3

Premature mortality – All persons aged under 75  –  Local authorities in England  –  2009-11

Premature mortality – All persons aged under 75 - Local authorities in England - 2009-11


Public Health England is using the Indices of Multiple Deprivation as a measure of the socio-economic status of an area (DCLG, 2011). PHE says that its classification of socio-economic deprivation divides “local authorities into five groups according to their index of multiple deprivation – an estimate of local affluence or poverty” (PHE, 2013). The Indices are a set of data produced by a series of complex calculations. As the DCLG description of the Indices declares, “It is important to note that these statistics are a measure of deprivation, not affluence, and to recognise that not every person in a highly deprived area will themselves be deprived.” They are NOT therefore a measure of socio-economic status. To use them for this purpose is to misinterpret the data.

Most of the data in the 2010 Indices have been calculated from actual or modelled information relating to 2008, the year the economic downturn began. Since then the socio-economic characteristics and health outcomes of the population have changed. Local authorities are therefore being classified by their situation in 2008 but judged by their performance in 2009-11.

There is some double-counting in using the Indices of Deprivation to account for mortality. Within the Indices are a group of variables relating to illness and mortality. The mortality indicator measures premature mortality during 2004-08. It would be surprising if mortality during 2004-08 did not show some relationship with mortality during 2009-11.


The Longer Lives website is a curiosity. One of its pages presents a list of local authorities grouped by their score on the Index of Multiple Deprivation and ranked by their under 75 directly standardised mortality rate. The website states, correctly, that “generally, more deprived areas have worse premature mortality.”  There is, however, no evidence cited to suggest that deprivation causes premature mortality or that by reducing the level of deprivation in an area its mortality rate will improve. Without establishing that link it is not clear why local authorities with a similar level of deprivation should have similar mortality rates.

It is regrettable that the Indices of Multiple Deprivation are used as a proxy for measures of poverty and affluence. The Indices were not designed to measure affluence, and a more robust index of socio-economic status could be employed to explore the relationship between deprivation and premature mortality.

As human beings we are all at risk of heart disease, cancer and all the other conditions listed in the International Classification of Diseases. The difference across the population is that the level of risk of developing any of these conditions varies substantially. While an individual’s income and wealth may modify their risk of becoming ill, deprivation in itself will not cause the illness, and reducing an individual’s level of deprivation will not necessarily prevent illness.

Among the many causes of premature mortality are an individual’s genetic inheritance, their environment eg air pollution and standard of housing, and socio-economic characteristics like income and occupation.  In any individual the causal chain of an illness begins at the molecular and cellular level before over time becoming identifiable as signs and symptoms of morbidity. Deprivation can be one link in that chain but is not in itself the cause of premature mortality.

The Longer Lives website appears to be suggesting, although it does not say so, that cancer, heart disease, lung disease and liver disease are associated with aspects of deprivation. The causes of many of the cancers which result in premature mortality are far from clear. For some cancers, e.g. brain tumours and lymphomas, the causes are yet to be identified but probably have no connection with deprivation. Others, e.g. breast and skin cancer, show a negative relationship with deprivation (Shack et al, 2008).

Individually local authorities can have limited success in reducing heart disease. As Capewell and O’Flaherty note “Over 80% of cases of premature cardiovascular disease can be prevented through population-wide control of tobacco and governmental policies that promote a healthy diet” (Capewell and O’Flaherty, 2009). Capewell and Graham add that “there is evidence that CVD prevention strategies for screening and treating high-risk individuals may represent a relatively ineffective approach that typically widens social inequalities. In contrast, policy interventions to limit risk factor exposure across populations appear cheaper and more effective; they could also contribute to levelling health across socioeconomic groups” (Capewell and Graham, 2010).

Air pollution is known to be associated with diseases of the heart and lungs. A recent study found that air pollution in London causes 4,267 deaths annually. Air pollution accounted for 9% of deaths in the City of London, and 8.3% in each of the boroughs of Westminster and Kensington and Chelsea (Air Quality News, 2012). These are not the most deprived areas of London.

The canned and processed food sold in shops and supermarkets contains excessive amounts of salt and sugar. Although these are necessary for life, taken in excess they are associated with cardiovascular conditions. If manufacturers added less salt and sugar to those products there would in time be a measurable reduction in premature mortality.

Alcohol consumption varies by social class. The Health Survey for England has found that “People living in higher income households, and those in the least deprived areas, were the most likely to drink above the threshold for risk of harm” (Craig and Mindell, 2012). Another study found that “While deprivation may explain a large proportion of the variability in mortality for alcohol-related liver disease, it does not account for all local authority level variation for other liver diseases” (Beynon and Hungerford, 2012).

There are two ways in which the Longer Lives website could support local authorities in their aim of reducing premature mortality. First, Public Health England could provide a critical assessment of the evidence for the known and suspected causes of premature mortality. Local authorities could use this evidence to build robust strategies to improve life expectancy. Kate Pickett and Richard Wilkinson state in ‘The Spirit Level’ that “inequality has a damaging effect on health” (Pickett and Wilkinson, 2010), and furthermore that “reducing inequality in the OECD countries alone would prevent upwards of 1.5 million deaths per year.” Public Health England could discuss the extent to which inequality within and between local authorities is associated with premature mortality.

Secondly, Public Health England could model the effect, in terms of deaths prevented, of modifying the levels of risk factors for premature mortality. Examples that lend themselves to modelling are a reduction in the average amount of salt consumed per person and an improvement in air quality. Reducing salt intake would reduce cardiovascular disease; improving air quality would reduce a wide range of illnesses. Quantitative models of this type would provide evidence for local authority initiatives to improve health.
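To make the suggestion concrete, one standard way of modelling deaths prevented is the population attributable fraction (PAF). The sketch below is purely illustrative: the prevalence, relative risks and annual death count are invented numbers, not real estimates for salt or any other risk factor.

```python
# Hypothetical sketch of the modelling suggested above, using the
# population attributable fraction (PAF). All figures are invented
# for illustration only.

def paf(prevalence, relative_risk):
    """PAF for a single risk factor: p(RR-1) / (1 + p(RR-1))."""
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

def deaths_prevented(annual_deaths, prevalence, rr_before, rr_after):
    """Deaths averted if the risk among the exposed falls from rr_before to rr_after."""
    return annual_deaths * (paf(prevalence, rr_before) - paf(prevalence, rr_after))

# Illustrative scenario: 60% of the population exceeds the recommended
# salt intake; reformulation cuts the relative risk of cardiovascular
# death among them from 1.3 to 1.1; 10,000 such deaths occur annually.
averted = deaths_prevented(10_000, 0.6, 1.3, 1.1)
print(round(averted))  # roughly 959 deaths prevented per year in this scenario
```

A real model would of course need measured prevalences and relative risks, and would carry their uncertainty through to the estimate, but even a calculation of this simple form would give local authorities an order of magnitude to work with.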

In conclusion, although the Longer Lives website presents valuable data, there is a gap in the evidence for the thinking which led to the choice of data and its method of presentation. Public Health England could provide a further service to local authorities by making that evidence explicit.

This article has been revised and expanded from the original version.


Air Quality News, 2012 (December 10) Air pollution responsible for 4,000 London deaths a year

Association of Public Health Observatories: Analytical Tools for Public Health: Funnel plot for rates (including directly standardised rates)

Beynon, C and Hungerford, D, 2012 Burden of Liver Disease and Inequalities in the North West of England, NHS North West

Bingham, J, 2012 One in four deaths ‘not properly recorded‘, The Telegraph

Capewell, S and Graham, H, 2010 Will Cardiovascular Disease Prevention Widen Health Inequalities? PLoS Medicine, Vol 7, No 8

Capewell, S and O’Flaherty, M, 2009 Trends in cardiovascular disease: Are we winning the war? Canadian Medical Association Journal, Vol 180, No 13

Craig, R and Mindell, J (eds), 2012 Health Survey for England 2011, London: The Information Centre

Department for Communities and Local Government, 2011 English indices of deprivation 2010

Department of Health, 2011 Death certification reforms: New duty on local authorities

Goldstein, H and Spiegelhalter, D, 1996 League Tables and Their Limitations: Statistical Issues in Comparisons of Institutional Performance, Journal of the Royal Statistical Society, Series A, Vol 159, No 3, pp 385-443

Public Health England, 2013 Longer Lives

Shack, L et al, 2008 Variations in incidence of breast, lung and cervical cancer and malignant melanoma of skin by socio-economic group in England, BMC Cancer, 8:271

Spiegelhalter, D, 2002 Funnel plots for institutional comparison, Qual Saf Health Care, 11:390-392

Vickers, L et al, 2011 Death certification: Potential impact on mortality statistics, ONS



Tagged | 3 Comments

I had hoped I had written enough about this, but yet again we get the use of “excess deaths” – a meaningless term, but one used over and over again, often by those parts of the media eager to rubbish our NHS. I also wish we could focus on how to keep Stafford Hospital open because of the value it can bring to the local community, rather than see it sacrificed because of a history that is regularly re-written.

The hundreds of “excess deaths” we keep reading about are derived from comparisons of various standardised mortality statistics. What exactly these statistics measure is controversial, since they rely on a process of coding which we know is subject to error as well as to gaming. Despite this, it is asserted routinely that if a trust has a higher than average standardised death ratio (say 127) then 27% of its deaths are excess deaths. You could equally well assert that an average trust with a ratio of 100 has 25% excess deaths compared with the likes of Addenbrookes with a ratio of 75. On this logic every trust should have a below average death rate or it is killing patients unnecessarily.
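The arithmetic behind such claims can be sketched in a few lines. The figures below are invented for illustration; the point is only that the “excess deaths” calculation produces a number for any trust sitting above whatever baseline you happen to choose.

```python
# Illustrative sketch of the "excess deaths" arithmetic criticised above.
# A standardised mortality ratio (SMR) compares observed deaths with the
# number "expected" from a casemix model. All figures here are invented.

def smr(observed, expected):
    """SMR = 100 * observed / expected deaths."""
    return 100 * observed / expected

# A trust with 635 deaths against 500 expected has an SMR of 127 ...
ratio = smr(635, 500)  # 127.0

# ... and the naive claim is that the 135 deaths above "expected"
# (27% of expected) were excess deaths:
excess = 635 - 500  # 135

# But by the same logic an average trust (SMR 100) has 25% "excess deaths"
# relative to a trust with an SMR of 75. Roughly half of all trusts must
# sit above the average, so the calculation always yields a headline number.
relative_excess = (100 - 75) / 100  # 0.25
```

Nothing in this arithmetic distinguishes deaths caused by poor care from ordinary variation in casemix, coding and chance, which is why the figure is so easily misread.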

Even if you count only the extent to which the ratio exceeds some confidence interval, the argument is the same: “excess deaths” is a meaningless measure. What is useful is the concept of avoidable deaths – where a death occurs because something was done which should not have been done, or something that should have been done was not. In either case it has to be of sufficient seriousness to contribute directly to the death. There have been studies into avoidable deaths, and these show the rate across the NHS as a whole to be around 6%. Such studies do not depend on coding (or other guesswork) – they are based on reviews of case notes and discussions with those responsible, and are carried out by independent consultants (clinical, not management). So in 6% of deaths a better outcome – prolonging life by an estimated average of 18 months – was possible.

And these studies also tell us a lot about why these avoidable deaths happened. The most common causes are incorrect or late diagnosis (possibly because no senior clinician was available at the time), prescribing errors and a collection of failures around monitoring.

In the case of Stafford Hospital nobody knows what the correct standardised mortality ratio was at the time it was subject to intervention. It is clear from the record that coding was not being done correctly, so any mortality ratio calculated from it was wrong.

It is also clear that the only case note review that was made into deaths at Stafford showed little if any evidence of unnecessary deaths. There are no comparative cross-hospital statistics on avoidable deaths, so nobody knows where Stafford would have ranked.

The evidence shows clearly that in parts of Stafford there were appalling and totally unacceptable standards of care, but poor care and unnecessary deaths are not the same thing. The major enquiries actually looked at only part of the hospital, and it is accepted that other parts had services of high quality. Sadly there were, and still are, cases of poor care and avoidable deaths in hospitals.

In Stafford the evidence is that cuts in staffing levels and a reconfiguration of some wards were carried out as part of an expenditure reduction exercise. Similar exercises to reduce expenditure (cost improvements) were happening at every other trust in the country, whether they were in the pipeline to become a Foundation Trust, already an FT or just struggling. In Stafford the suggestion is that the risk involved in doing what many others were also doing had not been properly evaluated, and the impact of the changes on patients was not properly monitored. Information provided to decision makers, regulators, commissioners and performance managers was woefully inadequate and often incorrect or incomplete. Information that indicated problems was either discounted or offset against other information flows.

Since no evidence was taken about what happened in other hospitals I am very wary about any generalisations from what happened in one part of one small hospital.

Having studied far more than just what is in the set piece reviews, I am confident that I still do not fully understand why such poor care happened and why problems were not picked up earlier.

I am confident that a higher than average mortality ratio for one year may well be of no significance. But some trusts have a higher ratio, or a higher death rate however it is measured, year after year. Why? The only explanation I have seen is that the link is to resources.

Those hospitals which have better (relative) levels of funding will have better death rates. Is this a surprise?

Tagged | 4 Comments