March 2018 

The NHS, and healthcare systems in general, face the constant challenge of balancing the needs of patients with the need to balance the books. Achieving clinical and financial sustainability is by no means an easy feat, however. As organisations vie to become more efficient and more effective by driving out costs, achieving performance benchmarks is often seen as a solution.

Benchmarking is a process of comparing business processes and performance metrics to industry bests and/or best practices from other industries…Improvements from learning mean doing things better, faster, and cheaper.

In many of the organisations I have worked alongside, there is a real desire to understand how others perform on measures such as length of stay, bed occupancy, A&E admission rates, referral conversion rates, first-to-follow-up rates, and theatre throughput and utilisation. This is absolutely the right thing to do – but the focus should be about more than just the numbers. It needs to be about the process: how such performance is achieved and whether there is a pay-off elsewhere.

Benchmarks are widely quoted in clinical bed models, clinical strategies and business cases, with the focus firmly on the numbers. There is absolutely nothing wrong with aspiring to get better, but greater emphasis needs to be placed on the context, outcomes and process rather than on the benchmark figures in isolation. If you do the right things, for the right reasons, in the right way, performance and outcomes are more likely to improve. This approach is also likely to be more sustainable than a 'quick fix' which doesn't take a rounded view.

A situation I have come across is that where current performance is already better than the benchmark, those responsible for driving change can be left with little incentive to improve – or worse, a level of complacency can set in which undermines the longer-term ability to continuously improve.

As previously noted, there can often be a lack of emphasis on context and local circumstances. Without context, clinical practices and processes may not transfer well. Even within defined 'peer groups', clinicians and managers often tell us that their patient case mix is different; that they serve a more deprived demographic; that they don't have enough senior staff to deliver change. These factors, whether anecdotal or factual, cannot be ignored, so regard for the unique needs of each organisation is hugely important.

There is still a lack of transparency and consistency in how health outcomes are measured and benchmarked. I have seen little evidence correlating performance and outcomes in one area with performance in other areas of the healthcare system. For example, if outpatient follow-up rates are higher, does this result in lower A&E or primary care attendances (and vice versa)? If an organisation is failing the 4-hour A&E target, is there a correlation with admission rates from A&E? As a patient, I would rather spend 5 hours in A&E waiting for diagnostic and pathology results and be discharged home than be admitted at 4 hours, as there is then a much greater likelihood of being held up in the system. Consider also an organisation that has a low length of stay but a high readmission rate – surely not a good measure of performance? I would therefore advocate a balanced scorecard approach rather than focussing on a single metric.
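The balanced scorecard point can be made concrete with a minimal Python sketch. The trust names, lengths of stay and readmission rates below are invented for illustration, not real NHS data: the trust that 'wins' on length of stay alone falls behind once readmissions are weighed alongside it.

```python
# Hypothetical figures for illustration only - not real NHS data.
trusts = {
    "Trust X": {"los_days": 3.1, "readmit_rate": 0.16},  # fast discharge, many readmissions
    "Trust Y": {"los_days": 4.4, "readmit_rate": 0.07},  # slower discharge, few readmissions
}

# Single-metric view: shortest mean length of stay looks best.
best_on_los = min(trusts, key=lambda t: trusts[t]["los_days"])
print(f"Best on length of stay alone: {best_on_los}")  # -> Trust X

# Balanced view: normalise each measure to 0-1 (worst performer = 1)
# and average them. Equal weights are an arbitrary illustrative choice.
def balanced_score(metrics):
    los_norm = metrics["los_days"] / max(t["los_days"] for t in trusts.values())
    readmit_norm = metrics["readmit_rate"] / max(t["readmit_rate"] for t in trusts.values())
    return (los_norm + readmit_norm) / 2  # lower is better

best_on_scorecard = min(trusts, key=lambda t: balanced_score(trusts[t]))
print(f"Best on the balanced scorecard: {best_on_scorecard}")  # -> Trust Y
```

The weighting and normalisation choices matter a great deal in practice; the point here is simply that adding a second measure can reverse a ranking.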

Ask yourself whether you would rather spend 5 hours in A&E awaiting diagnostic and pathology results and be discharged home, or be admitted at 4 hours and risk being held up in the system.

Are we comparing apples with apples?

A potential challenge with benchmarks is that they are not always comparing like with like. Every patient is different, so the casemix for providers can and does vary. Providers can, and do, record data inconsistently – depth of coding varies from organisation to organisation, as do the systems for recording data and information. All of this points to using benchmark-based targets with caution.

There are often good reasons for variation in how organisations perform. In a teaching hospital, for example, the duration of a procedure may often be longer, admission rates may be higher and outpatient follow-up rates may be higher. This is often because of the training element, which requires more time, and because of the blend and experience levels of staff.

Example: Benchmarking acute length of stay

A bed model which I recently reviewed for a client was predicated on the assumption that, in the future, the organisation would achieve upper quartile length of stay performance in every specialty. There is an apparent cultural fixation with upper quartile targets in the UK, which are often unrealistic because there is typically no process or credible plan for achieving them. Putting casemix issues to one side, achieving upper quartile length of stay across all specialties would likely make the provider in question one of, if not the, best performing Trusts in the country. Is this realistic or fantastical?
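A rough simulation shows just how stringent "upper quartile in every specialty" is. The numbers below (around 130 acute trusts, 20 specialties) and the assumption of independent rankings are my own simplifications, not figures from the bed model in question – real performance is correlated across specialties, which softens the effect – but the stringency of the target is clear.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

N_TRUSTS = 130       # roughly the number of acute trusts in England (assumption)
N_SPECIALTIES = 20   # illustrative count of specialties

# For each trust, draw an independent percentile rank per specialty and
# count how many trusts land in the upper quartile (rank <= 0.25) everywhere.
in_upper_quartile_everywhere = 0
for _ in range(N_TRUSTS):
    ranks = [random.random() for _ in range(N_SPECIALTIES)]
    if all(r <= 0.25 for r in ranks):
        in_upper_quartile_everywhere += 1

print(in_upper_quartile_everywhere)  # -> 0: under independence the chance
                                     # per trust is 0.25**20, about 9e-13
```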

Creating an unrealistic aspirational view of the future can potentially cause more harm than good.


There is absolutely nothing wrong with aspiring to improve, but if such targets cannot be converted into reality, they risk demotivating and frustrating those charged with delivering change, and performance may deteriorate as a result, not improve. In an already stretched service, benchmarks risk becoming a stick with which to beat people, when surely a carrot would be better.

Example: Benchmarks for outpatient performance

In an outpatient setting, the most commonly quoted benchmarks are around DNA rates and follow-up rates. The first thing to check is that we are comparing like with like – do all organisations record and count data in the same way? Different providers can often record the same type of activity in different ways in an outpatient setting, so any benchmarks can be flawed. In terms of follow-up rates, is fewer always better? What if seeing patients less frequently, but over a sustained period, avoids hospital admissions or deflects activity from another part of the system?

Example: Theatre Utilisation

One measure I often reference with the potential to be misused and misinterpreted is theatre utilisation, which in the context of this article is simply defined as the proportion of funded theatre time used to operate on patients.

Consultant (A) can do three procedures in a 4-hour session but is quite slow, and as a result uses 90% of the funded theatre time. Consultant (B) is super-efficient and can do four of the exact same procedure in less time, with the same outcomes, at a utilisation of 80%. Consultant (B) is clearly the more effective and efficient, but focussing on utilisation as the lead indicator would suggest Consultant (A) is better. This demonstrates the importance of context and of taking a balanced view across measures – in this example, throughput and outcomes are the more important leading measures.
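The arithmetic behind this scenario can be set out in a few lines of Python. A 240-minute funded session is assumed, consistent with the 4-hour session above, and the operating minutes are back-calculated from the stated utilisation rates:

```python
SESSION_MINUTES = 240  # one funded 4-hour theatre session

consultants = {
    # name: (procedures completed, operating minutes used)
    "A": (3, 216),  # slower: 216/240 = 90% utilisation
    "B": (4, 192),  # faster: 192/240 = 80% utilisation
}

for name, (procedures, minutes_used) in consultants.items():
    utilisation = minutes_used / SESSION_MINUTES
    print(f"Consultant {name}: utilisation {utilisation:.0%}, "
          f"throughput {procedures} procedures/session")

# Consultant A: utilisation 90%, throughput 3 procedures/session
# Consultant B: utilisation 80%, throughput 4 procedures/session
# Ranked on utilisation alone, A looks better; ranked on throughput
# (with identical outcomes), B clearly is.
```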

So is there still a place for benchmarks?

Despite the limitations of certain benchmarks, I still believe they have a place in healthcare: they provide a useful yardstick for identifying and prioritising the areas with the greatest opportunity for performance improvement. They do, however, need to be underpinned by realistic and well-thought-out action plans for how change will be managed and realised.

Benchmarks should be used as a signpost, not a detailed route map. 

Benchmarks will continue to be used liberally across healthcare systems, as they are embedded and entrenched in the culture. Where they are used, it should be with a sense of caution, realism and common sense. A balanced scorecard approach may offer deeper insight into overall performance than focussing on, and cherry-picking, specific measures.

In Summary...

Careful judgement must be exercised when interpreting benchmarking outputs. It is important to note that any benchmark can only ever represent a performance reference point – either achieved or desired – at a given point in time. Also consider, that as performance improves towards the goal of best practice, so the benchmark might need to be re-evaluated and re-defined, rather than being seen as a fixed line in the sand.

It is also important not to draw conclusions about organisations based purely on the data. Different organisations have different clinical models, quality of services, complexities, and natures and sizes of units. The real value in benchmarking comes from its subsequent use in peer group discussion and analysis. The data and information gathered provide a basis for evidence-based discussion and decision making.

 

This article was written by a Consulting Manager for GE Healthcare Partners, with contributions from a Senior Consultant, both specialising in data analytics.

 
