Tuesday 27 October 2015

Antibiotic sensitivity testing - are current methods leading to bad stewardship?

We did an interesting exercise in the lab today, trying to quantify the degree of 'uncertainty' in a lab report. We did what many people already do for internal quality assurance and simply put 20 urine specimens through the lab twice, independently. We looked at the culture report, the sensitivity report, the pyuria report, and whether 'extra' tests were done. It was good to see surprisingly high levels of agreement, even in things I thought might be a bit subjective, such as assessment of the level of pyuria.

The uncertainty around sensitivity testing, however, made me revisit some basic assumptions. We are looking for a zone of inhibition around an antibiotic-impregnated disc. If that zone falls below a certain size we say the organism is resistant to that antibiotic. This cut-off, or breakpoint, is defined by committee and is based on a combination of pharmacokinetic and epidemiological criteria. When you consider the science behind the technique, we are making big calls based on small interpretative differences. And this matters - clinicians will, quite rightly, not use an antibiotic that has been called resistant by the lab. So if we call an antibiotic resistant we are denying it to the patient. For urinary tract infections we only have about five oral antibiotics we can routinely use. If we lose some of these we cut down our options - we see an increasing number of patients with no oral options left, and these patients are often admitted to hospital for intravenous therapy. Perhaps worse still, we push clinicians towards using 'last resort' antibiotics, such as meropenem. Each meropenem prescription brings the day of antibiotic Armageddon, when we have no good options left, ever closer.
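To make the mechanics concrete, here is a minimal sketch (Python, with a made-up 19 mm breakpoint - real breakpoints are antibiotic- and organism-specific, and live in the EUCAST tables) of how a continuous zone measurement gets turned into a binary call:

```python
# Minimal sketch of disc diffusion interpretation.
# The breakpoint below is a made-up number for illustration only;
# real breakpoints are antibiotic/organism specific (see EUCAST tables).

BREAKPOINT_MM = 19  # hypothetical breakpoint

def interpret(zone_mm: float) -> str:
    """Binary call: 'S' if the inhibition zone reaches the breakpoint, else 'R'."""
    return "S" if zone_mm >= BREAKPOINT_MM else "R"

# A 1 mm difference in reading the plate flips the whole report:
print(interpret(19))  # S - antibiotic offered to the clinician
print(interpret(18))  # R - antibiotic denied to the patient
```

The point is simply that a millimetre of reading difference, well within normal measurement noise, changes what the clinician is allowed to prescribe.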

So what happened when we looked at the uncertainty of antibiotic resistance testing? Small numbers, but 3 out of about 48 zones (roughly 6%) were reported discrepantly, i.e. one lab scientist called the organism resistant while the other called it sensitive. When we looked at why, all the discrepant zones sat right on the breakpoint - it would have been easy to read them either way. Some people will respond to this by saying things like "standardise better, get some calibrated measuring calipers or an ISO-accredited ruler, and beat your scientists harder."

But in response to that, it is worth viewing the EUCAST website. They've got loads of data on zone sizes, and a few things stand out:

1. If we look at the distribution of zone sizes in bacteria, we often see a normal distribution, with a separate population of resistant organisms with very small zone sizes. But the (somewhat arbitrary) breakpoint often sits at one end of the normal distribution, so the 'resistant' side of the line captures many organisms from the sensitive population.
2. If we look at the distribution of zone sizes in organisms that are known to be resistant, they generally have very small zone sizes (with some notable exceptions).
3. If we look at the inherent variability of zone size measurement, by putting the same organism through the same process every day (what we do for internal quality control), we generally see about a 5-10mm variation. That's about 20% expected variation in zone size, yet we make calls based on differences of less than 5%. Is this really a good idea, or even scientifically correct? (See the sketch after this list.)
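To put a number on point 3, here is a rough simulation (again with made-up values: the same hypothetical 19 mm breakpoint, and an assumed 2.5 mm standard deviation, broadly in keeping with a 5-10 mm day-to-day spread) of what happens when an organism's 'true' zone sits exactly on the breakpoint:

```python
import random

random.seed(1)  # reproducible illustration

BREAKPOINT_MM = 19   # hypothetical breakpoint, as before
TRUE_ZONE_MM = 19    # organism whose 'true' zone sits exactly on the breakpoint
SD_MM = 2.5          # assumed measurement SD, in keeping with a 5-10 mm spread

N = 10_000
calls = ["S" if random.gauss(TRUE_ZONE_MM, SD_MM) >= BREAKPOINT_MM else "R"
         for _ in range(N)]

print(f"reported sensitive: {calls.count('S') / N:.0%}")  # ~50%
print(f"reported resistant: {calls.count('R') / N:.0%}")  # ~50%
```

For an organism sitting on the breakpoint, the report is essentially a coin toss - which is exactly what our three discrepant reads looked like.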

So this is my current thinking. Breakpoints are useful for epidemiological information and studying trends over time, but they are perhaps massively unhelpful in clinical practice in many situations. I wonder whether we should report 'sensitive' if there is a big zone, 'resistant' if there is no zone or a very small one, and call everything else intermediate/uncertain. Looking at the data, chances are most patients in that uncertain band would respond just fine anyway - saving them, and us, from yet more unnecessary broad-spectrum antibiotic use.
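For what it's worth, here is what that three-band report might look like as a rule, with an entirely made-up ±3 mm buffer, of the order of the reading variability seen above:

```python
BREAKPOINT_MM = 19  # hypothetical breakpoint, as above
BUFFER_MM = 3       # made-up buffer around the breakpoint

def report(zone_mm: float) -> str:
    """Three-band call: commit only when the zone is well clear of the breakpoint."""
    if zone_mm >= BREAKPOINT_MM + BUFFER_MM:
        return "sensitive"
    if zone_mm <= BREAKPOINT_MM - BUFFER_MM:
        return "resistant"
    return "intermediate/uncertain"

for zone in (26, 19, 12):
    print(zone, "mm ->", report(zone))
# 26 mm -> sensitive
# 19 mm -> intermediate/uncertain
# 12 mm -> resistant
```

Only the confident calls ever reach the clinician as 'resistant', and the genuinely borderline ones stop driving people towards meropenem.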