Thursday, 12 November 2015
Why I love Antimicrobial Stewardship in Primary Care
1. Data. It's so easy to get data in primary care, in ways that are often completely impossible in secondary care: prescriber, time of prescription, nature of consultation (face to face vs telephone), basic patient demographics, co-morbidities. The one thing that's a bit hard to get is the indication for antibiotics, but that's often not that important if you approach the data in the right way - it's just a starting point for conversation. So we see some interesting things coming out that are starting points for challenging (and often supporting) practice. For instance:
- 20% of young people (aged 20-39) get an antibiotic each year. This is mainly amoxicillin. And it's mainly for coughs and colds.
- 10% of antibiotic prescriptions occur within 2 weeks of a previous prescription. Why is this? Is it treatment failure? Intolerance to the first choice? Never an infection in the first place? If someone is not better in 3 days in primary care, how often is there an indication for a second course? I don't know the answer to this - but I'd quite like to think about it a bit more. Do any microbiologists understand this group of patients?
- If you get given fluclox, the commonest follow-on antibiotics are doxycycline, co-amoxiclav and clarithromycin. What could these antibiotics be doing that fluclox wasn't? Is it a dosing issue? Is it non-infective? Is it pilonidal sinuses involving Gram negatives? And how do you actually manage these nowadays in primary care, especially now that access to surgical specialties is apparently more difficult?
- Nurse prescribers are the biggest prescribers by far. They see all the minor ailments. Are we giving them the support they need?
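Figures like these fall out of very simple queries. As a minimal sketch (the records, field layout and numbers below are entirely hypothetical, not from any real prescribing system), the 'prescription within 2 weeks of a previous one' figure is just a per-patient date comparison:

```python
from datetime import date

# Hypothetical prescription records: (patient_id, drug, issue_date).
prescriptions = [
    ("p1", "amoxicillin",    date(2015, 3, 1)),
    ("p1", "clarithromycin", date(2015, 3, 9)),   # 8 days later: a follow-on
    ("p2", "flucloxacillin", date(2015, 4, 2)),
    ("p2", "doxycycline",    date(2015, 7, 1)),   # months later: unrelated
    ("p3", "trimethoprim",   date(2015, 5, 5)),
]

def follow_on_fraction(records, window_days=14):
    """Fraction of prescriptions issued within `window_days`
    of a previous prescription for the same patient."""
    by_patient = {}
    for pid, _, issued in sorted(records, key=lambda r: r[2]):
        by_patient.setdefault(pid, []).append(issued)
    follow_ons = total = 0
    for dates in by_patient.values():
        for prev, curr in zip(dates, dates[1:]):
            if (curr - prev).days <= window_days:
                follow_ons += 1
        total += len(dates)
    return follow_ons / total

print(round(follow_on_fraction(prescriptions), 2))  # → 0.2
```

Keeping the drug names alongside the dates in the same grouping would immediately give the follow-on pairs too (fluclox then doxycycline, and so on).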
2. Trusted relationships and continuity of care. GPs really know their patients and their lives. They work in close-knit teams and have a shared purpose. And, having worked closely with them for a number of years, I feel I have developed a culture of trust with them. We can talk about anything without fear of judgement. They can talk with me about specific issues in context, not abstract 'guideline'-derived approaches, which often fail to address the difficult situations that don't 'fit' our medical archetypes. It's no good saying "Don't ever give prophylaxis for urinary tract infections" if you haven't sat with them and talked in a specific way about just what you would do, and shown that you understand the problem from the perspective of the patient. I gained 'validity' here by doing joint clinics with GPs, talking to patients with recurrent infection.
We had a great discussion today about different techniques GPs use to talk to patients. Again, this sort of peer led discussion is a key part of any 'norming' part of a behaviour change methodology, and can only be done within a culture of trust. We talked about how it's really helpful to draw a graph of 'symptoms' vs 'time' and show patients where they are on the line. We then thought about how the 'Listen To Your Gut' message could fit in after this. Then talk about red flag symptoms and whether the patient has any. And then really allow the decision to prescribe antibiotics to sit with the informed patient. It's no longer a battle, but a conversation.
3. The detail is important. So developing this concept of validity, you have to be able to talk in detail about things in a way that connects with those asking the question. If you can't have a trusted opinion on how to manage a patient who's had a cough for 3 weeks (even if it's "I don't know - there is no evidence; my feeling is that this is appropriate") then there is not a huge point in having a meeting.
4. It's a two way thing. These meetings in primary care are not about me ("the expert") giving out information. It's partly about that ("frankly, I only really worry when they're rigoring and dropping their blood pressure...you can probably watch and wait in most other situations"; "co-amoxiclav adds no significant additional cover to fluclox for skin infection, and has worse pharmacokinetics and more side effects"; "I know we use multiple antibiotics to treat TB to stop the emergence of resistance. But it doesn't seem to work that way in most infections, and you just increase side effects.")
But for me, it's more about getting a sense of the demand as it presents to primary care, and thinking about how we can help. Just what do you do with a confused old person? In hospital they get chest X-rays and blood tests by the gallon, and then we sort of have a plan. This is just so much harder in primary care. How can we support people to do this better? Urgent bloods? In someone with otitis media not responding to amoxicillin, is it reasonable to give co-amoxiclav? I have to confess I don't have a strong opinion on this - I can explain the bacteriology of why it might be appropriate (resistant Haemophilus) - but do I actually know the natural history of this disease, and what investigations, if any, are appropriate? If we don't hear these stories as they present, how can we have an opinion?
Three next steps for primary care infection optimisation
1. Is stewardship a bit too negative? Our common purpose is to optimise the care of the patient in front of us with possible infection. I would suggest that we should talk more in this positive manner. Again, I would note how the Listen To Your Gut message fits well with this approach.
2. We need to develop antimicrobial stewardship teams that work across health communities. It is important that I know how infection is managed in both primary and secondary care. We need to build teams that can work in this way.
3. We need to work with all stakeholders - and we clearly need much more focus on non-medical prescribers; but also HCAs and nursing homes. They may not be able to prescribe, but they can strongly influence prescriber behaviour by the way they relate the patient story or request investigations.
Tuesday, 27 October 2015
Antibiotic sensitivity testing - are current methods leading to bad stewardship?
We did an interesting exercise in the lab today, trying to quantify the degree of 'uncertainty' in a lab report. We did what many people already do for internal quality assurance and simply put 20 urine specimens through the lab twice, independently. We looked at the culture report, the sensitivity report, the pyuria report, and whether 'extra' tests were done. It was good to see surprisingly high levels of agreement, even in things I thought might be a bit subjective, such as assessment of the level of pyuria.
The uncertainty around sensitivity testing, however, made me revisit some basic assumptions. We look for a zone of inhibition around an antibiotic-impregnated disc; if that zone falls below a certain size we say the organism is resistant to that antibiotic. This cut-off, or breakpoint, is defined by committee and is based on a combination of pharmacokinetic and epidemiological criteria. When you consider the science behind the technique, we are making big calls based on small interpretative differences. And this matters - clinicians will, quite rightly, not use an antibiotic that the lab has called resistant. So if we call an antibiotic resistant, we are denying it to the patient. For urinary tract infections we only have about five oral antibiotics we can routinely use. If we lose some of these we cut down our options - we see increasing numbers of patients with no oral options left, and these patients are often admitted to hospital for intravenous therapy. Perhaps worse still, we push clinicians towards 'last resort' antibiotics, such as meropenem. Each meropenem prescription brings ever closer the day of antibiotic Armageddon, when we have no good options left.
So what happened when we looked at the uncertainty of antibiotic resistance testing? Small numbers, but 3 out of about 48 zones were reported discrepantly, i.e. one lab scientist called resistant while the other called sensitive. When we looked at why, all the discrepant zones sat on the breakpoint - they could easily have been read either way. Some people will respond to this by saying things like "standardise better, get some calibrated measuring calipers, or an ISO accredited ruler, and beat your scientists harder."
But in response to that, it is worth visiting the EUCAST website. They've got loads of data on zone sizes. We can see a few things:
1. If we look at the distribution of zone sizes in bacteria, we often see a normal distribution, with a separate population of resistant organisms with very small zone sizes. But the (somewhat arbitrary) breakpoint often sits at one end of that normal distribution, so it catches many sensitive organisms.
2. If we look at the distribution of zone sizes in organisms that are known to be resistant, they generally have very small zone sizes (with some notable exceptions)
3. If we look at the inherent variability of zone size measurement, by putting the same organism through the same process every day (what we do for internal quality control), we generally see about a 5-10mm variation. That's roughly 20% expected variation in zone size. Yet we make calls based on differences of less than 5% - is this really a good idea, or even scientifically correct?
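A quick simulation makes the point about measurement noise. The breakpoint and noise figures below are illustrative choices, not EUCAST values: an isolate whose true zone sits exactly on the breakpoint is essentially a coin toss between 'S' and 'R', while one well clear of the breakpoint is called consistently.

```python
import random

random.seed(42)  # reproducible illustration

# Illustrative numbers only: a 20mm breakpoint and +/-2.5mm of reading
# noise (roughly the 5mm day-to-day spread seen in internal QC).
BREAKPOINT_MM = 20.0
NOISE_MM = 2.5

def call(zone_mm):
    """Binary call as currently reported: resistant below the breakpoint."""
    return "R" if zone_mm < BREAKPOINT_MM else "S"

def fraction_called_sensitive(true_zone_mm, n=10_000):
    """Read the same isolate n times with uniform measurement noise
    and return the fraction of reads called sensitive."""
    sensitive = sum(
        call(true_zone_mm + random.uniform(-NOISE_MM, NOISE_MM)) == "S"
        for _ in range(n)
    )
    return sensitive / n

print(fraction_called_sensitive(20.0))  # on the breakpoint: roughly 0.5
print(fraction_called_sensitive(26.0))  # well clear of it: 1.0
```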
So this is my current thinking. Breakpoints are useful for epidemiological information and for studying trends over time. But in many clinical situations they are perhaps massively unhelpful. I wonder whether we should report 'sensitive' if there is a big zone, 'resistant' if there is no zone or a very small one, and call everything in between intermediate/uncertain. Looking at the data, chances are most patients in that intermediate group will respond just fine - saving them, and us, from yet more unnecessary broad-spectrum antibiotic use.
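A minimal sketch of what such a three-way report might look like, assuming an illustrative breakpoint and an arbitrary 3mm uncertainty band (neither value comes from any real committee):

```python
# Illustrative breakpoint and an arbitrary 3mm 'uncertainty band'.
BREAKPOINT_MM = 20.0
BAND_MM = 3.0

def report(zone_mm):
    """Three-way report: only call S or R when the zone is well clear
    of the breakpoint; flag everything near it as uncertain."""
    if zone_mm >= BREAKPOINT_MM + BAND_MM:
        return "sensitive"
    if zone_mm <= BREAKPOINT_MM - BAND_MM:
        return "resistant"
    return "intermediate/uncertain"

for zone in (28.0, 20.5, 12.0):
    print(zone, report(zone))
```

The width of the band is exactly the sort of thing the internal QC data above could calibrate: make it at least as wide as the observed day-to-day measurement spread.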
Sunday, 13 September 2015
Targets, KPIs and why we should be wary
In a 2006 paper, Bevan discusses the problems with targets. In summary, these are twofold. One is synecdoche - the assumption that the part represents the whole. The other is gaming, which leads to three phenomena.
The first is the ratchet effect, where things get 'better' year on year, often as a result of setting future targets against benchmarks of previous performance. We see this a lot in healthcare. It is rife in infection control - reported hand hygiene compliance rates of 95% (when the best studies achieve only about 60%) are one example. No hospital wants to look like its hand hygiene is worse than that of the next door hospital.
The second is the threshold effect, where managers alter the system to deliver to the target...and no more. There is no reward for doing better. In the paper, Bevan cites evidence that ambulance trusts redistributed response centres to urban areas. This had little effect on urban response times, beyond bringing them below the 8 minute threshold, but had a profound effect on rural response times. And as rural calls were relatively few in number, their effect on the target could be ignored.
The third type of gaming is manipulation of the output data. There are many ways to do this. Clinical coding is a minefield of variability to be exploited here. And we are all familiar with the stories of A+E trolleys becoming beds. I recently heard about an out of hours GP service that performance manages against time to triage. Very worthy. But clinicians quickly learn they can stop the clock by entering a single full stop into the clinical record, getting the managers off their backs, while they get on with doing the work they were trained to do.
But more worrying is actual distortion of clinical practice. So, for example, surgeons refusing high risk cases. And I know of one laboratory that doesn't load blood cultures if clinical details are vague in order to hit MRSA targets. There is a valid clinical argument here, but it's quite weak, and surely this decision is now open to considerable criticism in the face of the target culture.
Bevan discusses how the target culture in the USSR led initially to large productivity gains over the first couple of decades. But this was followed by stagnation and ultimate failure. Perhaps we see the same in health. Targets are initially well meaning, and often focus activity on issues of concern. But quickly the target becomes the point, and its purpose as an agent of change is subverted.
Bevan argues that we can improve things. More random checks perhaps, using more random measures. But ultimately we probably just need more face to face peer led assessment. And for me, this probably leads us away from the comfort of the target, to a more nuanced narrative assessment written in collaboration between assessors and those doing the work.
Wednesday, 19 August 2015
Why clinicians shouldn't think about test costs
There is evidence that making clinicians aware of test costs reduces requesting, and this has been used as justification for including costs at the requesting stage as a means of reducing unnecessary testing. This would go into the typical arsenal of 'demand management'.
Clinicians are already under pressure to make complex management decisions that are in the best interest of the patient. How does adding more information into this equation at this stage help?
The way costs are interpreted will be dependent on how they are framed. So I could say that it costs £50 to manage a possible infection. Adding a CRP into the mix, at about £5, is relatively trivial. Or I could say that a CRP is about 50 times more expensive than a standard biochemical test, and that this is now a considerable burden on lab expenditure.
The clinician will either then choose to do the test (it's not that expensive, it doesn't really matter) or choose not to do the test (I need to do my bit to save the health economy money). There are at least two problems with this approach.
1. It leads the clinician away from their primary purpose, which is to optimise care for the patient in front of them. They cannot be expected to make an accurate economic assessment on the basis of one piece of information.
2. True costs of testing will be hidden. So there is less pressure to reduce high volume but low cost testing. This adds up, but an individual clinician working in isolation cannot be expected to understand or evaluate this.
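A toy calculation, with entirely made-up volumes and prices, shows how this plays out: the test that looks 50 times more expensive per request can cost the health economy less in total than the 'cheap' high-volume one.

```python
# Entirely hypothetical annual volumes and unit prices.
tests = {
    "CRP (looks expensive per test)": {"unit_cost_gbp": 5.00, "volume": 40_000},
    "U&E (looks cheap per test)":     {"unit_cost_gbp": 0.10, "volume": 3_000_000},
}

def annual_spend(test):
    """Total yearly cost of one assay: unit price times request volume."""
    return test["unit_cost_gbp"] * test["volume"]

for name, t in tests.items():
    print(f"{name}: £{annual_spend(t):,.0f} per year")
# With these made-up figures the 'cheap' test (£300,000) outspends
# the 'expensive' one (£200,000).
```

An individual clinician shown only the £5 vs 10p unit prices has no way of seeing this; it is a system-level calculation.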
There must be an optimum level of testing. What we need are ways to understand the utility of diagnostics across whole pathways. We need to understand how these tests benefit (or harm) patients. We need ways of assessing the true costs of tests to the health service, with transparency of how labs price their tests, and with inclusion of downstream costs. We then need to find ways to help clinicians order these tests accurately.
All this must include an assessment of cost effectiveness. But keep this out of the clinic and do it with proper informed debate.
I will add that this almost certainly needs new ways of working between labs and users, and needs approaches to contracting that break the insidious link between activity and income. But that's another blog.
Tuesday, 7 July 2015
The Kings Fund on Better Value and Pathology Optimisation
So it is exciting that tomorrow we will launch a new pathway for DVT management in primary care, which has been arrived at through close collaborative working between pathology, pharmacy, physicians and primary care. And even more excitingly, we will also introduce the new optimisation team, which consists of two of our North Devon biomedical scientists, a pathologist, a GP, a GP trainee, and a public health doctor. We have also secured funding for link GPs in all practices who will work with the optimising team to a) understand demand b) study the gaps between demand and delivery and c) work towards closing this gap.
It is sometimes a little disconcerting to be doing things that few others see as possible. Although we have heard many good things about the work we have been doing, few others seem to be trying to replicate it. So it is reassuring to read the recently released Kings Fund report "Better Value In the NHS." This document is a call to arms for clinicians to lead the way on improving value in the NHS, and sees this as the way to ensure the future sustainability of the service. Many of the things it calls for are in our Pathology Optimisation service.
1. We need to tackle overuse and underuse of services. This is optimisation. Overuse, in particular, is hugely expensive. We have seen that this is not just financial, but also through opportunity cost. And in services that are stretched, where demand exceeds capacity (and this is almost everywhere, but particularly in primary care) it is this opportunity cost that is slowly killing the sort of healthcare that people actually want. The use of diagnostics has skyrocketed over the last decade. This has been associated with negligible benefits (as we have posted previously) but considerable harms, some physical, some mental. 5% of test results lie outside reference ranges. We have seen how this leads to activity that is usually of no benefit to patients, but that sucks the lifeblood from the NHS.
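Reference ranges are conventionally set to cover the central 95% of a healthy population, so a healthy patient has roughly a 5% chance of an 'abnormal' result on any single test. Assuming independent tests (a simplification), the chance of at least one out-of-range result compounds quickly across a panel:

```python
def p_at_least_one_abnormal(n_tests, p_abnormal=0.05):
    """Chance a healthy patient has at least one out-of-range result,
    assuming each test independently has a 5% false-abnormal rate."""
    return 1 - (1 - p_abnormal) ** n_tests

for n in (1, 5, 12, 20):
    print(n, round(p_at_least_one_abnormal(n), 2))
# 1 → 0.05, 5 → 0.23, 12 → 0.46, 20 → 0.64
```

By a 20-test panel, a healthy patient is more likely than not to have something 'abnormal' to chase.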
2. Teams delivering better care. We cannot design services in isolation. This is a traditional problem for pathology, which produces highly accurate results but often throws up its hands in despair when asked to consider whether the tests were actually appropriate, or whether the results were acted on appropriately. "These are not things we can control." "The standard of education these days is just not what it was." But we have shown that we can act on the pre- and post-analytical pathways, and that the only way to do this is through close engagement with all stakeholders, with the purpose of the pathway, as defined by the citizen through their stories, as the compass which keeps us on track.
Our latest work, on the DVT pathway, was blocked by silo thinking. We could see no way to get an urgent D-dimer test performed in primary care. We could see no way of dealing with anticoagulation in low risk (below knee) potential DVTs if an ultrasound was not available immediately. And yet we heard the patient stories of care that did not seem to care - patients shunted around the system being treated in ways that were certainly sub-optimal at best (such as having to travel 40 miles to have a blood thinning injection that was not actually necessary).
These problems were unblocked when we got together as a team and understood the problems from others' perspectives, challenging the limits of what was possible. For our pathway, the key enablers came when the laboratory showed that D-dimer is stable in refrigerated citrated blood for 24 hours, and the physicians said it was safe to wait 24 hours before making a treatment decision on a low risk (below knee) DVT. We must not be complacent that we have 'got it right', and the optimisation team will be important players in embedding this pathway into practice and monitoring its efficacy.
I will leave the last words to the Kings Fund:
"The challenge facing the NHS over the coming years is fundamentally about improving value rather than reducing costs. Framing the debate in these terms emphasises the role of quality and outcomes in meeting the challenges facing the health system, as well as providing the right language to engage clinicians and frontline staff in making change happen."
Thursday, 2 July 2015
Pathology supporting chronic disease management: It's not all about the numbers!
Sunday, 28 June 2015
Some thoughts on leading measures - with education as an example
The trouble is that lagging measures are so ingrained in all that we do in healthcare that it's very difficult to start thinking differently. So I find it helpful to think about things where I have only a rudimentary understanding of the process, but quite strong views (as a citizen) about what matters. Education is one example, so here are some thoughts on what matters, and how I might measure these things as a school governor. With thanks to @Primary_Ed for the structure around growth mindsets.
1. It matters that my child enjoys school
Ask a child when they turn up for school in the morning if they are looking forward to it.
2. It matters that my child has a 'growth mindset'.
Note I, personally, am not interested in whether my child has learned any facts - this is what Wikipedia is for. But it does matter that they know how to learn, and they know how to access facts, make sense of them, and use them to solve real problems and be interested in the world around them.
I think I would want every child to show me a balanced example of these things about their work, either in books, or in the classroom:
a. This work is OK - but is it my best work?
I know what I am going to do next to make this work better
I understand what I am doing at the moment and am now practising making sure I can do it well.
b. I have made a mistake - and this is good because I can learn from it
I have made a mistake and I know what I need to do next to learn from it
I find this work hard but I am working hard to understand it
c. This work is awesome - I'm on the right track to being the best that I can.
This is work I didn't think I could do before, and I have worked hard to get here.
I am good at what I am doing now and I am enjoying using my new skills
I really think it is very important that we ask children about their attitudes to mistakes:
I am happy when I make a mistake
I won't be told off if I make a mistake.
My teacher helps me know what to do next if I make a mistake
I like to help my friends if they make a mistake and I know how to do it.
There are other things in the growth mindset that look at how children approach problems ("This is too hard"; "I can't do French"; "I'll never be as good as her"; "I can't get any better at this"; "I give up") that might be measurable. I am hoping they are, to some extent, captured in the measures above (so, for instance, the measure of 'awesomeness' is a personal one, and reflects, to me, the extent to which the teacher knows the child and what constitutes challenge and success for them).
3. It matters that my child enjoys a rich variety of experiences
I'm not sure how I would measure this. How about something like "Number of things my child does that are led by a specialist who is not their usual teacher."