Author Archives: Carl Heneghan

About Carl Heneghan

Carl is Professor of EBM & Director of CEBM at the University of Oxford. He is also a GP and tweets @carlheneghan. He has an active interest in discovering the truth behind health research findings.

The Good, The Bad & The Unsure

Happy 2015! I hope Christmas 2014 was a good one. It was my daughter’s second Christmas and I loved watching her running around excitedly with all her new wrapping paper, ignoring the presents it was covering, of course!

I also love the classic films that were rolled out over the festive period. This year I found myself watching arguably the best – certainly the most famous – spaghetti Western of them all starring Clint Eastwood: ‘The Good, the Bad and the Ugly’ (why are they called ‘spaghetti’ by the way?).

Not that I am a huge fan of Westerns, but it’s the best link I could come up with for the topic of this blog: discussing the Good, the Bad and the Unsure around the evidence for teaching evidence-based medicine (EBM) – or evidence-based practice (EBP) or health care (EBHC), should you prefer. I’ll refer to EBP to cover all of these from here on in.

EBP as a recognised discipline within the field of medicine is relatively young (there’s a nice podcast with Richard Smith discussing the history of EBP with its founding fathers here). Dave Sackett, a founding father, defines EBP as ‘the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.’

He also describes what EBP is not: ‘Evidence based medicine is neither old hat nor impossible to practice. The argument that “everyone already is doing it” falls before evidence of striking variations in both the integration of patient values into our clinical behaviour and in the rates with which clinicians provide interventions to their patients.’

In terms of teaching EBP there has been growing recognition that healthcare professionals need to be trained in the methods of EBP, with the General Medical Council advocating that graduates must ‘…be able to integrate and critically evaluate from all these sources to provide a firm foundation in medical practice.’

Medical schools – at least in the UK – are attempting to address the challenge, although much work is needed.

So the long and short of it is that EBP is important for the care of patients, health care workers might not be practising it, and therefore they should be taught about it at medical school. But what’s the evidence (good, bad and unsure) that teaching EBP actually makes a difference to the practice of EBP, and more importantly to health care outcomes?

The Good.
In a 2002 edition of JAMA, Rose Hatala and Gordon Guyatt discussed the evidence for teaching EBP and the difficulties in generating it. They point out that a 2001 Cochrane systematic review of the evidence for teaching EBP found one eligible study, which showed a 19% improvement in the critical appraisal knowledge of health professionals randomised to critical appraisal teaching compared with controls.

The 2011 updated version of the same Cochrane review found two additional eligible studies, which confirmed an improvement in the knowledge and critical appraisal skills of interns and practising physicians.

A recent overview of reviews (essentially a systematic review of systematic reviews) by Young and colleagues, published in PLoS ONE, included systematic reviews assessing at least one element of teaching EBHC, to provide an ‘overall’ assessment of its effectiveness. The overview included 16 reviews published between 1993 and 2011, covering a total of 81 separate studies with overlap across the reviews (37 studies were included in more than one review). Four reviews were classed as high quality.

That’s a considerable increase in the number of primary studies (and systematic reviews of them) since Hatala and Guyatt’s 2002 paper, and reflects a marked response to the need for evidence in this area.

The overview found, perhaps predictably, that multifaceted (i.e. delivered using multiple formats, e.g. lectures, workshops, small groups, tutorials, online modules), clinically integrated (real-life scenarios) interventions, with assessment, led to improvements in knowledge, skills and attitudes.

Another good thing is that EBM enthusiasts are aware of the issues with teaching and practising EBM and are happy to discuss them openly and honestly. A nice example appears in the June 2014 issue of the BMJ, where Trish Greenhalgh and colleagues discuss whether EBM is a movement in crisis, highlighting the need to return to the founding principles of EBM – “to individualise evidence and shared decisions in the context of a humanistic and professional clinician-patient relationship”.

The authors highlight the good work in teaching the basics of EBM – asking relevant, focused questions, finding the best evidence and critically appraising it – but call for a move towards the use of real case examples, encouraging intuitive reasoning in the clinic and at the bedside, and teaching how to share both evidence and uncertainty with patients using decision aids, adapting to individual needs, circumstances, and preferences.

The Bad.
The Hatala paper highlighted two shortcomings of the evidence on teaching EBP available at the time. First was a lack of validated outcome measures, with too much reliance on self-reports and learner satisfaction questionnaires. Second was the struggle to assess learners’ behaviours and patient outcomes.

Twelve years on, both the Cochrane review and the recent overview indicate that these problems remain, particularly as Young et al identify that none of the 81 primary studies across the 16 systematic reviews reported practice-related outcomes. The difficulty with assessing these outcomes lies in the size and duration of study required to observe any meaningful effect.

The quality of 12 of the 16 systematic reviews was reported as poor, with a high risk of missed studies and of reporting and publication bias. However, it is important to note that using patient health measures may not be feasible – unless the scale of programmes is very large and sustained, with follow-up over many years – hence the lack of rigorous data on patient outcomes.

Jonathan Mant and Nicholas Hicks suggest we concentrate on immediate measures (knowledge, skills, attitudes, behaviour) and then seek to establish a relationship with health outcomes through observational data. Sounds good to me.

The Unsure?
Notwithstanding the issues around practitioner behaviour and patient-related outcomes already mentioned, there are a number of areas of uncertainty that I think exist within EBP, both in terms of the practice and teaching thereof.

There are likely to be areas within medicine where performing the 5A’s of EBP is not feasible. An example might help. Let’s say an interested paramedic has recently completed an EBM course and, as a result, has new confidence in applying the 5A’s of EBP. She sees on the BBC website that a new trial is being conducted to assess whether giving adrenaline after cardiac arrest actually does more harm than good. She agrees this is a good clinical question (the 1st A of EBP) and makes a note to look out for the ensuing publication.

She is one of the first to find the related paper when it’s published (the 2nd A of EBP) and uses her critical appraisal skills (the 3rd A of EBP) to determine that the statistically and clinically significant finding of a 24% absolute reduction in the risk of death is a real, unbiased finding. Armed with this knowledge, she now wishes to apply it to her practice. So for the next cardiac arrest patient she sees, she decides to withhold adrenaline. What do you think the consequences of her actions will be?
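(As a quick aside on what a figure like that would mean in practice – a minimal worked calculation, using the hypothetical 24% absolute risk reduction from the scenario above rather than any real trial result – an absolute risk reduction converts directly into a number needed to treat:

\[
\text{NNT} = \frac{1}{\text{ARR}} = \frac{1}{0.24} \approx 4
\]

That is, roughly one death averted for every four patients managed the new way – which is why our paramedic judges the finding clinically, as well as statistically, significant.)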

I know the example above is extreme (and doesn’t take into account the other aspect of EBP – patient values or preferences [note the difficulty of this element in this example]), but it is not unimaginable. There are perhaps a number of areas where similar restrictions would make applying the 5A’s equally difficult (e.g. blacklisted pharmacotherapies). Equally, there are likely areas of medicine where the EBP rules can be fully applied. It would be useful to identify and address these when teaching EBP.

So where do we go from here? Unfortunately, it’s a case of ‘more and better research needed’, I’m afraid. To borrow a phrase from another Eastwood classic: “Go ahead, make my day.”

 

Also posted on David Nunan’s Blog & www.cebm.net

Register now for Evidence Live 2015

Don’t miss out: register for Evidence Live 2015 and join a growing community of researchers and health care professionals pushing the boundaries and developing the evidence to enable the provision of better health care.
The call for abstracts has closed, with a tremendous effort from all who submitted. We have accepted over 50 oral presentations and over 80 poster presentations. If you missed the call and would still like the opportunity to share your research, you can send a 300-word abstract to Ruth Davis. NB: this will only be considered for poster presentation.

Evidence Live 2015 offers

  • A full programme of international speakers from the world of Evidence-Based Health Care
  • Panel sessions to encourage audience questions and debate
  • Discounted delegate rates for students (limited places available).  How to apply
  • FREE Skills Workshops (limited places available)
  • Breakfast sessions (limited places available)
  • Open Drinks Reception / Poster Session
  • An opportunity to influence the theme(s) of the next evidence live event

 

Carl Heneghan

Carl Heneghan: Evidence based medicine on trial

Evidence based medicine (EBM) should form the foundation of effective clinical decision making; however, growing unrest—and an awful lot of criticism—suggests the evidence bit of EBM is increasingly part of the problem, and not the solution.
Read full blog here

The best tweets of Evidence Live 2013

Here are some of our favourite picture embedded tweets from #Evidencelive 2013, which reflect the diversity of the event.

Tweets include Mike Rawlins’ concluding thoughts, Peter Wilmshurst and the MIST trial disclosures, as well as the tweet journal club and the Hierarchy of Sin!

See the Eventifier EvidenceLive 2013 page for more on the event:

 

RT @OMTVenezuela: #evidencelive – Rawlins concluding thought- we get the pharma we deserve. http://t.co/esqzzjZsnB http://t.co/WRar3n9Qct


RT @mgtmccartney: MIST trial: 2 investigators didn’t declare they were shareholders #evidencelive http://t.co/v0GgyMm93H


RT @Trisha_the_doc: “@silv24: Me with the @twitjournalclub poster – v excited to present it at #evidencelive tomorrow morning! http://t.co/Wia8B3YoKN” Awesome!


RT @OMTVenezuela: Hierarchy of sins #evidencelive http://t.co/yg22q1jRVp http://t.co/lmsYakyHCO


See http://eventifier.com/event/evidencelive/EvidenceLive for more.

 

Where next for evidence based healthcare?

I have taken this opportunity to post the 2013 editorial I co-wrote with Fi Godlee, Editor in Chief of the BMJ.

(see the BMJ 2013; 346 doi: http://dx.doi.org/10.1136/bmj.f766)

A BMJ conference aims to inspire a new generation of evidence creators and consumers

Evidence based healthcare has taken root as one of the central pillars of modern medicine. Arguably, the delivery of healthcare based on evidence has never been more important as we grapple with unexplained variations in practice and spiralling healthcare costs. But despite its widespread acceptance as a mechanism for rational decision making, evidence based healthcare remains in many ways an ideal rather than a fully fledged reality. On top of its well rehearsed limitations,1 new challenges are undermining its potential to improve healthcare outcomes.

First though, let us acknowledge the progress that has been made since evidence based medicine was first articulated in the early 1990s.2 Systematic review was then in its infancy, and methods for searching, selecting, and meta-analysing data have advanced almost beyond recognition. Our expectations of the quality of reporting for clinical trials and other study designs have risen sharply thanks to the widespread adoption of checklists, such as those created by the Equator network (www.equator-network.org). Trial registration and protocol deposition now provide a means of tracking trials from their outset, with the potential to chase up trials whose results have not been published. And the growth in open access to research means that more and more studies are available in full online.

Yet these undoubted advances have been accompanied by less welcome realisations. The exponential growth in the number of reported studies stretches our ability to retrieve relevant reliable evidence and to keep synopses and clinical guidelines up to date.3 And it seems that the more we scrutinise the available results of this growing body of research, the more sceptical about its credibility we become. Much published research is of poor or uncertain quality, and an unknown proportion of research is never published. A recent review of research funded by the National Institutes of Health found that only a small number of trial reports are analysed in up to date systematic reviews, and a seemingly obvious requirement—that all relevant evidence should be available for analysis when trying to answer a clinical question—remains unmet.4

This leaves us with increasingly sophisticated methodological tools but not the raw materials (reliable data) to answer with certainty some common clinical questions, such as is cholesterol lowering effective for primary prevention of cardiovascular disease, is breast cancer screening cost effective, what target blood pressure should we aim for when treating hypertension, how do we deliver care for chronic diseases in the developing world, and are antivirals effective for preventing and treating influenza?

Better diagnosis is one key to unlocking unrealised health gains. Earlier diagnosis could be achieved by making some tests more available and easier to access, but how can we do this without increasing the burden of false positive results? Diagnostic tests need to be cheaper and more accurate. Yet the evidence base and the methods for evaluating diagnostic strategies continue to lag behind the far better resourced research on treatments. Without progress on this front, and on the communication of risk, the usefulness of decision aids for evidence based partnership between patients and their clinician advisers will be severely limited.

Nor do we yet have adequate infrastructures to protect the evidence base from avoidable bias. Healthcare is the fastest growing business in the world and is beset with commercial interests. Yet it is becoming increasingly obvious that current legislation and regulations on the safety and efficacy of drugs and medical devices are not fit for purpose.5 The structures and culture of academic research also have ways of introducing bias.6

Then there is perhaps the most difficult question facing every health system in the world: how will we pay for healthcare? Only by rigorous application of the best evidence can we be sure that health systems will deliver true value. But what constitutes the best evidence and how do we apply it effectively to clinical practice and health policy?

 

Now in its fourth year, a partnership between the BMJ and the Oxford Centre for Evidence Based Medicine aims to provide a forum for exploring, if not answering, some of these questions. EvidenceLive, a conference in Oxford from April 13th to 14th, will bring together 500 participants including some of the world’s most distinguished, informed, and argumentative evidence experts for two days of animated debate. A programme for students and junior doctors means that a new generation of creators and consumers of evidence will also have their say. For those who can’t be there in person, there will be Twitter (use #evidencelive), videos, and coverage in the BMJ.

Hopefully see you there. Carl.

Thanks to everyone who participated in Evidence Live 2013!

First of all: thank you to everyone who came to Evidence Live 2013 and made it such a success. A conference like this only happens because people come from around the world, taking time out of busy schedules and money out of already-stretched budgets, to participate and to share their passion for evidence-based healthcare.

It’s no secret that healthcare is at something of a crossroads these days. People are waking up – not just those ‘in the know’, but the general public too – and realizing that if healthcare is going to be sustainable and continue to improve people’s lives, radical changes are necessary in research and policy. Jack Wennberg’s work on unwarranted variation in healthcare can rightly be considered one of the first great revolutions in health service research, and when looking back at what was discussed this week in Oxford, his observation that there continue to be large variations in the quality and quantity of care provided across the U.S. highlights that there is still a long way to go. Ensuring equitable access to an appropriate level of service – not too much, not too little – is essential. So too is the publication of all clinical trials – if you haven’t already done so, sign up now to the AllTrials campaign.

The Cochrane Collaboration has been one of the greatest developments in healthcare ever, ensuring that high quality evidence has been generated and disseminated for more than two decades. As ever, it was a privilege to have one of its founders, Sir Iain Chalmers, participating in the conference. What was most exciting, though, is that even with Sir Iain, the Editor in Chief of The Cochrane Library, David Tovey, and the Director of the UK Cochrane Centre, Martin Burton, sitting in the room, speakers were unafraid to provide pointed, thoughtful criticism of Cochrane’s work in order to push it forward and make it even better. Speakers like Jack Cuzick, Peter Gøtzsche, and Tom Jefferson, among others, highlighted areas where they believe Cochrane can do better, and set out challenges for the Collaboration to refine and evolve its methods in keeping with the latest evidence. It is a testament to the professionalism and scientific integrity of all delegates that Evidence Live was a safe space in which these critiques could be raised.

While the ‘good and the great’ of evidence-based medicine were all there at the conference (indeed, prompting the Chairman of the National Institute for Health and Clinical Excellence, Sir Michael Rawlins, to answer one question by telling a delegate to ignore the good and the great and do whatever they thought was right!), it was also about the future of EBM – those younger delegates to whom the reins are rapidly being handed. Students and early career researchers came from all over the world – from places like Iran, Bangladesh, and China – to participate in the debate, present posters on cutting-edge research, and hear from distinguished speakers. A rapid-fire session on the second day witnessed some of the most original (even controversial) presentations of the conference. Five speakers took their ‘favourite paper of all time’ as a starting point and used it to discuss the future of EBM and how students can contribute. It was an effective reminder that we have come a long way, but there is still much to do.

Forcing the spring towards a new era in evidence-based medicine

“But, by the words we speak and the faces we show the world, we force the spring.”

The beginning of Bill Clinton’s first inaugural address seems an odd place to start a discussion about epidemiology, I admit. For us, though, it reflects the developments over the past few months that have changed how evidence-based medicine is practiced, and how it’s going to look in the near future. And we’re excited about all of it.

It’s no secret that there is massive pressure on drug companies to fundamentally change how they operate. This push for a new era of accountability is due to the efforts of many people, including Ben Goldacre, whose Bad Pharma has become an international phenomenon, and Tom Jefferson and Peter Doshi, whose campaign to obtain all the data on Tamiflu has been a major driver towards exposing withheld data. The current state of pharmaceutical research is a little like when you were in school doing an experiment, something didn’t quite turn out right, and you ‘forgot’ to write down the results for that part of the experiment. That may have worked in high school, when the worst that could happen was having to stay after class. But when billions of pounds and thousands of lives are involved, the stakes are quite different.

The AllTrials campaign – an online petition that’s rapidly gaining publicity and support – now has over 80,000 signatories, including major medical journals (our EvidenceLive partner, the BMJ, is one of the instigators of the campaign).

The philosopher Thomas Kuhn suggested that the history of scientific progress is one of paradigm shifts, in which research continues within a certain paradigm until someone breaks out and moves on to a radically different way of looking at phenomena. We can’t help but wonder if we are in the midst of a paradigm shift in healthcare – one that values transparency, scientific progress, and responsibility to patients above all else. If you haven’t already signed up, come to EvidenceLive, and help “force the spring” towards a new era in evidence-based medicine.

First Posted on 17th February 2013 by Braden O’Neill