Author Archives: Carl Heneghan

About Carl Heneghan

Carl is Professor of EBM & Director of CEBM at the University of Oxford. He is also a GP and tweets @carlheneghan. He has an active interest in discovering the truth behind health research findings.

Teaching EBM – what’s the evidence?

Annette Plüddemann

When teaching evidence-based medicine to undergraduate students, postgraduate students or professionals, I encourage all my students to think critically and ask for the evidence for whatever clinical questions they might have. However, I am continually thinking about different approaches to my teaching, to ensure that my sessions influence knowledge, behaviour and attitudes to EBM. With this in mind, I thought it might be useful to reflect on what the evidence says about teaching EBM, particularly which approach is likely to be most effective. These were some of my questions, along with the evidence that pointed me towards some answers:

What are the effects of teaching EBM to healthcare professionals?

An overview of 16 systematic reviews evaluating interventions for teaching evidence-based health care to health professionals found that “multifaceted, clinically integrated interventions, with assessment, led to improvements in knowledge, skills and attitudes”. The 81 separate studies included within the 16 systematic reviews reported a diversity of methods, using varied interventions directed at both students and practicing healthcare professionals. However, as the outcome measures varied considerably between the included studies, it was not possible to provide an overall effect estimate. The overview also highlighted that none of the included reviews found studies reporting on practice outcomes, process of care or patient outcomes. These long-term outcomes present several challenges, though, particularly since several factors influence translating evidence into action.

What might be the most effective method of teaching EBM?

A mixed methods study with 497 participants, using an RCT and focus groups, compared the effectiveness of a blended learning approach (lectures and tutorials combined with online activities and “bedside teaching” on the wards) versus a didactic approach to teaching EBM to medical students. The study found that while there was no difference in students’ EBM competency between the blended learning and didactic learning groups, amongst those who had received blended learning the perceived self-efficacy and application of EBM in the clinical environment were significantly higher. However, the study reported a low completion rate (less than 30% of students completed the outcome assessment), which may have underpowered the RCT and may mean the results are not generalizable, as students with more of an interest in EBM are the ones likely to have completed the assessment.

What other methods can I use to share knowledge about EBM?

Social media has become an important way in which we can exchange and share knowledge and ideas. A multi-country mixed-methods study of 317 clinicians assessed the use of Twitter and Facebook to deliver evidence-based practice points, including key findings with links to journal articles or podcasts by clinical experts. The study found that social media may be relatively effective for improving EBM knowledge and the use of research evidence in clinical practice, although these findings would need to be validated in an RCT. The study also highlighted a key limitation of social media for disseminating knowledge: it relies on the user’s ability to discern the relevance and quality of the information.

These are just a few examples of the research looking at how we might teach EBM, and it appears we certainly need more research in this area. However, so far the take-home message for me is to use a variety of approaches, to keep reviewing and reflecting on what I do, and to keep trying new ways to engage and communicate with students.

A non-evidentiary role for expertise in Evidence-Based Medicine

Leading up to Evidence Live 2016, we will be publishing a series of blog posts highlighting projects, initiatives and innovative ideas from future leaders in evidence based medicine.
Please read on for the second in the series from Sarah Wieten, Ph.D. candidate from Durham University.


If you are interested in submitting a blog post, please contact alice.rollinson@phc.ox.ac.uk. Stay tuned! 

 


What is the role of expertise in evidence-based medicine?  For part of the movement’s history, expertise was taken as a component of authority-based medicine, the status quo that EBM aimed to replace. However, more recently EBM has embraced the importance of expertise and suggested three possible models for its use in medical research and practice.

The most recent EBM model of expertise is an especially important improvement. This third model, which debuted in an article by Haynes et al. (2002), keeps the structure of interlocking rings of influence for the different EBM components included in previous models, but adds a component and shifts the role of expertise. In this model, three main interlocking rings are labeled “Patients’ Preferences and Actions,” “Research Evidence,” and “Clinical State and Circumstances.” A fourth component, “Clinical Expertise,” overlies the other three, holding them together. Haynes et al. write: “Clinical expertise includes the general basic skills of clinical practice as well as the experience of the individual practitioner. Clinical expertise must encompass and balance the patient’s clinical state and circumstances, relevant research evidence, and the patient’s preferences and actions if a successful and satisfying result is to occur” (Haynes et al., 2002). This model emphasizes an external-to-evidence role for expertise, but specifies that the role of expertise is to bring together the other factors, rather than to be one of the factors brought together.

While this “newest” conception of expertise in EBM is not very new, the role of expertise in EBM has reemerged as a critical issue for the movement (Greenhalgh et al., 2014). This is an important idea, because translating evidence into better-quality health services will require more than just effort in improving the quality of the evidence produced, although that is a large part of it. It will also require expertise and judgement to pull that evidence together and apply that evidence to particular patients. It was not always clear that EBM has understood this additional need over and above improved evidence (EBMWG, 1992), but this model provides a good start for thinking about clinical expertise in EBM.


References:

Evidence-Based Medicine Working Group. 1992. “Evidence-based medicine: A new approach to teaching the practice of medicine.” JAMA. 268(17):2420-5.

Greenhalgh Trish, Howick Jeremy, Maskrey Neal. (2014) Evidence based medicine: a movement in crisis? BMJ; 348: g3725.

Haynes, R. Brian, Devereaux, P.J., and Gordon H. Guyatt. 2002. Clinical expertise in the era of evidence-based medicine and patient choice. Evid Based Med;7:36.

 

How to get published?

In the run up to Evidence Live 2016, The BMJ are running a series of blogs by the speakers at the conference discussing what they will be speaking about…


Trish Groves


David Moher

The highlight of last year’s excellent Evidence Live was, for me (Trish Groves), a short, private conversation. Two doctors from Pakistan (a husband and wife) sought me out to say they had taken part in my Evidence Live workshop two years earlier, on how to publish research. They went on to complete their research and, for the first time, to successfully publish two papers. “BMJ helped us broaden our vision, and changed our lives” they said.

Similar stories, and a growing realisation that we all need to tackle the huge challenge of waste in research, inspired BMJ to develop Research to Publication. This is a comprehensive eLearning programme for early career researchers.

Research to Publication is aimed at researchers and their institutions worldwide, with a special focus on building health research capabilities and supporting research integrity in low and middle income countries. We have partnered in this with Professor Deborah Grady and colleagues at UCSF’s Clinical and Translational Science Institute.

The programme helps researchers and students to develop and polish their skills in clinical and public health research, and in reporting and publishing studies in a timely manner, transparently, and ethically. Research to Publication includes two free modules: one from BMJ on developing and publishing study protocols (particularly clinical trial protocols) and, from our partners UCSF, an introduction to clinical trials.

David Moher and I will be bringing all this together in a workshop at this year’s Evidence Live. I’ll be sharing insights from Research to Publication… and now over to David: I will be talking about the quality of published clinical research and how reporting guidelines can help prospective authors prepare reports that are complete, accurate, and transparent. Such reports will likely make peer review easier and might decrease the number of rounds of manuscript revisions. I will review the EQUATOR Network, which keeps a comprehensive library of reporting guidelines, and will also briefly cover a potpourri of other publication science topics, including systematic reviews and how to register them, an update on the REWARD alliance, and peer review.

And remember today (Friday 20th May) is International Clinical Trials day. Go hug a clinical trialist and any patient you know who’s participated in a clinical trial.


Trish Groves is Director of Academic Outreach and Advocacy at BMJ. Follow Trish on Twitter @trished.

David Moher is a senior scientist at the Ottawa Hospital Research Institute and associate professor in the School of Epidemiology, Public Health and Preventive Medicine, Faculty of Medicine, University of Ottawa, where he holds a university research chair.

This blog was originally posted on BMJ Blogs: http://blogs.bmj.com/ce/2016/05/20/how-to-get-published/ 

Pearls of wisdom for future leaders

By Peter Gill

Recently I had the opportunity to attend an annual research competition which brought together paediatric trainees from across Canada. It was an inspirational event for several reasons: 1) the quality and scope of research being conducted by trainees is impressive; 2) it provided a break from clinical responsibilities to reflect; and 3) it provided an opportunity to network with leaders in evidence-based medicine, including Dr. Terry Klassen, founder of the Cochrane Child Health Field.

The event started with a keynote speech by Dr. Stephen Freedman, a paediatric emergency physician who transformed the management of gastroenteritis in children by leading several large multi-centre RCTs published in NEJM and JAMA. Learning about the career path of successful researchers is invaluable. As a junior trainee, it often seems that well-known academics had a simple, linear path to success. The reality could not be farther from the truth. I wanted to summarize a few simple (and perhaps self-evident) pearls of wisdom that were passed on to me at the event.

  1. Be curious: don’t be afraid to ask why
    Meaningful research provides answers to everyday clinical questions (e.g. why should we prescribe antibiotics for 10 days instead of 5?). As a trainee, time is a luxury, and we can fall into a trap of doing rather than thinking. Yet, it is imperative to remain sceptical and curious (e.g. how will ordering this blood test change management?). Be forewarned: the answer is often not reassuring (“I always do it this way” or “it’s hospital policy”). By asking questions (and searching for answers), research gaps will appear. Not only will this help guide your research, it helps you provide evidence-based care to your patients.
  2. Take risks: get out of your comfort zone
    It is tempting to remain at the same institution: it is comfortable and familiar. However, by leaving your ‘safe’ zone for an elective, fellowship or exchange, you will see variation in clinical practice. I have been to countless talks by academics who cite observing variation as the trigger for a research career. Dr. Freedman, for example, completed his paediatric emergency fellowship in Chicago, where children were routinely managed with promethazine rather than the dimenhydrinate used in Toronto: the rest is history. But such opportunities rarely land on one’s doorstep: actively seek them out, apply broadly, keep an open mind and be prepared to learn. Different can be good, and better.
  3. Be creative: think ‘outside the box’
    Some emergency departments rehydrate children with IV fluids rapidly over 20-30 minutes while others infuse the same volume of fluid over 60 minutes. Which is better? Dr. Freedman wanted to conduct a double-blind RCT to compare the two methods of IV rehydration but did not know how to keep clinicians blinded. What did he do? He went to his backyard, found some wood, and built a box to cover the IV pump. While the first version was crude, eventually the bioengineering department designed a sound-proof box and the trial was a success (i.e. faster is not better). If you have an interesting idea, don’t fret if it seems impossible. Be creative, talk to others – think ‘outside the box’.
  4. Be persistent: more is learnt from failure than success
    Recently, a Princeton professor posted a CV of failures which outlined his rejections. Failure is part and parcel of any career, particularly one in academia. If you fail, try again: submit your manuscript to another journal or apply for another job. Learn from failure – ask for feedback, remain objective, and do not take it personally. Failure is part of the process, and you often learn more from failure than from success.
  5. Break down silos: think outside your specialty
    Everyday care is delivered by multiple healthcare professionals, including GPs, emergency physicians, consultants, nurses, and allied health (e.g. physiotherapists). Yet research is often confined to one specialty or discipline. This traditional model of research is antiquated and artificial. Similar to thinking outside the box, think outside your specialty and work with other clinicians and research groups, and especially with patients.
  6. Network, network, network: make yourself known
    Relationships form the bedrock of a successful research career: multi-centre studies require collaborators at each site, while grants and manuscripts require peer review. Go to conferences and events where you will meet colleagues in your area of interest. Get to know people in the field: go for (non) alcoholic drinks, socialize and become friends. Offer to review manuscripts, grants and papers. Peer review relies not just on taking, but on giving. Good reviews get recognized: authors may contact you to say thank you, or the journal may ask you to write an editorial.
  7. Have fun
    Research is hard work, and a long-term game. If you are going to put in the hours, it should be for a topic you believe in and with people you enjoy working with.

Lastly, mentorship is critical. Kamal Mahtani covers this topic brilliantly in his blog post Evidence based mentoring for “aspiring academics”.

There is so much left to do in medicine. Students, trainees and junior researchers will play a key role in tackling the important questions. While it is important that these individuals are well-supported, it is equally important for future leaders to step forward. The Roman philosopher Seneca sums it up nicely: “Luck is what happens when preparation meets opportunity.”

Peter J Gill is a paediatric resident at The Hospital for Sick Children, University of Toronto and an Honorary fellow at the Centre for Evidence-Based Medicine, University of Oxford. He is a member of the Evidence Live 2016 steering committee which this year includes a Future Leaders theme.

You can follow him on Twitter at @peterjgill

Competing interests: I have read and understood BMJ policy on competing interests. I have no other competing interests to declare.

Disclaimer: The views expressed are those of the author and not necessarily of any of the institutions or organisations mentioned in the article.

Disentangling too much and too little medicine


Jack O’Sullivan, DPhil Student

Around the world, healthcare is rapidly becoming unaffordable. In the US, for example, per capita Medicare spending grows at an average of 3.5% annually(1). In universal health services, health expenses are also growing exponentially; NHS England and others estimate that by 2020/21 there will be an annual £30 billion mismatch between resources and patient need(2).

While drivers of increasing expenses are complex, one truth is universally accepted: waste exists within all healthcare systems (3,4). Rather than in a material sense – throwing away a recently opened packet of unused equipment – significant and costly waste exists more substantially within unnecessary practices (tests and treatments) and care pathways(4).

Unnecessary practices are tests and treatments that cause no net benefit for patients. Tests and treatments are the most common opportunities in healthcare to benefit, or to harm, patients. An inappropriate test can harm a patient directly, through an adverse reaction (e.g. contrast nephropathy), or indirectly, by diverting resources from necessary interventions. Thus, tests that cause no direct harm (and no benefit) are still inappropriate, because the resources used on them detract from other, potentially beneficial interventions. Inappropriate treatments can cause harm by the same mechanisms. In total, approximately 20% of clinical practice is estimated to be unnecessary, causing no net benefit to patients(3).

On a population level, identifying practices that cause no net benefit is challenging. With the emergence of evidence-based medicine over the last 20 years, most accept that healthcare decisions should be based on high quality evidence. Thus, practices ordered not in line with high quality evidence, with some exceptions, are potentially unnecessary.

Furthermore, variation in clinical practice has become a surrogate for potentially unnecessary care. In regions with similar disease burden and demographics, differences in the number of surgeries performed or tests ordered imply that one region is doing too many or another is not doing enough (or both). In the UK and the US, we know that substantial variation in practice exists(5,6). For instance, in the UK there is a 1,000-fold difference in the rate of fasting blood glucose tests ordered from primary care(7). Variation such as this implies that not everyone in the UK or the US receives high quality care.

How can we disentangle this current state of too much and too little medicine? There is no simple answer, and there is definitely no straightforward solution. However, as always, we rely on high quality data and evidence. One solution that has emerged is the identification of practices of no net benefit. Tests or treatments that are performed at varying rates across geographical regions, or are commonly performed not in line with high quality evidence, need to be identified. Identification can facilitate removal, which can reduce unnecessary harm to patients and lessen the burden on stretched healthcare systems.


References
1. Fisher ES, Bynum JP, Skinner JS. Slowing the growth of health care costs – lessons from regional variation. N Engl J Med. 2009;360(9):849–52.
2. NHS England. Five Year Forward View. October 2014.
3. Berwick D, Hackbarth AD. Eliminating waste in US health care. JAMA. 2012;307(14):1513.
4. Academy of Medical Royal Colleges. Protecting resources, promoting value: a doctor’s guide to cutting waste in clinical care. 2014. Available from: http://www.aomrc.org.uk/dmdocuments/Promoting value FINAL.pdf
5. Fisher ES. The implications of regional variations in Medicare spending. Part 1: the content, quality, and accessibility of care. Ann Intern Med. 2003;138(4):273.
6. NHS Right Care. Reducing unwarranted variation to increase value and improve quality. 2015. Available from: http://www.rightcare.nhs.uk/atlas/RC_nhsAtlas3_HIGH_150915.pdf
7. NHS Right Care. Diagnostics: the NHS Atlas of Variation in Diagnostic Services. November 2012.
8. Alderwick H, Robertson R, Appleby J, Dunn P, Maguire D. Better value in the NHS: the role of changes in clinical practice. July 2015.

Better Decisions Require Research that Matters: part 4

Poor quality evidence, lack of affordability and uninformed patients suggest that an awful lot of research doesn’t actually matter. However, when presented with a piece of evidence, there are three questions I use to identify and weed out most research that doesn’t matter: 1) does this research apply to my patient; 2) is the research of sufficient length, given the clinical course of the disease, to inform the outcome; and 3) will this evidence make a difference to my patient’s outcome?

1. Does this research apply to my patient?

External validity is the extent to which we can generalize the results of a trial to the population of interest, whereas internal validity refers to the extent to which a study properly measures what it is meant to. The issue is that many interventions tested in clinical trials don’t apply to real world patients, and so the trials have poor external validity.

An analysis of 20,000 Medicare patients with a principal diagnosis of heart failure reported that only 13–25% met the criteria for 3 of the pivotal RCTs. A further review of 52 studies, which compared the baseline characteristics of RCT patients with real world patients, found that many trials are highly selective and enrol patients with lower risk profiles than those seen in the real world: 37 (71%) of the studies concluded the populations were not representative. The patients we are often most interested in applying evidence to – the elderly and those with comorbidities – were most often excluded. In only 15 (29%) studies were the RCT samples generally representative of real world populations. Furthermore, amongst 155 RCTs of drugs frequently used by elderly patients with chronic medical conditions, only three studies exclusively included elderly patients. Similar problems have been observed in cancer trials, and there have been recent calls to expand inclusion criteria, not least to increase the number of participants and improve generalisability.

A systematic review of the eligibility criteria of 283 RCTs published between 1994 and 2006 in high impact general medical journals reported that common medical conditions led to exclusions in 81% of trials, and commonly prescribed medications in 54%. Similar problems have been seen in Alzheimer’s trials: information on comorbidities and drugs is often lacking; as a consequence, there is a significant difference between trial participants and real world populations with Alzheimer’s disease. Some of the blame is due to reporting bias, one cause of poor quality evidence that is easily rectified.

2. Is the research of sufficient length to inform the outcome given the clinical course of the disease?

There are two problems when it comes to trial length and informing outcomes: 1) trials that are stopped too early, and 2) trials of insufficient length that often use surrogate outcomes and therefore do not reflect the outcomes of interest for the real course of the disease.

Trials stopped early will, on average, overestimate treatment effects, and these overestimates are larger in smaller trials. A review of 143 trials stopped early (STOP-IT 1) found they are on the increase (from 0.5% in 1990-1994 to 1.2% in 2000-2004; P<.001 for trend); they recruit on average 63% of the planned sample; they often do not report important trial features; and they report larger treatment effects, particularly when the number of events is small. A further comparison of RCTs stopped early (STOP-IT 2) with those that weren’t, within the same meta-analyses, found that the truncated RCTs also have greater effect sizes.

The excellent Absolutely Maybe blog by Hilda Bastian, in ‘The Mess That Trials Stopped Early Can Leave Behind’, reports on a leukemia trial testing courses of treatment analysed every year. Figure 1 shows the annual analyses: while early results suggested significant benefits, these subsequently disappeared over time. The results of RCTs stopped early – particularly those with small sample sizes and small numbers of events – should be viewed with a healthy dose of skepticism.

Figure 1. Attempts to optimize induction and consolidation treatment in acute myeloid leukemia: results of the MRC AML12 trial.

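To see why stopping early inflates effect estimates, here is a hypothetical simulation (my own illustration, not taken from STOP-IT or the MRC AML12 trial; all numbers are invented): trials with a modest true effect are analysed at repeated interim looks and stopped “for benefit” whenever an unadjusted significance threshold is crossed. The trials that stop early report, on average, a larger effect than the truth.

```python
import random
import statistics

# Hypothetical illustration: 2,000 trials with a true standardised effect
# of 0.3, each analysed at up to 5 interim looks and stopped "for benefit"
# at a naive, unadjusted one-sided z > 1.96.
random.seed(42)
TRUE_EFFECT = 0.3
LOOKS, N_PER_LOOK = 5, 30

def run_trial():
    treat, control = [], []
    for look in range(1, LOOKS + 1):
        treat += [random.gauss(TRUE_EFFECT, 1) for _ in range(N_PER_LOOK)]
        control += [random.gauss(0.0, 1) for _ in range(N_PER_LOOK)]
        diff = statistics.mean(treat) - statistics.mean(control)
        se = (statistics.variance(treat) / len(treat)
              + statistics.variance(control) / len(control)) ** 0.5
        if diff / se > 1.96:       # stop early "for benefit"
            return diff, look
    return diff, LOOKS             # ran to the final analysis

results = [run_trial() for _ in range(2000)]
early = [d for d, look in results if look < LOOKS]
print(f"true effect: {TRUE_EFFECT}")
print(f"stopped early: {len(early)} of {len(results)} trials")
print(f"mean estimated effect in trials stopped early: "
      f"{statistics.mean(early):.2f}")
```

Because each early stop requires the observed difference to clear roughly two standard errors, the conditional estimate among stopped trials is biased upwards; the bias is the same mechanism behind the larger effect sizes seen in the truncated RCTs above, and it shrinks as the number of events at each look grows.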

There is a growing body of trials that simply do not reflect the course of the disease, and as a consequence insufficient evidence exists around many current interventions to determine whether they are effective. As an example, a 2009 Cochrane systematic review of antidepressants versus placebo for depression in primary care found 14 studies (10 examined TCAs, 2 SSRIs and 2 included both, all compared with placebo), including 1364 participants in the intervention groups and 919 in the placebo groups. Nearly all studies were of short duration, typically 6-8 weeks; there was no dose information on SSRIs; and the authors were unable to comment on the appropriate duration of treatment. Given the paucity of evidence, you would think there would have been an increase in the number of longer term trials. Yet a 2015 updated systematic review, including a total of 66 studies, found most trials were poor quality: ‘because there was a small number of studies with observation periods of longer than 12 weeks, reliable comparative analysis of long-term effects was not possible….and the effects size compared with placebo is frequently considered rather small.’

3. Will this evidence make a difference to my patient’s outcome?

When using evidence to inform patient care, what we mostly do is assess statistical significance; only then do we consider the issue of clinical significance. But if we asked, before looking at the results, what effect size we would consider important enough to implement a treatment, we might discard a significant amount of evidence irrespective of its statistical significance.

This is sometimes referred to as the minimal clinically important difference (MCID): the smallest difference you would be willing to accept. In some areas there have been calls to develop a catalogue of MCIDs. As an example, in alcohol behavioural interventions there is a need to rethink relevant outcomes and the evidence that might contribute to recommendations for MCIDs. However, MCID measures may be too conservative, as they reflect minimal values. In an analysis of 8931 rheumatoid arthritis patients under 65 years of age, the improvement patients reported as a “really important difference” (RID) was 3 to 4 times greater than the MCID.

There are a number of available methods for developing MCIDs and, although no one method is better than another and all have shortcomings, they are still useful and certainly better than nothing. A COPD symposium assessing MCIDs stated that ‘clinical opinion and patient subjective response should trump statistical theory’, which fits with the definition and ethos of EBM, and may help you when next using evidence to make a better decision.
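As a worked sketch of the distinction (the numbers and the decision rule here are my own invented illustration, not drawn from any of the studies above): a result can be statistically significant while its confidence interval still sits entirely below a pre-specified MCID.

```python
# Hypothetical sketch: statistical significance vs. clinical importance for a
# normally distributed effect estimate, judged against a pre-specified MCID.
def assess(mean_diff, se, mcid, z=1.96):
    lo, hi = mean_diff - z * se, mean_diff + z * se   # 95% confidence interval
    statistically_significant = lo > 0     # CI excludes "no effect"
    clinically_important = lo > mcid       # CI lies wholly above the MCID
    return (round(lo, 2), round(hi, 2)), statistically_significant, clinically_important

# A large, precise trial: clearly "significant", yet well below a 5-point MCID
ci, significant, important = assess(mean_diff=2.0, se=0.5, mcid=5.0)
print(ci, significant, important)   # (1.02, 2.98) True False
```

One could define clinical importance less strictly (e.g. only requiring the point estimate to exceed the MCID); the choice of rule is itself a judgement call, which is partly why calls for catalogues of MCIDs have emerged.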

The next in this series will look at why better decisions require fewer conflicts of interest.

Carl Heneghan is professor of EBM at the University of Oxford and director of CEBM, which co-organizes the EvidenceLive conference with The BMJ.

His research interests span chronic diseases, diagnostics, use of new technologies, improving the quality of research and investigative work with The BMJ on drugs and devices that you might stumble across in the media.

I declare that I have read and understood BMJ Policy on declaration of interests and I hereby declare the following interests: CEBM jointly runs the EvidenceLive conference with The BMJ and is one of the founders of the AllTrials campaign. I have received expenses and payments for my media work, have received expenses from the World Health Organization (WHO), and hold grant funding from the NIHR, the National School of Primary Care Research, the Wellcome Trust, and the WHO.

Better Decisions Require Better Informed Patients: part 3

The first two articles in this series pointed out that we need better and more affordable evidence. Yet even if affordable, high quality evidence is forthcoming, it is imperative that patients can make informed decisions and that doctors have the tools to actually inform patients in practice.

There is, however, growing unease that the current system is not serving patients’ information needs. Sally Davies, the UK’s Chief Medical Officer (CMO), recently requested a review to restore public trust in the safety and effectiveness of medicines, because patients increasingly see doctors as over-medicating and clinical scientists as afflicted by conflicts of interest; the CMO therefore considers it difficult for the public to trust either.

What is clear is that informed patients require an unbiased presentation of reasonable options to consider the benefits and harms of their treatment options. Yet, despite the significant growth in RCTs over the last twenty years there have been few robust studies that have evaluated shared decision making – actually there are none.

A 2015 systematic review of shared decision making strategies, including at least one patient outcome, found 39 studies, but none of these were randomised controlled trials: 28 were cross sectional or before and after surveys, and whilst 8 RCTs were included in the review, the analyses were secondary to the main trial hypotheses and were conducted irrespective of group assignment (i.e., they were not randomised comparisons).

Furthermore, there is a serious under-representation of shared decision-making evidence in many disease areas: fourteen studies in the review were cancer related (10 breast cancer), five each covered mental health and diabetes, and only two were based in primary care. A further Cochrane systematic review of interventions that aimed to improve the adoption of shared decision making by healthcare professionals found only 5 RCTs: three were done in primary care and two in specialist care.

Hence, it is difficult to advise which strategy, if any, to adopt when it comes to informing patients in real world practice.

While there is little evidence to inform shared decision making strategies, there is considerably more evidence for decision aids: over 500 have been developed (an inventory is available here) and 115 randomised trials involving 34,444 participants were included in a recently updated Cochrane systematic review. This review concluded that there is ‘high-quality evidence that decision aids improve people’s knowledge regarding options, and reduce their decisional conflict related to feeling uninformed and unclear about their personal values.’ However, there was less evidence for effects on clinical outcomes and adherence to treatments. A further systematic review of the impact of patient decision support interventions on costs and savings, including 7 studies and 8 analyses, found some evidence that patients choose more conservative approaches when they are better informed; but there is little evidence as to whether this generates any actual savings.

When it comes to informing patients there is an obvious dearth of information and evidence in the shared decision making space. What we now need to do is divert some of the research funds currently going to waste (if not a lot of them, or even all of them for one year) into this extremely important area, which affects all of us, all of the time, in health care.

An example of shared decision making strategies comes from Victor Montori, speaking at EvidenceLive in 2016:

Transforming the Communication of Evidence for Better Health

16:00, Wednesday 22 June

Victor Montori – How do we make evidence care?

The campaign starts at EvidenceLive 2016 – with an open meeting to prioritize and explore the potential solutions to better evidence for better decisions.

The next in the series will look at why better decisions require treatments that matter.

Carl Heneghan is professor of EBM at the University of Oxford and director of CEBM, which co-organizes the EvidenceLive conference with the BMJ.

His research interests span chronic diseases, diagnostics, use of new technologies, improving the quality of research and investigative work with The BMJ on drugs and devices that you might stumble across in the media.  

I declare that I have read and understood the BMJ Policy on declaration of interests and I hereby declare the following interests: CEBM jointly runs the EvidenceLive conference with The BMJ and is one of the founders of the AllTrials campaign. I have received expenses and payments for my media work, have received expenses from the World Health Organization (WHO), and hold grant funding from the NIHR, the National School of Primary Care Research, the Wellcome Trust, and the WHO.

Better Decisions Require More Affordable Treatments: Part 2

Part 1 of this series pointed out we need better research to support better decisions. Market forces, though, may not be helping decision-making as new treatments – particularly drugs – are increasingly unaffordable and out of the reach of payers.

Estimates suggest the development of a newly approved drug currently costs around $2.6 billion. A high proportion of current costs is driven by high failure rates, the spiralling costs of clinical trials and competition with existing treatments that already have substantial effectiveness. As an example, AstraZeneca’s five-year drug development pipeline analysis reported that only 2% of its products made it to market in this period: 59% of drugs completed Phase I; only 15% completed Phase II, where most failures occurred and most improvement is required; and 60% completed Phase III. Whilst R&D costs have increased almost exponentially, output has flatlined over the same period.

As a consequence, in some diseases costs are prohibitive: the cost of MS drugs in the US has risen from $10,000 per year in the 1990s to $60,000 per year today: seven times higher than the inflationary rise in prescription drug costs over the same period. A recent analysis of 32 cancer drugs reported that 2014 drug costs were on average six times higher than those in 2000: $11,325 per month on average, compared with $1,869 in 2000.

Analysis of the National Institute for Health and Care Excellence cost-effectiveness thresholds suggests that the cost per QALY of existing NHS treatments is about £13,000. But the price per QALY of new medicines is much higher. The ramification is that new drugs are potentially displacing cheaper, more effective treatments – unless you have an unlimited pot of money: for every healthy year gained through the UK Cancer Drugs Fund, up to five QALYs might have to be displaced from existing NHS activities.

Furthermore, manufacturers of orphan drugs are inflating prices massively – no wonder approvals are at record numbers. A recent analysis of 74 orphan drugs, by our group, demonstrated that the annual costs of these drugs varied from £726 to £378,000. For the 10 drugs with generic alternatives available, the branded versions were 1.4 to 82,000 times more expensive. Branded intravenous ibuprofen (Pedea), for closure of patent ductus arteriosus in preterm infants, was a staggering 82,000 times more expensive than its generic oral equivalent.

As a consequence of market forces, serious distortions in the production of evidence are also happening. As an example, diseases in high-income countries are eight times more likely to be researched in clinical trials than those in low and middle-income countries.

Current estimates suggest 85% of research spending currently goes to waste. This figure may be too conservative. Only 2% of drugs in the pipeline make it through to the approval stage, and let’s say half of these are no better than existing treatments. We are also poor at implementing effective treatments – another half lost – and when treatments are implemented into practice, about half of all patients don’t take them as prescribed. We are therefore left with a miserly 0.25% of all research funding delivering effective treatments, while 99.75% goes to waste. Hence, of the 40,000 trials published each year, about a hundred might make a difference to patient care. No wonder it costs so much for the few treatments that do make a difference.
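The arithmetic above can be checked with a quick back-of-the-envelope calculation. The percentages are the illustrative assumptions stated in the text ("let's say half"), not measured values:

```python
# Back-of-the-envelope sketch of the research-waste arithmetic above.
# All fractions are the article's illustrative assumptions.
pipeline_success = 0.02      # drugs that make it through to approval
better_than_existing = 0.5   # of those, assume half beat existing treatments
implemented = 0.5            # assume half of effective treatments reach practice
adherent = 0.5               # assume half of patients take them as prescribed

useful_fraction = pipeline_success * better_than_existing * implemented * adherent
print(f"{useful_fraction:.2%} of research funding delivers effective treatments")

trials_per_year = 40_000
print(f"~{round(trials_per_year * useful_fraction)} of {trials_per_year} "
      f"trials a year might make a difference")
```

Multiplying the stages out gives 0.02 × 0.5 × 0.5 × 0.5 = 0.25%, and 0.25% of 40,000 trials is the "about a hundred" quoted above.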

Improving the Quality of Research Evidence – The campaign starts at EvidenceLive 2016 – we will be holding an open meeting to prioritize and explore the potential solutions to better evidence and better decisions.


Better Decisions Require Better Evidence: Part 1

The campaign starts at EvidenceLive 2016 – with an open meeting to prioritise and explore the potential solutions to better evidence for better decisions.

Carl Heneghan

At the core of evidence-based medicine is the integration of patient values with high-quality evidence. Informed patients should understand their treatment options and actively participate in making decisions about their own health; to achieve this, clinicians require better research evidence. However, there are growing concerns that a sizeable amount of currently published research is irrelevant, wasteful and detrimental to patient care.

In 1994, Doug Altman pointed out in an editorial on the scandal of poor medical research that we need less research but better research. Has anything therefore changed – for the better – in the last 20 years? Searching PubMed Trend, which displays the number of articles in Medline by year, reveals that 9,613 trials were published in the year of the Altman editorial. Twenty years later this number had risen more than threefold, to 31,648 trials. So much for less research.

Whilst trials have been accruing, we have not seen better research. Instead, poor-quality practices have increased significantly. John Yudkin and colleagues argue that our ‘obsession with surrogates is damaging patient care’: surrogates are easier to measure, are increasingly used instead of important patient outcomes, and are on the rise. A systematic review by Cordoba and colleagues concluded that composite outcomes are often unreasonably combined, poorly defined and poorly reported, often leading to exaggerated results. An analysis by Jamie Kirkham and colleagues of 283 Cochrane reviews revealed that outcome reporting bias is an under-recognised problem affecting more than half of Cochrane reviews. Eve Tang and colleagues’ analysis of 300 trials with serious adverse events reported on ClinicalTrials.gov found that a quarter did not have a corresponding publication, and an analysis by Rodgers and colleagues of confidential and published data on rhBMP-2 for spinal fusion concluded that the “reporting of adverse event data in trial publications was inadequate and inconsistent to the extent that any systematic review based solely on the publicly available data would not be able to properly evaluate the safety.”

Then there is outcome switching: the COMPare project’s analysis of outcome switching in clinical trials published in the top five medical journals worryingly reports that, on average, each trial reports only 62% of its specified outcomes. Finally, Andreas Lundh and colleagues’ analysis of industry sponsorship concludes that an “industry bias exists that cannot be explained by standard ‘Risk of bias’ assessments”.

Poor-quality research evidence is now so commonplace that it is imperceptibly ingrained in the culture. As an example, in a report to David Sackett, John Ioannidis considers that evidence-based medicine has been hijacked and, as a consequence, drug trials are now mainly done by industry for its own benefit.

In a recent BMJ editorial on fixing the problems in medical research, Goldacre and I called for independent trials, reduced research costs, incentives for better evidence, declaration of conflicts of interest and a thorough transformation in the public understanding of evidence to support healthcare decision making.

There is little wrong with the fundamental principles of EBM. But for too long we have tolerated an evidence base distorted by systematic bias and conflicts of interest. If ignored, the inherent problems in current research production and publication may become entrenched and insoluble – if some of them aren’t already. What we now need is firm action to develop better evidence to support better decision making.

The next in this series will look at why better decisions require more affordable treatments.

 
