Author Archives: Carl Heneghan

About Carl Heneghan

Carl is Professor of EBM & Director of CEBM at the University of Oxford. He is also a GP and tweets @carlheneghan. He has an active interest in discovering the truth behind health research findings.

Of Cabbages and Kings: the Better Evidence for Better Healthcare Manifesto

In an address to oysters, the Walrus in Lewis Carroll’s ‘The Walrus and the Carpenter’ proclaims:

The time has come, [the Walrus said],

      To talk of many things:

Of shoes — and ships — and sealing-wax —

      Of cabbages — and kings —

And why the sea is boiling hot —

      And whether pigs have wings.

I’m not sure where the Walrus was going with this, but what is clear is that wherever he was headed, action was needed on a broad front. Discussing cabbages alone simply would not do.

The same is true of evidence-based medicine.  While it is hard to argue with the basic fact that all healthcare decisions would do well to be informed by some research evidence, there are growing rumblings of discontent about EBM, and sometimes hostility.  Why?

I’m not a health professional; I’m a trial methodologist. I don’t have to make daily use of evidence for healthcare decisions, except my own and those of anyone else who invites me into their decision. But it seems to me that the world of healthcare evidence is both overwhelming and underwhelming. Anyone who cares to look will find a lot of evidence out there, some of it packaged into systematic reviews or guidelines, some of it standing alone hoping someone will notice. This is the overwhelming part: if I’m travelling hand-luggage only, I pause before printing out most Cochrane systematic reviews. Some guidelines need GPS to navigate them.

The underwhelming part comes when you take a closer look, especially when you look at the trials that form the heart of much of the evidence we hope will inform treatment decisions. To cut to the chase, many trials are poorly informed by earlier research, poorly designed, badly executed, reported opaquely and of little relevance to the people whose decisions they were supposed to be supporting. In other words, they are rubbish. I helped write a summary a year or so ago for a systematic review that asked which of two types of catheters was least likely to lead to infection [1]. After looking at 42 trials involving over 4500 people, the reviewers concluded that they hadn’t a scooby which is best, if you’ll forgive the slip into Scottish technical jargon. A wonderful but sad piece of work by Céline Habre and colleagues [2] updated another systematic review and concluded that two-thirds of the 136 trials done over the 10-year period since the previous version were clinically irrelevant. Two-thirds, wow.

Research evidence should have a role in healthcare decisions; it will rarely be the only thing to consider and it might be ignored. The point is that it should be on the table.  No serious person wants evidence-free healthcare.  But the role of evidence in healthcare needs help.

Like the Walrus, the authors of the Better Evidence for Better Healthcare Manifesto suggest work is needed on many fronts. They want work on greater roles for patients in research design, better research methods, better reporting, more informative systematic reviews, better use of evidence in policy, reductions in unnecessary medicalisation and improvements in the ability of health professionals to recognise poor quality evidence, to mention but a few. I like it.

But for my part, the area we really need to work on is how we do research: we should do more research on research. In my world of trials, it is astonishing how little attention is often paid to existing research when designing a trial, how little evidence we have to support our own decisions about design, conduct and reporting, and how little thought we sometimes give to what the intended users of our trials actually need from them.

Manifestos should be a spur to action, although what they lead to may not always be what was envisaged when they were written, and manifesto promises do have a habit of quietly disappearing. Sometimes it’s hard to deal with sealing-wax and shoes when there are also boiling seas to contend with. But if I were forced to choose my king over a cabbage, I’d choose to take aim at improving the way we design research, especially trials. The Walrus would be pleased, I’m sure, whatever he was talking about.


Shaun Treweek is Professor of Health Services Research, University of Aberdeen. He is working on initiatives to improve the efficiency of trials, particularly Trial Forge, and is active in pragmatic trial design, the design and pre-trial testing of complex interventions, and interventions to improve recruitment to trials.

This blog is also posted on BMJ Opinion.

References

  1. Kidd EA, Stewart F, Kassis NC, Hom E, Omar MI. Urethral (indwelling or intermittent) or suprapubic routes for short-term catheterisation in hospitalised adults. Cochrane Database of Systematic Reviews 2015, Issue 12. Art. No.: CD004203. DOI: 10.1002/14651858.CD004203.pub3.
  2. Habre C, Tramèr MR, Pöpping DM, Elia N. Ability of a meta-analysis to prevent redundant research: systematic review of studies on pain from propofol injection. BMJ 2014;349:g5219.

What can we learn from the Better Evidence for Better Healthcare Manifesto?

The Better Evidence for Better Healthcare Manifesto (EBM manifesto) has been launched to improve the implementation of evidence-based interventions by pulling together a clear set of achievable goals and a strong overview of the strategies that work best, to help deliver change better and faster.

In some areas, such as the treatment of illicit drug-related problems, evidence-based medicine struggles to gain a firm footing. Ideologies, political views, and advocacy agendas complicate the picture. Some suggestions from the manifesto can also help to address other difficult areas such as mental health, obesity, and behaviour-related health problems.

In the treatment of illicit drug-related problems, the EBM approach is taking time to be accepted and implemented. Some of the lessons learned can contribute to the manifesto.

The evidence base has been formalised through the creation of the Cochrane Drugs and Alcohol Group in 1998 (Davoli 2000). Today, using the evidence base has become a requirement in many drug policy documents at both European and national levels (Ferri and Bo, 2013). This is particularly important in a field where most of the patients in need of publicly financed treatment have a low socioeconomic status and may not be able to demand effective treatments and quality interventions (Galea and Vlahov, 2002).

However, the extensive use of the term “evidence base” creates potential “side effects,” which are of interest for the manifesto.

Many people from a variety of backgrounds use “evidence base” to mean different things. Evidence is used to justify decisions. Rather than identifying a question, searching for the evidence, and then taking decisions, the process is inverted. The decision comes first, followed by the opportunistic choice of supporting evidence (“cherry picking”).

More commonly, there is a misunderstanding of what a systematic review actually is. For example, rather than being based on systematic reviews of studies, in agreement with standards set by the Cochrane and Campbell collaborations, recommendations are based on a much simpler narrative synthesis of published reviews. These “reviews of reviews” combine the conclusions of several primary reviews, often irrespective of their quality. In addition, the primary reviews may be based on the same sets of individual studies, resulting in artificially inflated conclusions; the sketch below illustrates the double counting.
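To see how this inflation happens, here is a minimal sketch (the trial names are invented for illustration) of the difference between naively pooling two overlapping primary reviews and counting their distinct studies:

    # Why "reviews of reviews" can double-count evidence: two primary reviews
    # may draw on overlapping sets of individual studies (trial names invented).
    review_a = {"trial_01", "trial_02", "trial_03", "trial_04"}
    review_b = {"trial_03", "trial_04", "trial_05"}

    naive_total = len(review_a) + len(review_b)  # 7 "studies" if overlap is ignored
    unique_total = len(review_a | review_b)      # 5 distinct studies

    print(f"Counted naively: {naive_total}; actually distinct: {unique_total}")

A narrative synthesis that treats each review’s conclusion as independent is, in effect, using the naive count: the shared trials vote twice.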

The common confusion between lack of evidence and evidence of non-effectiveness exacerbates defensive rejections of EBM, rather than encouraging advocacy for more investment in research.

Professionals and decision makers are uncertain about how to implement and monitor evidence-based interventions and can be tempted by simplistic approaches.

What change do we want to achieve?

We need a shared understanding of what evidence-based medicine is and how to apply it in one’s daily life. We must encourage greater participation: front-line carers, patients, and their families should become EBM knowledge brokers for their peers. This is particularly vital for marginalised patients and for conditions with low research investment. I recommend that projects like the James Lind Alliance should be piloted in more European countries; projects like Sense about Science should be implemented in all schools in order to increase the numbers of those able to advocate for evidence-based interventions.

What actions are currently underway to achieve this change?

Avoiding research waste by enhancing the availability of timely systematic reviews and targeting research priorities is crucial. We should make these activities and results available across all health conditions and geographical settings. Examples of gap analysis using systematic reviews to engage carers, patients, and families should be replicated (Ferri, 2013).

What new actions do you think would achieve this outcome better?

We need investment in promoting partnerships among decision-makers, health professionals, patients, and families in order to identify both knowledge needs and strategies for the dissemination and implementation of evidence.

In the area of illicit drug-related problems we should implement a three-step participatory exercise:

  • Join resources to carry out an evidence gap analysis;
  • Identify feasible research methods (well-conducted observational studies of implementation aspects) and promote them;
  • Train the trainers to reproduce successful interventions in different local contexts.

How will we know if we have succeeded?

Drug strategies are adopted at European and national levels. They typically include the principles that inspire both the policy and the actors involved. In addition, they include action plans for implementation.

These documents might be complemented by an interventions matrix in which each objective corresponds to a quantitative indicator of success, together with the independent source of data from which the indicator should come. Where there is an evidence gap, the matrix should indicate what action is expected (commissioning research or fostering participation in European funded initiatives). These matrices could be used to identify progress over the years and to trigger quantifiable change.
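As a sketch only, such a matrix might be represented like this; the objectives, indicators, and data sources below are invented for illustration and are not drawn from any actual strategy document:

    # A minimal sketch of an interventions matrix (all values hypothetical).
    matrix = [
        {
            "objective": "Increase coverage of opioid substitution treatment",
            "indicator": "% of people with opioid dependence in treatment",
            "data_source": "National treatment register (independent source)",
            "baseline": 45.0,  # % at the start of the strategy period
            "target": 60.0,    # % by the end of the strategy period
            "evidence_gap_action": None,  # set when the evidence base is missing
        },
        {
            "objective": "Reduce drug-related deaths after prison release",
            "indicator": "Deaths per 1,000 releases within 4 weeks",
            "data_source": "National mortality registry (independent source)",
            "baseline": 2.1,
            "target": 1.0,
            "evidence_gap_action": "Commission research / join EU-funded initiative",
        },
    ]

    def progress(row, current_value):
        """Fraction of the distance from baseline to target achieved so far."""
        span = row["target"] - row["baseline"]
        return (current_value - row["baseline"]) / span if span else 0.0

    print(f"Objective 1 progress: {progress(matrix[0], 52.0):.0%}")  # 47%

The point of the structure is that every objective is forced to name its indicator, its independent data source, and its fallback action, so progress becomes quantifiable rather than rhetorical.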


Marica Ferri is currently the Head of Sector in best practice, knowledge transfer and economic issues at the European Monitoring Centre for Drugs and Drug Addiction (EMCDDA). She is a member of the Cochrane Collaboration and author of a number of systematic reviews. She is also a panellist in the development of evidence-based guidelines and quality standards for the improvement of interventions. She is interested in developments in the evidence base, including methods and implementation studies.

Marica is contributing to this blog in her personal capacity. The ideas here expressed do not necessarily represent the view of the EMCDDA.

Follow Marica on Twitter: @marica.ferri

Thank you to Marie-Christine Ashby for her editorial support.

This blog was originally posted on the BMJ: http://blogs.bmj.com/bmj/2017/01/24/marica-ferri-what-can-we-learn-from-the-evidence-based-medicine-manifesto/

References:

The Better Evidence for Better Healthcare Manifesto: /manifesto/

Amato L, Mitrova Z, Davoli M; Cochrane Drugs and Alcohol Group. Cochrane systematic reviews in the field of addiction: past and future. J Evid Based Med 2013;6(4):221-8. doi: 10.1111/jebm.12067.

Davoli M, Ferri M. The Drugs and Alcohol Cochrane Review Group. Addiction 2000;95(10):1473-4.

Ferri M, Bo A. Best practice promotion in Europe: a web-based tool for the dissemination of evidence-based demand reduction interventions. Drugs: Education, Prevention and Policy 2013;20(4):331.

Ferri M, Davoli M, D’Amico R. Involving patients in setting the research agenda in drug addiction. BMJ 2013;347:f4513. doi: 10.1136/bmj.f4513.

Galea S, Vlahov D. Social determinants and the health of drug users: socioeconomic status, homelessness, and incarceration. Public Health Reports 2002;117(Suppl 1):S135-S145.

Fixing evidence based medicine

Love it or hate it, we must all consume evidence. Now is your chance to have your say on what its future should be like. Yesterday the Centre for Evidence Based Medicine at Oxford University launched a new manifesto calling for better evidence for better healthcare.

The BMJ team is partnering with them. Writing to launch the manifesto The BMJ says:

“There are huge shortcomings in the way that evidence based medicine operates today: bad quality research, evidence that is withheld, piecemeal dissemination, a failure to respect patients’ priorities, and more. There is also a long history of people, and organisations, trying to fix these problems. We want to pull together a clear set of achievable goals, and a strong overview of the strategies that work best, to help deliver change better, and faster. This is the EBM manifesto.”

Over the coming months the manifesto will be a living document, open to comments, and edited to reflect the thoughts and ideas submitted. The BMJ and CEBM will also be going on the road in search of key groups of people who make, work with, use, or are consumers of evidence. We will pick their brains on what is currently happening, and how things could be better.

So far we have been to Barcelona to speak with experts at the Preventing Overdiagnosis conference. Plans are underway to speak with other groups of policy makers, patients, clinicians, and researchers. We will finalise the document in the run-up to Evidence Live 2017 and launch it there.

Helen Macdonald is clinical editor for education and research, The BMJ

Originally posted 11 October 2016 by The BMJ

Insights for the Next Generation of Leaders


Peter J. Gill

Evidence Live 2016 was a resounding success, bringing together global leaders in evidence-based medicine along with 300 delegates. An important conference theme was Training the Next Generation of Leaders in Applied Evidence, which included events targeting Leaders of Tomorrow in Evidence-Based Medicine. The authors of the top five trainee submissions received free admission to Evidence Live, and their submissions were published in the Student BMJ. Over 30 delegates attended a networking session to flesh out potential opportunities to create a formalised network of future leaders (more on this in the coming weeks).

But the Future Leadership Showcase, featuring Kamal Mahtani, An-Wen Chan, Howard Bauchner and Hilda Bastian, was one of the most memorable conference sessions. We asked this diverse group of individuals to elaborate on their ‘untold story,’ reflect on pivotal career moments and share pearls of wisdom for early career researchers.

Several years ago, a patient asked Kamal about a new medical device which claimed to lower blood pressure by helping patients control their breathing. Kamal sought to answer the question, was unsatisfied with what he found, and ended up doing a systematic review on the topic. Somewhat to his surprise, Kamal became the national expert on device-guided breathing for hypertension, and his review was widely cited in multiple guidelines.

Kamal’s example reminds us why we conduct clinical research: to resolve unanswered questions for patients. It neatly illustrates the principles of evidence-based medicine: being sceptical, questioning the scientific literature, and seeking the truth. Kamal concluded his talk with five key things that he learnt from his experience:

  • Focus clinical research on what matters to patients/practice;
  • Build confidence in research skills and don’t be afraid to use them;
  • Doing a systematic review is an essential part of training;
  • Don’t work on an island – collaborate and learn from others; and
  • Being a health services researcher is a privilege.

An-Wen, a leader in clinical trial quality and chair of the SPIRIT initiative, confessed that his first foray into research was, ironically, conducting a failed clinical trial. While this may be a surprise to many, it illustrates the ‘untold story’ of failure behind many successful researchers. The botched trial led An-Wen to realize that he needed formal training in research methods, and he went on to start a DPhil with Doug Altman at the University of Oxford. Yet his struggles continued.

An-Wen sought to compare the primary outcomes in published clinical trials with the primary outcomes submitted to research ethics committees. Two years into his DPhil, due to various roadblocks, he had no data. But An-Wen persisted and eventually, thanks to Peter Gøtzsche, got access to submissions in Denmark, which turned out to be a better dataset. An-Wen explained the importance of resiliency: ideal plans often fail, but more often than not a silver lining emerges. An-Wen’s closing comment is worth repeating: rather than work on documenting problems, help find solutions. For example, he is currently working on creating an online tool to help researchers draft trial protocols.

Howard Bauchner, editor-in-chief of JAMA, emphasized the importance of having both research and career mentors. Research mentors provide guidance and support during a research project or thesis (e.g. helping to select proper methodology, assisting in finding funding). Career mentors, on the other hand, assist mentees in making broader career or life decisions (e.g. whether one should move to another city or pursue specialty training). While one mentor can provide both functions, Howard advises having multiple mentors. Career mentors are not necessarily in your discipline either; they are like life coaches.

Howard provided additional pearls of wisdom by drawing on his expertise as an editor. He (not surprisingly) encouraged early academics to volunteer to peer review articles, and to focus on nurturing a relationship with a small number of journals. By regularly reviewing, one can become known to the editorial staff. Once known, Howard boldly suggested asking to join the editorial board. Why not?

Hilda closed the session with a heartfelt reflection on her career journey, leaving the audience contemplative. Throughout her career, Hilda chose to follow a path based on what she felt was most important, which led her into consumer advocacy. She challenged the audience to ask themselves how they define success. It can be tempting to pursue a path that will lead to ‘academic success’, but to what end? Find out what engages you, and passionately pursue this topic or idea.

Hilda poignantly ended her talk reflecting on failure. While it may seem that successful academics brush off failure (or never experience it), in reality everyone struggles with failure. But what defines each of us is how we cope with it. It is important to get up off the ground and try again. Hilda reminded us that we are not alone: seek help and support, and in doing so, learn how others have dealt with failure.

In short: learn from patients, get formal research training, conduct a systematic review, peer review for journals, seek out mentors, and be persistent. Failure is intrinsically a part of taking risks, but is accompanied by resiliency, wisdom and personal growth.

Peter J Gill is a paediatric resident at The Hospital for Sick Children, University of Toronto and an Honorary fellow at the Centre for Evidence-Based Medicine, University of Oxford. He is a member of the Evidence Live steering committee which includes a Future Leaders initiative.

You can follow him on Twitter at @peterjgill

Competing interests: I have read and understood BMJ policy on competing interests. I have no other competing interests to declare.

Disclaimer: The views expressed are those of the author and not necessarily of any of the institutions or organisations mentioned in the article.

DOI: 10.13140/RG.2.1.3892.3769

Beware evidence “spin”: an important source of bias in the reporting of clinical research

Evidence Live 2016 begins this week, with 3 full days of discussion and learning around 5 main themes, including “Transforming the Communication of Evidence for Better Health”. Here the CEBM deputy director Kamal R. Mahtani discusses the problem of evidence “spin”.

 

Spin [WITH OBJECT] Draw out and twist (the fibres of wool, cotton, or other material) to convert them into yarn, either by hand or with machinery: “they spin wool into the yarn for weaving”


Does the name Malcolm Tucker ring a bell? The Malcolm Tucker I am referring to is the fictional character from the BBC political satire The Thick of It. Tucker (played by Peter Capaldi) was a government director of communications, skilled in propaganda, more specifically in the art of “spinning” unfavorable information into a more complimentary, approving (and sometimes even glowing) public-facing message. Whether the show accurately reflects real-life governmental politics, or whether real-life politicians ‘copy’ the show, remains a topic of discussion. Either way, “spin” in the political arena feels like something we are increasingly getting used to, almost come to expect.

“Spin” in reports of clinical research

For many researchers, the number of publications, and the impact of those publications, is the usual currency for measuring professional worth. Furthermore, we are increasingly seeing researchers discuss their work in public through mainstream and social media, as more of these opportunities arise. With this in mind, it probably won’t come as such a shock to imagine that researchers might be tempted to report their results in a more favorable (again, even glowing) way than they deserve, i.e. to add some “spin”.

According to the EQUATOR network, such practice constitutes misleading reporting, and specifically the misinterpretation of study findings (e.g. presenting a study in a more positive way than the actual results reflect, or the presence of discrepancies between the abstract and the full text).

“Researchers have a duty to make publicly available the results of their research on human subjects and are accountable for the completeness and accuracy of their reports.”
WMA Declaration of Helsinki

So how common is “spin” in clinical research? An analysis of 72 randomised controlled trials that reported primary outcomes with statistically non-significant results found that more than 40% of the trials had some form of “spin”, defined by the authors as the “use of specific reporting strategies, from whatever motive, to highlight that the experimental treatment is beneficial, despite a statistically nonsignificant difference for the primary outcome, or to distract the reader from statistically non-significant results”. The analysis identified a number of strategies for “spin”, with some of the most common being to focus reporting on statistically significant results for other analyses, i.e. not the primary outcomes, or to focus on another study objective and distract the reader from a statistically nonsignificant result. Another analysis, this time involving 107 randomised controlled trials in oncology, similarly found that nearly half of the trials demonstrated some form of “spin” in either the abstract or the main text.

You might think that systematic reviews of primary research should address some of these problems. By seeking the totality of available evidence, interpreting the impact of bias, and then synthesising the evidence into a usable form, they can be powerful tools for informing clinical decisions. But not all systematic reviews are equal. Non-Cochrane systematic reviews have been shown to be twice as likely to have positive conclusion statements as Cochrane reviews. Furthermore, non-Cochrane reviews, when matched to an equivalent Cochrane review on the same topic, were more likely to report larger effect sizes with lower precision. In both cases, these findings may well reflect the extent to which methodological complexity is ignored or sidestepped in poorer quality reviews.

So not all systematic reviews are equal, and neither are they exempt from “spin”. A review of the presence of “spin” (assessed by comparing the abstract and conclusions against the empirical data) in reviews of psychological therapies found that “spin” was present in 27 of the 95 included reviews (28%). In fact, a recent study identified 39 different types of “spin” that may be found in a systematic review. Thirteen of those were specific to reports of systematic reviews and meta-analyses. When a sample of Cochrane systematic review editors and methodologists were asked to rank the most severe types of “spin” found in the abstracts of a review, their top three were (1) recommendations for clinical practice not supported by findings in the conclusion, (2) a misleading title, and (3) selective reporting.

Impacts of “spin” from clinical research

“Spin” may influence the interpretation of information by clinicians. A randomised controlled trial allocated 150 clinicians to assess a sample of cancer-related abstracts with “spin” and another 150 clinicians to assess the same abstracts with the “spin” removed. Although the absolute effect size was small, the study found that the presence of “spin” made clinicians statistically more likely to report that the treatment was beneficial. Interestingly, the study also found that “spin” resulted in clinicians rating the study as less rigorous and being more likely to want to review the full-text article.

Dissemination of research findings to the public, e.g. through mainstream media, can also be a source of added “spin”. An analysis of 498 scientific press releases from the EurekAlert! database identified 70 that referred to two-arm, parallel-group RCTs. “Spin”, which included a tendency to put more emphasis on the beneficial effects of a treatment, was identified in 33 (47%) of the press releases. Furthermore, the authors of the analysis found that the main factor associated with “spin” in a press release was the presence of “spin” in the abstract conclusion.

So what motivates “spin”?

This is a complex area, to which more relevant research might add clarity. A desire to demonstrate impact has already been suggested as one driver. Other proposed mechanisms include (1) ignorance of scientific standards, (2) young researchers’ imitation of previous practice, (3) unconscious prejudice, or (4) willful intent to influence readers.

Conflicts of interest (COI) will almost certainly have some bearing on the presence of “spin”. As an example, an overview of systematic reviews examined whether financially related conflicts of interest influenced the overall conclusions of systematic reviews on the relationship between the consumption of sugar-sweetened beverages (SSBs) and weight gain or obesity. Of the included studies, 5/6 systematic reviews that disclosed some form of financial conflict of interest with the food industry reported no association between SSB consumption and weight gain. In contrast, 10/12 reviews that reported no potential conflicts of interest found that SSB consumption could be a potential risk factor for weight gain.

However, while a great deal of discussion focuses on financial COI, the “blind spot” may be non-financial conflicts of interest (NFCOI), which could have an even greater bearing on the presence of “spin”. For systematic reviews, these types of conflicts have been defined as “a set of circumstances that creates a risk that the primary interest—the quality and integrity of the systematic review—will be unduly influenced by a secondary or competing interest that is not mainly financial.” Examples of NFCOI include strongly held personal beliefs (e.g. leading to a possible “allegiance bias”), personal relationships, a desire for career advancement, or (increasingly possible now) a greater media profile. All of these have the potential to affect professional judgment and thus generate a message that does not convey a fair test of treatment.

Unfortunately a significant proportion of clinical research is already littered with various types of bias, which we know can influence the treatments we provide our patients as well as waste valuable resources. The added bias of “spin”, whether motivated by financial, personal, or intellectual conflicts of interest, or even plain ignorance, further compounds the problem.

Beware evidence “spin”.


Kamal R Mahtani is a GP, NIHR clinical lecturer and deputy director of the Centre for Evidence Based Medicine, Nuffield Department of Primary Care Health Sciences, University of Oxford. He is also a member of the Evidence Live 2016 steering committee which brings together leading speakers in evidence-based medicine from all over the world, from the fields of research, clinical practice and commissioning.  

You can follow him on Twitter at @krmahtani

Competing interests: I declare no competing interests relevant to this article.

Disclaimer: The views expressed are those of the author and not necessarily of any of the institutions or organisations mentioned in the article.

Acknowledgements: Thanks to Jeff Aronson, Meena Mahtani and Annette Plüddemann for helpful comments.

The Research Registry: Advancing the Cause of Research Registration

Leading up to Evidence Live 2016, we will be publishing a series of blog posts highlighting projects, initiatives and innovative ideas from future leaders in evidence based medicine.
Please read on for the second in the series from Daniyal Jafree of UCL.
If you are interested in submitting a blog post, please contact alice.rollinson@phc.ox.ac.uk. Stay tuned! 


How often do you register your research study on a publicly accessible database? The importance of research registration is summarised by the World Health Organisation (WHO) International Clinical Trials Registry Platform (1). Research registration increases transparency, identifies publication bias and selective reporting, avoids duplication, enables identification of flaws in study design early in the research process, facilitates collaboration and may encourage patient recruitment. So it is not surprising that registration is considered a responsibility for researchers.

The notion that research registration should be confined to randomised controlled trials was dispelled by the 2013 update of the Declaration of Helsinki, which states that: “Every research study involving human subjects must be registered in a publicly accessible database before recruitment of the first subject”. A number of registries exist which enable the registration of various study types. However, given that the number of observational studies published over the last two decades is much greater than the number of registrations, it is estimated that over 90% of observational studies remain unregistered (2). There also appears to be no comprehensive data on the registration of audits, quality improvement projects, case reports or case series. Given that not all studies performed are published, how is it then possible to learn from the results of unpublished studies to improve clinical practice?

In February 2015, we launched the Research Registry: enabling free prospective or retrospective registration of any research study involving human participants (2). Research Registry was created by Mr Riaz Agha, a Specialist Trainee in Plastic Surgery and doctoral student at Oxford University. The registry was designed using the WHO dataset for registration of clinical trials. It only takes a few minutes to register your study. We also curate Research Registry using a system based on Sir Austin Bradford Hill’s criteria for what a research study should convey (3).

At 11:30am on Friday 24th June at Evidence Live 2016, I will describe the conception of Research Registry and present our analysis of the first 500 registrations. Approximately 1.77 million patients were enrolled across registered studies. Registrations were received from 57 different countries, and a high proportion of registrations were observational studies, case series and case reports. During the talk I will also describe how we curate Research Registry and how, as a result, the quality of registrations has significantly improved over time. Please do come along to Lecture Theatre 2, The Andrew Wiles Building, Maths Institute for more information.

Daniyal Jafree

What Colour is Evidence?

Dr Amy Price

During the last decade a number of EBM proponents, as well as critics, have addressed the ‘problems with EBM’ with devaluing words and destructive proposals. The intentions of the EBM-black painters are not ours to justify. The most important results of these discussions have been an increased interest in EBM, a deeper interest in the problems of the evidence base of modern medicine, the critical appraisal of research, and improving instruments for research methodology.

EBM-black painting serves as wolf-crying: it disorientates the public and brings intimidation, division and confusion to novices in EBM. This makes the progress of EBM more problematic when it is taken into cultures and subspecialties where critical appraisal of the evidence has yet to become known or accepted. Painting EBM all white is equally damaging: it refuses to embrace error and the need to adapt, and denies the roots of critical appraisal. Constructive change will mean facing negative outcomes with a problem-solving, solution-orientated mindset that honors the roots of critical appraisal and the application of evidence, which is at the core of EBM.

We suggest instead painting EBM as a rainbow, where evidence is celebrated and honored in the diversity needed to meet the needs of the individual patient who comes to us for help. This will provide room for the constructive change that is needed to grow in excellence, effectiveness and empathy. The rainbow was chosen because black absorbs all colors and darkens from within, while the rainbow reflects the light that it finds in diversity and beauty.

This captures the essence of the abstract co-created by Professor Vasiliy V Vlassov and Dr Amy Price. The history of this article has evolved through the excellent contributions of the Evidence Based Health listserv and some constructive thinking about what evidence means.

The Journey of Evidence Based Medicine | More Interesting Than a Paper

My idea was to capture this journey with PEE [Points | Examples | Evidence], write it up, and discuss it in a simple paper, but there were complications. The first glitch came in how the world defines the word evidence. Professor Ben Djulbegovic and I explored this following a discussion on how differences in the understanding of common words can lead to working at cross purposes. Both parties hear the word, but it is shaped by divergent world views and language interpretations.

How is the Word Evidence Understood?

We explored the multi-language meaning of the word evidence. The data showed two primary categories for evidence as translated into other languages: data, fact, testimony or observations, which per se do not make any link to a statement about the “truth”; and the incontrovertible act of “proving” the “truth” [3].

These common meanings are incompatible with how evidence is used and graded in EBM [2,3]. The gaps in translation could expose EBM to mistaken assumptions and miscommunication. Our concepts of medical evidence and, thus, EBM are arguably shaped by translation. We contend that much of the current criticism of EBM [5] is rooted in misunderstanding the meaning of the concept of evidence.

The Call for Help to the EBH (Evidence Based Health) Listserv

Dear all,

I am looking for concrete examples of EBM changing healthcare practice. I see arguments and methods, but I lack examples, other than personal ones or from those I have worked with, to say that the practice of EBM specifically changed healthcare, e.g. reduced harm by forcing the recall of a harmful drug, the discovery of a better way to treat a specific disease, etc. Please help: I think we need to build a history for the world outside EBM.

Best

Amy

And the EBH Responses Were:

“Success in childhood haematological tumours is a great example of what the scientific method can do.

Ambuj Kumar, Heloisa Soares, Robert Wells, Mike Clarke, Iztok Hozo, Archie Bleyer, Gregory Reaman, Iain Chalmers, and Benjamin Djulbegovic.

Are experimental treatments for cancer in children superior to established treatments? Observational study of randomised controlled trials by the Children’s Oncology Group. BMJ 331 (7528):1295, 2005 http://www.bmj.com/content/bmj/331/7528/1295.full.pdf .”

Contributed: Dr Federico Barbani

Do We Need Evidence to “Prove” EBM?

“I would suggest one does not need to have evidence that EBM changes healthcare – either positively or negatively.

In my mind, the concept of EBM or evidence-based practice (EBP) is a philosophy – it is an approach to how we should practice health care.

Personally, I believe one should for all health care decisions:

1) use the best available evidence

2) use one’s clinical expertise/experience, and

3) integrate patients’ values and preferences into the decision making process

Are there people who disagree with these three recommendations?

The reason I say it is a philosophy, and not something that needs to show it improves outcomes, is that using EBM could in theory “worsen” health outcomes.

As an example, let’s assume we did an RCT of statin use in primary prevention and randomized people to either taking a statin or EBP (making the choice for themselves).

For the sake of the argument let’s assume that:

1) statins definitely reduce the risk of cardiovascular disease by 25% or roughly 1-2% over 5 years in primary prevention

2) statins do NOT cause any side effects (I know this is not true)

3) let’s assume that most people, when given the absolute benefit numbers, would decide not to take a pill every day for five years

With these assumptions, the people that got randomized to choosing for themselves would have more CVD than the group that was “forced” to take a statin.

I would suggest that the outcome doesn’t matter – regardless of the outcome, practicing EBP is simply the right thing to do.”

Contributed: Dr James McCormack
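The arithmetic behind James’s statin numbers is worth making explicit, because the gap between relative and absolute risk reduction is exactly what patients weigh in this example. A minimal sketch, assuming a baseline 5-year cardiovascular risk of 6% (a figure chosen only so that a 25% relative reduction reproduces the “roughly 1-2%” absolute figure he quotes):

    # Relative vs absolute risk reduction, using the numbers in the example above.
    # The 6% baseline 5-year risk is an assumption for illustration; the 25%
    # relative risk reduction comes from the example itself.
    baseline_risk = 0.06             # assumed 5-year CVD risk without a statin
    relative_risk_reduction = 0.25   # "statins ... reduce the risk ... by 25%"

    absolute_risk_reduction = baseline_risk * relative_risk_reduction
    number_needed_to_treat = 1 / absolute_risk_reduction

    print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")  # 1.5%
    print(f"Number needed to treat over 5 years: {number_needed_to_treat:.0f}")  # ~67

Framed this way, the proposition is roughly one chance in 67 of avoiding an event in exchange for five years of daily pills, which is why many people, given the absolute numbers, decline.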

Conceptual Agreement  

It may not be pure EBM, but what about the Panorama programme that covered the discovery of Helicobacter pylori?

Contributed: Dr Neil Upton

What about any drug proved effective via an RCT?  Or systematic review?  Iain Chalmers often mentions streptokinase post-MI, but there are lots of other examples.

Contributed: Dr Jon Brassey

Is There Everyday Evidence for EBM?

“In workshops with GPs/family physicians, I point to research showing that the more you share evidence with patients, the less likely they are to want an intervention, such as warfarin for atrial fibrillation or PSA screening for prostate cancer.

However, there is still some merit in looking for evidence that TRAINING practitioners to be Evidence Based (EB) produces Evidence Based practitioners. EB practitioners PRACTISING EB have better outcomes, defined more broadly than just event rates, e.g. more cost-effective care post-stroke, more patient satisfaction.

In the UK, GPs’ performance in certain chronic conditions is monitored and financially rewarded. GPs who meet targets are regarded in some circles as the high achievers, whereas it could be argued, as I have done (http://bjgp.org/content/63/611/315.short), that those who share decisions with patients will appear to perform “badly”.”

Contributed: Dr Kev (Kevork) Hopayian

Teaching evidence based practice to improve patient outcomes

Emparanza JI, Cabello JB, Burls A. Does evidence-based practice improve patient outcomes? Journal of Evaluation in Clinical Practice. doi: 10.1111/jep.12460 (prepublication access).

Overturned by Evidence

“The WHI trial reversing the “standard of care” promoting postmenopausal hormone replacement therapy for cardiovascular disease prevention is one of the easiest to recognize, though the data before the WHI trial were congruent: an EBM approach differed from the popular/accepted approach before the WHI trial was published.

The use of antiarrhythmics to treat PVCs after a heart attack (PVCs are associated with increased mortality, and the specific antiarrhythmics flecainide and encainide are effective in reducing PVCs) was a classic example of the need for a clinical outcome focus. This was done as “good patient care” (Do you want to die while waiting for study results?), yet a randomized placebo-controlled trial (the CAST trial) found the drugs reduced PVCs but killed more patients. This was the end of that practice”.

Contributed: Dr Brian S. Alper

“1. Smoking causes lung cancer. There was a time when we thought smoking was good for you, until Doll and Hill’s studies showed otherwise.

  2. John Snow showed that cholera was spread by contaminated water, not air (the miasma theory). He is rightly named the “father of modern epidemiology”.
  3. Studies showed that babies sleeping on their stomachs, as recommended by the prominent pediatrician Benjamin Spock, caused thousands of deaths from SIDS. A great example of the fallacy of expert opinion.
  4. I think it was a study which first questioned blood-letting – a major treatment practised in medicine for over 2000 years.
  5. Loftus’s studies showing the concept of “repressed memories” to be false”.

Contributed: Dr Anoop Balach   

RCTs Are Not the Lone Precursor to Major Change

The “trial” debunking blood-letting was an observational study in about 1830 by Pierre Charles Alexandre Louis, who found that patients with typhoid fever were more likely to die if they had their blood removed. The likelihood that this result occurred by chance alone was less than one in ten thousand.

The Doll and Hill study was a case-control study with an odds ratio that was very high (I cannot recall if it was 7 or 10, but either is a very strong result). These both show that strong results can be obtained without doing expensive RCTs. I believe that this is especially true when a very large result occurs.

Contributed: Dr Dan Mayer
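For readers who have not met the measure, the odds ratio Dan mentions comes straight from the 2×2 table of a case-control study. A minimal sketch with invented counts (not Doll and Hill’s actual data), chosen so the result lands in the range he recalls:

    # Odds ratio from a case-control 2x2 table (counts are hypothetical,
    # not Doll and Hill's actual figures).
    cases = {"exposed": 90, "unexposed": 10}      # e.g. lung cancer patients
    controls = {"exposed": 50, "unexposed": 50}   # matched controls

    odds_cases = cases["exposed"] / cases["unexposed"]           # 9.0
    odds_controls = controls["exposed"] / controls["unexposed"]  # 1.0

    odds_ratio = odds_cases / odds_controls
    print(f"Odds ratio: {odds_ratio:.1f}")  # 9.0, in the 7-10 range Dan recalls

An odds ratio this large is hard to explain away by modest bias or confounding alone, which is Dan’s point about not always needing an RCT.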

Evidence Based Research for Midwives and Breastfeeding Mothers

A Cochrane review on continuity of midwife care was first published in 2004 and last updated in 2016. As more trials have been added to the Cochrane review, uncertainties in the original findings have been reduced. Women who receive continuity of care from a midwife they know, rather than medical-led or shared care, are:

  • 24% less likely to experience preterm birth,
  • 19% less likely to lose their baby before 24 weeks gestation, and
  • 16% less likely to lose their baby at any gestation.

These women are also more likely to have a vaginal birth, and fewer interventions during birth (instrumental birth, amniotomy, epidural and episiotomy), and are likely to have a more positive experience of labour and birth. These findings apply to both low- and mixed-risk populations of women, and there are no significant differences in outcomes between caseload and team care models.

This Cochrane review was identified as a priority review for updating by both the World Health Organisation and the Department of Health to inform the National Institute for Health and Care Excellence (NICE) review on the latest evidence on continuity of midwife care.

Evidence from the updated Cochrane review has had a significant influence on recent policy developments in relation to maternity care in both the UK and abroad.

  1. The results were cited as a key piece of evidence to inform models of care in Creating a Better Future Together – National Maternity Strategy 2016-2026, the first national maternity strategy for Ireland, which was published in January 2016.
  2. Evidence from the review was also cited in the National Maternity Review for England published in February 2016 (Better births – Improving outcomes of maternity services in England), led by Baroness Cumberlege and conducted as part of the NHS England Five Year Forward View.
  3. They have also informed the RCM/RCOG statement on continuity of carer and multi-disciplinary working published in April 2016.
  4. Internationally, the Cochrane review was cited in a Lancet series on midwifery which aimed to inform workforce and health system development plans under the United Nations’ Post-2015 Development Agenda.

Sandall J, Soltani H, Gates S, Shennan A, Devane D (2016). ‘Midwife-led continuity models versus other models of care for childbearing women‘. Cochrane Database of Systematic Reviews 2016, Issue 4. Art. No.: CD004667.

Contributed: Dr Jane Sandall

The three most significant evidence-based reviews in early postnatal care and infant feeding have been those on the timing of the introduction of solid food, kangaroo care for low birth weight babies, and skin-to-skin care for term infants.

Contributed: Phyll Buchannon

Thoughts for the Future

We all benefit from promoting the discovery of better treatments as we grow in communities of practice. Through the practice of EBM it is possible to speed up the acceptance of the best treatments available and to contribute to the elimination of useless/dangerous interventions.

EBM may indirectly influence the FDA, regulators, and consumers, who in turn influence providers. Let us work to build bridges of Evidence Based Medicine with all stakeholders as building better healthcare is the challenge of a lifetime and every person represents a life that matters.  

One limitation EBM, and in fact any organisation, faces is that evidence and knowledge translation are dynamic. The challenge is that we can synthesise the evidence we have available today, but tomorrow may bring new discovery or uncover hidden evidence. Answers tend to reflect the knowledge collected; they are bounded by what we think we know, and this is not always the objective or fully faceted truth. Facing the challenges that require change constructively, and growing with grace, will continue to build EBM from the inside out.

Evidence Based Research Needs a SWAT

Dr Amy Price

Getting Research Done Right

“I don’t think enough is being done to make new practitioners ask about the evidence when they are faced with “expertise” and opinion; and randomised trials need to become so much part of practice that they are the standard way of dealing with uncertainty and making choices” [Mike Clarke 2016].

It is one thing to study research, and even to quote it, but it is yet another thing to understand and apply it to everyday practice. Much research will be outdated before it is even implemented. We need to know why, so that we can use the research we already have to change practice optimally before it grows old.

Sir Muir Gray puts it this way: “Knowledge is the enemy of disease; the application of what we know will have a bigger impact than any drug or technology likely to be introduced in the next decade” and “In the nineteenth century health was transformed by clear, clean water. In the twenty-first century, health will be transformed by clean, clear knowledge.”

Research is not just about the research model as a profession; it is about how research works in the real world, and this is where SWAT matters. For clinical research to thrive, ways to decrease RCT costs and increase efficiency are needed. Evidence Live 2016 is a great place to share research ideas, collaborate, and learn what can make research better.

Why SWAT for Research?

SWAT is an acronym for Study Within A Trial. These studies make full use of an ongoing funded trial by embedding practical research questions within it, producing better evidence for managing and solving the uncertainties faced in running future trials. Embedding methodology research can decrease research waste and build value for minimal resource costs. The results can be reported, and the study concepts are free for anyone to use or adapt. Those who plan, conduct, and report trials will be better able to do so in ways that will improve health and wellbeing by using what works.

What Do We SWAT?

I am working on self-recruited online trials, so some of my interests include: what kind of reminders and encouragement work best; how social media changes reporting; what medium of engagement works best (tablet, phone text, computer); whether there is a difference between written and audio feedback; what form of interactive consent is best value for knowledge; and more. I find the clear and concise way the SWAT methods are written up to be great examples for writing up methodology within protocols. The examples in progress below are used by permission and are cited directly from the report: [Education section – Studies Within A Trial (SWAT). J Evid Based Med 2012;5:44–5. doi:10.1111/j.1756-5391.2011.01169.x]

A large cohort study of aging in Northern Ireland called NICOLA is testing the impact of different invitation letters (NICOLART: NCT01938898). This study also looks at means of collecting baseline data (NICOLA-QT: NCT01978522). The findings of these studies, SWAT-2 to SWAT-5, will influence future phases in the distribution of invitations to up to 20,000 people and the recruitment of 8500 people to NICOLA. Another series of SWATs relating to recruitment was conducted as part of the MOSAICC study, an observational cohort study on the etiology of myeloproliferative neoplasms (NCT01831635). These investigated the effects of providing information on end-of-study compensation to improve participation (SWAT-16), sending a letter or telephoning potential participants as a method of follow-up to improve recruitment (SWAT-17), and providing small gifts with the letter inviting people to join the study (SWAT-18).

Where Does SWAT Live?

The Study Within A Trial (SWAT) initiative is the work of the Northern Ireland Network for Trials Methodology Research and the Health Research Board Trial Methodology Research Network. The development of the SWAT collection was supported by the Medical Research Council Network of Hubs for Trials Methodology Research (MR/L004933/1-R50).

How Can Others Use SWAT to Make Research Better?

SWAT protocols for use in a trial, and research about how they have worked for others, are freely available. There is also a database where other researchers can apply to build and share SWATs. If you are interested in doing a SWAT, suggesting an outline, or seeing the findings, please visit the website:

http://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/

Using SWAT, we can all learn something we did not know. Collaboration and curiosity will power discovery and innovation. The public is the sensor that provokes influence; we can learn from them and from each other using studies within a trial, and the public can work with us to build evidence into practice. Let us build this into discussion for Evidence Live 2016.

SPIRIT and SEPTRE: Research Protocols You Can Build

Dr Amy Price

Evidence Building Protocols

When developing a protocol for a clinical trial there are clear steps to building good methods, truth, accuracy and transparency into the research.  The SPIRIT statement says it best:

“The protocol of a clinical trial is essential for study conduct, review, reporting, and interpretation. SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) is an international initiative that aims to improve the quality of clinical trial protocols by defining an evidence-based set of items to address in a protocol”.

SPIRIT is a great free tool. But soon there will be an online trial protocol builder (presently in testing) called SEPTRE, which will help with every protocol step, with mouseovers and videos for added information and ways to proceed.

The SEPTRE initiative is chaired by Dr. An-Wen Chan, a firm believer in medical research transparency.  

Research results that fail to support the hypothesis are seldom published, which can be dangerous for healthcare. Without being able to view negative and positive research together, doctors can only make prescribing decisions based on what they see: a set of studies that is inherently biased toward positive results.

“When that happens,” Chan says, “it’s the patients who potentially suffer, particularly those who are exposed to ineffective or costly treatments, or even worse, harmful ones.”

Research Transparency Saves Lives and Families

I know this first-hand, after witnessing the effects of hypnotic sleep drugs prescribed to a family member who, after taking the medication that he and his doctor trusted would help, committed suicide. Years later the psychiatrist and family would learn that psychosis, suicide, and self-harm were side effects that showed up even during animal testing and were present during all phases of clinical trial testing, but the adverse events were left unreported.

Finding unpublished research is a learned skill. I have scoured thousands of conference abstracts without finding a usable unpublished trial. The time I got closest was during my first ever medical conference, where one of the presenters had a near nervous breakdown during his presentation, screaming that the sponsor had threatened to sue him if he published the “real” results of his research and that he had blood on his hands. Needless to say, that was not in the conference abstract, and it was never published.

Is Knowing Better Enough?

While it was widely discussed that he – the presenter – should have known better, his career was over. Were the others too smart and moral to be trapped like that? There was a gnawing awareness that gripped me and was indelibly printed on my heart: all of us could be vulnerable when funding is elusive and a sponsor offers the dream with a golden handshake and just a small “concession”. We can cite platitudes like “All that glitters is not gold” and “What we compromise to keep will destroy our foundation”, but the greatest protection for medicine, and for us, is a culture of transparency, where good reporting becomes a habit without exceptions.

Finding Unpublished Trials

In case you wonder how to find unpublished trials, Chan’s BMJ article, Out of sight but not out of mind: how to search for unpublished clinical trial evidence, is open access. Chan discusses how to retrieve unpublished data that researchers otherwise would miss.

Another great resource for materials submitted to the FDA is the Clear Road Map. This is a commercial solution, but there is a lot of freely accessible information on the site. Systematic literature reviews that include both published and unpublished research can offer a balanced view of how a drug performs overall across multiple studies, compared with the small snapshots provided by individual studies. This is elegantly demonstrated by the Tamiflu campaign, which changed the life and research of one investigator, Dr Tom Jefferson. He is speaking at Evidence Live and is leading one of the fringe events free to attendees, “Evidence in the pub/college bar – Diary of a Tamiflu Research Parasite”, on Thursday June 23rd at 18:15 in the Terrace Bar, Somerville College. You can also meet Dr Chan at Evidence Live 2016, where his session is “Leap of faith or formula for success: Championing careers in evidence-based medicine”.

Last Thoughts to Consider

“But reviews that only consider published data still don’t give doctors a good picture, because they’re missing so much of the whole story,” says Chan. “When research is not reported transparently it’s not only less helpful, but also potentially dangerous. It goes against why we do research – which is to learn the truth for the benefit of patients.”

 

Build Trials Right with EQUATOR


Clinician Initiated Trials, Can They Work?

Clinical trials are done to explore whether a medical strategy, treatment, or device is safe, effective, and economical. Trials also explore which treatments work best for specific illnesses or populations, and which ones don’t work at all. It seems clinicians and patients working together could figure this out and make a trial work. However, there are very few clinician-led trials in primary care, and even fewer that are run by patients.

Patients have the experience of living in the trenches; it could even be said they have all the skin in the game. So how can Evidence Live help equip an inexperienced but motivated team to run a successful trial, or to be part of a collaboration that does?

It is increasingly clear that being a doctor or academic is insufficient training to recognise the difference between good research that is well reported and pseudo-science that is well marketed. EQUATOR research reporting guidelines and education can be the difference between research that is buried unfinished and a well-reported trial.

You will learn about the AllTrials initiative, which calls on all researchers to register clinical trials, and the COMPare group, which checks that published trials report all outcomes, primary and secondary. Going to the EQUATOR workshop will teach you how to do all of this when you plan your trial. These groups raise awareness, increase research transparency and protect public health.

EQUATOR Workshop June 21, 2016

The workshop will be run by proven trialists and scientists including Doug Altman, Gary Collins, Ben Goldacre, Jo Silva, Iveta Simera, and Elizabeth Wager.

As students we would use EQUATOR research guidelines to shape how we built our trials, and later how we reported what we found. EQUATOR’s goal for this workshop is to equip clinician researchers to produce research that is effective, economical, usable, and fit for purpose. They point out that when research is badly reported or not reported at all, public funds are wasted, the goodwill of research participants is betrayed, and patients’ care is compromised.

There will be talks, discussion and several practical (and fun!) exercises highlighting:

  • How ambiguous and incomplete reporting misleads clinicians and harms patients
  • How good planning, design and methods help with your writing
  • How reporting guidelines and other EQUATOR resources can help researchers, editors and peer reviewers work as a team to improve the literature.
  • How to sail through methodological and statistical review unscathed
  • Learn from medical publications professionals and communications experts to make your message soar.

Click here for more information.  Register soon as places are capped to ensure optimal small-group learning.

Dr Amy Price