Author Archives: Carl Heneghan

About Carl Heneghan

Carl is Professor of EBM & Director of CEBM at the University of Oxford. He is also a GP and tweets @carlheneghan. He has an active interest in discovering the truth behind health research findings.

Developing Leaders of Tomorrow in Evidence-Based Medicine

By Peter J Gill

More than two decades have passed since the late David Sackett coined the term evidence-based medicine (EBM) to refer to decision-making that incorporates clinical expertise, the best available evidence and patient preferences. The early era of EBM saw the emergence of a cohort of leaders, mainly in academia, who brought this simple concept into mainstream medicine. These leaders, many of whom are now mid to late career, include Gordon Guyatt, Sharon Straus, Brian Haynes, Paul Glasziou and Iain Chalmers, among several others. Meanwhile, some provocatively suggest that the evidence-based medicine movement is in crisis, partly because its agenda has been hijacked by special interest groups. Who will be the voice of evidence-based medicine when these leaders retire, and what does the future hold for the movement?

Evidence Live, a joint partnership between the Centre for Evidence-Based Medicine (CEBM) and The BMJ, is the central meeting point for the EBM community. Since its inception, Evidence Live has worked tirelessly to include the voice of students, young doctors and early career researchers by soliciting submissions on pertinent questions, offering discounted conference places, hosting sessions specifically for students and publishing top submissions in the Student BMJ. Yet over the past three Evidence Live conferences it has become clear that there is a ‘generational gap’ between those who currently lead the EBM debate and those who use EBM every day, with notable exceptions such as Kamal Mahtani and An-Wen Chan.

There are several reasons why this generational gap has emerged. First, given the growing complexity of generating and synthesizing research, it has become prohibitively difficult for all but a very few to participate; most of the low-hanging fruit has been plucked. Second, the upcoming generation of leaders is naïve to the time before evidence-based medicine (the ‘wild wild west’) and may therefore take it for granted. Third, the current paradigm of evidence-based medicine is framed by the old one; such a structure may inherently inhibit, and fail to foster, new and potentially revolutionary ideas that challenge the status quo. Fourth, training and time commitments limit students’ opportunities to participate in such initiatives.

Despite these challenges, notable examples of younger leaders have emerged. For example, the Students 4 Best Evidence website was launched in 2013, supported by Cochrane UK, to bring together students interested in evidence-based healthcare; it gives students an opportunity to read and write about EBM. Another example builds on the success of the Choosing Wisely campaign in North America: a group of Canadian medical student leaders recently met to brainstorm ways to reduce unnecessary tests and treatments. By asking provocative questions, they have prompted certain professors to routinely include Choosing Wisely recommendations in their lecture slides.

But more is needed. To safeguard evidence-based medicine for the next twenty years, it is imperative to foster the development of young leaders, including students, academics, entrepreneurs, clinicians, economists, political scientists, sociologists, journalists, patients, analysts, accountants and others. For the past 20 years, the CEBM in Oxford has played a crucial role in disseminating EBM globally through teaching workshops, research projects and advocacy. In particular, beyond teaching courses both in Oxford and internationally, CEBM oversees an entire MSc and DPhil program in Evidence-Based Health Care. The launch of the Evidence Live conference series has provided a broader venue in which the principles of, and issues in, evidence-based medicine can be debated and discussed, and it has proven to be a major success.

One theme of Evidence Live 2016, ‘Training the Next Generation of Leaders in Applied Evidence’, aims to address these challenges. We want to identify young leaders in EBM and assist them in becoming future leaders in healthcare. At Evidence Live 2016 we will host the first ever workshop for Young Leaders in Evidence-Based Medicine. This networking event will bring young leaders together to discuss mentorship, career trajectories, advice from successful young professionals, and much more. The workshop will be the first of many and, in time, may evolve into an entire pre-conference workshop or leadership program.

For EBM to continue to inform practice, it must be salient to those on the front line, those who will take over the helm of decision-making in the next two decades. The debate about its future should include these thinkers. We hope that early identification and subsequent mentorship will increase the likelihood that these individuals pursue careers focused on leadership and become the face of evidence-based medicine.

But for this initiative to be successful, we need young leaders to attend! Evidence Live 2016 is offering five free places for the top five submitted articles, which will be published in the Student BMJ. There are also 50 places for young leaders at a reduced rate of £155. And the workshop is open to all, not just clinicians and researchers; we want economists, political scientists, sociologists, journalists, patients and others. Register online before Evidence Live 2016 sells out!

This blog was written by Peter J Gill, member of the Evidence Live 2016 Steering Committee, paediatric resident at the Hospital for Sick Children at the University of Toronto and an Honorary Fellow at the Centre for Evidence-Based Medicine.

Evidence Live and Kicking (Part 1)

“Evidence based medicine: a movement in crisis?”

That 2014 editorial by Trisha Greenhalgh and colleagues echoed through the hallways leading up to this year’s Evidence Live conference, on now at Oxford University.

Day 1 down, and the question is well and truly answered. The “EBM movement” is facing solid bombardment – and about time, too, I reckon. But the discussion of scientific evidence itself, and its role in medical practice and healthcare decision making – that’s livelier than ever.

Originally posted April 14th 2015 by Hilda Bastian, EL2016 Steering Committee.
Read the whole blog on PLOS HERE

Evidence Live 2015: Plenty of food for thought


Dr Annette Plüddemann, interim Course Director, MSc in Evidence-Based Health Care

Evidence Live 2015 took place at the Oxford University Examination Schools on two surprisingly sunny days in April. There were lots of fantastic talks from a range of speakers, providing plenty of food for thought and lively debate. Here’s a summary of what I took away from the conference:

“The right evidence for this patient”

Or, as Trish Greenhalgh put it, “Is the management of this patient in these circumstances an appropriate (‘real’) or inappropriate (‘rubbish’) application of the principles of EBM?” Iona Heath echoed this sentiment: “Evidence from science is essential, but not sufficient when dealing with individual patients.” We therefore need to practise patient-focused individualisation of the evidence, and that means we need more work on the external validity of studies. By doing so we may find that for some things we don’t need much more evidence and that, as Trish put it, “more research isn’t needed”. Richard Peto spoke out against excessive and spurious subgroup analyses, stating that “virtually all subgroup analyses are rubbish”. We should be wary of their findings, particularly when the treatment effect is small. It also means that all clinicians need to be able to appraise evidence. To help with this, Rod Jackson has developed the GATE tool for critical appraisal, which – if you haven’t heard of it – is worth looking into (www.epiq.co.nz). It can be used for any study design; in fact, Rod offered £100 to anyone who could come up with a study design where the GATE framework could not be used.

“We have changed the world”

A session on the AllTrials campaign with Ben Goldacre, Carl Heneghan, Iain Chalmers and Sile Lane (Sense about Science) opened with this extraordinary statement. To date, about 540 organisations and 83,000 people have signed up to the AllTrials campaign; if you haven’t already, I would encourage you to do so as well. Partly in response to pressure from the campaign – and coincidentally on the day this session was held – the WHO published a Statement on Public Disclosure of Clinical Trials Results. It contained the following strong statements: (1) results from clinical trials should be publicly reported within 12 months of the trial’s end; (2) results from previously unpublished trials should be made publicly available; and (3) organisations and governments should implement measures to achieve this. Carl presented work he is involved in, auditing the registration and publication of clinical trials undertaken in Oxford as part of the NIHR Biomedical Research Centre and Unit. He called this “getting our house in order” and challenged delegates to go back to their institutions and do the same.

“Dangerous ideas”

Delegates were challenged to come with a “dangerous idea” – an idea that was daring because it actually might work! Some very interesting suggestions came up; have a look for yourself on the BMJ YouTube channel and do let us know about your own “dangerous ideas” for EBM.

And lastly, I have to briefly mention diagnostic studies. Patrick Bossuyt likened mastering the 2×2 table to judo… is it really that tough? But beyond the numbers, he reminded us that “diagnosis is not an end in itself; the ultimate value is the difference in health outcomes resulting from the test”.
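For anyone who wants to revisit what the 2×2 table yields, here is a minimal sketch in Python; the counts are invented purely for illustration, not taken from anything presented at the conference:

```python
# A 2x2 diagnostic table and the standard accuracy measures derived from it.
# The counts below are made up purely to illustrate the calculations.
tp, fp = 90, 30   # test positive: with disease / without disease
fn, tn = 10, 270  # test negative: with disease / without disease

sensitivity = tp / (tp + fn)  # 0.90: P(test positive | disease)
specificity = tn / (tn + fp)  # 0.90: P(test negative | no disease)
ppv = tp / (tp + fp)          # 0.75: P(disease | test positive)
npv = tn / (tn + fn)          # ~0.96: P(no disease | test negative)

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
      f"PPV={ppv:.2f}, NPV={npv:.2f}")
```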

Overall, compared with the previous Evidence Live conference held in 2013, there was a shift away from merely focusing on issues such as study flaws and lack of access to trial information towards thinking about how to implement evidence into practice for the benefit of individual patients. Planning is already underway for the next Evidence Live, which will be held in Oxford in June 2016. We do hope to see you there!

Quality decision making – a dangerous idea to fix EBM?

Original blog by Sharon Mickan – KT@OX

This blog has been written to complement a podcast made at the recent Evidence Live conference in Oxford. Attendees were asked to propose a dangerous idea in relation to the future of EBM, and then to suggest a solution. This idea is about recognising, and making explicit, the quality of all components of clinical decision making in evidence-based health care.

Read More

Time for a Second Tipping Point in Evidence-Based Medicine


I am currently reading The Tipping Point: How Little Things Can Make a Big Difference, a book published in 2000 by the journalist and writer Malcolm Gladwell. In medicine, we seldom think laterally and seek ideas from other disciplines; I hoped Gladwell could inspire some radical ideas in anticipation of Evidence Live 2015.

The Tipping Point outlines why certain trends, ideas or behaviours suddenly ‘take off’, using examples such as teenage smoking and the AIDS epidemic. In describing why certain trends reach their ‘tipping point’, Gladwell proposes three key concepts: the ‘Law of the Few’, the ‘Stickiness Factor’ and the ‘Power of Context’. In particular, the ‘Law of the Few’ suggests that certain people play an integral role in epidemics: connectors, mavens and salesmen. I will describe how these individuals relate to the initial tipping point in evidence-based medicine.

  1. Connectors: bringing people together

Connectors are people with “a special gift for bringing the world together.” They are people who seem to know everyone, and who bring seemingly diverse groups of people together. Gladwell describes that while most people have ‘six degrees of separation’, some only have three or four degrees; they are the connectors, and they play a powerful role transmitting messages and igniting epidemics.

Who are the connectors that tipped the scales for evidence-based medicine? The first person who comes to mind is Sir Iain Chalmers, co-founder of the Cochrane Collaboration. When I first met Chalmers at his office in 2010, before sitting down, he asked me to sign his guestbook. I did so, somewhat embarrassed, unclear why a young student warranted space in his book. Yet the guestbook illustrates Chalmers’ meticulous attention to people, and I have witnessed countless examples of him ‘connecting’ people. Undoubtedly, his diverse social circles and innate ability to connect people, combined with his captivating speaking skills, helped lay the foundation for the Cochrane Collaboration in 1992. I would argue this achievement was a key tipping point in evidence-based medicine.

  2. Mavens: the devils in the details

Mavens are individuals who accumulate knowledge. In the economics literature they are referred to as “price vigilantes” because they meticulously monitor prices, look for discrepancies and tell others when they find them, not for personal gain but simply because they want to help.

The equivalent ‘evidence vigilantes’ in medicine are the Cochrane Collaboration, an international organisation of people who methodically seek to compile all clinical information so that clinicians and patients can make informed decisions. For example, one author group has spent the past decade trying to find all clinical information about the drugs used to treat influenza called neuraminidase inhibitors, otherwise known as Tamiflu and Relenza. These mavens discovered that many trials funded by pharmaceutical companies were never published and, more concerning, that only limited data were presented to drug regulators.

Early work by these evidence mavens made public the serious limitations in how clinical studies were designed, conducted and reported. Such findings led to a number of important changes, such as trial registration and the CONSORT Statement. Mavens can be markedly persuasive simply because they have nothing to gain from sharing the information.

  3. Salesmen: communicating the message effectively and persuasively

Lastly, salesmen are people who persuade. They are the charming, charismatic and influential individuals who seem trustworthy, sincere and convincing. The pharmaceutical industry has mastered this role, for example by hiring young, good-looking sales representatives who arrive at doctors’ offices with a free lunch, free drug samples and a well-designed sales pitch for why the new therapy is better than the competitor’s. But the employee may not provide information about other drugs, may highlight benefits without discussing harms, and fundamentally has a conflict of interest.

An excellent example of a salesman in evidence-based medicine is Ben Goldacre, journalist and author of the best-selling books Bad Science and Bad Pharma. Both books respond to widespread misuse of science by the media, the public and the pharmaceutical industry. For example, Goldacre challenges the (absent) link between autism and the MMR vaccine, and details the unethical tactics used by drug companies.

Goldacre is a popular and well-liked individual because he is charming, easy-going and affable. He does not come across as abrasive or aggressive, but rather as enthusiastic and funny. Goldacre is remarkably likeable which makes him a good salesman.

Is it time for another tipping point in evidence-based medicine?

Applying Gladwell’s theory of the role of key individuals in epidemics, I have given examples of how connectors, mavens and salesmen have shaped evidence-based medicine. But why do some say that evidence-based medicine is a movement in crisis? I suggest that over the past decade the scales have shifted: the connectors, mavens and salesmen have disproportionately distorted the use of evidence-based medicine away from its core values.

But we must also find fault in ourselves. For too long we have relied on traditional academia to demonstrate the self-evident benefits of evidence-based medicine. I would argue such an approach is complacent, passively assuming, for example, that the conclusions of a high-quality Cochrane systematic review will change practice. In Gladwell’s terms, this approach relies on connectors and mavens alone.

The evidence-based medicine movement must now more actively embrace the role of salesmen to tip the scales and re-align the agenda with its core principles: integrating the best available evidence with clinical judgement and patient values. New initiatives such as Choosing Wisely and AllTrials are examples of taking persuasive messages to the public.

Evidence-based medicine has already had one tipping point but it is time for another. It starts by bringing together the connectors, mavens and salesmen; all of whom will be at Evidence Live 2015. On April 13th and 14th in Oxford, let’s start another epidemic in evidence-based medicine.

——–

This blog was written by Peter Gill, a paediatric resident at the Hospital for Sick Children at the University of Toronto and an Honorary Fellow at the Centre for Evidence-Based Medicine.

Irritated by research reporting? I am too. The which, the what, and the way…

Has anyone else noticed the dire state of research reporting? After recently analyzing 500 research articles for a systematic literature review I was shocked. “Why don’t they report the number of patients in each treatment group; why did patients drop out of the study; why are the statistics so poorly described!?” I irritably demanded of the PhD student one desk over. He sighed, shrugged, and turned back to his work without answering.

All my questions boiled down to a lack of good research reporting. As a result, I was forced to exclude relevant studies, mark ‘unclear’ on my ‘quality of evidence’ form countless times and, more than anything, was left concerned about the volume of research that is wasted due to poor reporting.

I therefore decided to take a brief detour from my PhD thesis – not a bad idea from time to time – to get an idea of how bad the problem is.

What I found did not surprise me: poor reporting is one of the biggest challenges currently facing evidence-based medicine.

I think there are three issues with research reporting: the which, the what, and the way.

Which studies get reported? Only a select few.

I thought excluding a handful of published studies from my review was a waste. As it turns out, that’s just the tip of the iceberg: up to half of all clinical trials are never even published.1

Why spend millions of pounds – and thousands of work hours – conducting a randomized controlled trial and then not publish it?

Ask big pharma, who routinely withhold trial data. The influenza drug Tamiflu (oseltamivir) is a well-known example. Roche, the manufacturer of Tamiflu, funded clinical trials to test the effectiveness of the drug but subsequently withheld huge portions of the data from publication. Only after a drawn-out four-year public relations campaign were researchers finally able to reanalyze the full set of data. The investigators – Tom Jefferson, Carl Heneghan, and colleagues – found Tamiflu to be less effective and potentially more harmful than reported in the previously limited set of published studies.2-4 However, this fully informed review came only after governments around the world had spent hundreds of millions stockpiling the drug.5

Even if studies are not withheld on purpose, particular types of studies are more likely to be published than others. Randomized controlled trials (RCTs) are more likely to be published than observational studies, and studies with statistically significant findings are more likely to be published than those with non-significant findings.6

Why are particular types of research more likely to get published?

Academic journals may prefer to publish “gold standard” studies (RCTs) with significant findings as they attract a lot of attention. Meanwhile, researchers may be reluctant to spend time writing up and submitting observational studies with non-statistically significant results for fear of rejection by editors.

Whatever the reason for non-publication of data, the end result is the same: only a subset of research findings is reported, and the published literature is a biased representation of all conducted studies. Clinicians and policy makers are informed by an incomplete evidence base and may be wasting resources on ineffective or even harmful interventions.

What gets reported in published studies? Poor, fudged, or missing descriptions.

Despite the availability of 81 published guidelines for reporting health research, missing information and poorly defined interventions, outcomes, and analyses still plague the medical literature.7

Studies have found that adequate intervention descriptions are available in only approximately 60% of clinical trial reports.8 Furthermore, up to 50% of published randomized trials alter their primary outcome of interest between publication of the study protocol and final reporting of results.6

Why does reporting deviate so much from established standards?

Lack of awareness, oversight, attempts to salvage a study with non-significant results, or intentional efforts to mislead.

Poor reporting translates to less confidence in study results, in the reported effectiveness of interventions, and in the quality of evidence. Reporting must improve to reduce uncertainty in the evidence that informs decision making.

In what way is research reported? Endless, cryptic, and small print for evidence users. Sex, glamour, and exaggerations for the public.

Over twenty years ago, evidence-based medicine (EBM) was conceived as a new paradigm for clinical teaching and practice that would combine research evidence with clinical expertise and patient needs and preferences.9 According to Trish Greenhalgh and colleagues, we still don’t have it right, in part because research evidence is not presented in an accessible way for end-users.10

Why is research not accessible for end-users?

Rarely do research articles include short plain-language summaries, creative or appealing infographics, or decision aids to help make the evidence usable for clinicians, guideline developers, policy makers, and patients. Even for trained researchers, small print and extensive results tables are a barrier to using evidence.

If research evidence is not accessible for evidence users the paradigm of EBM falls apart.

A separate issue exists for the way research is reported to the public: research results are often exaggerated in the mainstream media. A newspaper headline might read: “Oxford researchers show that green tea prevents cancer”. A quick glance at the actual article would reveal that, of course, they have not. The researchers’ observational study (no treatment actually administered) may have shown a slightly lower occurrence of cancer in patients who drink more green tea as part of their daily routine. Yes, it’s an association. No, it does not mean green tea prevents cancer.

Why are research results exaggerated in the mainstream media?

Big headlines sell newspapers and raise institutional and researcher profiles.

While it’s easy to blame journalists for such sensationalism, academic institutions play a role as well. Over a third of research-related press releases sent by UK universities to journalists contain exaggerated advice, exaggerated causal claims, or exaggerated inferences to humans from animal research.11

The detriment of glorified research results is an ill-informed public that may change lifestyle choices and develop treatment preferences based on inaccurate information.

So, what has this detour from my PhD shown me? I’m not alone in my disenchantment with research reporting. Deficits in reporting have been widely recognized, the scope of the problem extends beyond published articles, and the consequences are far worse than a minor irritation while I complete my systematic review.

However, all is not lost. The world’s experts in evidence-based medicine have solutions to offer and will be presenting them at Evidence Live 2015 on April 13th and 14th at the University of Oxford.


Nik Bobrovitz is a Clarendon Scholar and PhD Student at the Nuffield Department of Primary Care Health Sciences, University of Oxford. His doctoral research focuses on the use of unscheduled secondary care including emergency hospital admissions. He can be reached at niklas.bobrovitz@gtc.ox.ac.uk or on Twitter @nikbobrovitz

References

  1. Schmucker C, Schell LK, Portalupi S, et al. Extent of non-publication in cohorts of studies approved by research ethics committees or included in trial registries. PLoS ONE 2014;9:e114023.
  2. Loder E, Tovey D, Godlee F. The Tamiflu trials. BMJ 2014;348:g2630.
  3. Jefferson T, Jones MA, Doshi P, et al. Neuraminidase inhibitors for preventing and treating influenza in healthy adults and children. Cochrane Database of Systematic Reviews 2014;4:CD008965.
  4. Cohen D. Roche offers researchers access to all Tamiflu trials. BMJ 2013;346:f2157.
  5. Jack A. Tamiflu: “a nice little earner”. BMJ 2014;348:g2524.
  6. Dwan K, Gamble C, Williamson PR, Kirkham JJ. Systematic review of the empirical evidence of study publication bias and outcome reporting bias – an updated review. PLoS ONE 2013;8:e66844.
  7. Moher D, Weeks L, Ocampo M, et al. Describing reporting guidelines for health research: a systematic review. Journal of Clinical Epidemiology 2011;64:718-42.
  8. Glasziou P, Meats E, Heneghan C, Shepperd S. What is missing from descriptions of treatment in trials and reviews? BMJ 2008;336:1472-4.
  9. Evidence-Based Medicine Working Group. Evidence-based medicine. A new approach to teaching the practice of medicine. JAMA 1992;268:2420-5.
  10. Greenhalgh T, Howick J, Maskrey N. Evidence based medicine: a movement in crisis? BMJ 2014;348:g3725.
  11. Sumner P, Vivian-Griffiths S, Boivin J, et al. The association between exaggeration in health related science news and academic press releases: retrospective observational study. BMJ 2014;349.

Evidence shouldn’t be a luxury: increasing capacity for evidence-based health care in low-resource settings

As we approach Evidence Live 2015, I’d like to begin a (hopefully engaging and productive) dialogue on the role of evidence-based medicine in low-resource settings (LRS), particularly in low- and middle-income countries (LMIC). One of our focal themes this year is ‘EBM across the globe: How the best evidence can improve global health,’ and this post is meant to orient us with respect to one facet of this theme. As a small case study, let’s explore the role of primary health care in the management of noncommunicable diseases (NCDs).

NCDs, including cardiovascular diseases, diabetes, cancer, and chronic respiratory diseases, are the leading cause of death worldwide. Each year NCDs kill 38 million people (68% of global annual deaths), and 29 million of these deaths occur in LMIC. Cardiovascular diseases alone account for 17.5 million deaths worldwide each year.

LMIC are now dealing with an unprecedented epidemiological shift toward a burden of NCDs and, in many countries, a double burden of NCDs and communicable diseases. Under-powered health systems, high demand for care, and a lack of political will have stifled progress in these jurisdictions toward implementing primary health care that addresses this rapidly changing epidemiological landscape.

In response, the World Health Organisation developed and published, in 2010, the Package of Essential Noncommunicable Disease Interventions for Low-Resource Settings (WHO PEN) to promote the management of NCDs through a primary health care approach. WHO PEN outlines the minimum medicines, technologies, and policies that national and sub-national health systems need in place to have the capacity for primary health care for NCDs.

These strategies must be adapted to local contexts using endogenous, high-quality evidence. However, there is a disproportionately low amount of research originating from LMIC. We need evidence for these jurisdictions, from these jurisdictions, in order to improve capacity for evidence-based health care. Our attention should be drawn to the availability of education and resources that allow for the generation, appraisal, and application of evidence for clinical practice and health care policy.

We should ask ourselves: How do you stimulate grass-roots research? Who is responsible for increasing the capacity for research in low-resource settings? Is the current evidence base generalisable to these settings and patients? How do we disseminate quality research? How do we promote the uptake and use of evidence in practice?

When we meet in April at Evidence Live 2015, I hope we can discuss this topic and the role we can play as EBM researchers, teachers, and practitioners in improving access to evidence-based health care. I would encourage you to invite a new colleague to coffee at the conference and chat about your thoughts on this topic, and perhaps one of these questions:

How can we increase participation and representation of low-resource jurisdictions in the EBM community, and at Evidence Live?

How do we uphold evidence-based practice in low-resource settings? What are the challenges when doing so?

What are the educational opportunities for these settings?

How can we adapt the knowledge, skills, and policies employed in high-income countries and make them available to low-resource settings?

How can we collaborate with universities and practitioners in low-resource settings?

I’m looking forward to reading, listening, and speaking about this theme with delegates in April. In the meantime, feel free to share your ideas on twitter by tweeting us @EvidenceLive and using #EvidenceLive.


Dylan Collins is a Rhodes Scholar and PhD Student at the Nuffield Department of Primary Care Health Sciences, University of Oxford. His doctoral research focuses on primary health care for the management of NCDs in low-resource settings. He can be reached on Twitter @dylanrjcollins.

Time-dependent bias in observational studies of oseltamivir


Dr Mark Jones

[This blog post was written by Dr Mark Jones and originally published on the CEBM blog on 11 Feb 2015.]

Time-dependent bias is not an issue in randomised studies because treatment (including placebo) is given at the beginning of follow-up for each treatment group compared. In observational studies, however, treatment exposure often occurs some time after initiation of the study. An analysis that does not take account of this delay misclassifies time at risk of the outcome prior to treatment as being associated with treatment when in fact it is associated with no treatment.

For example, suppose the treatment group has 100 patients followed for an average of 10 days from hospital admission and 20 die: the naïve mortality rate is 0.020 deaths per patient-day (20 deaths in 10×100 = 1,000 patient-days). If 100 untreated “control” patients are followed for an average of 10 days and 25 die, then the crude mortality rate is 0.025. The ratio of mortality rates (0.80) suggests treatment reduces the risk of death by 20%. However, if the average delay from admission to treatment is 1 day, we have misallocated time at risk: 100 patient-days (1 day for each of the 100 treated patients) should be subtracted from the treatment group and added to the no-treatment group. This leads to corrected mortality rates of 0.0222 for treated patients and 0.0227 for untreated patients, and hence a reduced mortality rate ratio of 0.98.

This example illustrates how taking account of time-dependent exposure shrinks the apparent benefit of treatment (from a 20% risk reduction to a 2% risk reduction).
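The arithmetic is simple enough to reproduce directly; here is a minimal sketch using the hypothetical numbers from the example above:

```python
# Naive vs. corrected mortality rates when treatment starts, on average,
# one day after admission. Numbers are the hypothetical ones from the text.
treated_n, treated_deaths = 100, 20
control_n, control_deaths = 100, 25
follow_up_days = 10  # average follow-up per patient
delay_days = 1       # average delay from admission to treatment

# Naive analysis: every treated patient's full follow-up counts as treated time.
naive_treated = treated_deaths / (treated_n * follow_up_days)  # 0.020
naive_control = control_deaths / (control_n * follow_up_days)  # 0.025
print(naive_treated / naive_control)  # 0.80 -> apparent 20% risk reduction

# Corrected analysis: reallocate the pre-treatment patient-days as untreated.
misallocated = treated_n * delay_days  # 100 patient-days
corrected_treated = treated_deaths / (treated_n * follow_up_days - misallocated)
corrected_control = control_deaths / (control_n * follow_up_days + misallocated)
print(corrected_treated / corrected_control)  # ~0.98 -> only ~2% risk reduction
```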

[Figure: illustration of time-dependent bias]

In fact, Beyersmann et al prove this is always the case: any reported effect of a time-dependent treatment exposure on outcome is at risk of bias if the time-dependent nature of treatment is not appropriately accounted for. Furthermore, the bias is always in the direction of making treatment look better than it really is.

This point is illustrated further in an observational study of oseltamivir for the treatment of critically ill patients in Canada during the 2009 influenza pandemic (Kumar et al 2009). Of 578 patients, 540 received oseltamivir (of whom 105 (19%) died) compared with 38 who received no antiviral (of whom 12 (32%) died). A simple chi-square test gives weak evidence of a difference in survival (P=0.072), and Cox regression assuming treatment exposure occurs at hospital admission provides evidence of a reduced risk of death for patients receiving oseltamivir (hazard ratio (HR) = 0.52, 95% CI: 0.29 to 0.95, P=0.033). See Table 1 for a life table and Figure 1 for a Kaplan-Meier plot of the data assuming treatment exposure occurred at hospital admission.

An alternative analysis that takes into account the fact that treatment with oseltamivir did not occur at hospital admission, but rather at a mean of 0.62 days (range 0 to 45 days) after admission, shows a markedly different result. Cox regression assuming time-dependent treatment exposure gives no evidence of a reduced risk of death for patients receiving oseltamivir (HR = 0.87, 95% CI: 0.48 to 1.61, P=0.66). See Table 2 for a life table and Figure 2 for a survival plot of the data using the method of Simon and Makuch (1984).

The life tables and survival plots are shown for the first 12 days, as this is where most of the mortality occurred. When standard survival analysis is used, there is an implicit assumption that treatment exposure begins at baseline, which in this case is hospital admission. Therefore at baseline there were 540 patients at risk in the oseltamivir group and 38 patients at risk in the no-treatment group (Table 1, Figure 1). This incorrect assumption is what leads to time-dependent bias.

In the alternative analysis, the timing of exposure to treatment is correctly taken into account by considering how many patients were exposed or unexposed to treatment on a daily basis. (If we had the data we could do this more accurately, for example on an hourly basis.) Table 2 shows that in fact only 423 patients were exposed to oseltamivir in the first 24 hours of hospital stay. By simple subtraction we also know that 155 patients had no exposure to oseltamivir during the first 24 hours of hospitalisation. These more accurate data lead to more accurate estimates of cumulative mortality; hourly data would reduce time-dependent bias further still.
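In practice, such analyses are fitted with a Cox model on data in long (“counting process”) format, where each patient contributes one row per interval of follow-up and the treatment indicator switches when treatment actually starts. The sketch below is a generic illustration using the Python lifelines library with invented data; it is not the analysis or software used in the Kumar study:

```python
# A generic sketch of a Cox regression with a time-dependent treatment
# indicator, using the lifelines library. The data are invented and far too
# small to estimate anything reliably; they only show the long-format layout
# in which pre-treatment days are correctly counted as untreated time.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Three hypothetical patients:
#  - patient 1 starts the antiviral on day 1 and survives to day 9
#  - patient 2 is never treated and dies on day 4
#  - patient 3 starts the antiviral on day 2 and dies on day 6
df = pd.DataFrame({
    "id":      [1, 1, 2, 3, 3],
    "start":   [0, 1, 0, 0, 2],
    "stop":    [1, 9, 4, 2, 6],
    "treated": [0, 1, 0, 0, 1],
    "event":   [0, 0, 1, 0, 1],  # 1 = died at the end of the interval
})

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()  # HR for 'treated', with pre-treatment days counted as untreated
```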

Time-dependent bias is an important issue for any observational study that assesses the effect of a treatment exposure occurring some time after initiation of the study. In the context of oseltamivir for the treatment of influenza, there appears to be no published study that has addressed this bias appropriately. The issue is particularly important for studies with a high mortality rate (i.e. severely ill patients) and studies in which the majority of patients are treated. The reasons are that high mortality will tend to increase the bias, as more patients die before treatment can be given; and few untreated patients means that those dying before initiation of treatment will tend to have a large influence on the odds of dying in the untreated group, because most patients who survive long enough to get treatment do get treatment.

Acknowledgement: I am very grateful to Anand Kumar and Rob Fowler for providing the Canadian individual patient data.

References

Beyersmann, J., Gastmeier, P., Wolkewitz, M., Schumacher, M. An easy mathematical proof showed that time-dependent bias inevitably leads to biased effect estimation. Journal of Clinical Epidemiology. 2008; 61: 1216-21.

Kumar, A., Zarychanski, R., Pinto, R., et al. Critically Ill Patients With 2009 Influenza A(H1N1) Infection in Canada.  JAMA. 2009; 302(17):1872-9.

Simon, R. and Makuch, R. W. A non-parametric graphical representation of the relationship between survival and the occurrence of an event: Application to responder versus non-responder bias. Statistics in Medicine. 1984; 3: 35-44.

Table 1: Time from admission to death by time-fixed treatment status (AV = antiviral)

Days since admission | At risk (AV) | Dead (AV) | Cumulative mortality % (AV) | At risk (no AV) | Dead (no AV) | Cumulative mortality % (no AV)
1  | 540 | 4  | 0.7  | 38 | 3 | 7.9
2  | 536 | 14 | 3.3  | 35 | 3 | 15.8
3  | 519 | 4  | 4.1  | 32 | 0 | 15.8
4  | 507 | 2  | 4.5  | 31 | 0 | 15.8
5  | 498 | 3  | 5.0  | 29 | 0 | 15.8
6  | 485 | 4  | 5.8  | 27 | 1 | 18.9
7  | 467 | 6  | 7.0  | 26 | 1 | 22.0
8  | 449 | 4  | 7.9  | 25 | 0 | 22.0
9  | 441 | 4  | 8.7  | 23 | 1 | 25.4
10 | 422 | 8  | 10.4 | 22 | 0 | 25.4
11 | 394 | 7  | 12.0 | 22 | 1 | 28.8
12 | 375 | 6  | 13.4 | 20 | 0 | 28.8

Figure 1: Kaplan-Meier plot of time to death (TF = Tamiflu[oseltamivir])


Table 2: Time from admission to death by time-dependent treatment status (AV = antiviral)

Days since admission | At risk (AV) | Dead (AV) | Cumulative mortality % (AV) | At risk (no AV) | Dead (no AV) | Cumulative mortality % (no AV)
1  | 423 | 4  | 1.0  | 155 | 3 | 2.0
2  | 484 | 14 | 3.9  | 87  | 3 | 5.5
3  | 487 | 4  | 4.7  | 64  | 0 | 5.5
4  | 485 | 2  | 5.1  | 53  | 0 | 5.5
5  | 481 | 3  | 5.7  | 46  | 0 | 5.5
6  | 472 | 4  | 6.6  | 40  | 1 | 8.0
7  | 459 | 6  | 7.9  | 34  | 1 | 11.0
8  | 442 | 4  | 8.8  | 32  | 0 | 11.0
9  | 434 | 4  | 9.7  | 30  | 1 | 14.4
10 | 415 | 8  | 11.7 | 28  | 0 | 14.4
11 | 388 | 7  | 13.5 | 28  | 1 | 18.0
12 | 370 | 6  | 14.8 | 25  | 0 | 18.0

Figure 2: Survival plot for time dependent treatment exposure (TF = Tamiflu[oseltamivir])


Outdated consent rules a barrier to improving children’s health care

The evidence to support much of children’s healthcare is limited. Ten years ago, randomized controlled trials in adults were increasing ten times faster than pediatric ones. Unfortunately, the trend does not seem to have changed, and the gap in evidence between adult and children’s healthcare continues to widen. Randomized controlled trials are the best ‘fair tests’ we have in medicine when the choice between treatments is unclear, and they help clarify whether new treatments are better or worse than routine ones.

When there is uncertainty in the practice of medicine, some argue that it is unethical not to enroll patients in randomized trials. Not doing so is wasteful and actively maintains uncertainty rather than using such opportunities to improve patient care. Yet traditional randomized controlled trials are technically difficult, costly, time-consuming, complicated, and often do not apply to average patients outside the study. Trials in children are especially hard because of parental reluctance and small numbers.

What happens when two or more treatments are considered routine? For example, several different steroid regimens are believed to prevent complications in children with asthma attacks, but it is unclear whether one works better than another. Comparative effectiveness research is the term coined for trials that compare such routine treatments. These are essentially large-scale randomized controlled trials embedded in the everyday practice of medicine; by design, they minimize the barriers faced by traditional trials, and others have shown they are feasible.

Comparative effectiveness research is important. The basic assumption that ‘routine’ treatments are harmless may be false. Treatments that have been used routinely for decades are seldom based on robust studies, nor would they be held to a similar standard if they were unveiled today. For example, the widespread use of oxygen for babies in the nursery led to serious eye problems, including blindness. What treatments used routinely today may have similar harms?

But comparative effectiveness studies challenge the ethical framework that is steadfastly defended by Institutional Review Boards, particularly when studies involve children. These regulatory committees require excruciatingly detailed consent documents, which outline every possible risk and benefit of enrolling in the study. These requirements make it difficult to recruit patients and act as a barrier to getting doctors involved in research. While such consents make perfect sense for traditional trials, where new therapies with unknown benefits and risks are being tested, they make much less sense when the therapies being tested are already considered the standard of care. The end result: a failure to advance medical knowledge around uncertainty, and the maintenance of the minimal evidence base available to guide children’s healthcare.

Here lies the rub. If there are two readily acceptable treatments used in routine practice, doctors choose between them for various reasons, such as personal preference, anecdotal experience, pharmaceutical industry influence, or cost. Yet if the same doctor wants to formally study a treatment to determine whether it does more good than harm, a costly and burdensome system awaits. Pediatrician Richard Smithells summarized this conundrum: “I need permission to give a drug to half of my patients, but not to give it to them all.”

Several authors have proposed changing the current Institutional Review Boards’ informed consent process to better reflect the actual risk of comparative effectiveness research studies and to reduce the barriers to conducting such research in children.

In certain situations, where risks to patients are minimal or no greater than those of usual care, informed consent may not be required at all. Such studies would still require ethical review and approval, but not consent from the individuals enrolling in the study.

Others have argued for a ‘streamlined’ consent process that aims to better reflect the way decisions about treatments are actually made by doctors and patients. When the choice between two equally acceptable treatments is unclear, the doctor and parent could have a conversation describing this uncertainty. Verbal informed consent for study enrolment and randomization would be documented in the patient record, in the same way we already document discussions of the risks and benefits of initiating these treatments. Such a system would avoid exaggerating the actual risk of the research, which creates unnecessary deterrents to participation in comparative effectiveness research.

There is a real tension between Institutional Review Boards’ necessary role in protecting the vulnerable (particularly in research involving children) on the one hand, and the resulting inability to move the evidence base for children’s healthcare forward on the other. But changes to how we conduct research involving children are needed to improve the quality of children’s healthcare.

We suggest that where clinical uncertainty is left unresolved, we may be unintentionally causing harm to our young patients. Parents, clinicians and policy makers may well find this unacceptable; if so, the current Institutional Review Board framework for informed consent needs to be modified to reflect the actual risks of comparative effectiveness studies if the quality of children’s healthcare is to improve.

This blog was written by Peter Gill and Jonathan Maguire and was originally posted on Healthy Debate.

Should I prescribe anti-virals to prevent flu for nursing home patients?

[This blog post was written by Helen Macdonald and originally published on the BMJ Blog on 21 Jan 2015. It has been reproduced with the permission of the author and the BMJ.]

A news story last week reported scepticism about whether GPs should prescribe the antiviral neuraminidase inhibitors oseltamivir and zanamivir to prevent flu in nursing home patients. Yesterday afternoon, another news story said that some GPs feel pressured by Public Health England to do so. The chair of the BMA General Practitioners Committee’s Clinical and Prescribing Subcommittee, Andrew Green, has said: “Nobody can compel you to do it, but nobody can advise you not to either.” So, as a GP, if the decision to prescribe is mine, how will I decide?

I spent an hour or so having a cursory look (NB: far more time than I would have on a working clinical day)…

Sources of information

The stories highlight discrepancies in key sources of information that GPs might turn to. Public Health England advice for 2014/2015 is predominantly based on the 2008 NICE guidelines. But NICE decided not to update these guidelines after new data emerged; these included some unpublished studies, and fuller data sets underpinning already published studies. These new data, drawn from clinical study reports of drug trials, were published in The BMJ in 2014. The data on oseltamivir and zanamivir inform the latest Cochrane Library review on neuraminidase inhibitors for preventing and treating influenza in adults and children.

The further I read, the more uncertain I became. All the sources draw on a small set of imperfect studies. Some exclude people with chronic illness or include very few patients over 65 years, limiting their utility. Marrying the terminology used in clinical practice with the words used by flu researchers is tricky, and it is hard to understand which symptoms are included in which patterns, or the clinical relevance of some terms such as “asymptomatic” flu.

The conclusions were more favourable towards giving the drug where only the published studies were used, and less favourable where the newer data from the clinical study reports had been added. But in both cases the overall picture seemed uncertain due to an absence of evidence.

Even where sources seemed to agree that a particular study and its findings were worthy of discussion, the experts interpreting the findings seemed divided. For example, in rapid responses online yesterday, the Cochrane authors challenged the impressive-sounding statistics used to support one of the drugs’ use to avert secondary complications in prophylaxis. They agreed that a 0.30 reduction in pneumonia was seen, but are unconvinced about the value of self-reported pneumonia as an outcome, and about the strength of the relative risk statistic. When given as absolute measures, they say, “the risk difference, however was small (0.32%), meaning 311 people would need to be treated to prevent one person self-reporting that they thought they had pneumonia (95% CI 244 to 1086).”
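The number needed to treat quoted there follows directly from the absolute risk difference; a quick, purely illustrative check of the arithmetic:

```python
# The number needed to treat (NNT) is the reciprocal of the absolute risk
# difference. Using the 0.32% risk difference quoted by the Cochrane authors:
risk_difference = 0.0032
nnt = 1 / risk_difference
print(round(nnt))  # 312 -- consistent with the quoted 311, rounding aside
```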

What decision am I making?

Having made little progress, I had a cup of tea and pondered what the aim of prevention was in the first place. In the process I realised that I don’t have a clear understanding of the rationale for preventing flu in nursing homes.

Is it about public health or individuals? Preventing complications, admissions, deaths? Or is it really just a question of an individual and their comfort?

If the aim is a public health one – to prevent spread – would my isolated decision to prescribe or not be helpful? Each nursing home may be served by a number of GP surgeries and doctors; is a co-ordinated approach needed to achieve benefits? And how does that square with another thread mentioned in the news story: that Roche have admitted that oseltamivir does not prevent infection, and that Cochrane found the oseltamivir trials did not demonstrate that it stopped spread, because of their design?

Do we have the evidence to answer these questions? It was a frustrating morning overall, and one that left me with more questions than answers, not just about the drug but about the organisations judging the data. Why haven’t NICE, the regulators, and the medical colleges communicated the implications of the new evidence in a more clinically applicable way? It seems to me that deferring the decision to doctors and patients without clearer information is a bit of a cop-out.