Spin the Odds

I recently attended the Evidence-Based Medicine Live19 conference at Oxford University, where Professor Isabelle Boutron from Paris Descartes University presented a lecture entitled ‘Spin or Distortion of Research Results’. Simply put, research spin is ‘reporting to convince readers that the beneficial effect of the experimental treatment is greater than shown by the results’ (Boutron et al., 2014). In a study of oncology trials, spin was present in 59% of the 92 trials in which the primary outcome was negative (Vera-Badillo et al., 2013). I would argue that spin also affects a large proportion of dental research papers.

To illustrate how subtle this problem can be, I have selected a recent systematic review (SR), posted on the Dental Elf website, regarding pulpotomy (Li et al., 2019). Pulpotomy is the removal of a portion of the diseased pulp, in this case from a decayed permanent tooth, with the intent of maintaining the vitality of the remaining nerve tissue by means of a therapeutic dressing. Li’s SR compared the effectiveness of calcium hydroxide with the newer therapeutic dressing material mineral trioxide aggregate (MTA).

In the abstract Li states that the meta-analysis favours mineral trioxide aggregate (MTA), and in the results section of the SR that ‘MTA had higher success rates in all trials at 12 months (odds ratio 2.23, p = 0.02, I² = 0%)’, finally concluding that ‘mineral trioxide aggregate appears to be the best pulpotomy medicament in carious permanent teeth with pulp exposures’. I do not agree with this conclusion, and would argue that the results show substantial spin. Close appraisal of Li’s paper reveals several methodological problems that have magnified the beneficial effect of MTA.

The first problem concerns the use of reporting guidelines, in this case the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (Moher et al., 2009). The author states this was adhered to, but there is no information regarding registration of a review protocol to establish predefined primary and secondary outcomes or methods of analysis. To quote Shamseer:

‘Without review protocols, how can we be assured that decisions made during the research process aren’t arbitrary, or that the decision to include/exclude studies/data in a review aren’t made in light of knowledge about individual study findings?’ (Shamseer & Moher, 2015)

In the ‘Data synthesis and statistical analysis’ section the author states that the primary and secondary outcomes for this SR were only formulated after data collection. This post hoc selection makes the data vulnerable to selection bias. Additionally, there is no predefined rationale relating to the choice of an appropriate summary measure or method of synthesising the data.

The second problem relates to the post hoc choice of summary measure, in this case the odds ratio, and the use of a fixed-effects model in the meta-analysis (Figure 1).

Figure 1. Forest plot of 12-month clinical success (original).

Of all the options available for analysing the 5 randomised controlled trials, the odds ratio combined with a fixed-effects model produced the largest significant effect size (OR 2.23, p = 0.02). There was no explanation as to why the odds ratio was selected over the relative risk (RR), the risk difference (RD), or the arcsine difference (ASD) for values close to 0 or 1. Since the data for the SR are dichotomous, the three most common effect measures are the odds ratio, the relative risk, and the risk difference.

The authors specifically chose a fixed-effects model for the meta-analysis based on the small number of studies. There are two problems with this. First, there is too much variability between the 5 studies in terms of methodology and patient factors such as age (in 4 studies the average age is approximately 8 years; in one study it is 30 years). Second, a small number of studies does not force us into a fixed-effects model: with 5 studies we can use a random-effects model with the Hartung-Knapp adjustment, which was developed specifically for handling small numbers of studies (Röver, Knapp & Friede, 2015; Guolo & Varin, 2017).

Below I have reanalysed the original data using a more plausible random-effects model (with the Hartung-Knapp adjustment) and RR to show the relative difference between treatments, plus RD to highlight the absolute difference (Figures 2 and 3), using the ‘metabin’ function from the ‘meta’ package in R (Schwarzer, 2007).
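For readers who want to try this kind of reanalysis themselves, a minimal sketch using the meta package follows. The event counts below are placeholders invented for illustration, not the actual data from Li’s five trials, so the output will not reproduce Figures 2 and 3; the argument names follow the 2019-era interface of the package.

    # Minimal sketch of the reanalysis; the counts are illustrative only.
    library(meta)
    d <- data.frame(
      study   = paste("Trial", 1:5),
      event.e = c(38, 25, 41, 30, 22),  # successes at 12 months, MTA arm
      n.e     = c(42, 30, 45, 35, 25),  # patients in MTA arm
      event.c = c(33, 22, 36, 27, 18),  # successes, calcium hydroxide arm
      n.c     = c(42, 30, 45, 35, 25)   # patients in calcium hydroxide arm
    )
    # Random-effects model with the Hartung-Knapp adjustment, relative risk
    # as the summary measure, and a prediction interval; set sm = "RD" to
    # repeat the analysis on the risk-difference scale.
    m <- metabin(event.e, n.e, event.c, n.c, studlab = study, data = d,
                 sm = "RR", comb.fixed = FALSE, comb.random = TRUE,
                 hakn = TRUE, prediction = TRUE)
    forest(m)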

Figure 2. 12-month clinical success using Hartung-Knapp adjustment for random-effects model and relative risk

Figure 3. 12-month clinical success using Hartung-Knapp adjustment for random-effects model and risk difference

Both analyses now show a small effect size (8% to 9%) that slightly favours MTA but is non-significant, as opposed to a 2.23-fold increase in the odds. In the pulpotomy review the OR magnifies the effect size by 51%, using the formula (OR − RR)/OR × 100: with a pooled RR of about 1.09, (2.23 − 1.09)/2.23 × 100 ≈ 51%. This matters because odds ratios are routinely misread: in a paper by Holcomb reviewing 151 studies that used odds ratios, 26% had interpreted the odds ratio as a risk ratio (Holcomb et al., 2001).

There are a couple of further observations to note. Regarding the 5 studies, even combined, one would need 199 individuals in each arm for the comparison to be sufficiently powered (α error prob = 0.05, 1 − β error prob = 0.8), which calls the significance of the authors’ results into question.
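As a rough check on that figure, base R’s power.prop.test reproduces a sample size in this region. The 12-month success rates below are assumptions chosen for illustration (the required n is very sensitive to them), not values taken from Li’s review.

    # Hedged illustration: assumed success rates of 80% vs 90% imply
    # roughly 199 patients per arm for 80% power at two-sided alpha = 0.05.
    power.prop.test(p1 = 0.80, p2 = 0.90, sig.level = 0.05, power = 0.80)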

I have included a prediction interval in both my forest plots to signify the range of possible true values one could expect in future RCTs, which is more useful to know in clinical practice than the confidence interval (IntHout et al., 2016). Using the RD meta-analysis, a future RCT could produce a result that favours calcium hydroxide by 20% or MTA by 35%, which is quite a wide range of uncertainty.
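For readers unfamiliar with prediction intervals, the usual approximate formula (as discussed by IntHout et al., 2016) widens the confidence interval to account for between-study heterogeneity; in LaTeX notation:

    \hat{\mu} \pm t_{k-2}^{0.975}\sqrt{\hat{\tau}^2 + \widehat{\mathrm{SE}}(\hat{\mu})^2}

where \hat{\mu} is the pooled effect, \hat{\tau}^2 the estimated between-study variance, \widehat{\mathrm{SE}}(\hat{\mu}) the standard error of the pooled effect, and k the number of studies. With only 5 studies the t-multiplier has k − 2 = 3 degrees of freedom and is therefore large, which is one reason the interval reported above is so wide.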

One of Li’s primary outcomes was cost-effectiveness, and the paper concluded there were insufficient data to determine a result; it also mentions the high cost and technique sensitivity of MTA compared with calcium hydroxide. I would argue that since there appears to be no significant difference between outcomes, we could conclude that, on the evidence available, calcium hydroxide must be the more cost-effective option.

In conclusion, researchers, reviewers and editors need to be aware of the harm spin can do. Many clinicians cannot interrogate the main body of a research paper for detail, as it is hidden behind a paywall, and they rely heavily on the abstract for information (Boutron et al., 2014). Registration of a research protocol prespecifying appropriate outcomes and methodology will help prevent post hoc changes to the outcomes and analysis. I would urge researchers to limit the use of odds ratios to case-control studies and to use relative risk or risk difference instead, as they are easier to interpret. For the meta-analysis, avoid a fixed-effects model if the studies don’t share a common true effect, and include a prediction interval to explore possible future outcomes.

References

Boutron, I., Altman, D.G., Hopewell, S., Vera-Badillo, F., et al. (2014) Impact of spin in the abstracts of articles reporting results of randomized controlled trials in the field of cancer: The SPIIN randomized controlled trial. Journal of Clinical Oncology. [Online] 32 (36), 4120–4126. Available from: doi:10.1200/JCO.2014.56.7503.

Guolo, A. & Varin, C. (2017) Random-effects meta-analysis: The number of studies matters. Statistical Methods in Medical Research. [Online] 26 (3), 1500–1518. Available from: doi:10.1177/0962280215583568.

Holcomb, W.L., Chaiworapongsa, T., Luke, D.A. & Burgdorf, K.D. (2001) An Odd Measure of Risk. Obstetrics & Gynecology. [Online] 98 (4), 685–688. Available from: doi:10.1097/00006250-200110000-00028.

IntHout, J., Ioannidis, J.P.A., Rovers, M.M. & Goeman, J.J. (2016) Plea for routinely presenting prediction intervals in meta-analysis. British Medical Journal Open. [Online] 6 (7), e010247. Available from: doi:10.1136/bmjopen-2015-010247.

Li, Y., Sui, B., Dahl, C., Bergeron, B., et al. (2019) Pulpotomy for carious pulp exposures in permanent teeth: A systematic review and meta-analysis. Journal of Dentistry. [Online] 84 (January), 1–8. Available from: doi:10.1016/j.jdent.2019.03.010.

Moher, D., Liberati, A., Tetzlaff, J. & Altman, D.G. (2009) Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. Annals of Internal Medicine. [Online] 151 (4), 264–269. Available from: doi:10.1371/journal.pmed1000097.

Röver, C., Knapp, G. & Friede, T. (2015) Hartung-Knapp-Sidik-Jonkman approach and its modification for random-effects meta-analysis with few studies. BMC Medical Research Methodology. [Online] 15 (1), 1–8. Available from: doi:10.1186/s12874-015-0091-1.

Schwarzer, G. (2007) meta: An R package for meta-analysis. R News. [Online] Available from: https://cran.r-project.org/doc/Rnews/Rnews_2007-3.pdf.

Shamseer, L. & Moher, D. (2015) Planning a systematic review? Think protocols. [Online]. 2015. Research in progress blog. Available from: http://blogs.biomedcentral.com/bmcblog/2015/01/05/planning-a-systematic-review-think-protocols/.

Vera-Badillo, F.E., Shapiro, R., Ocana, A., Amir, E., et al. (2013) Bias in reporting of end points of efficacy and toxicity in randomized, clinical trials for women with breast cancer. Annals of Oncology. [Online] 24 (5), 1238–1244. Available from: doi:10.1093/annonc/mds636.

Mark-Steven Howe , University of Oxford

Shifting to ‘disagree to disagree’ using EBM

What would you consider a disease?

This question has been formulated in different ways in the previous two blogs, The evidence of what? and Trustworthy evidence or paid lip service? Evidence-based medicine (EBM) seems to lack explicit consideration of the properties of the phenomena that we want evidence about (1–4). The consequence can be a discrepancy between potentially correct evidence and our fundamental understanding of the phenomenon. Hypertension was used as an example: clear evidence of its being a risk factor, combined with unclear and unexamined considerations of what constitutes a disease, turns it into overdiagnosis (5).

The following example is chosen for its properties as an existential condition: sarcopenia, the phenomenon of age-related loss of muscle mass and function, was assigned an ICD-10-CM diagnosis code in 2016 (6).

For clarity, let us assume that the current evidence of sarcopenia is all highly trustworthy (6–9):

  • Caused by: age, lack of activity, genetic factors, and insufficient energy or protein intake due to e.g. anorexia or malabsorption.
  • Can lead to: frailty, falls and fractures, cardiac and respiratory diseases, cognitive impairment, low quality of life, and death.
  • Main treatment: resistance exercise, optionally supplemented with a high intake of essential amino acids and vitamin D.
  • Prevalence: potentially up to 2 billion people aged ≥60 years worldwide in 2050.

Does the presented evidence show that sarcopenia should be diagnosed? Or does the evidence show natural life conditions and correlations related to getting older? Should we diagnose because it is a disease to lose muscle mass when you grow old? Or should we diagnose because it is a potentially solvable problem for some?

In the case of hypertension, we are diagnosing a risk (a continuum), and the evidence is insufficient to draw a line for the reach of the diagnosis. Regarding sarcopenia, the evidence is insufficient to decide which existential phenomena should be diagnosed. EBM is a crucial help for medical research and practice, and EBM does help avoid overdiagnosis, but it seems to offer no tools for discussing the properties of the diseases we are diagnosing, including their limitations (1–4).

Without such discussions, all kinds of other factors and interests can dictate the development. In the case of sarcopenia, the evidence is used as the argument for considering the condition a diagnosis and a disease (6–8). The evidence is used as if it speaks for itself, as if evidence-based health correlations naturally legitimise the creation of a new diagnosis. Such an approach opens the door for any condition to become a disease as soon as evidence backs it up, and can be exploited by conflicting interests. This challenges our fundamental idea and understanding of a disease.

EBM could promote explicit considerations from those who want to influence diagnostic practice by asking them to state 1) what they consider a disease and why, and 2) the reasons why the given condition should be considered a disease. The point of such an EBM-initiated discussion is not to agree but to point out that we disagree, and that the real question may not be about evidence but about underlying values and preferences.

  1. Doust J, Vandvik PO, Qaseem A, Mustafa RA, Horvath AR, Frances A, et al. Guidance for modifying the definition of diseases: A checklist. JAMA Intern Med. 2017;177(7):1020–5.
  2. Guyatt G, Rennie D, Meade M, Cook D. Users’ guides to the medical literature : a manual for evidence-based clinical practice [Internet]. 3rd ed. McGraw-Hill Education; 2015 [cited 2019 Mar 27]. 697 p. Available from: https://jamaevidence.mhmedical.com/content.aspx?bookid=847&sectionid=69030714
  3. Heneghan C, Mahtani KR, Goldacre B, Godlee F, Macdonald H, Jarvies D. Evidence based medicine manifesto for better healthcare. BMJ [Internet]. 2017;357:15–7. Available from: https://doi.org/10.1136/bmj.j2973
  4. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Schünemann HJ. Rating Quality of Evidence and Strength of Recommendations: GRADE: What Is “Quality of Evidence” and Why Is It Important to Clinicians? BMJ [Internet]. 2008;336(7651):995–8. Available from: http://www.jstor.org/stable/20509658
  5. Martin SA, Boucher M, Wright JM, Saini V. Mild hypertension in people at low risk. BMJ [Internet]. 2014 Sep 14 [cited 2019 Feb 5];349:g5432. Available from: https://www.bmj.com/content/349/bmj.g5432
  6. Cruz-Jentoft AJ, Bahat G, Bauer J, Boirie Y, Bruyère O, Cederholm T, et al. Sarcopenia: revised European consensus on definition and diagnosis. Age Ageing [Internet]. 2019 Jan 1 [cited 2019 Feb 6];48(1):16–31. Available from: https://academic.oup.com/ageing/article/48/1/16/5126243
  7. Anker SD, Morley JE, von Haehling S. Welcome to the ICD-10 code for sarcopenia. J Cachexia Sarcopenia Muscle [Internet]. 2016 Dec [cited 2019 Feb 5];7(5):512–4. Available from: http://www.ncbi.nlm.nih.gov/pubmed/27891296
  8. Cao L, Morley JE. Sarcopenia Is Recognized as an Independent Condition by an International Classification of Disease, Tenth Revision, Clinical Modification (ICD-10-CM) Code. J Am Med Dir Assoc [Internet]. 2016 Aug 1 [cited 2019 Feb 5];17(8):675–7. Available from: http://www.ncbi.nlm.nih.gov/pubmed/27470918
  9. Cruz-Jentoft AJ, Pierre Baeyens J, Bauer JM, Boirie Y, Cederholm T, Landi F, et al. Sarcopenia: European consensus on definition and diagnosis Report of the European Working Group on Sarcopenia in Older People. Age Ageing [Internet]. 2010 [cited 2019 Feb 5];39:412–23. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2886201/pdf/afq034.pdf

Christoffer Bjerre Haase is a Doug Altman Scholar and a researcher and medical doctor from the University of Copenhagen. Drawing on the theory of science, philosophy and evidence-based medicine, Christoffer is primarily interested in the inter-relations between the concepts of overdiagnosis and diagnosis/disease and the ways (medical) science and societal discourses influence those concepts.

Christoffer Bjerre Haase has no conflict of interest

Medication trade names; are they intended to “Trick” or can they help to “Treat”?

Have you ever wondered where medication tradenames come from? Can understanding tradenames be helpful to practitioners or patients? Can brand names affect a drug’s market share or have implications for patient care? The answer is yes: some tradenames can serve these purposes.

Good, appealing tradenames stick in (“Trick”) the memory of both patients and prescribers (the actual end users). Tradenames reflect whether medications are being marketed primarily to practitioners or to the public. Mostly, tradenames do not say much to the layperson, but when it comes to over-the-counter (OTC) medications, they are usually simple, easily pronounced and can incorporate a direct message to patients. For example, Panadol Cold & Flu Day® (paracetamol, phenylephrine HCl, dextromethorphan HBr) includes the indication in the name, while Panadol Night® (paracetamol, diphenhydramine hydrochloride) explains when to take the product. These hints can increase consumers’ awareness and confidence in OTC selection and may ultimately improve compliance [1]. In contrast, when pharmaceutical companies target healthcare professionals, their aim is to create an appealing tradename that “sticks in the mind” in order to have a better chance of creating prescription habits in favour of their medication and to distinguish their product among competitors. Some tradenames reflect a drug characteristic or a rational explanation, like Augmentin® (amoxicillin, clavulanic acid), which is meant to convey an “augmented” effect to the healthcare professional: clavulanic acid augments amoxicillin, combating bacterial resistance and broadening antimicrobial coverage.

On the other hand, tradenames can also help to better “Treat” patients by explaining the indication or dosing frequency of a medication. The most interesting pattern is denoting how often to take the medication, e.g. Cefobid® (cefoperazone) is taken bid (twice daily), and Singulair® (montelukast) is taken once daily (single) to treat asthma (air). Others denote the drug’s duration of action, e.g. Lasix® (furosemide), whose effect LAsts for SIX to eight hours, and Apidra® (insulin glulisine), a rAPID-acting insulin. Other tradenames point out the patients who will benefit from a certain drug therapy, such as Herceptin® (trastuzumab), which is used for HER2-receptor-positive patients with breast cancer. Lastly, names can indicate the drug effect to help the pharmacist during counselling sessions; for example, Emend® (aprepitant) means putting emesis to an end.

Learning tradenames is a memorisation challenge for medical and pharmacy students transitioning from preclinical to clinical settings [2]. Decoding the hidden messages in tradenames can enrich the learning experience and hence prescribers’ self-confidence, which could influence prescribing patterns [3]. For instance, practitioners might confuse the similar-sounding names Lopressor® (metoprolol) and Lyrica® (pregabalin). However, decoding the message may help them distinguish which drug affects blood pressure and which makes you lyrical after becoming pain-free. Physicians are more likely to prescribe drugs they are most familiar with [4], which could also increase a drug’s market share. Therefore, careful education, documentation, evaluation and publication of this line of work underscores the impact that naming drugs can have on healthcare practice.

References:

  1. Brabers, A.E., et al., Where to buy OTC medications? A cross-sectional survey investigating consumers’ confidence in over-the-counter (OTC) skills and their attitudes towards the availability of OTC painkillers. BMJ Open, 2013. 3(9): p. e003455.
  2. Hansen, A.J. Brand name or generic? Study probes use of drug names, which ties to health care costs. 2018; Available from: https://scopeblog.stanford.edu/2018/05/10/brand-vs-generic-medication-call-it-by-its-name/.
  3. Afzal Khan MI, Mirshad PV, Jeyam Xavier Ignatius. How confident are the students and interns to prescribe? An assessment based on their views and suggestions. Natl J Physiol Pharm Pharmacol. 2014. 4(2): p. 138-142.
  4. Flegel, K., The adverse effects of brand-name drug prescribing. CMAJ, 2012. 184(5): p. 616.

Biography: Yasmin Elsobky is a 2019 Building Capacity Bursary recipient, a drug information specialist and co-founder at NAPHS Consultancy, and an early-career researcher at El-Galaa Military Medical Complex (GMMC), Egypt. She completed a B.Pharm.Sci at Misr International University (MIU), is a board-certified pharmacotherapy specialist (BCPS) through the Board of Pharmacy Specialties (BPS), USA, and has a diploma in public health (Biostatistics) from the High Institute of Public Health (HIPH) at Alexandria University.

The blog was co-written with Dr. Islam Mohamed, an assistant professor at California Northstate University College of Pharmacy (CNUCOP) in California, USA. In 2008, he was awarded a full Fulbright Scholarship to study for an MS in Neuroscience at the State University of New York at Buffalo (SUNY Buffalo), and in 2014 he earned his PhD in Clinical and Experimental Therapeutics from the College of Pharmacy, University of Georgia.

No conflicts of interest.

Trustworthy evidence or paid lip service?

How do we consider diseases nowadays, and how do we develop and interpret evidence about them? The question was introduced in an earlier blog because evidence-based medicine (EBM) does not seem to tell us (1–4). I argued that we need to be explicit in our consideration of both the properties of evidence and of disease, because they influence each other. Focusing only on evidence may create a discrepancy between potentially correct evidence and our fundamental understanding of disease.

As suggested, overdiagnosis could be a manifestation of such a discrepancy: the diagnosis is correct, and potentially based on a massive amount of evidence accumulated over the years, but it identifies conditions that were never going to cause harm, and the diagnosis itself turns harmful (5). Such a diagnosis is far from our fundamental understanding of what a diagnosis is. And current EBM does not seem to address this.

Hypertension is one example: a condition prone to overdiagnosis despite clear evidence of being a health risk (6). It is also an example of the important point that EBM already reduces overdiagnosis, even without explicit considerations about the properties of a disease as a phenomenon. My colleagues and I used EBM to point out the lack of evidence of positive effects of lowering the treatment thresholds (7), and Bell and colleagues have pointed out the evidence of no effect of lowering the diagnostic threshold for 80% of the suggested group (8).

But the 80% is a problem: some people benefit from being diagnosed. It is a question of risk, just as the diagnosis of hypertension itself is fundamentally a risk. We return to the main point: how do we consider the investigated phenomenon? Which principles qualify a risk factor to be categorised as a diagnosis? Is it then a disease?

A risk is a continuum: the extremes are quite distinct, but everything in between is difficult to determine. The problem has been known for 2,400 years and is called the Sorites Paradox: how many grains can we remove from a heap (soros) before we no longer have a heap? (9). Vague terms hinder a clear boundary of application.

The implication is the point: more or better evidence does not answer this problem. We can reject diagnosing a risk if we have no evidence of effect or if we have evidence of no effect. But anything in between can potentially be diagnosed. Although they are crucial contributions to EBM and medical practice, neither the Guidance for Modifying the Definition of Diseases, the Users’ Guides to the Medical Literature, GRADE, nor the EBM manifesto seems to address this (1–4).

However, there are ways to handle the paradox. In everyday life we can, to some extent, neglect the inexactness by contextualising the term in the given situation. Such an approach is less useful in science. One solution is to accept that we can potentially diagnose all health risks. Another is to reject diagnosing health risks altogether. I would suggest that we discuss what we consider diseases to be, not for the purpose of reaching a solution but for the sake of the discussion itself. I will clarify this in the last blog.

  1. Doust J, Vandvik PO, Qaseem A, Mustafa RA, Horvath AR, Frances A, et al. Guidance for modifying the definition of diseases: A checklist. JAMA Intern Med. 2017;177(7):1020–5.
  2. Guyatt G, Rennie D, Meade M, Cook D. Users’ guides to the medical literature : a manual for evidence-based clinical practice [Internet]. 3rd ed. McGraw-Hill Education; 2015 [cited 2019 Mar 27]. 697 p. Available from: https://jamaevidence.mhmedical.com/content.aspx?bookid=847&sectionid=69030714
  3. Heneghan C, Mahtani KR, Goldacre B, Godlee F, Macdonald H, Jarvies D. Evidence based medicine manifesto for better healthcare. BMJ [Internet]. 2017;357:15–7. Available from: https://doi.org/10.1136/bmj.j2973
  4. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Schünemann HJ. Rating Quality of Evidence and Strength of Recommendations: GRADE: What Is “Quality of Evidence” and Why Is It Important to Clinicians? BMJ [Internet]. 2008;336(7651):995–8. Available from: http://www.jstor.org/stable/20509658
  5. Brodersen J, Schwartz LM, Heneghan C, O’Sullivan JW, Aronson JK, Woloshin S. Overdiagnosis: what it is and what it isn’t. BMJ evidence-based Med [Internet]. 2018 Feb [cited 2019 Jan 7];23(1):1–3. Available from: http://ebm.bmj.com/lookup/doi/10.1136/ebmed-2017-110886
  6. Martin SA, Boucher M, Wright JM, Saini V. Mild hypertension in people at low risk. BMJ [Internet]. 2014 Sep 14 [cited 2019 Feb 5];349:g5432. Available from: https://www.bmj.com/content/349/bmj.g5432
  7. Haase CB, Gyuricza JV, Brodersen J. New hypertension guidance risks overdiagnosis and overtreatment. BMJ [Internet]. 2019 Apr 12 [cited 2019 Apr 12];365:l1657. Available from: https://www.bmj.com/content/365/bmj.l1657.full?ijkey=6MF3u3YO2zvX1Uv&keytype=ref&fbclid=IwAR2JvD6FX6HPIwkrpz-R5qOvB8vagWlr1ki5UQMwPwrFV-TcNM_-lfTDGn8
  8. Bell KJL, Doust J, Glasziou P. Incremental Benefits and Harms of the 2017 American College of Cardiology/American Heart Association High Blood Pressure Guideline. JAMA Intern Med. 2018;178(6):755.
  9. Hyde D, Raffman D. Sorites Paradox [Internet]. The Stanford Encyclopedia of Philosophy. Stanford University; 2018 [cited 2019 Mar 27]. Available from: https://plato.stanford.edu/archives/sum2018/entries/sorites-paradox/

Christoffer Bjerre Haase is a Doug Altman Scholar and a researcher and medical doctor from the University of Copenhagen. Drawing on the theory of science, philosophy and evidence-based medicine, Christoffer is primarily interested in the inter-relations between the concepts of overdiagnosis and diagnosis/disease and the ways (medical) science and societal discourses influence those concepts.

Christoffer Bjerre Haase has no conflict of interest

Bridging the Research – clinical practice Gap with EBM

“It is not our differences that divide us; but the inability to recognize, accept and celebrate those differences” said writer Audre Lorde.

For decades, there has been a gap between researchers and clinical practitioners. Lately, evidence-based healthcare has increasingly been seen as a way to bridge this gap.

Globally, clinical practice guidelines have been formulated to guide clinicians. These guidelines are usually systematically developed statements designed to help healthcare providers, payers, consumers and policy-makers decide on ways to prevent, diagnose and treat diseases.1 However, today is an era of personalised medicine.

While formulating and implementing guidelines or clinical decision-making algorithms, one needs to be aware of the multiple clinical variables that apply to an individual patient and/or a specified population. Genetic variations, ethnic differences, differences in diet and climate, etc. are all responsible for different patterns of disease and varied responses to the same treatments.2

It has been stated that ‘we are more microbes than cells’; in fact, the gut microbiota alone carries 150 times more genes than the entire human genome.3 Hence, the effect of one’s microbiome on health and disease cannot be underestimated.3 The role of epigenetics, a mechanism for the expression of genes without alteration of the genome, combined with technologies like CRISPR/Cas9 gene editing and next-generation sequencing, has enabled a better understanding of epigenetic change and gene regulation in human diseases, giving way to newer approaches for molecular diagnosis and targeted treatments,4 although the impact on patient care and outcomes is still to be observed.

The gap between research and practice is more fundamental, and can be attributed to a host of other factors: lack of communication between research scholars and clinicians, reluctance of practitioners to change from traditional methods of practice, inadequate practitioner training, a misfit between treatment requirements and available organisational structures, insufficient administrative support, and an unclear understanding of co-morbidities.4,5 Such challenges can be addressed by ‘translational research’.5 The practical utility of research can be increased by facilitating two-way communication: translating research findings into language of practical use, and clearly communicating the research needs of a community. Real-life, messy variables such as co-morbidities, financial constraints, inadequate insurance coverage, and cultural and family issues need to be fully understood.

Possible solutions that may help close this gap include:

  • Involving private practitioners directly in the research loop, to facilitate a better understanding of, and response to, real-world challenges in practice.
  • Facilitating working relationships in which researchers and clinicians work as equal partners in community-based interventions.
  • Basing community intervention programmes on results from well-designed, multicentre randomised controlled trials.
  • Training practitioners wherever required, e.g. by including modules on evidence-based practice in predoctoral internships, postdoctoral fellowships, and continuing education programmes.
  • Encouraging journals and publishers to emphasise detailed descriptions of the community applicability of research findings.6
  • Disclosing and publishing negative findings and limitations.

While the gap between the research and clinical practice is real, there are potential solutions that can close this gap!

References:

  • Clinical Practice Guidelines We Can Trust—Standards for Developing Trustworthy Clinical Practice Guidelines (CPGs)(www.nationalacademies.org)
  • Vasseur E, Quintana-Murci L. The impact of natural selection on health and disease: uses of the population genetics approach in humans. Evol Appl. 2013;6(4):596–607.
  • Baohong Wang, Mingfei Yao, Longxian Lv, et al. The Human Microbiota in Health and Disease. Elsevier Publication. 2017;3(1):71-82.
  • Alexander Osborne. The role of epigenetics in human evolution. Horizons. 2017;10:1-8.
  • Estape ES, Mays MH, Harrigan R, Mayberry R. Incorporating translational research with clinical research to increase effectiveness in healthcare for better health. Clin Transl Med. 2014;3:20. Published 2014 Jul
  • S Mallonee, C Fowler, G R Istre. Bridging the gap between research and practice: a continuing challenge. Injury Prevention. 2006;12:357–359.

Conflicts of Interest: None

Bio: I am a practising clinician based in Mumbai, India. I have a keen interest in research activities, content writing and medical emergencies. My educational background includes a Bachelor’s degree in Medicine and a fellowship in Industrial Health, along with a couple of original research studies. With a craving for knowledge and a passion for learning new ideas, I am actively involved in freelance medical content development for publication houses and companies. I also enjoy helping to manage on-field medical emergencies at various events. I am currently exploring potential avenues that may help me ‘make a difference’!

A proposed framework for the pre-specification of statistical analysis methods in clinical trials (Pre-SPEC)

Choosing a statistical analysis approach in clinical trials requires us to make a number of decisions. Which patients should we include in the analysis? What statistical model should we use? How should we handle missing data? Each of these decisions will have an impact on the results of our analysis, and different choices could lead to different interpretations of the data.

But too much freedom can be a bad thing when it comes to analysis. The danger is that we may use the trial data to help us choose a method that gives us the answer we want — if we run enough analyses, one of them is bound to give a significant result. The solution to this problem is to pre-specify; we need to choose our analysis method before seeing the data.

However, this concept is not as simple as it first appears. If I say that I’ll use multiple imputation to handle missing outcome data, is this pre-specified? After all, I’ve said what I plan to do. Except that I haven’t really; there are many different ways to do imputation, and I’ve not said which one I plan to use. I could keep running different imputation methods until I got the result I wanted, and claim it was pre-specified.

Pre-specification is not just about saying what we plan to do; it’s also about making sure we can’t use the data to choose a method that gives us the answer we want. We have recently proposed a framework (Pre-SPEC) (https://arxiv.org/abs/1907.04078) for the pre-specification of statistical analyses that is based around this principle. This framework is intended to help trialists plan their own analysis approach, and also to help reviewers and journal editors identify whether trial results may be at risk of bias due to inadequate pre-specification.

Our proposed framework involves five points:

  • Pre-specify the analysis methods before recruitment to the trial begins;
  • Specify a single primary analysis strategy (if multiple analyses are planned, one should be identified as the primary);
  • Plan all aspects of the analysis, including the analysis population, statistical model, use of covariates, handling of missing data, and any other relevant aspects;
  • Provide enough detail that a third party could independently perform the analysis (ideally by providing the planned statistical code); and
  • Use deterministic decision rules for any adaptive analysis strategy that uses the trial data to inform an aspect of the analysis (a minimal sketch of such a rule follows this list).
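To make the last point concrete, here is a minimal sketch in R of what a deterministic decision rule could look like. It is an invented illustration of the principle, not an example taken from the Pre-SPEC paper: the rule, its skewness threshold, and the function name are all hypothetical. The point is simply that the analysis choice follows mechanically from a rule fixed before any data are seen.

    # Hypothetical pre-specified decision rule: the analysis scale for a
    # continuous outcome is chosen by a fixed skewness threshold written
    # into the statistical analysis plan before recruitment begins, so
    # the data cannot be used to shop for a preferred result.
    choose_analysis_scale <- function(outcome) {
      skewness <- function(x) mean((x - mean(x))^3) / sd(x)^3
      if (skewness(outcome) > 2) "log scale" else "raw scale"
    }

    # The rule returns a single answer, with no analyst discretion:
    choose_analysis_scale(c(rep(1, 95), rep(100, 5)))  # skewed -> "log scale"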

This framework is still a work in progress, so we are interested in what others think; if you have comments or suggestions, drop us a line (https://arxiv.org/abs/1907.04078).

Short bio

Brennan Kahan is a 2019 Doug Altman Scholarship recipient and a medical statistician at the Pragmatic Clinical Trials Unit, Queen Mary University of London. His research interests include design and analysis of clinical trials, and improving transparency in the analysis of clinical trials. He has no conflicts of interest to declare.

The evidence of what?

The evidence-based medicine (EBM) manifesto revolves around the question: why can’t we trust the evidence? (1). A strong question that deserves a follow-up: the evidence of what?

Several answers are intuitive, but some degree of explicit consideration would be preferable. After all, “evidence-based medicine de-emphasizes intuition” (2) and, more importantly, the actual evidence depends on it.

In general, evidence is information, proofs, indications, or traces of something (3). ‘Something’ here can be understood as literally anything, often termed a thing, being, entity, item, existent, or object (4). Evidence depends on the context (3). We constantly act according to this, regardless of awareness: what we understand as evidence of a broken leg is different from evidence of depression. The assumed or considered properties of the thing we want evidence about influence the evidence itself.

The same is true of science as our overall methodological approach. Very simplified, a change happened in the 19th century in what was considered a disease (3,5). Moving away from a holistic view, diseases now became distinct, material phenomena with an archetypical form regardless of the patient. A change in the evidence followed: to be trusted, it now had to come from laboratories, based on technology and basic and molecular science (3,5). Later, in the 1960s, Alvan Feinstein acknowledged the evidence from the laboratories. However, as a clinician, he considered diseases to be phenomena that evolve with the patients (3). He therefore considered the best evidence to be illness observed in clinical settings: clinical epidemiology. In the 1970s, Archie Cochrane agreed but pointed out that social surroundings exert an influence as well, sometimes making clinical evidence insufficient to understand diseases and treatment. To produce trusted evidence, randomized trials could be needed, especially to disclose false beliefs.

Today, how do we consider diseases, and how do we develop and interpret evidence about them? How does modern medicine actually consider interrelated medical fundamentals such as disease, diagnosis, risk factor, existential condition, and problem?

The questions may be more relevant than ever: some people now turn into “patients unnecessarily, by identifying problems that were never going to cause harm or by medicalising ordinary life experiences through expanded definitions of diseases” (6). The phenomenon is called overdiagnosis. How is it related to diseases? That depends on how we (perhaps unawares) consider diseases and how we develop and interpret evidence about them.

To reiterate: we need to be explicit in our consideration of both the properties of evidence and of disease, because they influence each other. Focusing only on evidence may create a discrepancy between potentially correct evidence and our fundamental understanding of disease. Overdiagnosis could be a manifestation of such a discrepancy.

The EBM manifesto does not seem to address this; neither do the Guidance for Modifying the Definition of Diseases, the Users’ Guides to the Medical Literature, nor GRADE (1,7–9). I therefore intend to elaborate on this issue and a potential solution in my presentation at EBMLive this year, and two follow-up blogs on this topic will be published after the conference.

References

  1. Heneghan C, Mahtani KR, Goldacre B, Godlee F, Macdonald H, Jarvies D. Evidence based medicine manifesto for better healthcare. BMJ [Internet]. 2017;357:15–7. Available from: https://doi.org/10.1136/bmj.j2973
  2. Carl Heneghan. Evidence-Based Medicine: What’s in a name? – CEBM [Internet]. 2015 [cited 2019 Jun 21]. Available from: https://www.cebm.net/2015/12/evidence-based-medicine-whats-in-a-name/
  3. Jensen UJ. The Struggle for Clinical Authority: Shifting Ontologies and the Politics of Evidence. Biosocieties. 2007;2(1):101–14.
  4. Rettler, Bradley, Bailey, Andrew M. Object [Internet]. Stanford encyclopedia of philosophy. Metaphysics Research Lab, Stanford University; 2017 [cited 2019 Jun 24]. Available from: https://plato.stanford.edu/cgi-bin/encyclopedia/archinfo.cgi?entry=object
  5. Rosenberg C. Managed fear. Lancet [Internet]. 2009 [cited 2019 Mar 12];373(9666):802–3. Available from: www.thelancet.com
  6. Brodersen J, Schwartz LM, Heneghan C, O’Sullivan JW, Aronson JK, Woloshin S. Overdiagnosis: what it is and what it isn’t. BMJ evidence-based Med [Internet]. 2018 Feb [cited 2019 Jan 7];23(1):1–3. Available from: http://ebm.bmj.com/lookup/doi/10.1136/ebmed-2017-110886
  7. Doust J, Vandvik PO, Qaseem A, Mustafa RA, Horvath AR, Frances A, et al. Guidance for modifying the definition of diseases: A checklist. JAMA Intern Med. 2017;177(7):1020–5.
  8. Guyatt G, Rennie D, Meade M, Cook D. Users’ guides to the medical literature : a manual for evidence-based clinical practice [Internet]. 3rd ed. McGraw-Hill Education; 2015 [cited 2019 Mar 27]. 697 p. Available from: https://jamaevidence.mhmedical.com/content.aspx?bookid=847&sectionid=69030714
  9. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Schünemann HJ. Rating Quality of Evidence and Strength of Recommendations: GRADE: What Is “Quality of Evidence” and Why Is It Important to Clinicians? BMJ [Internet]. 2008;336(7651):995–8. Available from: http://www.jstor.org/stable/20509658

Christoffer Bjerre Haase is a 2019 Doug Altman Scholar and a researcher and medical doctor from University of Copenhagen. Based on the theory of science, philosophy and evidence-based medicine, Christoffer is primarily interested in the inter-relations between the concepts of overdiagnosis and diagnosis/disease and the ways (medical) science and societal discourses influence those concepts.

Christoffer Bjerre Haase has no conflict of interest.

 

Research without journals?

Journals distort evidence and restrict access to research for the public who fund it.  In the digital age, subsidising traditional publishers is no longer necessary or justifiable.

For supposedly clever people, we academics are complete mugs. Not only do we give away our research for publishers to make a profit [1], our institutions pay hefty subscriptions just to access it, or pay even more if we want to make it freely available. Bizarrely, we also voluntarily staff the editorial and decision-making processes of their journals. Meanwhile, careers (and egos) demand recognition in ‘luxury journals’ [2], which often privilege novelty over solid science and finding answers to questions that matter to patients.

Until recently the journal system represented the most efficient way to disseminate discovery. But in the internet age, publishers’ grip on science has become an ever more anachronistic impediment. That supposed innovators remain unquestioningly infatuated with a publishing model as redundant as leeches and blood-letting would be merely ridiculous, were the consequences not so dire.

Preventing almost everyone from accessing research is manifestly unjust. Research outputs are never really ‘ours’ to give away: the work is subsidised almost entirely by public and charitable funding and relies on the goodwill of the patients who participate so that others may benefit. Modern medicine’s avowed commitment to patient empowerment surely rings hollow if we won’t even share information freely, with the insinuation that patients must just ‘take our word for it’.

As well as being elitist and unfair, donating knowledge to be locked away is utterly stupid. Shutting out researchers at institutions, many in low- and middle-income countries, that can’t afford journal subscriptions only hinders science. And while we castigate politicians for being evidence-illiterate, the researchers who prepare science reports for MPs have to rely on what they can find for free online [3].

So what should we do about it?

In the short term, we should be lobbying all funders to require (and enforce) that research is published in journals which are either open access, or allow unrestricted dissemination of original manuscripts.  In the longer term we should have the ambition to liberate research from the publishing industry altogether.

State funders of research can begin to establish the infrastructure to vet, catalogue and host freely available research.  Publication decisions based solely on scientific integrity and ethical acceptability would be a major step in addressing publication bias and the replicability crisis.  In time, funders will be able to insist that publication occurs only in these public repositories and editors and reviewers will follow.  Eventually global integration of national catalogues could be undertaken through international collaborations, perhaps overseen by the World Health Organization.

Most popular journals would continue to thrive and could reproduce research of interest from the public repositories. Some titles might migrate or integrate into the public system, creating a mixed economy, albeit on the basis of unhindered access and cost containment.

Some might question how the importance of research would be recognised if not by the traditional journal hierarchy. A range of reviewer- and crowd-based metrics can be implemented and refined. Though any attempt to quantify subjective quality will always be flawed, transparent metrics would surely be preferable to the entirely reductive [4] and apparently negotiable [5] methodology of journal impact factors.

Our publishing model is unfair, entrenches bias and represents poor use of public and charitable money.  We already give away our work for free – we should demand it is made open and dare to imagine research beyond journals.

This blog was inspired by a discussion with Dr Bethany Shinkins and was informed primarily by Chris Chambers’ book, ‘The Seven Deadly Sins of Psychology’, Princeton University Press, 2017.

References

  1. Buranyi S. Is the staggeringly profitable business of scientific publishing bad for science? The Guardian; 2017. Available from: https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science [accessed 7 July 2019].
  2. Schekman R. How journals like Nature, Cell and Science are damaging science. The Guardian; 2013. Available from: https://www.theguardian.com/commentisfree/2013/dec/09/how-journals-nature-science-cell-damage-science [accessed 7 July 2019].
  3. Palma AD. Why all PhD students should do a policy placement. The Rostrum Blog; 2015. Available from: https://therostrumblog.wordpress.com/2015/01/12/why-all-phd-students-should-do-a-policy-placement/ [accessed 7 July 2019].
  4. Time to remodel the journal impact factor. Nature 2016;535:466.
  5. The impact factor game. PLoS Medicine 2006;3(6):e291.

Conflicts of Interests: I have published in non-open access journals and have chosen to publish in journals based on impact factor.  As well as being employed as a GP, I am undertaking a PhD funded by Cancer Research UK and I receive payment for occasional work with the GMC on the PLAB examination. I serve without payment on a NICE committee and on the executive committee of the Fabian Society, a political think tank.

Biography: Stephen Bradley is a GP and a clinical research fellow researching lung cancer diagnosis at the University of Leeds. He is also interested in addressing inequalities in health and health policy

Evidence-based medicine challenges in new anticancer drugs

Cancer caused the death of 9.5 million people in 2018, and its incidence is increasing each year (1). The costs of anticancer drugs are also rising, which is straining healthcare systems all over the world (2).

Health agencies such as the United States Food and Drug Administration (FDA), the European Medicines Agency (EMA), and the Brazilian Health Regulatory Agency (ANVISA) have relaxed the requirements for cancer drug approvals (3). For new cancer treatments, phase 2 and non-randomized studies can serve as the sole reference for approval. Increasingly, these trials use surrogate outcomes as the primary endpoints, such as progression-free survival (PFS) and overall response rate. Surrogate endpoints are used to predict a clinically meaningful outcome, such as overall survival; however, they are not patient-important outcomes.

The majority of papers published in oncology use surrogate outcomes. These can be biased for various reasons, including that:

  1. there are no standard measurements for outcomes (most studies use radiological parameters);
  2. progression or response is subjective, and its measurement can differ among researchers;
  3. the frequency of assessment can influence the results; and
  4. surrogate endpoints demonstrate only low or moderate correlations with overall survival (4,5).

Moreover, patients are becoming more involved in healthcare decisions, and surrogate endpoints are barriers to shared decision making (6,7).

Pressure from patient associations regarding new technologies and treatments adds to the challenge. Patients with advanced cancer are facing death, and they want fast and effective solutions to their health problems. This social pressure over “unmet medical needs” is often used as an argument for faster drug approval pathways. Access to new technologies is important, but a balance is required between access, the cost of clinical drug trials, and ensuring robust evidence that is “relevant, replicable, and accessible to end users” (8).

Currently, when a new drug enters the market, patients and physicians are not fully aware of the risks. Some new treatments are approved through accelerated pathways; examples include:

a. Drugs approved on the basis of single-arm, non-randomized studies (10,11);
b. Drugs approved with no confirmatory data (still in the experimental phase) (12);
c. Drugs approved where the studies have serious methodological limitations (11,13);
d. Drugs approved where there is limited safety information (14).

Possible solutions

Clinical trials and real-world data (prospective, standardized data collection) could be linked to generate information about the effectiveness and safety of new drugs in the post-marketing phase. Submitting these data could be made mandatory for a pharmaceutical company to continue selling its drug. The inclusion of patient and public representation would be essential, as stated in the AllTrials initiative (9). A multidisciplinary team, including experts in EBM, should be involved prior to study development.

Patient education about the uncertainty of using new cancer medicines, and training for physicians in shared decision making, should be at the forefront. Creating an international collaboration, with chapters in all regions (e.g. Latin America, Europe, etc.), would be an important step towards discussing and implementing practical solutions within the EBM ecosystem.

References

(1)  Bray, F., Ferlay, J., Soerjomataram, I., Siegel, R., Torre, L. and Jemal, A. (2018). Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: A Cancer Journal for Clinicians, 68(6), pp.394-424.
(2)  Pricing of cancer medicines and its impacts. Geneva: World Health Organization; 2018. Licence: CC BY-NC-SA 3.0 IGO.
(3)  Baird LG, Banken R, Eichler H, Kristensen FB, Lee DK, Lim JCW, et al. Accelerated Access to Innovative Medicines for Patients in Need. 2014;96(5):559–71.
(4)  Prasad V, Kim C, Burotto M, Vandross A. The Strength of Association Between Surrogate End Points and Survival in Oncology. JAMA Internal Medicine. 2015;175(8):1389.
(5)  Haslam A, Hey S, Gill J, Prasad V. A systematic review of trial-level meta-analyses measuring the strength of association between surrogate end-points and overall survival in oncology. European Journal of Cancer. 2019;106:196-211.
(6)  Shimp WS, Smartstate CLB. Interpretation of surrogate endpoints in the era of the 21st Century Cures Act. 2016;6286 (December):1–4. Available from: https://doi.org/10.1136/bmj.i6286.
(7)  Heneghan C, Goldacre B, Mahtani KR. Why clinical trial outcomes fail to translate into benefits for patients. 2017;1–7.
(8)  Heneghan C, Mahtani K, Goldacre B, Godlee F, Macdonald H, Jarvies D. Evidence based medicine manifesto for better healthcare. BMJ. 2017;357:j2973.
(9)  All Trials. Available from: http://www.alltrials.net/find-out-more/why-this-matters/the-alltrials-campaign/
(10)  Ladanie A, Speich B, Briel M, Sclafani F, Bucher H, Agarwal A et al. Single pivotal trials with few corroborating characteristics were used for FDA approval of cancer therapies. Journal of Clinical Epidemiology. 2019.
(11)  Goring S, Taylor A, Müller K, et al. Characteristics of non-randomised studies using comparisons with external controls submitted for regulatory approval in the USA and Europe: a systematic review. BMJ Open 2019;9:e024895. doi:10.1136/ bmjopen-2018-024895
(12)  Davis C, Naci H, Gurpinar E, Poplavska E, Pinto A, Aggarwal A. Availability of evidence of benefits on overall survival and quality of life of cancer drugs approved by European Medicines Agency: retrospective cohort study of drug approvals 2009-13. BMJ. 2017;359:j4530.
(13)  Downing NS, Aminawung JA, Shah ND, Krumholz HM, Ross JS. Clinical trial evidence supporting FDA approval of novel therapeutic agents, 2005-2012. JAMA. 2014;311(4):368–377. doi:10.1001/jama.2013.282034.
(14)  Mostaghim S, Gagne J, Kesselheim A. Safety related label changes for new drugs after approval in the US through expedited regulatory pathways: retrospective cohort study. BMJ. 2017;358:j3837.

Conflict of interest statement: None to declare

Biography: Tatiane Ribeiro is a 2019 Building Capacity Bursary awardee, currently investigating evidence quality of new anticancer drugs on a Master’s degree at the University of Sao Paulo (USP) Medical School, Department of Preventive Medicine (Sao Paulo-Brazil). She is a Clinical Pharmacist and Health Technology Assessment consultant interested in oncology, real-world evidence, meta-epidemiology, HEOR, critical appraisal tools and shared decision making.

 

‘First, do no harm’: the promise of new scientific metrics

So I have just one wish for you—the good luck to be somewhere where you are free to maintain the kind of integrity I have described, and where you do not feel forced by a need to maintain your position in the organization, or financial support, or so on, to lose your integrity. May you have that freedom—Richard Feynman, Nobel laureate (Feynman, 1974)

In the new era of evidence-based medicine, ‘newly qualified doctors must be able to apply scientific methods and approaches to medical research and integrate these with a range of sources of information used to make decisions for care’ (1). Often, professional bodies and institutions measure aptitude for such learning outcomes by the number of publications. Although many would argue this is not the case, it is certainly implicit in their actions: “not only do medical students perceive publications as a requirement for their residency applications, but there is also the idea that there’s a preferred number of publications” (2). This fixation on quantity stifles originality and creativity, and prioritises short-term gains at the expense of true innovation. We aren’t fostering the development of the visionary physician-scientists of tomorrow, but rather ones preoccupied with empty prestige.

Sadly, the exponential growth of scientific literature has not been paralleled by a growth in scientific knowledge (3). Moreover, the very scientific ecosystem that we engage with contradicts its own commitment to primum non nocere: “an unethical doctor may risk the life of a single patient, whereas an unethical or careless interpreter of statistical data may risk the lives of a whole population” (4).

Clearly, focusing on quantity is not the path to research excellence. Relevant stakeholders urgently need to evaluate and redefine their standards. Efforts to provide guidance on good research practice (5), which focus heavily on best practices for reporting individual studies, send the message that research integrity at a systemic level is not a priority. Moreover, professional standards are almost exclusively based on using research to inform clinical practice (6), which is not inherently protective against poor scientific practice.

Having identified the problem(s), how do we go about solving them? Kretser et al. (2019) proposed a list of best practices for scientific integrity (7). For institutions that want to produce graduates who are cognizant of the importance of scientific integrity, I strongly advocate:

  1. Recognising and rewarding those who have shown a commitment to scientific integrity
  2. Training in:
    a. Scientific methods
    b. Appropriate experimental design and statistics
    c. Responsible research practices
    d. Science communication
  3. Strengthening internal scientific integrity oversight and processes

Institutions now have the chance to grant their candidates the freedom that Feynman wished upon all future scientists, and they need to act now before it is too late.

“When you rely on incentives, you undermine virtues. Then when you discover that you actually need people who want to do the right thing, those people don’t exist” (8).

References 

  1. General Medical Council. Outcomes for graduates 2018 [Internet]. 2018. Available from: https://www.gmc-uk.org/-/media/documents/outcomes-for-graduates-a4-5_pdf-78071845.pdf
  2. Vos E. What motivates medical students to publish? [Internet]. The Wiley Network. 2017 [cited 2019 Jun 30]. Available from: https://www.wiley.com/network/researchers/writing-and-conducting-research/what-motivates-medical-students-to-publish
  3. Fortunato S, Bergstrom CT, Börner K, Evans JA, Helbing D, Milojević S, et al. Science of science. Science. 2018;359(6379):eaao0185.
  4. Bailar JC. Bailar’s laws of data analysis. Clin Pharmacol Ther. 1976;20(1):113–9.
  5. General Medical Council. Good practice in research [Internet]. 2010 [cited 2019 Jun 30]. Available from: https://www.gmc-uk.org/ethical-guidance/ethical-guidance-for-doctors/good-practice-in-research
  6. General Medical Council. Generic professional capabilities framework [Internet]. 2013 [cited 2019 Jun 30]. Available from: https://www.gmc-uk.org/education/standards-guidance-and-curricula/standards-and-outcomes/generic-professional-capabilities-framework
  7. Kretser A, Murphy D, Bertuzzi S, Abraham T, Allison DB, Boor KJ, et al. Scientific Integrity Principles and Best Practices: Recommendations from a Scientific Integrity Consortium. Sci Eng Ethics [Internet]. 2019;25(2):327–55. Available from: https://doi.org/10.1007/s11948-019-00094-3
  8. Edwards MA, Roy S. Academic research in the 21st century: Maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environ Eng Sci. 2017;34(1):51–61.

Conflict of interest: None

Biography: Logan is a final year medical student at The University of Auckland, in New Zealand. He is passionate about academia, especially open science and research integrity, and is pursuing a career as a physician-scientist in neonatal medicine.