Has anyone else noticed the dire state of research reporting? After recently analyzing 500 research articles for a systematic literature review I was shocked. “Why don’t they report the number of patients in each treatment group; why did patients drop out of the study; why are the statistics so poorly described!?” I irritably demanded of the PhD student one desk over. He sighed, shrugged, and turned back to his work without answering.
All my questions boiled down to a lack of good research reporting. As a result, I was forced to exclude relevant studies, mark ‘unclear’ on my quality-of-evidence form countless times, and, more than anything, was left concerned about the volume of research that is wasted due to poor reporting.
I therefore decided to take a brief detour from my PhD thesis (not a bad idea from time to time) to get an idea of how bad the problem is.
What I found did not surprise me: poor reporting is one of the biggest challenges currently facing evidence-based medicine.
I thought excluding a handful of published studies from my review was a waste. As it turns out, that’s just the tip of the iceberg: up to half of all clinical trials are never even published.1
Why spend millions of pounds, and thousands of work hours, conducting a randomized controlled trial and then not publish it?
Ask big pharma, who routinely withhold trial data. The influenza drug Tamiflu (oseltamivir) is a well-known example. Roche, the manufacturer of Tamiflu, funded clinical trials to test the effectiveness of the drug but subsequently withheld huge portions of the data from publication. Only after a drawn-out, four-year public relations campaign were researchers finally able to reanalyze the full set of data. The investigators, Tom Jefferson, Carl Heneghan, and colleagues, found Tamiflu to be less effective and potentially more harmful than reported in the previously published, limited set of studies.2-4 However, this fully informed review came only after governments around the world had spent hundreds of millions stockpiling the drug.5
Even when studies are not withheld on purpose, particular types of studies are more likely to be published than others. Randomized controlled trials (RCTs) are more likely to be published than observational studies, and studies with statistically significant findings are more likely to be published than those with non-significant findings.6
Why are particular types of research more likely to get published?
Academic journals may prefer to publish “gold standard” studies (RCTs) with significant findings as they attract a lot of attention. Meanwhile, researchers may be reluctant to spend time writing up and submitting observational studies with non-statistically significant results for fear of rejection by editors.
Whatever the reason for non-publication of data, the end result is the same: only a subset of research findings has been reported, and the published literature is a biased representation of all conducted studies. Clinicians and policy makers are informed by an incomplete evidence base and may be wasting resources on ineffective or even harmful interventions.
Despite the availability of 81 published guidelines for reporting health research, missing information and poorly defined interventions, outcomes, and analyses still plague the medical literature.7
Studies have found that adequate intervention descriptions are available in only approximately 60% of clinical trial reports.8 Furthermore, up to 50% of published randomized trials alter their primary outcome of interest between publication of the study protocol and final reporting of results.6
Why does reporting deviate so much from established standards?
Lack of awareness, oversight, attempts to salvage a study with non-significant results, or intentional efforts to mislead.
Poor reporting translates to less confidence in study results, in the reported effectiveness of interventions, and in the quality of the evidence. Reporting must improve to reduce uncertainty in the evidence that informs decision making.
Over twenty years ago, evidence-based medicine (EBM) was conceived as a new paradigm for clinical teaching and practice that would combine research evidence with clinical expertise and patient needs and preferences.9 According to Trish Greenhalgh and colleagues, we still don’t have it right, in part because research evidence is not presented in an accessible way for end users.10
Why is research not accessible for end users?
Rarely do research articles include short plain-language summaries, creative or appealing infographics, or decision aids to help make the evidence usable for clinicians, guideline developers, policy makers, and patients. Even for trained researchers, small print and extensive results tables are a barrier to using evidence.
If research evidence is not accessible to evidence users, the paradigm of EBM falls apart.
A separate issue exists in the way research is reported to the public: research results are often exaggerated in the mainstream media. A newspaper headline might read: “Oxford researchers show that green tea prevents cancer”. A quick glance at the actual article would reveal that, of course, they have not. The researchers’ observational study (no treatment actually administered) may have shown a slightly lower occurrence of cancer in people who drink more green tea as part of their daily routine. Yes, it’s an association. No, it does not mean green tea prevents cancer.
Why are research results exaggerated in the mainstream media?
Big headlines sell newspapers and raise institutional and researcher profiles.
While it’s easy to blame journalists for such sensationalism, academic institutions play a role as well. Over a third of research-related press releases sent by UK universities to journalists contain exaggerated advice, exaggerated causal claims, or exaggerated inferences to humans from animal research.11
The cost of glorified research results is an ill-informed public that may change lifestyle choices and develop treatment preferences based on inaccurate information.
So, what has this detour from my PhD shown me? I’m not alone in my disenchantment with research reporting. Deficits in reporting have been widely recognized, the scope of the problem extends beyond published articles, and the consequences are far worse than a minor irritation while I complete my systematic review.
However, all is not lost. The world’s experts in evidence-based medicine have solutions to offer and will be presenting them at Evidence Live 2015 on April 13th and 14th at the University of Oxford.
Nik Bobrovitz is a Clarendon Scholar and PhD Student at the Nuffield Department of Primary Care Health Sciences, University of Oxford. His doctoral research focuses on the use of unscheduled secondary care including emergency hospital admissions. He can be reached at firstname.lastname@example.org or on Twitter @nikbobrovitz