
Publication bias

In 1995 Sterling reported that nothing had changed in 30 years when it came to publication bias: in his 1959 study of publications in four major psychology journals, he had reported that 97% of published studies were statistically significant.

A 2010 HTA systematic review reported that half of all trials never published results, and that trials with positive results were twice as likely to be published as those with negative results.

Poor quality research

In 1994 it was reported that researchers commonly used the wrong techniques, used the right techniques wrongly, misinterpreted results, reported results selectively and often came to unjustified conclusions. The conclusion at the time was that “we need less research but better research.”

In 2009, a Lancet series on avoidable waste in research estimated that not much had changed: 85% of research spending currently goes to waste.

Evidence production problems

Bad Pharma collated countless problems with the production of evidence: many medical tests and trials are profoundly flawed, and evidence is often hidden by drug companies to the detriment of patient care. Add to this a poor regulatory system, and it is increasingly clear that a radically different system for producing evidence is needed in place of the current defective one.

Research more likely to be false than true

John Ioannidis pointed out that research findings are more likely to be false than true, particularly when effects are small, when many outcomes are tested and when financial interests are greater. Moreover, “with increasing bias, the chances that a research finding is true diminish considerably”.
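The intuition can be sketched with the positive predictive value (PPV) formula from Ioannidis’ 2005 paper (a simplified summary of his model, not a full reproduction): the probability that a claimed positive finding is true is PPV = (1 − β)R / (R − βR + α), where R is the pre-study odds that a tested relationship is real, α is the type I error rate and 1 − β is the study power. For example, with α = 0.05, power of 0.2 (typical for small effects in small studies) and pre-study odds of 1 in 10 (as when many outcomes are tested), PPV = 0.02 / 0.07 ≈ 0.29, so most “positive” findings would be false; introducing a bias term reduces the PPV further still.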

Reporting bias

An empirical analysis of 102 randomised trials found that half of the efficacy outcomes and two thirds of the harm outcomes were incompletely reported, with statistically significant outcomes more than twice as likely to be fully reported as non-significant ones.

The COMPare Project’s analysis of outcome switching in clinical trials in the top five medical journals reports that little has changed: on average, each trial in the cohort reported only 62% of its specified outcomes.

Reporting bias is a widespread phenomenon; its effects include overestimation of efficacy and underestimation of safety.

Ghost authorship

A substantial proportion of trials involve ghost authorship, which often goes undisclosed and undermines the validity of the results.

Financial and non-financial conflicts of interest

Financial and non-financial conflicts of interest are widespread amongst academic institutions and researchers, and are associated with pro-industry conclusions, restrictions on publication and data sharing, and the pursuit of “private interests”.

Estimating costs of new treatments

An analysis of 32 cancer drugs reported that 2014 drug costs were on average six times higher than those in 2000: an average of $11,325 per month compared with $1,869 per month in 2000.

Under-reporting of harms

86% of 92 Cochrane reviews did not include data on the main harm outcome, and the primary harm outcome was inadequately reported in 76% of the 931 trials included in these reviews.

Delayed withdrawal of harmful drugs

An analysis of 462 medicinal products withdrawn from the market found that only 43 (9%) were withdrawn worldwide, and that the median interval between the first reported adverse reaction and the first withdrawal was 6 years.

Lack of shared decision-making strategies

A systematic review of 39 studies reported that no robust studies have evaluated shared decision-making strategies, meaning it is difficult to advise which strategy, if any, to adopt when informing patients in real-world practice.

Trials lacking external validity

An analysis of 20,000 Medicare patients with a principal diagnosis of heart failure reported that only 13–25% met the eligibility criteria for three pivotal RCTs.

A systematic sampling review of the eligibility criteria of 283 RCTs published in high-impact general medical journals between 1994 and 2006 reported that common medical conditions led to exclusions in 81% of trials and commonly prescribed medications in 54%.

Amongst 155 RCTs of drugs frequently used by elderly patients with chronic medical conditions, only three exclusively included elderly patients.

Regulatory failings

Pharmaceutical industry influence has led to shorter review times, fast-track reviews and less thorough reviews. User fees mean drug companies are the main clients of the US FDA; as a consequence, what drugs are developed and how they are tested is largely left up to the pharmaceutical industry, and the quality of regulatory reviews is declining.

Criminal behaviour

From 2009 to 2014 the pharmaceutical industry received fines totalling $13 billion for criminal behaviour and civil infringements, behaviour that has largely gone unnoticed. The three worst offenders were GSK ($3 billion) for marketing Paxil to children and misleading the FDA; Pfizer ($2.3 billion) for misbranding Bextra with “the intent to defraud or mislead”; and Johnson & Johnson ($2.2 billion) for the illegal promotion of prescription drugs.

Rise of surrogate outcomes

Surrogate outcomes are easier to measure than the patient outcomes that matter, and their use is on the rise. However, overinterpretation of their effects can lead to misinterpretation of the evidence, often ignoring important harms.

Unmanageable volume of evidence

Only a small minority of the trials done are analysed in up-to-date systematic reviews.

Clinical guidelines beset by major structural problems

“Despite repeated calls to prohibit or limit conflicts of interests among authors and sponsors of clinical guidelines, the problem persists.”

Too much medicine

Overdiagnosis is common, occurs particularly frequently with cancer screening, and extends across a range of conditions; yet health professionals are currently poorly informed about it.

Prohibitive costs of drug trials

The cost of clinical drug trials has risen significantly, to the point where it is hindering the development of new medicines and preventing trial replication.

Trials stopped early for benefit

A significant number of RCTs stop earlier than planned because of apparent benefits that overestimate the true treatment effect. These trials often receive greater media attention and influence clinical practice. This is particularly so when the number of events is small.
