From June 21 to 22, 2017, I attended the Evidence Live conference at the University of Oxford. On reflection, one theme emerges across the diverse topics presented at the conference: the era of cross-border collaboration has arrived. By ‘border’, I mean professional, organizational, and geographical borders alike.
The old era of working in silos is ending. Many speakers at this conference pointed out that in the old paradigm, many funders set their own agendas without wider input. Many investigators conduct trials without learning from prior studies, resulting in the sad fact that a large proportion of studies have serious methodological limitations — the ‘scandal of poor medical research’ described by speaker Doug Altman. After completion, results are not reported (or not reported in full), and funders do not audit this waste of money. Of the studies that do get published in peer-reviewed journals, many still suffer from poor quality. The bi-directional link between the publication of primary studies and systematic reviews is broken: primary studies may have added more noise than signal to the evidence base, and when there is a true signal, systematic reviews fail to follow up accordingly. Even when the evidence is accurate and up to date, many local quality improvement activities have failed. Lastly, patients are not involved in any of these steps.
In the newly emerging paradigm, first and foremost, patients and the public are involved to help improve health care research and delivery, as pointed out by speakers Simon Denegri, Trish Groves, and James Munro. I was transformed hearing about the work done by the James Lind Alliance on involving patients in prioritizing research agendas. Reflecting on my own research, I feel a little ashamed that I never engaged patients in the process, and thus I am not at all confident that my research is relevant to them. I also got super excited after learning about Care Opinion. Traditional evidence-based medicine tells us what works; new experience-based medicine tells us what matters. Care Opinion is a trusted resource for clinicians to learn from patients.
Second, trials could be done in a more collaborative and creative way. Speaker Lars Hemkens gave an example of a trial in Switzerland where there was no need for a dedicated research nurse to collect data on the primary outcome — 30-day mortality — because it is universal practice in Switzerland to follow up with patients 30 days after discharge. If we could incorporate this type of routinely collected data, the conduct of trials could become much more efficient.
Third, after the research is completed, funders and journals need to enforce transparency in reporting, as pointed out by speakers Iain Chalmers and Doug Altman in their talks on the REWARD Alliance and the EQUATOR Network. When we find breaches of research integrity, we need an open feedback loop from peers, as suggested by Ben Goldacre in his talk about audit and accountability of research quality.
When a primary study is published, authors need to inform systematic reviewers working on relevant topics so that the review can be updated (when necessary) — a consensus that presenter Rabia Bashir and the audience reached after she presented her finding that systematic reviews are not being updated in areas where evidence is accumulating. This is easier said than done (as mentioned earlier, many primary studies are performed without knowledge of an existing systematic review). However, we need not despair, because it is becoming easier for systematic reviewers to identify randomized controlled trials. Presenter Anna Noel-Storr demonstrated the incredible success of Cochrane Crowd, which allows tens of thousands of volunteers around the world to contribute to the classification of articles. Crowdsourcing is not happening only at the level of article screening: speaker Jon Brassey is developing a “community rapid review” feature for the TRIP database (an EBM-enriched search engine), which will allow users of TRIP to construct a rapid review after using TRIP to answer their clinical questions. No matter where we live and what we do, we all benefit from faster and more accurate reviews of evidence. One quick comment on this subject: it is still not very clear when rapid reviews might or might not work. I proposed a pilot-tested study design in my presentation and called for wider participation.
However, even if we have the best evidence about what seems to improve quality of care, it describes only a very small portion of what might happen in the real world, and it is hard to reproduce. As speaker Mary Dixon-Woods pointed out in an earlier paper, ‘the superficial outer appearance of the intervention or QI [Quality Improvement] method is reproduced, but not the internal mechanisms (or set of mechanisms) that produced the outcomes in the first instance’ (Dixon-Woods and Martin, 2016). She expanded this paper into a mind-changing talk, arguing that too much improvement work is undertaken in isolation at a local level, failing to pool resources and develop collective solutions. We need collaboration at the system level, or we are merely introducing new hazards in the process.
At the end of the first day, conference attendees were asked to write a job list for professionals from other disciplines, so that, for example, researchers can understand what patients need and policy makers can understand what clinicians want. This is such a creative way to encourage listening to and learning from each other. The era of collaboration has come! I am inspired to go back to my job and implement what I have learned.