
2015 Session Abstracts – Monday April 13th

Session A1 Communicating Evidence
11:00 Monday April 13th
Chair: Jack O’Sullivan

Assessing how much confidence to place in findings from qualitative evidence syntheses: the CERQual approach
Heather Menzies Munthe-Kaas2, Claire Glenton3, Simon Lewin3, Benedicte Calsen4, Chris Colvin5, Jane Noyes6, Arash Rashidian7, Andrew Booth8, Ruth Garside1
1European Centre for Environment and Human Health, University of Exeter, Exeter, UK, 2Norwegian Knowledge Centre for the Health Services, Oslo, Norway, 3Global Health Unit, NOKC, Oslo, Norway, 4Uni Rokkansenteret, Bergen, Norway, 5School of Public Health and Family Medicine, University of Cape Town, Cape Town, South Africa, 6School of Healthcare Sciences, Bangor University, Bangor, UK, 7National Institute of Health Research, Tehran, Iran, 8School of Health and Related Research, University of Sheffield, Sheffield, UK
Background: Systematic reviews are increasingly used to synthesise findings from qualitative studies. However, it is difficult to use these findings to inform decisions because methods to assess how much confidence to place in these synthesis findings are poorly developed.
Aim: To describe a new approach for assessing how much confidence to place in evidence from reviews of qualitative research.
Methods: The Confidence of the Evidence from Reviews of Qualitative research (CERQual) approach was developed through review of existing relevant tools; working group discussions; and piloting on several qualitative evidence syntheses.
Results: CERQual bases assessment of confidence on four components:
– Methodological quality of individual studies contributing to a review finding, assessed using a quality-assessment tool for qualitative studies.
– Coherence of each review finding, assessed by examining the extent to which a review finding is based on data that is similar across multiple individual studies and/or incorporates (plausible) explanations for any variations across studies.
– Relevance of a review finding, assessed by determining to what extent the evidence supporting a review finding is applicable to the context specified in the review question.
– Sufficiency of data supporting a review finding, assessed by an overall determination of the degree of richness and/or scope of the evidence and quantity of data supporting a review finding.
After assessing each component, an overall judgement of the confidence in each review finding is made. Confidence can be judged as high, moderate, low, or very low. This assessment should be described and justified in a transparent manner, preferably in a summary of qualitative findings table that includes narrative statements.
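As an illustration only (the finding text and field labels below are invented for this example and are not official CERQual wording), a single row of such a summary of qualitative findings table can be thought of as a small structured record:

# Hypothetical sketch of one row of a summary of qualitative findings table.
# Field names and the example finding are illustrative, not CERQual-prescribed terms.
finding = {
    "review_finding": "Parents valued continuity of carer during home visits.",
    "methodological_quality": "minor concerns",
    "coherence": "no concerns",
    "relevance": "moderate concerns",
    "sufficiency_of_data": "minor concerns",
    "overall_confidence": "moderate",  # high / moderate / low / very low
    "explanation": "Downgraded for relevance: most studies came from one type of setting.",
}
print(finding["overall_confidence"])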
Conclusions: CERQual provides a transparent method for assessing the confidence in evidence from reviews of qualitative research. Like the GRADE approach, it may facilitate use of these findings alongside reviews of effects, and in guideline development.

Our research, your reality. Evidence, experience and engagement through social media.
Sarah Chapman
UK Cochrane Centre, Oxford, UK
Background: Cochrane’s strapline is ‘trusted evidence, informed decisions, better health’, but many of those for whom Cochrane evidence is potentially relevant will never have heard of Cochrane or look for evidence to aid their decision-making about health. Evidence can also be difficult and time-consuming to find and understand. Social media has great potential for evidence-sharing, allowing engagement rather than just dissemination and removing many obstacles to communication between producers of evidence and potential users.
Aims: To raise awareness of using evidence to inform decision-making; to share it in ways that are useful and usable; to encourage discussion of evidence in the context of ‘expert’ (patient or professional) views.
Methods: Evidence is shared in weekly blogs and across several social media platforms, with language and content tailored for different audiences. A focus on evidence that is new or related to current topics, news stories or health awareness events increases its relevance. Many of the blogs consider evidence in the context of patients’ or professionals’ experience. On Twitter, evidence is shared routinely, responsively and opportunistically.
Results: A range of people have contributed to, or written, blogs, including a nine-year-old with cystic fibrosis. These have been widely shared and discussed, mostly on Twitter, where we have also shared evidence by responding to those seeking reliable information. We have joined popular conversations through the use of hashtags, such as #royalbaby, which provided opportunities to bring evidence on pregnancy and childbirth to new audiences. Our reach has grown and we have been particularly successful in engaging with nurses and stimulating debate about evidence and practice.
Conclusions: We have successfully used social media to bring together producers and potential users of evidence. We have made evidence more accessible and relevant and established a presence on social media through which we can continue to foster engagement with trusted evidence.

Evidence-Based Medicine in the Improvement of Medical Bad News Communication
Nazli Elsa Koc1, Alan Alper Sag3
1American Robert College, Istanbul, Turkey, 2Section of Interventional Radiology, Department of Radiology, Koç University School of Medicine and Hospital, Istanbul, Turkey, 3School of Medicine, Koc University, Istanbul, Turkey, 4Section of Interventional Radiology, Department of Radiology, VKF American Hospital, Istanbul, Turkey
Introduction: Compassionate and effective delivery of bad news exemplifies the art of medicine. As such, medical student education must change to meet the medical, personal, cultural and psychological needs of an increasingly diverse group of patients. Many new and complementary approaches are now available to the modern medical educator. Flexible approaches based on insight into the patient experience are quickly changing the way we train doctors.
Aims:
1) To review available literature regarding educational methods for medical bad news delivery;
2) To propose evidence-based and patient-centered directions for future education and research.
Methods: Institutional Review Board approval was not required as this study did not include patient data. The literature search included English-language articles in PubMed, Scopus, and the Cochrane Library from the last 50 years. Included articles emphasized medical student education and trials on graduate students. Articles were critically reviewed for methodology and outcomes data.
Results: Of a total of 43 publications identified using the broad search parameters, 21 relevant publications concerning physician training were selected, 4 of which are specific to undergraduate medical education. These were supported by 12 additional articles and 11 tables concerning patient preference and experience. Results of comparable studies were tabulated and aggregate descriptive statistics were calculated from them.
Conclusions: The delivery of bad news is a core competency for physicians and continues to evolve as a training modality. Newer methods emphasize evidence-based and patient-centered communication whereas older methods can still retain value for their “tried and true” experience-based significance. Future research directions on this topic encourage medical student education at an earlier stage, combining evidence-based approaches with empathy.

Using qualitative research to inform the design of large scale clinical trials: An example from the ACTIvATeS feasibility study
Francine Toye1, Mark A Williams2, Esther M Williamson2, Jeremy Fairbank2, Sarah E Lamb2
1Oxford University Hospitals NHS Trust, Oxford, UK, 2University of Oxford, Oxford, UK
Introduction: Adolescent Idiopathic Scoliosis (AIS) is a three-dimensional deformity of the spine that results in a lateral curve, rotation and flexion/extension of the vertebrae. It occurs at or near the onset of puberty. Current management includes monitoring, bracing, exercise and surgery. This qualitative study was part of a larger project that evaluated the feasibility of conducting a randomised controlled trial of exercises for young people with AIS (The ACTIvATeS Study ISRCTN90480705). The innovation was to explore valued aspects of care from the perspective of patients, their parents and physiotherapists, with a view to bringing people’s experience into research design.
Aim: to explore participants’ perception of the trial, the issues influencing exercise adherence, and the appropriateness of the chosen outcome measurement.
Methods: We used semi-structured interviews to explore the experience of six adolescents with scoliosis, eight parents and four physiotherapists. We used the methods of Interpretive Phenomenological Analysis to interpret the data.
Results: We present a model that incorporates the following valued aspects of care in exercise interventions for AIS:  (a) it won’t necessarily change the bony bits; (b) I didn’t realise I stood like that; (c) she doesn’t mention the pain now; (d) it gives her a sense of control; (e) creating a space for concerns; (f) they talked to her like a person.
Conclusions: Valued aspects of care may be different for different stakeholders. Our findings highlight the importance of including different people’s perspectives in research design. The findings raise the issue of how we determine which valued outcomes should drive the research agenda and the management of AIS. Choices about valued outcomes have implications for intervention, research and commissioning: namely, who decides what a valued outcome is, how do we measure these outcomes, and how do we decide which valued outcomes to fund?

Engaging Patients and Stakeholders in Public Policymaking Process Design: A Case Study
Cathy Gordon, Pam Curtis, Samantha Slaughter-Mason, Aasta Theilke, Nicola Pinson, Valerie King
Center for Evidence-based Policy, Oregon Health & Science University, Portland, Oregon, USA
Background: The Texas Health and Human Services Commission (HHSC) identified the need to improve the process it uses to develop and implement medical and dental coverage decisions. Within the context of limited resources it was critical that this publicly funded program have clear, effective, and informed policies and processes in order to best serve their clients and citizens. HHSC contracted with the Center for Evidence-based Policy (the Center) at Oregon Health & Science University to assist in the development of a transparent process framework for evidence-informed policy decisions.
Aim: To develop a transparent, evidence-informed decision making process for the Texas Health and Human Services Commission (HHSC) with input and buy-in from diverse stakeholders.
Methods: The Center designed a stakeholder engagement plan that included an online survey, key informant interviews and a series of stakeholder meetings.
Results: The Center received 274 online survey responses (out of 770 unique email addresses) from patients and other stakeholders. Data were analyzed and used to inform the conduct of 31 in-depth, qualitative interviews with both internal and external stakeholders. All findings were then presented at an all-day, in-person stakeholder meeting in Austin, Texas. Stakeholders identified six key principles for operationalizing decision-making processes: transparency, evidence-based, timely, flexible, equitable and improvement-focused.
Conclusions: Transparent policymaking frameworks are essential during this time of significant transformation in the US healthcare delivery system. Affected stakeholders can be involved in the design of these processes to assure effective implementation. Evidence-informed decision-making in public agencies needs an engagement framework that allows affected stakeholders to provide input at key points in the process. Engaging stakeholders early and often allowed Texas HHSC to create a process based on a widely agreed upon set of principles.

Attitude, Awareness and Knowledge of Evidence Based Medicine among Postgraduate Medical Students
Ayesha Memon, Fazian Qaisar
Liaquat University of Medical & Health Sciences Jamshoro, Jamshoro, Pakistan
Background: To provide better health care to patients and to take appropriate clinical decisions for them, experts in the medical field from all over the world emphasise the practice of evidence based medicine. This requires a health care provider to go through the scientific literature to find the best available option to solve the patient’s health problem. EBM not only provides better health care to patients, but also prevents inappropriate clinical decisions regarding the patient’s health.
Methodology: This cross-sectional survey was carried out in August 2013 at Liaquat University Hospital, Hyderabad/Jamshoro. A pre-tested, self-administered questionnaire previously used for similar surveys was used.
Results: The overall response rate was 94%. Of the participants, 68.1% (n=96/141) had heard the term EBM for the first time during postgraduate training. Teaching of EBM at both undergraduate and postgraduate level was strongly suggested. 95.7% (n=135/141) of the participants had never attended a workshop on EBM. 70.2% (99/141) use both books and the internet to update their knowledge. 53.2% (n=75/141) agreed that doctors’ practice needs to be audited. 85.1% (120/141) replied that they have no one around them who practises EBM. 46.8% (n=66/141) admitted that they only sometimes discuss the need for evidence based guidelines during ward rounds and OPD. 51.1% (72/141) were of the opinion that introducing EBM in undergraduate education would help produce better doctors. Barriers faced by postgraduates included research articles/reports not being readily available (38.2%, n=50/141) and lack of postgraduate interest in changing or trying new ideas (17%, 22/141).
Conclusion: Although the attitude of postgraduate students towards EBM practice in Pakistan is welcoming, they need more knowledge and training in this regard. There is therefore a strong need to incorporate the teaching of EBM at undergraduate level to promote the practice of EBM.

Session A2 Improving Conduct of Research
11:00 Monday April 13th
Chair: Paul Glasziou

The stepped wedge cluster randomised trial: an opportunity to increase the quality of evaluations of service delivery and public policy interventions
Karla Hemming, Richard Lilford
University of Birmingham, Birmingham, UK
Background: The stepped wedge cluster randomised trial (SW-CRT) is a novel research study design that is increasingly being used in the evaluation of service delivery type interventions. The design involves random and sequential crossover of clusters from control to intervention, until all clusters are exposed.
Aims: We illustrate the use of the design by giving case examples, summarise the results of an update of a methodological systematic review of the quality of reporting and provide recommendations for reporting and analysis.
Methods: A methodological systematic review of published SW-CRTs. Assessment was guided by recent developments in statistical design and analysis of these studies.
Results: The use of the SW-CRT is rapidly increasing and its areas of use are diverse. We illustrate how the design is being used to evaluate the effectiveness of a complex intervention, being rolled out across 90 UK hospitals, to reduce mortality in patients undergoing emergency laparotomy.
Quality of reporting is found to be low. In a SW-CRT more clusters are exposed to the intervention towards the end of the study than in its early stages. A result which prima facie might look suggestive of an effect of the intervention may therefore transpire to be the result of a positive underlying temporal trend.
A large number of studies do not report how they allowed for temporal trends in the design or analysis.
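One widely used way of allowing for such trends in the analysis (a sketch of the general idea, not necessarily the model used in the studies reviewed) is a cluster-level mixed model with a fixed effect for each time period:

$Y_{ijk} = \mu + \beta_j + \theta X_{ij} + u_i + \varepsilon_{ijk}$

where $Y_{ijk}$ is the outcome for individual $k$ in cluster $i$ during period $j$, $\beta_j$ is the fixed effect of period $j$ (the underlying temporal trend), $X_{ij}$ indicates whether cluster $i$ has crossed over to the intervention by period $j$, $\theta$ is the intervention effect, $u_i$ is a random cluster effect and $\varepsilon_{ijk}$ is residual error. Omitting the period effects $\beta_j$ risks attributing any secular trend to the intervention.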
Conclusions: The SW-CRT is a pragmatic study design which can reconcile the need for robust evaluations with political or logistical constraints. Quality of reporting is generally low, so consensus guidelines on reporting and analysis are urgently needed.

The IDEAL (Idea, Development, Exploration, Assessment, Long-term follow-up) Framework for medical devices: is it sufficient?
Christopher Pennell2, Allison Hirst1, Peter McCulloch1
1IDEAL Collaboration, Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK, 2Maimonides Medical Center, Brooklyn, New York, USA
Introduction: The evidence to support surgical innovations has been poor in comparison to that for medical innovations. The IDEAL Framework and Recommendations were developed to describe the stages of surgical innovation, propose appropriate evaluation methodology for each stage and provide guidance on reporting findings (www.ideal-collaboration.net). Recent debate about the adequacy of medical device evaluation in surgery has led us to ask whether IDEAL adequately describes device development.
Aims: To clarify the ability of the original IDEAL framework to describe the stages of development for innovations involving devices.
Methods: We initiated a Delphi study of 32 experts asking their views on the development of medical devices in relation to surgical procedures and IDEAL. We will conduct three survey rounds by email and conclude with a consensus meeting. Round two is currently underway.
Results: In round one, 19 surveys were completed by 20/32 experts (62.5%). Clinical surgeons comprised the greatest proportion of respondents (55%). 35% reported being researchers, methodologists, or statisticians. 25% had roles as journal editors. 10% were industry professionals. 25% reported more than one applicable professional role.
A majority of respondents (73.7%) believed devices were sufficiently different from procedures to warrant either a separate IDEAL-Device framework or an extension to the original IDEAL. 68.4% believed this should include a “stage 0” to report preclinical results and 63.2% reported stages 2a and 2b could be merged for devices. Other common themes included a) complexity and intimacy of links between devices and procedures and b) regulatory and long-term safety concerns.
Conclusions: Medical devices, while often intimately related to surgical procedures, have unique differences that are not adequately described in the original IDEAL framework. An extension to IDEAL or a unique IDEAL-Device framework is needed to guide researchers in developing and reporting innovations involving devices.

Disclosure, conflict of interest & funding issues in Urogynecology articles – a cross-sectional study
Marianne Koch1, Paul Riss1, Wolfgang Umek1, Engelbert Hanzal1
1Medical University of Vienna, Vienna, Austria, 2Karl Landsteiner Gesellschaft, Vienna, Austria
Objectives: To investigate the actual practice of how disclosures and conflict of interest statements (COI) are managed by six journals publishing urogynecology articles in the year 2013.
Materials and Methods: All urogynecology articles published in 2013 in the 6 journals – Obstetrics & Gynecology, AJOG, BJOG, Neurourology and Urodynamics (NAU), International Urogynecology Journal (IUJ), and Female Pelvic Medicine and Reconstructive Surgery (FPMRS) – were included. Each article was assessed for disclosure/COI and funding statements. Instructions to authors of the included journals were screened for requirements for disclosures and funding. Original disclosure/COI statement forms were accessible for one journal (IUJ). Information given on these forms was compared to the disclosures in the published articles. IRB approval was not required.
Results: We included 434 articles, of which 431 contained a statement on disclosure/ COI (99%). Funding statements were given in 49% of articles (212 of 434 articles).  The majority of funding statements was categorized as “grant” (n= 123), followed by “none” (n= 34), “industry” (n= 33) and “institutional” (n= 22). All investigated journals require disclosure/ COI and funding statements in their instructions to authors.
The information on original disclosure /COI forms and final statements in the published article (IUJ) was identical in 80% of articles (n= 186 of 242). In 46 articles statements were inconsistent.
Conclusion: Disclosure/COI statements have become accepted in urogynecology and are found in almost all articles. The content of the statements, however, is often incomplete and should be monitored more closely by journals and authors. Despite the requirements of journals, the reporting of funding is done inconsistently. Half of the urogynecology articles carried no statement on funding at all. This issue, as well as the completeness of disclosures, should be given more attention.

The predictive validity of GRADE
Gerald Gartlehner1, Tammeka Swinson Evans2, Andreea Dobrescu3, Isolde Sommer1, Kylie Thaler2, Kathleen Lohr2
1Danube University, Krems, Austria, 2Research Triangle Institute International, Research Triangle Park, USA, 3Victor Babes University of Medicine and Pharmacy, Timisoara, Romania
Background: Many organizations have adopted the GRADE (Grading of Recommendations Assessment, Development and Evaluation) approach to rate researcher confidence in an available body of evidence. GRADE’s definition of quality of evidence (QOE) links individual grades to the degree of confidence that estimates are close to the true effect (and thus will remain stable as new evidence accrues).
Aims
 – To determine how producers or users of systematic reviews interpret grades of QOE regarding the likelihood that effect estimates will remain stable as new studies emerge.
– To determine the predictive validity of the GRADE approach (i.e. whether GRADE discriminates reliably between evidence that remains stable and evidence that changes as new studies emerge).
Methods: We used an international web-based survey to ask producers and users of systematic reviews to assign to each grade of QOE a likelihood that treatment effects will remain stable. Using multivariate analysis of covariance, we tested whether the estimated likelihoods differed between producers and users. To determine the predictive validity, we randomly assigned 160 bodies of evidence for grading to researchers from six U.S. Evidence-based Practice Centers and Cochrane Austria. Using likelihoods from the survey as reference points, we calculated c-statistics to determine the predictive validity.
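As a simplified illustration of what a c-statistic measures in this setting (not the authors’ exact procedure, which used the survey-derived likelihoods as reference points), it can be read as the probability that a randomly chosen stable body of evidence carries a higher grade than a randomly chosen body of evidence whose estimate changed; the grade labels below are from GRADE, but the example data are invented:

# Simplified sketch: c-statistic for an ordinal predictor (GRADE level) against a
# binary outcome (effect estimate remained stable). Example data are hypothetical.
GRADE_RANK = {"very low": 0, "low": 1, "moderate": 2, "high": 3}

def c_statistic(bodies):
    """bodies: iterable of (grade, remained_stable) pairs; returns the concordance probability."""
    stable = [GRADE_RANK[g] for g, s in bodies if s]
    changed = [GRADE_RANK[g] for g, s in bodies if not s]
    pairs = concordant = ties = 0
    for s in stable:
        for c in changed:
            pairs += 1
            if s > c:
                concordant += 1
            elif s == c:
                ties += 1
    return (concordant + 0.5 * ties) / pairs

# A value near 0.5 (as reported below, 0.56-0.58) means the grades barely
# discriminate stable from changing bodies of evidence.
example = [("high", True), ("moderate", True), ("moderate", False), ("low", False)]
print(round(c_statistic(example), 2))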
Results: 244 participants completed the survey. The associated likelihoods that treatment effects will remain stable were 86-100% for high, 61-85% for moderate, 34-60% for low, and 0-33% for very low grades.  Likelihoods were similar between producers and users of systematic reviews (p>0.05). GRADE, however, did not discriminate well between bodies of evidence that remained stable and those that changed (c-scores 0.56-0.58).
Conclusion: GRADE is a suitable method for systematic review producers to convey uncertainties to users. The predictive validity of GRADE was compromised by grades of QOE that seemed, in general, too low.

Analysis of US phase 3 ClinicalTrials.gov records completed before January 1st, 2011 (n=5051; time frame: 2002 to 2014)
Jorge Ramirez
Universidad del Valle, Cali, Valle, Colombia
Introduction (background): There has been a growing concern about the selective reporting of clinical trial results (i.e., publication bias). The problem of unpublished and misreported clinical trials was the main reason behind the creation of campaigns such as AllTrials and The BMJ Open Data Campaign. The birth of clinical trial registers (e.g., ClinicalTrials.gov, EudraCT, ISRCTN, among others) more than a decade ago made it possible for the first time to identify unpublished clinical trials in public databases.
Aims: The aim of this study was to answer the following question: are US clinical trial data sufficiently shared?
Methods: Results disclosure of completed US phase 3 ClinicalTrials.gov records (n=5051; registration dates: January 1st, 2002 to January 1st, 2014) was assessed by data mining (i.e., expert search queries) in journal article databases: PubMed, Embase, Google Scholar, and EBSCO Discovery Service. Other variables, such as locations, the time difference between the registration date and start date, and results disclosure at ClinicalTrials.gov, were also analyzed.
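The exact search queries are not given in the abstract; purely as an illustration of the kind of automated lookup involved, a minimal Python sketch for checking whether a ClinicalTrials.gov identifier is indexed in PubMed via the NCBI E-utilities service might look like the following (the NCT number is a placeholder, and a zero-hit record would still need checking in the other databases):

# Hypothetical illustration only: count PubMed records mentioning a given NCT number
# using the NCBI E-utilities esearch endpoint.
import json
import urllib.parse
import urllib.request

def pubmed_hits(nct_id: str) -> int:
    """Return the number of PubMed records matching the given trial identifier."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": nct_id,   # could be restricted to the registry-ID field, e.g. f"{nct_id}[si]"
        "retmode": "json",
    })
    url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return int(data["esearchresult"]["count"])

# Zero hits here would flag the record for manual checks in Embase, Google Scholar
# and EBSCO Discovery Service before being counted as unpublished.
print(pubmed_hits("NCT00000000"))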
Results, raw data and comments related to these analyses are available in the following BMJ rapid responses:
 – Re: The US requirement to deposit trial data within a year is unworkable. http://www.bmj.com/content/347/bmj.f6449/rr/690626
– Response to J. Castellani (PhRMA): An ounce of data (i.e., 64740 data values). http://www.bmj.com/content/347/bmj.f1881/rr/762606
– Zombie statistics strikes again http://www.bmj.com/content/347/bmj.f1880/rr/763200
Supplementary explanation of research methods is available via figshare: http://dx.doi.org/10.6084/m9.figshare.1121675
Results: Over half of the ClinicalTrials.gov study records completed before January 1st, 2011 (n=2957) are unpublished (50.8%).
Over half of these study records were retrospectively registered.
Over half of these records have not disclosed their results at ClinicalTrials.gov.
Conclusions: US phase 3 clinical trials data are not sufficiently shared.

Session A3 Evidence for Diagnostics
11:00 Monday April 13th
Chair: Ann Van den Bruel

The (pregnant) elephant in the room: a tool to speed up delivery of Cochrane Reviews
Mercedes Torres Torres1, Aakash Rana1, Benjamin Stark2, Sebastian Hagmann2, Constanze Knahl2, Stefanie Polzmacher2, Annabelle Wolff2, Clive E. Adams1
1University of Nottingham, Nottingham, UK, 2University of Applied Sciences, Ulm, Germany
Background: At the UK Cochrane Contributors meeting in Manchester (2014), Trisha Greenhalgh drew our attention to the fact that the gestation period of a Cochrane review is the same as that of an elephant. We argue that there remains a place for the slow, methodical and painstaking, but recognise the danger of Cochrane being overtaken on the outside lane. Our outdated reviews lie unread and exhaust authors and editors; the situation is impossible and unsustainable. Too few Cochrane resources have been developed to assist with fast reviewing. We propose the proof-of-concept RevMan-HAL: an open source tool, largely programmed by volunteers, to assist production of Cochrane reviews in their current form.
Methods
 – Developed in JAVA using NetBeans.
– Employing:
– Five data-management students from Ulm, Germany, with limited JAVA experience (50% time, 8 weeks).
– A Computer Science post-doctoral researcher (50% time, 12 weeks).
– A Masters student (50% time, 12 weeks).
– Time of those working at the Editorial base (ME, Co-Ed).
Results
RevMan-HAL:
– Uses the structure of the review title – INTERVENTION X versus INTERVENTION Y for CONDITION Z – to suggest text for the relevant sections of the Background (a minimal sketch of this step appears after this list).
– Uses the structure of the PRISMA table to automatically create text for the section relating to results of the search.
– Uses labelling of outcomes to input text and references in the subsection relating to Rating Scales.
– Uses the structure of the data analysis to automatically generate text for the ‘Effects of Interventions’ section, adding in clearly written, formatted [English/German/Spanish/Chinese] text, correct numbers and hyperlinks to graphs.
– Uses the structure of the SoF table to automatically generate text in the Discussion section and Results section of the Abstract.
– Uses the structure of the SoF table to output a Wikipedia-compatible table for easy importing into the relevant online page.
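As a minimal illustration of the first of these steps (the real tool is written in Java; the regular expression and template wording here are invented placeholders, not Cochrane text):

# Hypothetical sketch: parse a review title of the form
# "INTERVENTION X versus INTERVENTION Y for CONDITION Z" and suggest background text.
import re

TEMPLATE = ("This review compares {x} with {y} for people with {z}. "
            "Both {x} and {y} are used in the management of {z}.")

def suggest_background(title: str) -> str:
    """Fill a background template from the structured review title."""
    m = re.match(r"(?P<x>.+?) versus (?P<y>.+?) for (?P<z>.+)", title, flags=re.IGNORECASE)
    if m is None:
        raise ValueError("Title does not follow the 'X versus Y for Z' structure")
    return TEMPLATE.format(x=m["x"], y=m["y"], z=m["z"])

print(suggest_background("Haloperidol versus placebo for schizophrenia"))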

A road map for efficient and reliable network meta-analyses: what busy clinicians should look for in a published article
Andrea Cipriani1, Anna Chaimani2, Georgia Salanti2, Stefan Leucht3, John Geddes1
1Department of Psychiatry, University of Oxford, Oxford, UK, 2Department of Hygiene and Epidemiology, University of Ioannina, Ioannina, Greece, 3Department of Psychiatry and Psychotherapy TU-München, Munich, Germany
Background: Tools developed to evaluate the extent to which findings from network meta-analysis (NMA) are valid and useful for decision-making are quite complex and time-consuming. To improve clinical practice and patient care, clinicians should quickly identify which NMAs deserve further attention.
Aims: To propose a framework that busy clinicians could apply to NMAs to assess the methodological robustness and reliability of their results.
Methods: Focusing on a few key elements of NMA, we re-analyzed the studies included in an NMA recently published in the BMJ. We used the methods reported in the article (both a Bayesian hierarchical model and a multivariate meta-analysis model in Stata) and compared our findings with the original publication.
Results: We found some discrepancies which materially affected study findings. The validity of NMA results depends on the plausibility of the transitivity assumption. As in pairwise meta-analysis, the risk of bias introduced by limitations of primary studies must be considered first. Judgment should be used to assess the plausibility of transitivity in a network of trials and to decide whether differences in the distributions of effect modifiers across studies are large enough to make NMA invalid. Moderators include clinical similarity (i.e. patients’ characteristics, interventions, settings, follow-up, outcomes) and methodological similarity (i.e. study design, risk of bias). Unlike transitivity, inconsistency can always be evaluated statistically, and apparent inconsistency should prompt scrutiny of primary studies’ data for errors.
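As a simplified illustration of how consistency can be checked statistically (not the Bayesian hierarchical or multivariate Stata models used in the re-analysis), a Bucher-style comparison of direct and indirect evidence within a single loop of treatments can be sketched with hypothetical log odds ratios and standard errors:

# Illustrative sketch of a Bucher-style consistency check on the log odds-ratio scale.
# All numbers below are hypothetical.
import math

def indirect(ab, bc):
    """Combine A-vs-B and B-vs-C estimates (log OR, SE) into an indirect A-vs-C estimate."""
    (lor_ab, se_ab), (lor_bc, se_bc) = ab, bc
    return lor_ab + lor_bc, math.sqrt(se_ab ** 2 + se_bc ** 2)

def inconsistency_z(direct_ac, indirect_ac):
    """z-statistic for the difference between the direct and indirect A-vs-C estimates."""
    (d, se_d), (i, se_i) = direct_ac, indirect_ac
    return (d - i) / math.sqrt(se_d ** 2 + se_i ** 2)

ind = indirect((0.40, 0.15), (-0.10, 0.20))          # A vs B, B vs C
print(round(inconsistency_z((0.55, 0.18), ind), 2))  # large |z| suggests inconsistency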
Conclusions: NMAs can be considered a piece of scientific work because they produce new knowledge. However, as for systematic reviews and standard meta-analyses, findings from NMA should be replicable. Published reports of NMA should include all the information needed to fully understand how the study was conducted and to let clinicians independently assess the validity of the analyses and the reliability of findings. With publication, NMA datasets should be made freely available online on journals’ websites.

Public Health Intervention Research to Improve Health Inequalities
Hannah Dorling, Liz Ollerhead, Claire Kidgell, Phil Taverner
NIHR Evaluation Trials and Studies Coordinating Centre (NETSCC), Southampton, Hampshire, UK
Introduction: The National Institute for Health Research (NIHR) Public Health Research (PHR) Programme aims to generate evidence by funding evaluations of non-NHS interventions intended to improve the health of the public and reduce inequalities in health. It is important that the programme funds high-quality public health research which addresses health inequalities, in order to produce robust evidence that can be used by public health decision-makers. However, in many cases it is not explicit from the research proposal how the project will specifically address inequalities.
Aims: The aim of the research was to establish how projects currently funded by the PHR programme are addressing health inequalities.
Methods: A health inequalities intervention framework developed by Bambra et al. was used to map PHR funded studies and explore what types of interventions are used to tackle health inequalities. The framework divides interventions into four levels, underpinned by two different approaches. Case studies were used to illustrate the different levels of interventions.
Results: A total of 57 PHR projects were categorised using the framework; 16 PHR research projects were classified as strengthening individuals, 24 as strengthening communities, 15 as improving living and school/work conditions, and 2 as promoting healthy macro policies. 18 were classified as targeted interventions whereas 39 were classed as universal.
Conclusions: Mapping the interventions being evaluated by the PHR programme to a typology differentiated health inequality interventions by their underlying theories of how and why the measures are expected to have an impact, which would have important implications for commissioning decisions made by local public health decision-makers. Consequently, it is important for applicants to the PHR Programme to explicitly state how the proposed research will address inequalities, using a defined framework. This will help ensure the translation of research evidence into public health practice.

Feasibility phase methods to inform a pragmatic large-scale randomised controlled trial – the SARAH trial methods and results
Mark A Williams1, Esther M Williamson1, Peter J Heine2, Vivien Nichols2, Matthew J Glover3, Melina Dritsaki2, Jo Adams4, Sukhi Dosanjh2, Martin Underwood2, Anisur Rahman4, Christopher McConkey2, Joanne Lord3, Sarah E Lamb1
1University of Oxford, Oxford, UK, 2University of Warwick, Coventry, UK, 3Brunel University, Uxbridge, UK, 4University of Southampton, Southampton, UK, 5University College London, London, UK
Introduction: Rheumatoid Arthritis (RA) has a substantial effect on hand function and therefore the quality of life and productivity of millions of people globally. The effectiveness of exercise for improving hand/wrist function in people with RA is uncertain. Feasibility phases provide opportunity to modify trial design using information from patients and clinicians.
Aims: To optimise trial design/interventions using a feasibility phase prior to a pragmatic RCT. To estimate the clinical and cost-effectiveness of an exercise programme for patients with RA of the hands/wrists.
Methods: The acceptability of the trial design and interventions was tested in a feasibility phase. A subsequent multi-centre RCT involving 17 English NHS trusts recruited patients with RA with pain/dysfunction of the hand/wrist joints who were on a stable drug regimen for ≥3 months. Participants were randomised to usual care or usual care plus an exercise programme. Follow-up was at 4 and 12 months post-randomisation. Outcome assessors were blinded. The primary outcome was the 12-month Michigan Hand Outcomes Questionnaire (MHQ) hand function subscale score.
Results: An 8-month feasibility phase allowed rapid establishment of effective randomisation and acceptable interventions for the main trial. 490 participants were recruited, and 89% of participants responded at 12 months. The exercise programme significantly improved hand function compared to usual care at 4 and 12 months (mean difference [95%CI] 4.60 [2.22 to 6.97] and 4.35 [1.60 to 7.10] respectively). There was no difference in pain scores or adverse events. The programme is likely to be cost-effective (12-month mean difference in QALYs = 0.01, 95%CI -0.03 to 0.05; corresponding ICER of £10,689).
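For readers unfamiliar with the metric, the incremental cost-effectiveness ratio (ICER) is simply incremental cost divided by incremental effect; back-calculating from the figures above (illustrative only, since incremental costs are not reported in this abstract and the QALY difference is rounded):

$\text{ICER} = \dfrac{\Delta\text{cost}}{\Delta\text{QALYs}} \;\Rightarrow\; \Delta\text{cost} \approx £10{,}689 \times 0.01 \approx £107 \text{ per participant.}$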
Conclusions: The SARAH exercise programme was acceptable to participants and both clinically effective and cost-effective compared with usual care. Following this pragmatic RCT, clinicians should consider providing the SARAH exercise programme for NHS patients with stable RA of the hands/wrists.

Developing a conceptual framework for going from evidence to coverage decisions
Elena Parmelli, Laura Amato, Marina Davoli
Department of Epidemiology, Lazio Regional Health Service, Rome, Italy
Background: DECIDE, a 5-year project funded by the EU’s FP7, aims at improving the dissemination of evidence-based recommendations by building on the work of the GRADE Working Group to develop and evaluate methods that address the targeted dissemination of guidelines.
Within this project we are developing tools to assist policymakers to systematically and transparently consider factors that should influence decisions about whether to pay for the introduction of an intervention (coverage).
Objectives: To inform the development of a conceptual framework for going from evidence to coverage decisions (EtD), using input collected through a structured consultation of stakeholders.
Methods: The EtD framework includes criteria identified as necessary to inform the process that goes from evidence to coverage decisions. We collected stakeholder feedback through a structured questionnaire on the main features of the EtD, exploring dimensions such as comprehensiveness, relevance, applicability, simplicity, logic, clarity, usability, suitability and usefulness. We aimed to collect suggestions and comments about the EtD that could be used to improve it.
Results: A total of 103 people accessed the questionnaire, but only 77% of the questionnaires were completed.
Stakeholders generally liked the design and structure of the EtD, finding it adequate for the intended purpose (80%), and gave positive judgments about its simplicity (73%) and usefulness (76%).
According to the feedback collected, all the factors included are relevant for making coverage decisions (79%) and are presented and organised in a clear (67%) and logical (86%) way that helps stakeholders through the process.
The main criticisms related to the comprehensiveness (47%) of the information and to its usability (51%) by people responsible for coverage decisions.
Conclusions: The EtD framework received positive feedback on almost all the dimensions explored. The comments collected were used to make changes and refine the content of some criteria.

Session A4 Problems in EBM
11:00 Monday April 13th
Chair: David Nunan

Establishing and prioritising a local health research agenda
Darren Moore, Rebecca Abbott, Morwenna Rogers, Alison Bethel, Ken Stein, Jo Thompson-Coon
University of Exeter, Exeter, UK
Introduction/Background: By involving a wide group of stakeholders (including service users) at all stages of the research process, from the inception of research ideas through to delivery of outputs, a portfolio of clinically relevant, locally tractable and patient-informed projects can be established.
Aims: This paper describes and evaluates the research prioritisation process being used by NIHR’s Collaboration for Leadership in Applied Health Research and Care for the South-West Peninsula (PenCLAHRC) – a partnership of all the local NHS organisations across Somerset, Devon and Cornwall, plus the Universities of Exeter and Plymouth.
Method: PenCLAHRC seeks research questions from a wide range of its partners, including members of the public. Questions are received through a number of routes, including individual submission via the PenCLAHRC website and during engagement with service users, clinical teams and organisations around the use of evidence. Questions received are prioritised by involving all PenCLAHRC stakeholders (including our Peninsula Patient and Public Involvement group). Prioritisation is based on a set of explicit criteria which include importance, local relevance and feasibility. In 2014 we piloted a novel approach consisting of two rounds of electronic voting, followed by a face-to-face meeting to discuss and rank the prioritised questions.
Results: In the current 2014 round of question prioritisation, 72 questions were prioritised to 50 and then to nine questions after two rounds of electronic stakeholder comments and voting. Stakeholders are currently (December 2014) considering further details regarding these nine questions. They will discuss and vote on these questions, leading to a number being adopted. The process of question generation and prioritisation has been evaluated by stakeholders. Feedback indicates that voting electronically, sharing comments on questions and more than one round of voting were preferred. Issues were raised regarding the time available for prioritisation activities and the quality of some of the questions submitted.

Effective evidence based medicine through optimal reporting of research: Lessons from the NETSCC Research on Research programme
David Wright, Matt Westmore, Elaine Williams, Amanda Young
National Institute for Health Research Evaluation, Trials and Studies Coordinating Centre (NETSCC), Southampton, UK
Background: Over US$100 billion is spent each year on biomedical research worldwide. A problem for EBM is waste through poor / irrelevant questions, non-publication of results or limited usability of findings. The NIHR Evaluation, Trials and Studies Coordinating Centre (NETSCC), a health research funder, has examined its commissioning and dissemination activities through a series of projects conducted as part of an internal ‘Research on Research’ (RoR) programme. Research has recently been undertaken on the NIHR Health Technology Assessment Journal, which is hosted by the Centre, and findings from these studies illustrate how effective research dissemination supports effective EBM.
Aim: To describe how ‘Research on Research’ can inform and enhance research commissioning and dissemination, using findings from studies conducted on NIHR HTA reports as an exemplar.
Methods: An overview of the history of the RoR programme will be presented. Specific study findings will be presented on: the publication rate and median time to publication of NIHR Health Technology Assessment studies with a planned final report submission date on or before 9 December 2011; the clinical relevance of NIHR HTA reports published 2007-2012, as assessed through the McMaster Online Rating of Evidence system hosted by McMaster University, Canada; and the completeness of intervention descriptions in NIHR HTA funded RCTs published up to March 2011.
Results: RoR activity is important in informing the commissioning and dissemination of research. Through RoR projects we know, for example, that the publication rate for NIHR HTA studies in the period 2002 – 2011 is 98%, and that components of the intervention description were missing in 68 (69.4%) published RCTs.
Conclusion: Effective commissioning, delivery and dissemination of research are an essential part of effective EBM. Undertaking research on research processes provides an indication of how effective those processes are and highlights areas for development.

Evidence based decision support on drug safety: a mission impossible? The case of CYP-drug interactions.
Thierry Christiaens, Geert De Loof, Jean-Marie Maloteaux
BCFI/CBIP, Brussels, Belgium
Background: Policy makers, quality managers, patient organizations and some clinicians believe that software-linked decision support with alarms, advice and ‘do/do not’ messages can dramatically change health care and enhance patient safety. Drug-related items are a crucial aspect of this field. The recent enormous evolution in IT means that the problem is no longer a technical one but a content-related one: is it possible to deliver evidence-based advice about drug safety in decision support?
Aims: To illustrate the difficulties faced by drug information centers such as BCFI/CBIP in delivering evidence based decision support, taking as an example the frequently occurring cytochrome P450 (CYP) drug interactions.
Methods: BCFI/CBIP mentions a CYP interaction in its publications if at least 2 of the 4 consulted international sources agree. For new drugs the Summary of Product Characteristics (SPC) is consulted. Afterwards the clinical relevance is weighed. Some frequently used drugs will be taken as examples and discussed.
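A minimal sketch of that inclusion rule (the consulted sources are not named here, and the function is purely illustrative):

# Hypothetical sketch of the "at least 2 of the 4 consulted sources" rule.
def flag_cyp_interaction(source_reports, threshold=2):
    """source_reports: booleans, one per consulted source, True if it lists the interaction."""
    return sum(source_reports) >= threshold

# Listed by 2 of 4 sources -> flagged, then weighed for clinical relevance.
print(flag_cyp_interaction([True, True, False, False]))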
Results: Totally different warnings on the existence and the clinical relevance of CYP-interactions are commonly found:
– Substantial differences exist between sources (even within these sources, between summaries and full text), depending on which basic information is used: in vitro data, human or animal pharmacokinetic studies, reports of Adverse Drug Reactions…
– Most data originate from in vitro observations without any indication of clinical relevance
– For new drugs, the SPC is often the only source; most drug companies, ‘playing safe’, mention every potential interaction.
Conclusions: It is very difficult to decide what to consider a clinically relevant CYP interaction. This is just one example; the same applies to side-effects, contra-indications and the use of drugs in pregnancy. The scientific community should warn against black-and-white messages in decision support, and we should acknowledge the degree of uncertainty in order to achieve messages that are useful and at the same time scientifically sound for practitioners.

Evidence-informed person-centered healthcare Part I: Do ‘cognitive biases plus’ at organizational levels influence quality of evidence?*
Shashi S Seshia1, Michael Makhinson2, Dawn F Phillips3, G Bryan Young4
1Department of Pediatrics, University of Saskatchewan, Saskatoon, Saskatchewan, Canada, 2Department of Psychiatry and Biobehavioral Science, David Geffen School of Medicine at the University of California, Los Angeles, California, USA, 3Department of Clinical Health Psychology, Royal University Hospital, Saskatoon, Saskatchewan, Canada, 4Department of Clinical Neurological Sciences, Western University, London, Ontario, Canada
Introduction:  There is increasing concern about the unreliability of much of healthcare evidence and reservations about the application of evidence to individuals [Greenhalgh et al. 2014].
Hypothesis:  Cognitive biases, financial and non-financial conflicts of interest, and ethical violations (which, together with fallacies, we collectively refer to as ‘Cognitive biases plus’) at the levels of individuals and organizations involved in healthcare, undermine the evidence that informs person-centered healthcare.
Methods: Narrative review of the pertinent literature from the basic, medical and social sciences, ethics, philosophy, law, etc. The healthcare-related organizations (including individuals working in them) studied were industry, political influences, regulators, non-industry funders, researchers, universities, hospitals/health authorities, professionals and societies, the publication industry, and advocacy groups.
Literature-based analysis: Financial conflicts of interest (primarily industry-related) have become systemic in several organizations. There is also plausible evidence of non-financial conflicts of interest, especially in academic organizations.
Financial and non-financial conflicts of interest frequently result in self-serving bias. Self-serving bias can lead to self-deception and rationalization of actions that entrench self-serving behavior, both potentially culminating in unethical acts.  Individuals and organizations are also susceptible to other cognitive biases. Collectively, ‘cognitive biases plus’ can erode quality of evidence that informs healthcare, a conclusion based on inferential evidence.
Conclusions: ‘Cognitive biases plus’ are hard-wired, primarily at the unconscious level; the resulting behaviors are not easily corrected. Reform is not possible without addressing ‘cognitive biases plus’ in organizations that influence healthcare. Social behavioral researchers advocate multi-pronged measures in similar situations: (i) abolish incentives that spawn self-serving bias, (ii) enforce severe deterrents for breaches of conduct, (iii) strengthen self-awareness, and (iv) design curricula, especially at the trainee level, to promote awareness of consequences to society. However, only a collective commitment to integrity can ensure “real EBM” and high-quality evidence-informed individualized healthcare.
*Journal of Evaluation in Clinical Practice. In Press.

What is ‘valid’ knowledge? Mindlines, philosophy and virtual networks beyond EBM.
Sietse Wieringa
Queen Mary University, London, UK
Background and Aims: Evidence based medicine holds high the ultimate need to weigh the findings of research in light of considerations of patients’ values and clinicians’ expertise [1]. However, how valid and useful answers to clinical questions are found during this process largely remains a black box. A better understanding of how clinicians value and incorporate evidence from research in their clinical methods, including the use of intuition, heuristics and reasoning, is called for [2].
Important empirical evidence regarding the integration of evidence in clinical practice comes from an ethnographic study in 2004 by Gabbay and le May [3]. They found that GPs seldom use explicit knowledge (such as clinical guidelines) in everyday practice. Instead they relied heavily on inexplicit, tacit knowledge, practical routines and past personal experiences shared with and influenced by colleagues, dubbed ‘mindlines’.
Methods and Results: In a narrative systematic review we analysed 122 papers published since 2004 on mindlines. We identified several philosophically sophisticated perspectives that fundamentally unpack many of the assumptions underlying the EBM paradigm. Where conventional EBM limits itself to frequentist reasoning in search of a single knowable truth, mindlines present us with multiple realities, alternative ways of reasoning, and the concept of useful knowledge being ‘created’ rather than ‘translated’ in contexts and communities.
By further studying mindlines in theory and in practice, for instance in virtual social networks, we may be able to increase the array of tools we use to develop valid knowledge and find what lies beyond the current EBM paradigm.

 
