Sampson M, de Bruijn B, Urquhart C, Shojania K. Complementary approaches to searching MEDLINE may be sufficient for updating systematic reviews. J.Clin.Epidemiol. Epub 2016 Mar 11. PMID: 26976054.

OBJECTIVES:
To maximize the proportion of relevant studies identified for inclusion in systematic reviews (recall), complex time-consuming Boolean searches across multiple databases are common. Although MEDLINE provides excellent coverage of health science evidence, it has proved challenging to achieve high levels of recall through Boolean searches alone.
STUDY DESIGN AND SETTING:
Recall of one Boolean search method, the clinical query (CQ), combined with a ranking method, support vector machine (SVM), or PubMed-related articles, was tested against a gold standard of studies added to 6 updated Cochrane reviews and 10 Agency for Healthcare Research and Quality (AHRQ) evidence reviews. For the AHRQ sample, precision and temporal stability were examined for each method.
RESULTS:
Recall of new studies was 0.69 for the CQ, 0.66 for related articles, 0.50 for SVM, 0.91 for the combination of CQ and related articles, and 0.89 for the combination of CQ and SVM. Precision was 0.11 for CQ and related articles combined, and 0.11 for CQ and SVM combined. Related articles showed least stability over time.
CONCLUSIONS:
The complementary combination of a Boolean search strategy and a ranking strategy appears to provide a robust method for identifying relevant studies in MEDLINE.
Copyright © 2016 Elsevier Inc. All rights reserved.

DOI: http://dx.doi.org/10.1016/j.jclinepi.2016.03.004.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=26976054.

Schwartz LM, Woloshin S, Zheng E, Tse T, Zarin DA. ClinicalTrials.gov and Drugs@FDA: A Comparison of Results Reporting for New Drug Approval Trials. Ann.Intern.Med. Epub 2016 Jun 14. PMID: 27294570.

Background
Pharmaceutical companies and other trial sponsors must submit certain trial results to ClinicalTrials.gov. The validity of these results is unclear.
Purpose:
To validate results posted on ClinicalTrials.gov against publicly available U.S. Food and Drug Administration (FDA) reviews on Drugs@FDA.
Data Sources:
ClinicalTrials.gov (registry and results database) and Drugs@FDA (medical and statistical reviews).
Study Selection:
100 parallel-group, randomized trials for new drug approvals (January 2013 to July 2014) with results posted on ClinicalTrials.gov (15 March 2015).
Data Extraction:
2 assessors extracted, and another verified, the trial design, primary and secondary outcomes, adverse events, and deaths.
Results:
Most trials were phase 3 (90%), double-blind (92%), and placebo-controlled (73%) and involved 32 drugs from 24 companies. Of 137 primary outcomes identified from ClinicalTrials.gov, 134 (98%) had corresponding data at Drugs@FDA, 130 (95%) had concordant definitions, and 107 (78%) had concordant results. Most differences were nominal (that is, relative difference <10%). Primary outcome results in 14 trials could not be validated. Of 1927 secondary outcomes from ClinicalTrials.gov, Drugs@FDA mentioned 1061 (55%) and included results data for 367 (19%). Of 96 trials with 1 or more serious adverse events in either source, 14 could be compared and 7 had discordant numbers of persons experiencing the adverse events. Of 62 trials with 1 or more deaths in either source, 25 could be compared and 17 were discordant.
Limitation:
Unknown generalizability to uncontrolled or crossover trial results.
Conclusion:
Primary outcome definitions and results were largely concordant between ClinicalTrials.gov and Drugs@FDA. Half the secondary outcomes, as well as serious events and deaths, could not be validated because Drugs@FDA includes only “key outcomes” for regulatory decision making and frequently includes only adverse event results aggregated across multiple trials.
Primary Funding Source:
National Library of Medicine.

DOI: http://dx.doi.org/10.7326/M15-2658.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=27294570.

Pham MT, Waddell L, Rajić A, Sargeant JM, Papadopoulos A, McEwen SA. Implications of applying methodological shortcuts to expedite systematic reviews: three case studies using systematic reviews from agri-food public health. Res.Synth.Methods. Epub 2016 Jun 10. PMID: 27285733.

BACKGROUND:
The rapid review is an approach to synthesizing research evidence when a shorter timeframe is required. The implications of conducting a rapid review, in terms of lost rigour, increased bias, and reduced accuracy, have not yet been elucidated.
METHODS:
We assessed the potential implications of methodological shortcuts on the outcomes of three completed systematic reviews addressing agri-food public health topics. For each review, shortcuts were applied individually to assess the impact on the number of relevant studies included and whether omitted studies affected the direction, magnitude or precision of summary estimates from meta-analyses.
RESULTS:
In most instances, the shortcuts resulted in at least one relevant study being omitted from the review. The omission of studies affected 39 of 143 possible meta-analyses, of which 14 were no longer possible because of insufficient studies (<2). When meta-analysis was possible, the omission of studies generally resulted in less precise pooled estimates (i.e. wider confidence intervals) that did not differ in direction from the original estimate.
CONCLUSIONS:
The three case studies demonstrated the risk of missing relevant literature and its impact on summary estimates when methodological shortcuts are applied in rapid reviews.
© 2016 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd.

DOI: http://dx.doi.org/10.1002/jrsm.1215.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=27285733.

Adorno M, Garbee D, Marix ML. Improving Literature Searches. Clin.Nurse Spec. 2016 Mar-Apr;30(2):74-80. PMID: 26848895.

Research is a core competency for all clinical nurse specialists (CNSs). While engaged in clinical practice, questions arise related to best practices, patient outcomes, and effectiveness of therapeutic interventions. Identifying a clinical problem or a clinical outcome that needs improvement is the impetus for conducting a literature search. The initial step to answer a clinical question is conducting a literature review that supports the need for research and/or an evidence-based practice change. The abilities to find and critique the quality of research to improve patient outcomes are skills that take time to develop. At first, the literature search can seem to be a daunting task. In this article, the process will be discussed in manageable steps to guide the CNS through a literature search.

DOI: http://dx.doi.org/10.1097/NUR.0000000000000187.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=26848895.

Bastian H. Nondisclosure of Financial Interest in Clinical Practice Guideline Development: An Intractable Problem? PLoS Med. 2016 May 31;13(5):e1002030. PMID: 27243232.

In a Perspective linked to Stelfox and colleagues, Hilda Bastian discusses the challenges of improving transparency and management of financial conflicts of interest among committees that develop guidelines for medical practice.

Comment on:

Campsall P, Colizza K, Straus S, Stelfox HT. Financial Relationships between Organizations That Produce Clinical Practice Guidelines and the Biomedical Industry: A Cross-Sectional Study. PLoS Med. 2016 May 31;13(5):e1002029. doi: 10.1371/journal.pmed.1002029. eCollection 2016 May. PubMed PMID: 27244653.

FREE FULL TEXT: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4886959/pdf/pmed.1002030.pdf
DOI: http://dx.doi.org/10.1371/journal.pmed.1002030.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=27243232.
PubMed Central: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4886959/.

Elangovan S, Prakasam S, Gajendrareddy P, Allareddy V. A Risk of Bias Assessment of Randomized Controlled Trials (RCTs) on Periodontal Regeneration Published in 2013. J.Evid Based.Dent.Pract. 2016 Mar;16(1):30-40. PMID: 27132553.

OBJECTIVE:
The objective of this assessment is to evaluate the degree of risk of bias in randomized controlled trials published in 2013 that focus on periodontal regeneration.
METHODS:
Three reviewers searched for and selected the trials based on pre-defined inclusion criteria. Predictor variables [number of authors, primary objective of the study, biomaterial employed, follow-up time periods, split mouth study (yes/no), journal, year of publication, country, scale (single/multi-center) and nature of funding] were extracted, and a risk of bias assessment using the Cochrane risk of bias tool was performed independently by the three reviewers.
RESULTS:
Seventeen RCTs were included in this assessment. The risk of bias in RCTs published in 2013 with a focus on periodontal regeneration varied considerably: the overall risk of bias was low in fewer than 30% of the included trials, while 41% of trials were judged to have a high degree of bias. Specifically, among the domains assessed, 70% of the included trials reported an accepted method of sequence generation, blinding (whenever possible), completeness of outcome data, or avoidance of selective outcome reporting. Meanwhile, only 47% of the included trials reported some form of allocation concealment.
CONCLUSION:
In this assessment, of the included 17 trials, slightly more than 40% of them had a high risk of bias, underscoring the importance of careful appraisal of trials before implementing the study interventions in clinical practice and the need for more detailed analyses.
Copyright © 2016 Elsevier Inc. All rights reserved.

DOI: http://dx.doi.org/10.1016/j.jebdp.2015.03.016.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=27132553.

Foley D, Mackinnon A. Risk of bias in observational studies of interventions: the case of antipsychotic-induced diabetes - Authors' reply. Lancet Psychiatry. 2016 Feb;3(2):104-5. PMID: 26851324.
Comment on:

Rico-Villademoros F, Calandre EP. Risk of bias in observational studies of interventions: the case of antipsychotic-induced diabetes. Lancet Psychiatry. 2016 Feb;3(2):103-4. doi: 10.1016/S2215-0366(15)00541-6. PubMed PMID: 26851323.

Foley DL, Mackinnon A, Morgan VA, Watts GF, Castle DJ, Waterreus A, Galletly CA. Effect of age, family history of diabetes, and antipsychotic drug treatment on risk of diabetes in people with psychosis: a population-based cross-sectional study. Lancet Psychiatry. 2015 Dec;2(12):1092-8. doi: 10.1016/S2215-0366(15)00276-X. Epub 2015 Oct 22. PubMed PMID: 26477242.

DOI: http://dx.doi.org/10.1016/S2215-0366(16)00004-3.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=26851324.

Gattrell WT, Hopewell S, Young K, Farrow P, White R, Wager E, Winchester CC. Professional medical writing support and the quality of randomised controlled trial reporting: a cross-sectional study. BMJ Open. 2016 Feb 21;6(2):e010329. PMID: 26899254.

OBJECTIVES:
Authors may choose to work with professional medical writers when writing up their research for publication. We examined the relationship between medical writing support and the quality and timeliness of reporting of the results of randomised controlled trials (RCTs).
DESIGN:
Cross-sectional study.
STUDY SAMPLE:
Primary reports of RCTs published in BioMed Central journals from 2000 to 16 July 2014, subdivided into those with medical writing support (n=110) and those without medical writing support (n=123).
MAIN OUTCOME MEASURES:
Proportion of items that were completely reported from a predefined subset of the Consolidated Standards of Reporting Trials (CONSORT) checklist (12 items known to be commonly poorly reported), overall acceptance time (from manuscript submission to editorial acceptance) and quality of written English as assessed by peer reviewers. The effect of funding source and publication year was examined.
RESULTS:
The number of articles that completely reported at least 50% of the CONSORT items assessed was higher for those with declared medical writing support (39.1% (43/110 articles); 95% CI 29.9% to 48.9%) than for those without (21.1% (26/123 articles); 95% CI 14.3% to 29.4%). Articles with declared medical writing support were more likely than articles without such support to have acceptable written English (81.1% (43/53 articles); 95% CI 67.6% to 90.1% vs 47.9% (23/48 articles); 95% CI 33.5% to 62.7%). The median time of overall acceptance was longer for articles with declared medical writing support than for those without (167 days (IQR 114.5-231 days) vs 136 days (IQR 77-193 days)).
CONCLUSIONS:
In this sample of open-access journals, declared professional medical writing support was associated with more complete reporting of clinical trial results and higher quality of written English. Medical writing support may play an important role in raising the quality of clinical trial reporting.
Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

FREE FULL TEXT: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4762118/pdf/bmjopen-2015-010329.pdf
DOI: http://dx.doi.org/10.1136/bmjopen-2015-010329.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=26899254.
PubMed Central: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4762118/.

Hernández AF, González-Alzaga B, López-Flores I, Lacasaña M. Systematic reviews on neurodevelopmental and neurodegenerative disorders linked to pesticide exposure: Methodological features and impact on risk assessment. Environ.Int. 2016 Jul-Aug;92-93:657-79. PMID: 26896854.

BACKGROUND:
Epidemiological data are not currently used in the risk assessment of chemical substances in a systematic and consistent manner. However, systematic reviews (SRs) could be useful for risk assessment as they appraise and synthesize the best epidemiological knowledge available.
OBJECTIVES:
To conduct a comprehensive literature search of SRs pertaining to pesticide exposure and various neurological outcomes, namely neurodevelopmental abnormalities, Parkinson's disease (PD) and Alzheimer's disease (AD), and to assess the potential contribution of SRs to the risk assessment process.
SEARCH METHODS AND SELECTION CRITERIA:
The search was conducted in the PubMed and Web of Science databases, and articles were selected if they met the following inclusion criteria: being an SR published up to April 2015, with no language restrictions.
DATA COLLECTION AND ANALYSIS:
For each neurological outcome, two review authors independently screened the search results for included studies. Data were extracted and summarized in two tables according to 16 criteria. Disagreements were resolved by discussion.
MAIN RESULTS:
The total number of studies identified in the first search was 65, 304 and 108 for neurodevelopment, PD and AD, respectively. From them, 8, 10 and 2 met the defined inclusion criteria for those outcomes, respectively. Overall, results suggest that prenatal exposure to organophosphates is associated with neurodevelopmental disturbances in preschool and school children. In contrast, postnatal exposures failed to show a clear effect across cohort studies. Regarding PD, 6 SRs reported statistically significant combined effect size estimates, with OR/RR ranging between 1.28 and 1.94. As for AD, 2 out of the 8 original articles included in the SRs found significant associations, with OR of 2.39 and 4.35, although the quality of the data was rather low.
CONCLUSIONS:
The critical appraisal of the SRs identified allowed for discussing the implications of SRs for risk assessment, along with the identification of gaps and limitations of current epidemiological studies that hinder their use for risk assessment. Recommendations are proposed to improve studies for this purpose. In particular, harmonized quantitative data (expressed in standardized units) would allow a better interpretation of results and would facilitate direct comparison of data across studies. Outcomes should be also harmonized for an accurate and reproducible measurement of adverse effects. Appropriate SRs and quantitative synthesis of the evidence should be performed regularly for a continuous update of the risk factors on health outcomes and to determine, if possible, dose-response curves for risk assessment.
Copyright © 2016 Elsevier Ltd. All rights reserved.

DOI: http://dx.doi.org/10.1016/j.envint.2016.01.020.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=26896854.

Herson J. Strategies for dealing with fraud in clinical trials. Int.J.Clin.Oncol. 2016 Feb;21(1):22-7. PMID: 26194810.

Research misconduct and fraud in clinical research are an increasing problem facing the scientific community. This problem is expected to increase due to discoveries in central statistical monitoring and with the increase in first-time clinical trial investigators in the increasingly global reach of oncology clinical trials. This paper explores the most common forms of fraud in clinical trials in order to develop offensive and defensive strategies to deal with fraud. The offensive strategies are used when fraud is detected during a trial, and the defensive strategies are those design strategies that seek to minimize or eliminate the effect of fraud. This leads to a proposed fraud recovery plan (FRP) that would be specified before the start of a clinical trial and would indicate actions to be taken upon detecting fraud of different types. Statistical/regulatory issues related to fraud include: dropping all patients from a site that committed fraud, or just the fraudulent data (perhaps replacing the latter through imputation); the role of intent-to-treat analysis; effect on a planned interim analysis; effect on stratified analyses and model adjustment when fraud is detected in covariates; effect on trial-wide randomization, etc. The details of a typical defensive strategy are also presented. It is concluded that it is best to follow a defensive strategy and to have an FRP in place to follow if fraud is detected during the trial.

DOI: http://dx.doi.org/10.1007/s10147-015-0876-6.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=26194810.

Jelicic Kadic A, Vucic K, Dosenovic S, Sapunar D, Puljak L. Extracting data from figures with software was faster, with higher interrater reliability than manual extraction. J.Clin.Epidemiol. 2016 Jun;74:119-23. PMID: 26780258.

OBJECTIVES:
To compare speed and accuracy of graphical data extraction using manual estimation and open source software.
STUDY DESIGN AND SETTING:
Data points from eligible graphs/figures published in randomized controlled trials (RCTs) from 2009 to 2014 were extracted by two authors independently, both by manual estimation and with the Plot Digitizer, open source software. Corresponding authors of each RCT were contacted up to four times via e-mail to obtain exact numbers that were used to create graphs. Accuracy of each method was compared against the source data from which the original graphs were produced.
RESULTS:
Software data extraction was significantly faster, reducing extraction time by 47%. Percent agreement between the two raters was 51% for manual and 53.5% for software data extraction. Percent agreement between the raters and original data was 66% vs. 75% for the first rater and 69% vs. 73% for the second rater, for manual and software extraction, respectively.
CONCLUSIONS:
Data extraction from figures should be conducted using software, whereas manual estimation should be avoided. Using software for data extraction of data presented only in figures is faster and enables higher interrater reliability.
Copyright © 2016 Elsevier Inc. All rights reserved.

DOI: http://dx.doi.org/10.1016/j.jclinepi.2016.01.002.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=26780258.

Kearney MH. Moving from Facts to Wisdom: Facilitating Synthesis in Literature Reviews. Res.Nurs.Health. 2016 Feb;39(1):3-6. PMID: 26660333.
DOI: http://dx.doi.org/10.1002/nur.21706.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=26660333.

Lakens D, Hilgard J, Staaks J. On the reproducibility of meta-analyses: six practical recommendations. BMC Psychol. 2016 May 31;4(1):24. PMID: 27241618.

BACKGROUND:
Meta-analyses play an important role in cumulative science by combining information across multiple studies and attempting to provide effect size estimates corrected for publication bias. Research on the reproducibility of meta-analyses reveals that errors are common, and the percentage of effect size calculations that cannot be reproduced is much higher than is desirable. Furthermore, the flexibility in inclusion criteria when performing a meta-analysis, combined with the many conflicting conclusions drawn by meta-analyses of the same set of studies performed by different researchers, has led some people to doubt whether meta-analyses can provide objective conclusions.
DISCUSSION:
The present article highlights the need to improve the reproducibility of meta-analyses to facilitate the identification of errors, allow researchers to examine the impact of subjective choices such as inclusion criteria, and update the meta-analysis after several years. Reproducibility can be improved by applying standardized reporting guidelines and sharing all meta-analytic data underlying the meta-analysis, including quotes from articles to specify how effect sizes were calculated. Pre-registration of the research protocol (which can be peer-reviewed using novel 'registered report' formats) can be used to distinguish a-priori analysis plans from data-driven choices, and reduce the amount of criticism after the results are known. The recommendations put forward in this article aim to improve the reproducibility of meta-analyses. In addition, they have the benefit of "future-proofing" meta-analyses by allowing the shared data to be re-analyzed as new theoretical viewpoints emerge or as novel statistical techniques are developed. Adoption of these practices will lead to increased credibility of meta-analytic conclusions, and facilitate cumulative scientific knowledge.

FREE FULL TEXT: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4886411/pdf/40359_2016_Article_126.pdf
DOI: http://dx.doi.org/10.1186/s40359-016-0126-3.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=27241618.
PubMed Central: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4886411/.

Mastracci TM. Scientific Methods and the Reporting of Negative Results: Critically Important to Patient Safety. Eur.J.Vasc.Endovasc.Surg. 2016 Feb;51(2):165-6. PMID: 26403823.

[First paragraph; reference html link removed]

Vascular surgeons are no strangers to adopting new technology. After its introduction in 1991,[1] endovascular aneurysm repair (EVAR) supplanted conventional open surgery in many jurisdictions as the dominant modality for treating aneurysmal disease.[2,3] The early adoption of this technology can be attributed to two equally important factors: one logical and one evidentiary. First, the concept behind EVAR (i.e. the exclusion of the diseased aortic wall from the circulation to prevent pressure-related expansion and rupture), is in accordance with our understanding of the pathophysiology of aneurysmal disease. Second, randomized controlled trials[4,5] comparing EVAR with conventional surgery have demonstrated a rigorous, reproducible, and statistically significant perioperative benefit. But what is even more important about the introduction of endovascular techniques into modern vascular surgical practice is that iterative changes have occurred from the first generation devices, to make them safer and more durable. The successful evolution of EVAR technology has been the product of careful reviews of individual centre experience, pooled registry data, critical appraisal of the reasons for device failure, and the courage of leaders in our field to report negative results. This has also held true throughout the development of endovascular options for treating thoracoabdominal disease, where devices have evolved because centres have been open about the various modes of failure and those areas that required technical improvement.

Comment on:

Lowe C, Worthington A, Serracino-Inglott F, Ashleigh R, McCollum C. Multi-layer Flow-modulating Stents for Thoraco-abdominal and Peri-renal Aneurysms: The UK Pilot Study. Eur J Vasc Endovasc Surg. 2016 Feb;51(2):225-31. doi: 10.1016/j.ejvs.2015.09.014. Epub 2015 Oct 21. PubMed PMID: 26497254.

DOI: http://dx.doi.org/10.1016/j.ejvs.2015.08.007.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=26403823.

Mayer MG. How publication bias and inadequate research transparency endanger medicine. JAAPA. 2016 Jun;29(6):1-2. PMID: 27168044.
DOI: http://dx.doi.org/10.1097/01.JAA.0000483095.89264.81.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=27168044.

Pencina MJ, Louzao DM, McCourt BJ, Adams MR, Tayyabkhan RH, Ronco P, Peterson ED. Supporting open access to clinical trial data for researchers: The Duke Clinical Research Institute-Bristol-Myers Squibb Supporting Open Access to Researchers Initiative. Am.Heart J. 2016 Feb;172:64-9. PMID: 26856217.

There are growing calls for sponsors to increase transparency by providing access to clinical trial data. In response, Bristol-Myers Squibb and the Duke Clinical Research Institute have collaborated on a new initiative, Supporting Open Access to Researchers. The aim is to facilitate open sharing of Bristol-Myers Squibb trial data with interested researchers. Key features of the Supporting Open Access to Researchers data sharing model include an independent review committee that ensures expert consideration of each proposal, stringent data deidentification/anonymization and protection of patient privacy, requirement of prespecified statistical analysis plans, and independent review of manuscripts before submission for publication. We believe that these approaches will promote open science by allowing investigators to verify trial results as well as to pursue interesting secondary uses of trial data without compromising scientific integrity.

DOI: http://dx.doi.org/10.1016/j.ahj.2015.11.002.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=26856217.

Rico-Villademoros F, Calandre EP. Risk of bias in observational studies of interventions: the case of antipsychotic-induced diabetes. Lancet Psychiatry. 2016 Feb;3(2):103-4. PMID: 26851323.
Comment on:

Foley DL, Mackinnon A, Morgan VA, Watts GF, Castle DJ, Waterreus A, Galletly CA. Effect of age, family history of diabetes, and antipsychotic drug treatment on risk of diabetes in people with psychosis: a population-based cross-sectional study. Lancet Psychiatry. 2015 Dec;2(12):1092-8. doi: 10.1016/S2215-0366(15)00276-X. Epub 2015 Oct 22. PubMed PMID: 26477242.

DOI: http://dx.doi.org/10.1016/S2215-0366(15)00541-6.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=26851323.

Ross JS. Making Data Submitted to the Food and Drug Administration More Visible. JAMA Intern.Med. 2016 Feb;176(2):259. PMID: 26752009.

[First paragraph]

The visibility of clinical research and its underlying data has grown through efforts such as the National Library of Medicine’s online trial registry, ClinicalTrials.gov, along with data-sharing initiatives, such as the Yale Open Data Access Project (in which I am involved). While the Food and Drug Administration (FDA) has similarly enhanced clinical research visibility by making FDA-prepared documents more widely available, much important material, including clinical trial data, is considered confidential information or trade secrets.1 A Research Letter in this issue of JAMA Internal Medicine illustrates how this potentially limits our understanding of the research supporting therapies regulated by the FDA.2 Marciniak and colleagues examined participant loss-to-follow-up rates for major trials of oral antithrombotic therapies, comparing the rates reported in medical journal publications with the rates independently estimated by Marciniak using data submitted by manufacturers to the FDA as part of an analysis investigating the cancer risk associated with these therapies that was performed when he was an FDA medical officer. Their analysis demonstrated substantial discrepancies between the published and the independently estimated loss-to-follow-up rates. While this research has implications for any interpretation of antithrombotics’ therapeutic efficacy and safety, it also demonstrates the need for greater data visibility and the importance of making all clinical trial data submitted to the FDA widely available, including to external researchers for independent scrutiny. As recently explained by the Institute of Medicine, “Patients and their physicians depend on clinical trials for reliable evidence on what therapies are effective and safe. Responsible sharing of the data gleaned from clinical trials will increase the validity and extent of this evidence.”

Comment on:

Marciniak TA, Cherepanov V, Golukhova E, Kim MH, Serebruany V. Drug Discontinuation and Follow-up Rates in Oral Antithrombotic Trials. JAMA Intern Med. 2016 Feb;176(2):257-9. doi: 10.1001/jamainternmed.2015.6769. PubMed PMID: 26752247.

DOI: http://dx.doi.org/10.1001/jamainternmed.2015.6773.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=26752009.

Taichman DB, Backus J, Baethge C, Bauchner H, de Leeuw PW, Drazen JM, Fletcher J, Frizelle FA, Groves T, Haileamlak A, et al. Sharing clinical trial data. BMJ. 2016 Jan 20;532:255. PMID: 26790902.

[First paragraph]

A proposal from the International Committee of Medical Journal Editors
The International Committee of Medical Journal Editors (ICMJE) believes that there is an ethical obligation to responsibly share data generated by interventional clinical trials because participants have put themselves at risk. In a growing consensus, many funders around the world—foundations, government agencies, and industry—now mandate data sharing. Here we outline ICMJE’s proposed requirements to help meet this obligation. We encourage feedback on the proposed requirements.

DOI: http://dx.doi.org/10.1136/bmj.i255.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=26790902.

Wise J. Relationships between biomedical companies and guideline makers are often undisclosed. BMJ. 2016 May 31;353:i3065. PMID: 27247272.

[First paragraph]

Financial relationships between organisations that produce clinical guidelines and biomedical companies are common and often not disclosed in the guidelines, according to research published in PLOS Medicine.

DOI: http://dx.doi.org/10.1136/bmj.i3065.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=27247272.

Harris M, Macinko J, Jimenez G, Mahfoud M, Anderson C. Does a research article's country of origin affect perception of its quality and relevance? A national trial of US public health researchers. BMJ Open. 2015 Dec 30;5(12):e008993. PMID: 26719313.

OBJECTIVES:
The source of research may influence one's interpretation of it in either negative or positive ways; however, there are no robust experiments determining how source affects one's judgment of a research article. We determined the impact of source on respondents' assessments of the quality and relevance of selected research abstracts.
DESIGN:
Web-based survey design using four healthcare research abstracts previously published and included in Cochrane Reviews.
SETTING:
All Council on the Education of Public Health-accredited Schools and Programmes of Public Health in the USA.
PARTICIPANTS:
899 core faculty members (full, associate and assistant professors).
INTERVENTION:
Each of the four abstracts appeared with a high-income source half of the time, and a low-income source half of the time. Participants each reviewed the same four abstracts, but were randomly allocated to receive two abstracts with a high-income source and two abstracts with a low-income source, allowing for within-abstract comparison of quality and relevance.
PRIMARY OUTCOME MEASURES:
Within-abstract comparison of participants' rating scores on two measures: strength of the evidence, and likelihood of referral to a peer (1-10 rating scale). ORs were calculated using a generalised ordered logit model adjusting for sociodemographic covariates.
RESULTS:
Participants who received abstracts with high-income country sources were equal in all known characteristics to the participants who received abstracts with low-income country sources. For one of the four abstracts (a randomised, controlled trial of a pharmaceutical intervention), the likelihood of referral to a peer was greater if the source was a high-income country (OR 1.28, 1.02 to 1.62; p<0.05).
CONCLUSIONS:
All things being equal, respondents' ratings of one of the four abstracts were influenced by a high-income source. More research may be needed to explore how the origin of a research article may lead to stereotype activation and application in research evaluation.
Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

Erratum in:

Correction. BMJ Open. 2016 Jan 19;6(1):e008993corr1. doi: 10.1136/bmjopen-2015-008993corr1. PubMed PMID: 26787248; PubMed Central PMCID: PMC4735140.

FREE FULL TEXT: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4710821/pdf/bmjopen-2015-008993.pdf
DOI: http://dx.doi.org/10.1136/bmjopen-2015-008993.
PubMed: http://www.ncbi.nlm.nih.gov/pubmed/?term=26719313.
PubMed Central: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4710821/.