GLORIA

GEOMAR Library Ocean Research Information Access

  • 1
    Publication Date: 2018-02-28
    Description: Introduction: Reliable evidence syntheses, based on rigorous systematic reviews, provide essential support for evidence-informed clinical practice and health policy. Systematic reviews should use reproducible and transparent methods to draw conclusions from the available body of evidence. Narrative synthesis of quantitative data (NS) is a method commonly used in systematic reviews where it may not be appropriate, or possible, to meta-analyse estimates of intervention effects. A common criticism of NS is that it is opaque and subject to author interpretation, casting doubt on the trustworthiness of a review’s conclusions. Despite published guidance funded by the UK’s Economic and Social Research Council on the conduct of NS, recent work suggests that this guidance is rarely used and many review authors appear to be unclear about best practice. To improve the way that NS is conducted and reported, we are developing a reporting guideline for NS of quantitative data. Methods: We will assess how NS is implemented and reported in Cochrane systematic reviews and the findings will inform the creation of a Delphi consensus exercise by an expert panel. We will use this Delphi survey to develop a checklist for reporting standards for NS. This will be accompanied by supplementary guidance on the conduct and reporting of NS, as well as an online training resource. Ethics and dissemination: Ethical approval for the Delphi survey was obtained from the University of Glasgow in December 2017 (reference 400170060). Dissemination of the results of this study will be through peer-reviewed publications, and national and international conferences.
    Keywords: Health policy, Open access, Research methods
    Electronic ISSN: 2044-6055
    Topics: Medicine
    Published by BMJ Publishing
  • 2
    Publication Date: 2018-03-15
    Description: Background: Several scales, checklists and domain-based tools for assessing risk of reporting biases exist, but it is unclear how much they vary in content and guidance. We conducted a systematic review of the content and measurement properties of such tools. Methods: We searched for potentially relevant articles in Ovid MEDLINE, Ovid Embase, Ovid PsycINFO and Google Scholar from inception to February 2017. One author screened all titles, abstracts and full-text articles, and collected data on tool characteristics. Results: We identified 18 tools that include an assessment of the risk of reporting bias. Tools varied in regard to the type of reporting bias assessed (eg, bias due to selective publication, bias due to selective non-reporting), and the level of assessment (eg, for the study as a whole, a particular result within a study or a particular synthesis of studies). Various criteria are used across tools to designate a synthesis as being at ‘high’ risk of bias due to selective publication (eg, evidence of funnel plot asymmetry, use of non-comprehensive searches). However, the relative weight assigned to each criterion in the overall judgement is unclear for most of these tools. Tools for assessing risk of bias due to selective non-reporting guide users to assess a study, or an outcome within a study, as being at ‘high’ risk of bias if no results are reported for an outcome. However, assessing the corresponding risk of bias in a synthesis that is missing the non-reported outcomes is outside the scope of most of these tools. Inter-rater agreement estimates were available for five tools. Conclusion: There are several limitations of existing tools for assessing risk of reporting biases, in terms of their scope, guidance for reaching risk of bias judgements and measurement properties. Development and evaluation of a new, comprehensive tool could help overcome present limitations.
    Keywords: Open access, Research methods
    Electronic ISSN: 2044-6055
    Topics: Medicine
    Published by BMJ Publishing
  • 3
    Publication Date: 2016-04-29
    Description: Objective: To explore whether systematic reviewers selectively include trial effect estimates in meta-analyses when multiple are available, and what impact this may have on meta-analytic effects. Design: Cross-sectional study. Data sources: We randomly selected systematic reviews of interventions from 2 clinical specialties published between January 2010 and 2012. The first presented meta-analysis of a continuous outcome in each review was selected (index meta-analysis), and all trial effect estimates that were eligible for inclusion in the meta-analysis (eg, from multiple scales or time points) were extracted from trial reports. Analysis: We calculated a statistic (the Potential Bias Index (PBI)) to quantify and test for evidence of selective inclusion. The PBI ranges from 0 to 1; values above or below 0.5 are suggestive of selective inclusion of effect estimates more or less favourable to the intervention, respectively. The impact of any potential selective inclusion was investigated by comparing the index meta-analytic standardised mean difference (SMD) to the median of a randomly constructed distribution of meta-analytic SMDs (representing the meta-analytic SMD expected when there is no selective inclusion). Results: 31 reviews (250 trials) were included. The estimated PBI was 0.57 (95% CI 0.50 to 0.63), suggesting that trial effect estimates that were more favourable to the intervention were included in meta-analyses slightly more often than expected under a process consistent with random selection; however, the 95% CI included the null hypothesis of no selective inclusion. Any potential selective inclusion did not have an important impact on the meta-analytic effects. Conclusion: There was no clear evidence that selective inclusion of trial effect estimates occurred in this sample of meta-analyses. Further research on selective inclusion in other clinical specialties is needed. To enable readers to assess the risk of selective inclusion bias, we recommend that systematic reviewers report the methods used to select effect estimates to include in meta-analyses. (A schematic sketch of the random-selection comparison described under Analysis follows this record.)
    Keywords: Open access, Evidence based practice, Research methods
    Electronic ISSN: 2044-6055
    Topics: Medicine
    Published by BMJ Publishing
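The Analysis step in the record above compares the observed (index) meta-analytic SMD with the median of a distribution of meta-analytic SMDs obtained by randomly choosing one eligible effect estimate per trial. The Python sketch below illustrates only that resampling comparison under stated assumptions: the trial data are made up, the pooling is simple fixed-effect inverse-variance weighting, and all names are illustrative. It is not the authors' Potential Bias Index implementation.

    # Schematic sketch of the resampling comparison described above (assumed
    # details: made-up trial data, fixed-effect inverse-variance pooling).
    # This is NOT the authors' Potential Bias Index (PBI) code.
    import numpy as np

    rng = np.random.default_rng(0)

    # Each trial offers several eligible (SMD, variance) pairs, e.g. from
    # different scales or time points (illustrative numbers only).
    trials = [
        [(0.40, 0.04), (0.25, 0.05), (0.10, 0.06)],
        [(0.55, 0.09), (0.30, 0.08)],
        [(0.20, 0.03), (0.05, 0.03), (-0.10, 0.04)],
    ]

    def pooled_smd(estimates):
        """Fixed-effect inverse-variance pooled SMD, one estimate per trial."""
        smds = np.array([e[0] for e in estimates])
        weights = 1.0 / np.array([e[1] for e in estimates])
        return float(np.sum(weights * smds) / np.sum(weights))

    # Suppose the index meta-analysis happened to use the first listed
    # estimate from every trial.
    observed = pooled_smd([t[0] for t in trials])

    # Distribution expected under no selective inclusion: pick one eligible
    # estimate per trial at random, pool, and repeat.
    random_smds = [
        pooled_smd([t[rng.integers(len(t))] for t in trials])
        for _ in range(10_000)
    ]

    print(f"index meta-analytic SMD:       {observed:.3f}")
    print(f"median under random selection: {np.median(random_smds):.3f}")
    # A large gap between the two values would suggest selective inclusion of
    # more (or less) favourable estimates; similar values would not.

In the study itself, this comparison (together with the PBI of 0.57, 95% CI 0.50 to 0.63) indicated only a slight, non-significant tilt towards more favourable estimates, with no important impact on the meta-analytic effects.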