GLORIA

GEOMAR Library Ocean Research Information Access

  • 1
    In: Statistical Methods in Medical Research, SAGE Publications, Vol. 26, No. 4 ( 2017-08), p. 1896-1911
    Abstract: Hierarchical models such as the bivariate and hierarchical summary receiver operating characteristic (HSROC) models are recommended for meta-analysis of test accuracy studies. These models are challenging to fit when there are few studies and/or sparse data (for example, zero cells in contingency tables due to studies reporting 100% sensitivity or specificity); the models may not converge, or may give unreliable parameter estimates. Using simulation, we investigated the performance of seven hierarchical models incorporating increasing simplifications in scenarios designed to replicate realistic situations for meta-analysis of test accuracy studies. Performance of the models was assessed in terms of estimability (percentage of meta-analyses that successfully converged and percentage where the between-study correlation was estimable), bias, mean square error and coverage of the 95% confidence intervals. Our results indicate that simpler hierarchical models are valid in situations with few studies or sparse data. For synthesis of sensitivity and specificity, univariate random effects logistic regression models are appropriate when a bivariate model cannot be fitted. Alternatively, an HSROC model that assumes a symmetric SROC curve (by excluding the shape parameter) can be used if the HSROC model is the chosen meta-analytic approach. In the absence of heterogeneity, the fixed effect equivalents of the models can be applied.
    Type of Medium: Online Resource
    ISSN: 0962-2802 , 1477-0334
    Language: English
    Publisher: SAGE Publications
    Publication Date: 2017
    detail.hit.zdb_id: 2001539-2
    detail.hit.zdb_id: 1136948-6
    Location Call Number Limitation Availability
    BibTip Others were also interested in ...
  • 2
    In: Statistical Methods in Medical Research, SAGE Publications, Vol. 27, No. 11 ( 2018-11), p. 3505-3522
    Abstract: If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model’s discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of ‘true’ performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
    Type of Medium: Online Resource
    ISSN: 0962-2802 , 1477-0334
    Language: English
    Publisher: SAGE Publications
    Publication Date: 2018
    detail.hit.zdb_id: 2001539-2
    detail.hit.zdb_id: 1136948-6
    Location Call Number Limitation Availability
    BibTip Others were also interested in ...
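The transformation recommended in the abstract above (meta-analysis of the C-statistic on the logit scale) can be sketched numerically. This is a minimal illustration of DerSimonian-Laird random-effects pooling of logit-transformed C-statistics; the study values and the delta-method variance approximation Var(logit(c)) ≈ (SE/(c(1−c)))² are illustrative assumptions, not data from the paper.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def random_effects_meta(estimates, variances):
    """DerSimonian-Laird random-effects pooling of study estimates."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sw
    # Cochran's Q and the moment estimator of between-study variance tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(estimates) - 1)) / c)
    # Random-effects weights and pooled estimate
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# Hypothetical study-level C-statistics and their standard errors
c_stats = [0.72, 0.68, 0.80, 0.75]
ses = [0.02, 0.03, 0.025, 0.04]

# Meta-analyse on the logit scale; delta-method variance of logit(c)
y = [logit(c) for c in c_stats]
var = [(s / (c * (1 - c))) ** 2 for s, c in zip(ses, c_stats)]
pooled, se_pooled, tau2 = random_effects_meta(y, var)
print(f"Summary C-statistic: {inv_logit(pooled):.3f}")
```

Back-transforming the pooled logit value gives a summary C-statistic that stays inside (0, 1), which is the motivation for the recommended scale.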
  • 3
    In: Statistical Methods in Medical Research, SAGE Publications, Vol. 26, No. 6 ( 2017-12), p. 2853-2868
    Abstract: Multivariate and network meta-analysis have the potential for the estimated mean of one effect to borrow strength from the data on other effects of interest. The extent of this borrowing of strength is usually assessed informally. We present new mathematical definitions of ‘borrowing of strength’. Our main proposal is based on a decomposition of the score statistic, which we show can be interpreted as comparing the precision of estimates from the multivariate and univariate models. Our definition of borrowing of strength therefore emulates the usual informal assessment. We also derive a method for calculating study weights, which we embed into the same framework as our borrowing of strength statistics, so that percentage study weights can accompany the results from multivariate and network meta-analyses as they do in conventional univariate meta-analyses. Our proposals are illustrated using three meta-analyses involving correlated effects for multiple outcomes, multiple risk factor associations and multiple treatments (network meta-analysis).
    Type of Medium: Online Resource
    ISSN: 0962-2802 , 1477-0334
    Language: English
    Publisher: SAGE Publications
    Publication Date: 2017
    detail.hit.zdb_id: 2001539-2
    detail.hit.zdb_id: 1136948-6
    Location Call Number Limitation Availability
    BibTip Others were also interested in ...
  • 4
    In: Clinical Trials, SAGE Publications, Vol. 6, No. 1 ( 2009-02), p. 16-27
    Abstract: Background In clinical trials following individuals over a period of time, the same assessment may be made at a number of time points during the course of the trial. Our review of current practice for handling longitudinal data in Cochrane systematic reviews shows that the most frequently used approach is to ignore the correlation between repeated observations and to conduct separate meta-analyses at each of a number of time points. Purpose The purpose of this paper is to show the link between repeated measurement models used with aggregate data and those used when individual patient data (IPD) are available, and provide guidance on the methods that practitioners might use for aggregate data meta-analyses, depending on the type of data available. Methods We discuss models for the meta-analysis of longitudinal continuous outcome data when IPD are available. In these models time is included either as a factor or as a continuous variable, and account is taken of the correlation between repeated observations. The meta-analysis of IPD can be conducted using either a one-step or a two-step approach: the latter involves analysing the IPD separately in each study and then combining the study estimates taking into account their covariance structure. We discuss the link between models for use with aggregate data and the two-step IPD approach, and the problems which arise when only aggregate data are available. The methods are applied to IPD from 5 trials in Alzheimer's disease. Results Two major issues for the meta-analysis of aggregate data are the lack of information about correlation coefficients and the effect of missing data at the patient-level. Application to the Alzheimer's disease data set shows that ignoring correlation can lead to different pooled estimates of the treatment difference and their standard errors. Furthermore, the amount of missing data at the patient level can affect these estimates. 
Limitations The models assume fixed treatment effects across studies, and that any missing data are missing at random, both at the patient level and the study level. Conclusions It is preferable to obtain IPD from all studies to correctly account for the correlation between repeated observations. When IPD are not available, the ideal aggregate data are model-based estimates of treatment difference and their variance and covariance estimates. If covariance estimates are not available, sensitivity analyses should be undertaken to investigate the robustness of the results to different amounts of correlation.
    Type of Medium: Online Resource
    ISSN: 1740-7745 , 1740-7753
    Language: English
    Publisher: SAGE Publications
    Publication Date: 2009
    detail.hit.zdb_id: 2159773-X
    Location Call Number Limitation Availability
    BibTip Others were also interested in ...
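The two-step IPD approach described above combines per-study estimates while accounting for their covariance structure; the second step amounts to generalised least squares pooling. A minimal sketch, assuming hypothetical per-study estimates of a treatment difference at two time points (all numbers below are invented for illustration):

```python
import numpy as np

# Hypothetical first-step results: per-study estimates of the treatment
# difference at two time points, with within-study covariance matrices.
betas = [np.array([0.5, 0.8]), np.array([0.3, 0.6]), np.array([0.6, 0.9])]
covs = [
    np.array([[0.04, 0.02], [0.02, 0.05]]),
    np.array([[0.06, 0.03], [0.03, 0.07]]),
    np.array([[0.05, 0.02], [0.02, 0.06]]),
]

# Second step: GLS pooling, beta = (sum W_i)^-1 sum W_i b_i,
# where W_i is the inverse of study i's covariance matrix.
W = [np.linalg.inv(S) for S in covs]
V_pooled = np.linalg.inv(sum(W))
beta_pooled = V_pooled @ sum(Wi @ bi for Wi, bi in zip(W, betas))
se = np.sqrt(np.diag(V_pooled))
print("Pooled effects:", beta_pooled)
print("Standard errors:", se)
```

Because the off-diagonal covariance terms enter the weights, the pooled estimate at each time point borrows information from the other, which is exactly what separate per-time-point meta-analyses ignore.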
  • 5
    In: Statistical Methods in Medical Research, SAGE Publications, Vol. 28, No. 9 ( 2019-09), p. 2768-2786
    Abstract: It is widely recommended that any developed (diagnostic or prognostic) prediction model is externally validated in terms of its predictive performance measured by calibration and discrimination. When multiple validations have been performed, a systematic review followed by a formal meta-analysis helps to summarize overall performance across multiple settings, and reveals under which circumstances the model performs suboptimally and may need adjustment. We discuss how to undertake meta-analysis of the performance of prediction models with either a binary or a time-to-event outcome. We address how to deal with incomplete availability of study-specific results (performance estimates and their precision), and how to produce summary estimates of the c-statistic, the observed:expected ratio and the calibration slope. Furthermore, we discuss the implementation of frequentist and Bayesian meta-analysis methods, and propose novel empirically based prior distributions to improve estimation of between-study heterogeneity in small samples. Finally, we illustrate all methods using two examples: meta-analysis of the predictive performance of EuroSCORE II and of the Framingham Risk Score. All examples and meta-analysis models have been implemented in our newly developed R package “metamisc”.
    Type of Medium: Online Resource
    ISSN: 0962-2802 , 1477-0334
    Language: English
    Publisher: SAGE Publications
    Publication Date: 2019
    detail.hit.zdb_id: 2001539-2
    detail.hit.zdb_id: 1136948-6
    Location Call Number Limitation Availability
    BibTip Others were also interested in ...
  • 6
    In: Statistical Methods in Medical Research, SAGE Publications, Vol. 27, No. 10 ( 2018-10), p. 2885-2905
    Abstract: Many meta-analysis models contain multiple parameters, for example due to multiple outcomes, multiple treatments or multiple regression coefficients. In particular, meta-regression models may contain multiple study-level covariates, and one-stage individual participant data meta-analysis models may contain multiple patient-level covariates and interactions. Here, we propose how to derive percentage study weights for such situations, in order to reveal the (otherwise hidden) contribution of each study toward the parameter estimates of interest. We assume that studies are independent, and utilise a decomposition of Fisher’s information matrix to decompose the total variance matrix of parameter estimates into study-specific contributions, from which percentage weights are derived. This approach generalises how percentage weights are calculated in a traditional, single parameter meta-analysis model. Application is made to one- and two-stage individual participant data meta-analyses, meta-regression and network (multivariate) meta-analysis of multiple treatments. These reveal percentage study weights toward clinically important estimates, such as summary treatment effects and treatment-covariate interactions, and are especially useful when some studies are potential outliers or at high risk of bias. We also derive percentage study weights toward methodologically interesting measures, such as the magnitude of ecological bias (difference between within-study and across-study associations) and the amount of inconsistency (difference between direct and indirect evidence in a network meta-analysis).
    Type of Medium: Online Resource
    ISSN: 0962-2802 , 1477-0334
    Language: English
    Publisher: SAGE Publications
    Publication Date: 2018
    detail.hit.zdb_id: 2001539-2
    detail.hit.zdb_id: 1136948-6
    Location Call Number Limitation Availability
    BibTip Others were also interested in ...
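In the single-parameter case that the method above generalises, a study's percentage weight is simply its share of the total inverse-variance (Fisher) information. A minimal sketch with hypothetical within-study variances (the values are invented for illustration):

```python
# Hypothetical within-study variances in a single-parameter
# fixed-effect meta-analysis.
variances = [0.04, 0.09, 0.02, 0.16]

# Fisher information contributed by each study is the inverse variance;
# the percentage weight is that study's share of the total information.
info = [1.0 / v for v in variances]
total_info = sum(info)
pct_weights = [100.0 * i / total_info for i in info]

for k, w in enumerate(pct_weights, start=1):
    print(f"Study {k}: {w:.1f}%")
```

The paper's contribution is extending this idea to models with many parameters, by decomposing the Fisher information matrix into study-specific contributions so that each parameter estimate gets its own set of percentage weights.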
  • 7
    In: Statistical Methods in Medical Research, SAGE Publications, Vol. 30, No. 12 ( 2021-12), p. 2545-2561
    Abstract: Recent minimum sample size formulae (Riley et al.) for developing clinical prediction models help ensure that development datasets are of sufficient size to minimise overfitting. While these criteria are known to avoid excessive overfitting on average, the extent of variability in overfitting at recommended sample sizes is unknown. We investigated this through a simulation study and empirical example to develop logistic regression clinical prediction models using unpenalised maximum likelihood estimation, and various post-estimation shrinkage or penalisation methods. While the mean calibration slope was close to the ideal value of one for all methods, penalisation further reduced the level of overfitting, on average, compared to unpenalised methods. This came at the cost of higher variability in predictive performance for penalisation methods in external data. We recommend that penalisation methods are used in data that meet, or surpass, minimum sample size requirements to further mitigate overfitting, and that the variability in predictive performance and any tuning parameters should always be examined as part of the model development process, since this provides additional information over average (optimism-adjusted) performance alone. Lower variability would give reassurance that the developed clinical prediction model will perform well in new individuals from the same population as was used for model development.
    Type of Medium: Online Resource
    ISSN: 0962-2802 , 1477-0334
    Language: English
    Publisher: SAGE Publications
    Publication Date: 2021
    detail.hit.zdb_id: 2001539-2
    detail.hit.zdb_id: 1136948-6
    Location Call Number Limitation Availability
    BibTip Others were also interested in ...
  • 8
    In: Statistical Methods in Medical Research, SAGE Publications, Vol. 32, No. 3 ( 2023-03), p. 555-571
    Abstract: Multinomial logistic regression models allow one to predict the risk of a categorical outcome with > 2 categories. When developing such a model, researchers should ensure the number of participants ([Formula: see text]) is appropriate relative to the number of events ([Formula: see text]) and the number of predictor parameters ([Formula: see text]) for each category k. We propose three criteria to determine the minimum n required in light of existing criteria developed for binary outcomes. Proposed criteria The first criterion aims to minimise model overfitting. The second aims to minimise the difference between the observed and adjusted [Formula: see text] Nagelkerke. The third criterion aims to ensure the overall risk is estimated precisely. For criterion (i), we show the sample size must be based on the anticipated Cox-Snell [Formula: see text] of distinct ‘one-to-one’ logistic regression models corresponding to the sub-models of the multinomial logistic regression, rather than on the overall Cox-Snell [Formula: see text] of the multinomial logistic regression. Evaluation of criteria We tested the performance of the proposed criterion (i) through a simulation study and found that it resulted in the desired level of overfitting. Criteria (ii) and (iii) were natural extensions from previously proposed criteria for binary outcomes and did not require evaluation through simulation. Summary We illustrated how to implement the sample size criteria through a worked example considering the development of a multinomial risk prediction model for tumour type when presented with an ovarian mass. Code is provided for the simulation and worked example. We will embed our proposed criteria within the pmsampsize R library and Stata modules.
    Type of Medium: Online Resource
    ISSN: 0962-2802 , 1477-0334
    Language: English
    Publisher: SAGE Publications
    Publication Date: 2023
    detail.hit.zdb_id: 2001539-2
    detail.hit.zdb_id: 1136948-6
    Location Call Number Limitation Availability
    BibTip Others were also interested in ...
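Sample size criteria of the kind the abstract builds on can be sketched as follows. This is a minimal illustration of the binary-outcome shrinkage criterion of Riley et al. (which the multinomial proposal applies to each one-to-one sub-model), assuming a target expected shrinkage factor of 0.9; the parameter count and anticipated Cox-Snell R-squared below are hypothetical.

```python
import math

def min_sample_size_binary(p, r2_cs, shrinkage=0.9):
    """Minimum n for a binary-outcome logistic regression model so that
    the expected uniform shrinkage factor is at least `shrinkage`,
    following the Riley et al. criterion:
        n = p / ((S - 1) * ln(1 - R2_CS / S)).
    Both factors in the denominator are negative, so n is positive."""
    return math.ceil(p / ((shrinkage - 1) * math.log(1 - r2_cs / shrinkage)))

# Hypothetical sub-model: 10 predictor parameters, anticipated
# Cox-Snell R-squared of 0.2
print(min_sample_size_binary(10, 0.2))
```

More predictor parameters or a larger anticipated R-squared both increase the required minimum sample size, which matches the intuition that bigger models need more data to avoid overfitting.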
  • 9
    In: Statistical Methods in Medical Research, SAGE Publications, Vol. 27, No. 2 ( 2018-02), p. 428-450
    Abstract: Multivariate random-effects meta-analysis allows the joint synthesis of correlated results from multiple studies, for example, for multiple outcomes or multiple treatment groups. In a Bayesian univariate meta-analysis of one endpoint, the importance of specifying a sensible prior distribution for the between-study variance is well understood. However, in multivariate meta-analysis, there is little guidance about the choice of prior distributions for the variances or, crucially, the between-study correlation, ρ_B; for the latter, researchers often use a Uniform(−1,1) distribution assuming it is vague. In this paper, an extensive simulation study and a real illustrative example are used to examine the impact of various (realistically) vague prior distributions for ρ_B and the between-study variances within a Bayesian bivariate random-effects meta-analysis of two correlated treatment effects. A range of diverse scenarios are considered, including complete and missing data, to examine the impact of the prior distributions on posterior results (for treatment effect and between-study correlation), the amount of borrowing of strength, and joint predictive distributions of treatment effectiveness in new studies. Two key recommendations are identified to improve the robustness of multivariate meta-analysis results. First, the routine use of a Uniform(−1,1) prior distribution for ρ_B should be avoided, if possible, as it is not necessarily vague. Instead, researchers should identify a sensible prior distribution, for example, by restricting values to be positive or negative as indicated by prior knowledge. Second, it remains critical to use sensible (e.g. empirically based) prior distributions for the between-study variances, as an inappropriate choice can adversely impact the posterior distribution for ρ_B, which may then adversely affect inferences such as joint predictive probabilities. These recommendations are especially important with a small number of studies and missing data.
    Type of Medium: Online Resource
    ISSN: 0962-2802 , 1477-0334
    Language: English
    Publisher: SAGE Publications
    Publication Date: 2018
    detail.hit.zdb_id: 2001539-2
    detail.hit.zdb_id: 1136948-6
    Location Call Number Limitation Availability
    BibTip Others were also interested in ...
  • 10
    In: The Sociological Review, SAGE Publications, Vol. 16, No. 3 ( 1968-11), p. 399-435
    Type of Medium: Online Resource
    ISSN: 0038-0261 , 1467-954X
    Language: English
    Publisher: SAGE Publications
    Publication Date: 1968
    detail.hit.zdb_id: 1482764-5
    detail.hit.zdb_id: 209926-3
    SSG: 3,4
    Location Call Number Limitation Availability
    BibTip Others were also interested in ...