Background: Meta-epidemiological studies have shown that the design of a randomised trial or diagnostic accuracy study influences its results and can be a source of bias. Comparable evidence on how study-design characteristics influence the findings of prognostic prediction modelling studies is, however, lacking.
Objectives: To determine the influence of design characteristics of external validation studies on the performance (discrimination and calibration) of prognostic models.
Methods: We searched electronic databases for systematic reviews of prognostic models published between 2010 and 2016. Reviews from non-overlapping clinical fields were selected if they reported performance measures (the concordance (c-)statistic or the ratio of observed to expected numbers of events (OE ratio)) from 10 or more validations of the same prognostic model. From the included primary external validation studies we extracted information on design characteristics, including but not limited to the study design, study dates, methods of predictor and outcome assessment, and the handling of missing data. The c-statistics and OE ratios were extracted from the systematic reviews and the primary studies, and random-effects meta-regression was used to quantify the effect of the design characteristics on model performance.
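The quantities in the Methods can be sketched in code. The following is an illustrative sketch only, not the authors' analysis: it computes a c-statistic and an OE ratio from predicted risks, and fits a random-effects meta-regression with a method-of-moments (DerSimonian-Laird-style) between-study variance. The example data and the binary "design flaw" covariate are hypothetical assumptions.

```python
import numpy as np

def c_statistic(risk, outcome):
    """Concordance (c-)statistic: probability that a randomly chosen event
    has a higher predicted risk than a randomly chosen non-event (ties = 0.5)."""
    pos = risk[outcome == 1]
    neg = risk[outcome == 0]
    diffs = pos[:, None] - neg[None, :]
    return float(((diffs > 0).sum() + 0.5 * (diffs == 0).sum())
                 / (len(pos) * len(neg)))

def oe_ratio(risk, outcome):
    """Calibration-in-the-large: observed over expected number of events."""
    return float(outcome.sum() / risk.sum())

def random_effects_metareg(y, se, x):
    """Random-effects meta-regression of a performance measure y (with
    standard errors se) on a study-level covariate x, using a
    method-of-moments estimate of the between-study variance tau^2."""
    X = np.column_stack([np.ones_like(y), x])
    w = 1.0 / se**2
    W = np.diag(w)
    # fixed-effect (inverse-variance weighted) fit and residual Q statistic
    beta_fe = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    resid = y - X @ beta_fe
    Q = float(w @ resid**2)
    k, p = X.shape
    # method-of-moments tau^2, truncated at zero
    P = W - W @ X @ np.linalg.solve(X.T @ W @ X, X.T @ W)
    tau2 = max(0.0, (Q - (k - p)) / float(np.trace(P)))
    # refit with random-effects weights 1 / (se^2 + tau^2)
    Ws = np.diag(1.0 / (se**2 + tau2))
    cov = np.linalg.inv(X.T @ Ws @ X)
    beta_re = cov @ (X.T @ Ws @ y)
    return beta_re, np.sqrt(np.diag(cov)), tau2

# Hypothetical example: c-statistics from six validations of one model,
# with x = 1 flagging a (hypothetical) flawed design feature.
y = np.array([0.72, 0.75, 0.70, 0.78, 0.74, 0.76])
se = np.array([0.02, 0.03, 0.02, 0.04, 0.03, 0.02])
x = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0])
beta, se_beta, tau2 = random_effects_metareg(y, se, x)
```

In this sketch `beta[1]` estimates the average difference in the c-statistic associated with the flagged design feature; in the actual study, transformed scales (e.g. the logit of the c-statistic or the log of the OE ratio) are commonly used for pooling.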
Results: We identified 50 systematic reviews of prediction models, 11 of which were included, yielding a total of 353 external validation studies; more than 300 of these reported model performance. Preliminary analyses of models predicting cardiovascular disease in the general population revealed mixed trends towards better c-statistics and worse OE ratios in validation studies with more flawed designs. At the Summit, we will present which design characteristics tend to influence model performance across all 11 clinical fields.
Conclusions: Our results will provide empirical evidence of the importance of design features in external validation studies of prognostic models.