Measurement instruments in digital education research: A systematic review of validity evidence and psychometric properties




Poster session 1 Wednesday: Evidence production and synthesis


Wednesday 13 September 2017 - 12:30 to 14:00


All authors in correct order:

Posadzki P1
1 NTU, Singapore

Presenting author:

Pawel Posadzki

Abstract text
Background: eLearning, or digital education, uses information and communication technologies (ICT). It is an umbrella term covering a multitude of interventions aimed at increasing learners’ knowledge, skills, attitudes or competencies, and at promoting continuous, lifelong learning. Randomised controlled trials (RCTs) of eLearning should use measurement instruments with adequate validity and reliability evidence, because only such instruments measure what is intended to be quantified and therefore allow meaningful conclusions to be drawn. Conversely, RCTs that rely on invalid interpretation and use of measurement instruments are a major source of bias in eLearning research, resulting in non-comparable study results and practice that is not evidence-based.

Objectives: To conduct a systematic review of the validity evidence and psychometric properties of measurement instruments currently used in RCTs of digital education of healthcare professionals; and to reduce the existing uncertainties and guide researchers, academics and curriculum developers by formulating guidelines for validating measurement instruments in digital education research involving healthcare professionals.

Methods: We followed Cochrane methodology for systematic reviews and assessed psychometric properties using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) criteria.

Results: Our preliminary review of the evidence found that over 90% of RCTs of eLearning for healthcare professionals use measurement instruments without adequate validity and reliability evidence when assessing knowledge, skills, attitudes, satisfaction or professional competencies.

Conclusions: The lack of such evidence makes meaningful comparisons between studies difficult, undermines the credibility and trustworthiness of the research by producing biased results, and prevents the evidence from being widely adopted into routine health professional education and clinical practice guidelines. Taken together, these factors can compromise patient welfare and hinder the advancement of eLearning research, raising both methodological and practical concerns.