Ecological validity and usability of a critical-appraisal tool for qualitative, quantitative and mixed-methods studies: researchers’ views and experiences

Long oral session 11: Qualitative and mixed methods for evidence synthesis
Thursday 14 September 2017 - 11:00 to 12:30

All authors in correct order:
Hong QN1, Gonzalez-Reyes A1, Pluye P1
1 McGill University, Canada

Presenting author and contact person:
Quan Nha Hong
Abstract text
Background: Systematic mixed-studies reviews combine qualitative, quantitative and mixed-methods studies. They are increasingly popular because of their potential for addressing complex interventions and phenomena. Owing to the heterogeneity of the included study designs, a major challenge in this type of review is appraising the quality of the individual studies. The Mixed Methods Appraisal Tool (MMAT) was developed for use in systematic mixed-studies reviews. It comprises 19 items for appraising the methodological quality of five types of study: (a) qualitative studies, (b) randomised controlled trials, (c) non-randomised studies, (d) quantitative descriptive studies, and (e) mixed-methods studies.

Objective: This study aimed to explore the ecological validity and usability of the MMAT by seeking the views and experiences of researchers who have used this tool for the appraisal of studies.

Methods: We conducted a qualitative descriptive study using semi-structured interviews with MMAT users. A purposeful sample was drawn from two main sources: a list of people who had contacted the developer of the MMAT, and a list of people who had published a review in which they used the MMAT. All interviews were transcribed and analysed by two coders using inductive thematic analysis.

Results: In total, 20 participants from eight countries were interviewed, including PhD students, postdoctoral fellows, professors, lecturers, research associates and librarians. Twenty-five main themes were identified and grouped into three broad categories: strengths of the MMAT, difficulties encountered when using the MMAT, and changes made or suggested to the MMAT. Comparison of these themes led to the identification of six main divergent views.

Conclusions: Based on the results of this study, several recommendations for improving the MMAT were put forward. These will contribute to the validity and usability of the MMAT. The validated tool will facilitate the quality-appraisal process in systematic mixed-studies reviews.