An evaluation of theoretical and operational fidelity of best-practice implementation studies conducted in low- and middle-income economies




Short oral session 4: Evidence implementation and evaluation


Wednesday 13 September 2017 - 16:00 to 17:30


All authors in correct order:

Lockwood C1, Moola S2, McArthur A2, Lizarondo L2
1 Cochrane Nursing Care, Australia
2 Joanna Briggs Institute, Australia
Presenting author and contact person

Presenting author:

Lucylynn Lizarondo

Contact person:

Abstract text
Background: Implementation programmes often rely on participant experiences to inform programme review and evaluation. While generating value-impact statements, they may lack objectivity and the conceptual basis to provide a robust, external evaluation of programme fidelity and integrity. The Centers for Disease Control and Prevention (CDC) evaluation framework provides a standardised, theory-informed framework by which implementation groups can map and systematically evaluate overall programme structure and effectiveness across 6 key domains (engaging stakeholders, the programme outline, the programme design, gathering credible evidence, justifying conclusions, and ensuring use and sharing of lessons learned).

Objectives: To map the Joanna Briggs Institute (JBI) Implementation Programme against the steps and standards in the CDC framework and logic model, and to identify goodness of fit, including gaps and limitations in organisational planning, within the JBI Implementation Programme.

Methods: The JBI Implementation Science team mapped the objectives and programme elements of the JBI Clinical Fellowship programme against the 6 steps of the CDC evaluation framework. A second comparison of the CDC framework against 24 published implementation reports was then used to evaluate fidelity and establish the extent to which the JBI implementation programme framework was integrated in clinical fellows’ reports.

Results: The JBI programme showed goodness of fit with 4 of the CDC domains, but poor fit with engaging stakeholders and sharing of lessons learned. Fidelity evaluation of individual implementation studies demonstrated that engaging with stakeholders and using and sharing lessons learned were central to each project, although these elements may not be well represented in the overall programme design.

Conclusions: The evaluation demonstrates that change in practice can be achieved with programme fidelity in resource-limited settings if interventions are supported by credible evidence and facilitation. Implications for programme design are discussed within a low- and middle-income economy context.