Determining inter-rater reliability of an innovation implementation checklist

Journal of Nursing & Care

ISSN: 2167-1168

Open Access

6th World Nursing and Healthcare Conference

August 15-17, 2016 London, UK

Patricia A Patrician, Lori A Loan, Pauline A Swiger, Sara Breckenridge, Mary S McCarthy, Julie J Freeman and Donna L Belew

University of Alabama at Birmingham, USA
European Regional Medical Command, USA
Madigan Army Medical Center, USA
Fort Belvoir Community Hospital, USA
The Geneva Foundation, USA

Scientific Tracks Abstracts: J Nurs Care

Abstract:

Inter-rater reliability is an important consideration in instrument development as well as in the ongoing fidelity of measurements that can be somewhat subjective. Cohen's kappa statistic takes chance agreement into account and thus provides a more robust measure of agreement than simple percent agreement. This analysis was an important step in a program evaluation of an innovative, multi-faceted professional nursing framework that incorporated a newly developed instrument. To evaluate the implementation and diffusion of the innovation, site visits were conducted by a team of two investigators using the instrument, which comprised six unit-level components. The two investigators met separately with nursing staff and leaders on all study units in 50% of the military hospitals included in the program evaluation. Using the "Optimized Performance Checklist," each rated the implementation as met, not met, or partially met. Each of the 34 units was rated separately on 20 data elements, or items, in the checklist, generating 675 pairs of data elements for the observers. The formula for the kappa statistic, (observed agreement − expected agreement)/(1 − expected agreement), was applied. The observers agreed on 652 of the 675 ratings, resulting in 97% agreement. However, when chance agreements and disagreements were taken into consideration, Cohen's kappa was .91. This indicates a very high level of agreement even when chance is considered. Kappa is an easy-to-calculate statistic that provides a more conservative and realistic estimate of inter-rater reliability, and it should be used when attempting to verify observer fidelity.
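The kappa calculation described above can be sketched in a few lines of code. The function below is an illustrative implementation, not the authors' software: it takes two raters' categorical labels (e.g., "met", "partially met", "not met"), computes observed agreement and the chance-expected agreement from each rater's marginal category frequencies, and applies the formula (observed − expected)/(1 − expected). The example ratings are hypothetical, not the study data.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e) for two raters' labels."""
    n = len(rater1)
    # Observed agreement: proportion of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement: product of each rater's marginal
    # proportions, summed over all categories.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical checklist ratings for six items from two observers.
obs1 = ["met", "met", "partially met", "met", "not met", "met"]
obs2 = ["met", "met", "partially met", "not met", "not met", "met"]
print(round(cohens_kappa(obs1, obs2), 2))
```

Because kappa discounts the agreement two raters would reach by chance alone, it is always at or below the raw percent agreement, which is why the 97% agreement in the study corresponds to a kappa of .91.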

Biography:

Patricia A Patrician, PhD, RN, FAAN, is the Donna Brown Banton Endowed Professor at the University of Alabama at Birmingham (UAB). She joined the UAB faculty in 2008 after a 26-year career in the US Army Nurse Corps. She teaches in the PhD program and conducts research on nurse staffing, the nursing practice environment, and patient and nurse quality and safety outcomes. She is a Senior Nurse Faculty/Scholar in the Veterans Administration Quality Scholars fellowship program, which focuses on the science of quality improvement, and a national consultant for the Quality and Safety Education for Nurses program.

