Nursing Education Perspectives • doi: 10.5480/13-1130.1

Reliability and Validity Testing of the Creighton Competency Evaluation Instrument for Use in the NCSBN National Simulation Study

Jennifer Hayden, Mary Keegan, Suzan Kardong-Edgren, and Richard A. Smiley

Abstract
AIM The Creighton Competency Evaluation Instrument (CCEI) was modified from an existing instrument, the Creighton Simulation Evaluation Instrument, for use in the National Council of State Boards of Nursing National Simulation Study (NCSBN NSS).
BACKGROUND The CCEI was developed for the NCSBN NSS for use as the evaluation instrument for both simulation and traditional clinical experiences in associate and baccalaureate nursing programs.
METHOD Five nursing programs assisted with reliability and validity testing of the CCEI. Using a standardized validation questionnaire, faculty rated the CCEI on its ability to accurately measure student performance and clinical competency. Videos scripted at three levels of performance were used to test reliability.
RESULTS Content validity ranged from 3.78 to 3.89 on a four-point Likert-type scale. Cronbach's alpha was >.90 when used to score three different levels of simulation performance.
CONCLUSION The CCEI is useful for evaluating both the simulation and traditional clinical environments.

As requests for the use of simulation as a substitute for traditional clinical experiences have increased, the National Council of State Boards of Nursing instituted the National Simulation Study (NCSBN NSS), designed to provide evidence-based information to state boards of nursing. Ten pre-licensure nursing programs (five baccalaureate, five associate degree) were chosen to participate, by a competitive application process, in a randomized, controlled, longitudinal multisite study. The entering students in the fall class of 2011 were randomized into three groups: the control group (a maximum of 10 percent simulation hours in every major clinical course), a 25 percent group (25 percent simulation hours in every major clinical course), or a 50 percent group (50 percent simulation hours in every major clinical course). Major clinical courses were defined as foundations, medical-surgical nursing, advanced medical-surgical nursing, obstetrical nursing, pediatric nursing, mental health nursing, and community/public health nursing.

Evaluating students to determine clinical competencies is a challenge faced by educators working at all levels of nursing preparation, including pre-licensure education, staff development, and the competency evaluation of experienced nurses. For the NCSBN NSS, clinical competency was defined as the ability to “observe and gather information, recognize deviations from expected patterns, prioritize data, make sense of data, maintain a professional response demeanor, provide clear communication, execute effective interventions, perform nursing skills correctly, evaluate nursing interventions, and self-reflect for performance improvement within a culture of safety” (Hayden, Jeffries, Kardong-Edgren, & Spector, 2011).

Current strategies for evaluating student clinical performance are often criticized for being too subjective. In reality, each encounter with a patient is unique. Students use different combinations of personal knowledge, skills, and abilities in each patient situation, whether with a real patient or in a simulation. Such variability makes it difficult to standardize the assessment of clinical competency. Nonetheless, with the need to prepare health care professionals to safely care for patients, it is imperative that nurse educators use instruments that produce reliable and valid scores to assist them with conducting formative and summative evaluations of all activities of student learning (Oermann & Gaberson, 2009).
McDonald (2007) noted that nurse faculty often spend more time developing classroom-based activities and instruments to measure student performance and competence and less time on how to evaluate these same course objectives in the clinical and laboratory setting. In the clinical setting or simulation laboratory, faculty often use checklists to evaluate how well students have performed predetermined psychomotor skills or tasks. To objectively evaluate competency, a tool or instrument must incorporate components of the cognitive, psychomotor, and affective domains. However, until recently, reliable and valid instruments have been lacking in their ability to assist nurse educators with being more objective (Kardong-Edgren, Adamson, & Fitzgerald, 2010).
DEVELOPING A RELIABLE, VALID ASSESSMENT INSTRUMENT FOR THE SIMULATION STUDY
The simulation laboratory makes it possible to standardize the clinical experiences that students receive; instructors can ensure all students experience the same patient condition. Clinical instructors can also observe students caring for patients in the nurse role and making clinical decisions on their own in the simulated clinical environment, making it easier for the instructor to see more of each student's abilities. With the adoption of simulation in nursing programs, many assessment instruments have been developed to evaluate student performance; however, many of these instruments were developed by individual instructors for use in individual scenarios and often lack rigorous testing for reliability and validity. One instrument that has undergone rigorous development and refinement is the Creighton Simulation Evaluation Instrument (C-SEI).

The C-SEI was developed by a group of nurse educators interested in a quantitative instrument to evaluate student participation in simulated clinical experiences (Todd, Manz, Hawkins, Parsons, & Hercinger, 2008). Critical elements of the instrument were identified and organized using core competencies integrated throughout the Essentials of Baccalaureate Education for Professional Nursing Practice by the American Association of Colleges of Nursing (AACN, 1998). The core competencies chosen were assessment, communication, critical thinking, and technical skills.
Table 1: Comparison of Wording on the C-SEI and CCEI

| C-SEI | CCEI |
|---|---|
| **Assessment** | **Assessment** |
| Obtains pertinent data | Obtains pertinent data |
| Obtains pertinent objective data | |
| Performs follow-up assessments as needed | Performs follow-up assessments as needed |
| Assesses in a systematic and orderly manner using the correct technique | Assesses the environment in an orderly manner |
| **Communication** | **Communication** |
| Communicates effectively with providers (delegation, medical terms, SBAR, RBO) | Communicates effectively with intra-/interprofessional teams (TeamSTEPPS, SBAR, WRBO) |
| Communicates effectively with patient and SO (verbal, nonverbal, teaching) | Communicates effectively with patient and significant other (verbal, nonverbal, teaching) |
| Writes documentation clearly, concisely, and accurately | Documents clearly, concisely, and accurately |
| Responds to abnormal findings appropriately | Responds to abnormal findings appropriately |
| Promotes realism/professionalism | Promotes professionalism |
| **Critical Thinking** | **Clinical Judgment** |
| Interprets vital signs (T, P, R, BP, pain) | Interprets vital signs (T, P, R, BP, pain) |
| Interprets laboratory values | Interprets laboratory values |
| Interprets subjective/objective data (recognizes relevant from irrelevant data) | Interprets subjective/objective data (recognizes relevant from irrelevant data) |
| Formulates measurable priority-driven outcomes | Prioritizes appropriately |
| Performs outcome-driven interventions | Performs evidence-based interventions |
| Provides specific rationale for interventions | Provides evidence-based rationale for interventions |
| Evaluates interventions and outcomes | Evaluates evidence-based interventions and outcomes |
| Reflects on simulation experience | Reflects on clinical experience |
| | Delegates appropriately |
| **Technical Skills** | **Patient Safety** |
| Uses patient identifiers | Uses patient identifiers |
| Utilizes standard precautions, including hand washing | Utilizes standardized practices and precautions, including hand washing |
| | Administers medications safely |
| Manages equipment, tubes, and drains therapeutically | Manages technology and equipment |
| Performs procedures correctly | Performs procedures correctly |
| | Reflects on potential hazards and errors |

Note. RBO = Read Back Orders; SBAR = Situation, Background, Assessment, Recommendation; TeamSTEPPS = Team Strategies and Tools to Enhance Performance and Patient Safety; WRBO = Written Read Back Orders.
According to Todd et al. (2008), each of these core competencies was sufficiently documented in the health care literature and focused on quality and safety, warranting inclusion as a major heading in an evaluation tool. The initial pilot testing of this instrument included content validity testing by a panel of experts. Inter-rater reliability ranged from 84.4 percent to 89.1 percent agreement overall on the tool; some items ranged from 62.5 percent to 100 percent, with three items less than 70 percent (Todd et al.). Adamson, Gubrud-Howe, Sideras, and Lasater (2012) reported inter-rater reliability findings from multiple studies that used the C-SEI for simulation evaluation. These included an intraclass correlation, ICC(2,1) = 0.889 (Adamson, 2011); percent agreement (range: 92 percent to 96 percent) with two raters (Gubrud-Howe, 2008); and percent agreement (range: 57 percent to 100 percent) with four raters (Sideras, 2007). Adamson and colleagues (2011) performed additional reliability calculations on the C-SEI and found the Cronbach's alpha to be .979.
The NCSBN study team modified the existing C-SEI, with feedback from the instrument designers, to clarify scoring, incorporate Quality and Safety Education for Nurses (QSEN Institute, n.d.) terminology and concepts, and reflect the 2008 revision of the AACN Essentials. The categories of critical thinking and technical skills on the original instrument were changed to clinical judgment and patient safety to incorporate the wording used in the QSEN documents (Table 1). Minor changes in wording were made to update the tool and make it usable in clinical as well as simulation situations. For purposes of the NCSBN study, the term clinical judgment was defined using the description from the International Nursing Association for Clinical Simulation & Learning (INACSL, 2011): “The art of making a series of decisions in situations, based on various types of knowledge, in a way that allows the individual to recognize salient aspects of or changes in a clinical situation, interpret their meaning, respond appropriately, and reflect on the effectiveness of the intervention. Clinical judgment is influenced by the individual’s overall experiences that have helped to develop problem solving, critical thinking and clinical reasoning abilities” (p. S4).
The revised instrument reflects the conceptual definition of clinical competency used in the NCSBN National Simulation Study. Therefore, the assessment instrument was renamed the Creighton Competency Evaluation Instrument (CCEI).

Although the initial testing of the C-SEI was conducted in a baccalaureate simulation setting (Todd et al., 2008), we believed that the concepts presented in the revised tool were applicable to students in associate degree nursing (ADN) programs. We also believed that clinical competency (or lack of competency) demonstrated in the simulation environment would also be demonstrated in the clinical setting, as simulation is designed to mimic the clinical environment. With the scope of the instrument expanded to the clinical setting and to ADN programs, the CCEI required additional validity and reliability testing. The aims of this study, then, were to: a) assess the content validity, reliability, and usability of the CCEI; b) validate its use in ADN and BSN nursing programs; and c) validate its use in the traditional clinical setting and the simulated clinical environment.

Table 2: Content Validity Faculty Demographics

| Characteristic | Frequency | Percent |
|---|---|---|
| **RN Experience (years)ᵃ** | | |
| < 20 | 6 | 17.1 |
| 20–29 | 10 | 28.6 |
| 30–39 | 14 | 40.0 |
| ≥ 40 | 5 | 14.3 |
| **Teaching Experience (years)ᵇ** | | |
| < 10 | 9 | 25.7 |
| 10–19 | 17 | 48.6 |
| ≥ 20 | 9 | 25.7 |
| **Teaching Experience at Current Institution (years)ᶜ** | | |
| < 5 | 5 | 14.3 |
| 5–9 | 12 | 34.3 |
| 10–14 | 8 | 22.9 |
| ≥ 15 | 10 | 28.6 |
| **Current Position in Academic Institution** | | |
| Professor | 4 | 11.4 |
| Associate professor | 8 | 22.9 |
| Assistant professor | 17 | 48.6 |
| Adjunct faculty | 2 | 5.7 |
| Other (dean, administration, simulation laboratory coordinator, distance education director) | 4 | 11.4 |
| **Highest Degree Held** | | |
| Baccalaureate | 1 | 2.9 |
| Master's | 16 | 47.1 |
| Post-master's | 4 | 11.8 |
| Doctorate | 13 | 38.2 |
| **Age (years)ᵈ** | | |
| < 40 | 4 | 11.8 |
| 40–49 | 6 | 17.6 |
| 50–59 | 16 | 47.1 |
| ≥ 60 | 8 | 23.5 |
| **Sex** | | |
| Female | 35 | 100.0 |
| **Race** | | |
| White/Caucasian | 31 | 91.2 |
| Black/African American | 3 | 8.8 |

Note. ᵃMean = 28.5, SD = 8.8. ᵇMean = 15.2, SD = 7.9. ᶜMean = 11.4, SD = 7.7. ᵈMean = 53.5, SD = 10.0.
METHOD
Three BSN programs in the Midwest and two ADN programs in Florida participated in the validity and reliability testing of the revised instrument. Participating BSN programs volunteered to assist with the main study; the ADN programs were known to one of the researchers. Approval of the institutional review board at each institution was obtained prior to obtaining faculty consent and commencing data collection.
Table 3: Content Validity Questionnaire Results

| Expected Behavior | Necessity M (SD) | Fitness M (SD) | Understanding M (SD) |
|---|---|---|---|
| **Assessment** | | | |
| Obtains pertinent data | 3.94 (.34) | 3.82 (.63) | 3.69 (.63) |
| Performs follow-up assessments as needed | 3.85 (.44) | 3.79 (.48) | 3.66 (.61) |
| Assesses the environment | 3.61 (.70) | 3.67 (.65) | 3.23 (.94) |
| Total | 3.80 (.43) | 3.76 (.52) | 3.52 (.56) |
| **Communication** | | | |
| Communicates effectively with intra/interprofessional team | 3.94 (.24) | 3.97 (.17) | 3.83 (.51) |
| Communicates effectively with patient and significant other | 4.00 (.00) | 3.97 (.17) | 3.89 (.32) |
| Documents clearly, concisely, and accurately | 4.00 (.00) | 3.97 (.17) | 3.89 (.32) |
| Responds to abnormal findings appropriately | 3.94 (.24) | 3.59 (.84) | 3.81 (.64) |
| Promotes professionalism | 3.79 (.41) | 3.65 (.51) | 3.61 (.79) |
| Total | 3.94 (.12) | 3.84 (.21) | 3.80 (.30) |
| **Clinical Judgment** | | | |
| Interprets vital signs | 3.94 (.24) | 3.94 (.24) | 3.91 (.37) |
| Interprets laboratory results | 3.89 (.40) | 3.89 (.40) | 3.83 (.51) |
| Interprets subjective/objective data | 3.94 (.24) | 3.91 (.51) | 3.83 (.51) |
| Prioritizes appropriately | 3.97 (.17) | 3.94 (.34) | 3.86 (.43) |
| Performs evidence-based interventions | 3.80 (.58) | 3.85 (.56) | 3.74 (.66) |
| Provides evidence-based rationale for interventions | 3.82 (.58) | 3.77 (.65) | 3.66 (.73) |
| Evaluates evidence-based interventions and outcomes | 3.85 (.44) | 3.91 (.29) | 3.79 (.48) |
| Reflects on clinical experience | 3.76 (.50) | 3.71 (.63) | 3.82 (.46) |
| Delegates appropriately | 3.76 (.50) | 3.85 (.44) | 3.85 (.50) |
| Total | 3.85 (.29) | 3.85 (.31) | 3.80 (.42) |
| **Patient Safety** | | | |
| Uses patient identifiers | 3.97 (.17) | 3.97 (.17) | 3.97 (.17) |
| Utilizes standardized practices and precautions, including handwashing | 3.94 (.34) | 3.94 (.34) | 3.91 (.37) |
| Administers medications safely | 4.00 (.00) | 4.00 (.00) | 4.00 (.00) |
| Manages technology and equipment | 3.86 (.36) | 3.91 (.28) | 3.66 (.64) |
| Performs procedures correctly | 3.94 (.24) | 3.97 (.17) | 3.94 (.24) |
| Reflects on potential hazards and errors | 3.89 (.53) | 3.83 (.62) | 3.79 (.59) |
| Total | 3.93 (.19) | 3.94 (.19) | 3.86 (.28) |
| **Grand Total** | 3.89 (.19) | 3.86 (.22) | 3.78 (.27) |

Note. Likert scale: 1 = strongly disagree, 2 = disagree, 3 = agree, 4 = strongly agree.
Content Validity
The study team felt that seasoned faculty would have greater insight than less experienced faculty into the behavioral expectations required of graduating seniors and the external documents and verbiage from accrediting agencies associated with professional nursing competency behaviors. Thus, to participate in the content validity portion of the project, nursing faculty and simulation laboratory staff were required to have at least six years of teaching experience.
Participants completed a questionnaire similar to that used by Todd et al. (2008) to assess the ability of the CCEI to evaluate student performance, as well as to determine the comprehensiveness of the instrument to rate clinical competency. The questionnaire asked faculty to use a four-point Likert-type scale, with scores ranging from 1 (strongly disagree) to 4 (strongly agree), to rate each item of the instrument on three criteria:
• Necessity of the item as a measurement of clinical competency
• Fitness of the item as to how well it aligns with its competency category (assessment, communication, clinical judgment, patient safety)
• Understanding of the item.
Reliability
All clinical faculty were eligible to participate in the reliability testing portion of this study, regardless of their years of teaching experience. Training in the use of the CCEI was conducted using the methodology established by Adamson and Kardong-Edgren (2012). To evaluate inter-rater reliability, 31 faculty/staff members first watched a training video in which a narrator provided an orientation to the CCEI. The video continued with two students caring for a simulated patient, followed by a narrator discussing the rationale for how students were scored for each item on the instrument. Participants then viewed and rated videos of the same simulation scenario at three levels of proficiency (labeled as circles, squares, or triangles) using the CCEI.

Faculty participants then viewed additional scripted videos portraying the same students at three levels of proficiency: above, at, and below the level of performance expected for senior-level students (Adamson & Kardong-Edgren, 2012). The faculty participants were provided with a list of expected behaviors to merit a rating of competent; therefore, all participants had the same expectations of competency for the recorded scenarios. Faculty, who were blinded to the intended level of proficiency, used the CCEI to evaluate students in each of the videos.
Simulation and Traditional Clinical Settings
After establishing the validity and reliability of scores from the CCEI, clinical faculty trained in using the CCEI were asked to use the instrument to evaluate their students in both the clinical and simulation settings. Clinical faculty rated students as competent or not competent based on the course objectives and clinical expectations of the course being taught. After using the CCEI in both settings, faculty were then asked to use a Likert-type scale to rate how well they agreed with statements on the use of the instrument to evaluate students in the clinical environment and simulation setting.

Table 4: Reliability Faculty Demographics

| Characteristic | Frequency | Percent |
|---|---|---|
| **RN Experience (years)ᵃ** | | |
| < 20 | 8 | 27.6 |
| 20–29 | 9 | 31.0 |
| 30–39 | 8 | 27.6 |
| ≥ 40 | 4 | 13.8 |
| **Teaching Experience (years)ᵇ** | | |
| < 10 | 11 | 39.3 |
| 10–19 | 12 | 42.8 |
| ≥ 20 | 5 | 17.9 |
| **Teaching Experience at Current Institution (years)ᶜ** | | |
| < 5 | 6 | 21.4 |
| 5–9 | 10 | 35.7 |
| 10–14 | 5 | 17.9 |
| ≥ 15 | 7 | 25.0 |
| **Current Position in Academic Institution** | | |
| Associate professor | 6 | 20.7 |
| Assistant professor | 17 | 58.6 |
| Instructor | 2 | 6.9 |
| Adjunct faculty (including 1 professor) | 2 | 6.9 |
| Other (simulation laboratory coordinator, distance education director) | 2 | 6.9 |
| **Highest Degree Held** | | |
| Baccalaureate | 1 | 3.6 |
| Master's | 17 | 60.7 |
| Post-master's | 3 | 10.7 |
| Doctorate | 7 | 25.0 |
| **Age (years)ᵈ** | | |
| < 40 | 4 | 15.4 |
| 40–49 | 8 | 30.7 |
| 50–59 | 10 | 38.5 |
| ≥ 60 | 4 | 15.4 |
| **Sex** | | |
| Female | 28 | 96.6 |
| Male | 1 | 3.4 |
| **Race** | | |
| White/Caucasian | 25 | 86.2 |
| Black/African American | 3 | 10.3 |
| Asian | 1 | 3.5 |

Note. ᵃMean = 25.2, SD = 10.1. ᵇMean = 12.9, SD = 8.2. ᶜMean = 9.4, SD = 6.3. ᵈMean = 49.2, SD = 10.0.
RESULTS

Content Validity
Thirty-five faculty members participated in rating content validity. The demographics of this faculty group can be seen in Table 2. To determine content validity, faculty members rated the individual behaviors identified on the instrument using a Likert-type scale; scores ranged from 1 (strongly disagree) to 4 (strongly agree). The identified behaviors were evaluated to determine whether they were needed in the instrument, whether they were reflective of the section under which they were included, and whether the behavior was easy to understand. An area for comments was available at the end of each section.

Study participants strongly agreed that each behavior should be included in the CCEI (M = 3.89, SD = 0.19) and reflected the corresponding category (M = 3.86, SD = 0.22). They also strongly agreed that nearly all of the behaviors were easy to understand (M = 3.78, SD = 0.27). Table 3 details the ratings for each item of the CCEI in the necessity, fitness, and understanding categories. The statement “Assesses the environment” was scored the lowest, indicating it was the least easily understood item in the instrument.
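As a minimal illustration of how the per-item values in Table 3 are derived, the sketch below aggregates a set of hypothetical 4-point ratings into a mean and standard deviation; the ratings shown are invented for the example and are not the study's data.

```python
import statistics

# Hypothetical necessity ratings for one CCEI behavior from several faculty
# raters on the 4-point scale (1 = strongly disagree ... 4 = strongly agree).
ratings = [4, 4, 3, 4, 4, 4, 3, 4]

mean = statistics.mean(ratings)
sd = statistics.stdev(ratings)  # sample standard deviation
print(f"M = {mean:.2f}, SD = {sd:.2f}")  # -> M = 3.75, SD = 0.46
```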
Table 5: Results of Inter-Rater Reliability Testing: Percent Agreement with Expert Rater

| Category | Circle (%) | Square (%) | Triangle (%) | Overall (%) |
|---|---|---|---|---|
| **Assessment** | | | | |
| Obtains pertinent data | 64.5 | 64.5 | 100.0 | 76.3 |
| Performs follow-up assessments as needed | 65.5 | 96.7 | 100.0 | 87.6 |
| Assesses the environment | 74.2 | 58.1 | 83.3 | 71.7 |
| Total | 68.1 | 72.8 | 94.5 | 78.5 |
| **Communication** | | | | |
| Communicates effectively with intra/interprofessional team | 17.2 | 23.3 | 67.7 | 36.7 |
| Communicates effectively with patient and significant other | 96.8 | 61.3 | 96.8 | 84.9 |
| Documents clearly, concisely, and accurately | 83.9 | 10.0 | 86.7 | 60.4 |
| Responds to abnormal findings appropriately | 90.0 | 96.7 | 96.8 | 94.5 |
| Promotes professionalism | 87.1 | 64.5 | 100.0 | 83.9 |
| Total | 75.7 | 51.3 | 89.6 | 72.3 |
| **Clinical Judgment** | | | | |
| Interprets vital signs | 23.3 | 100.0 | 100.0 | 75.0 |
| Interprets laboratory results | 80.6 | 83.3 | 83.9 | 82.6 |
| Interprets subjective/objective data | 67.7 | 90.3 | 100.0 | 86.0 |
| Prioritizes appropriately | 96.8 | 90.3 | 83.9 | 90.3 |
| Performs evidence-based interventions | 80.6 | 90.3 | 100.0 | 90.3 |
| Provides evidence-based rationale for interventions | 90.3 | 16.1 | 96.8 | 67.7 |
| Evaluates evidence-based interventions and outcomes | 71.0 | 87.1 | 93.5 | 83.9 |
| Reflects on clinical experience | 87.1 | 70.0 | 96.8 | 84.8 |
| Delegates appropriately | 40.0 | 100.0 | 96.8 | 79.1 |
| Total | 71.1 | 80.8 | 94.6 | 82.2 |
| **Patient Safety** | | | | |
| Uses patient identifiers | 41.9 | 90.0 | 96.8 | 76.1 |
| Utilizes standardized practices and precautions, including handwashing | 77.4 | 83.9 | 83.9 | 81.7 |
| Administers medications safely | 63.3 | 80.6 | 87.1 | 77.2 |
| Manages technology and equipment | 74.2 | 93.5 | 87.1 | 84.9 |
| Performs procedures correctly | 87.1 | 77.4 | 93.5 | 86.0 |
| Reflects on potential hazards and errors | 96.8 | 67.7 | 87.1 | 83.9 |
| Total | 73.5 | 82.2 | 89.2 | 81.7 |
| **Grand Total** | 72.3 | 73.8 | 92.1 | 79.4 |
Reliability
To evaluate inter-rater reliability, 31 faculty and staff viewed videos of the same simulation scenario at three levels of proficiency and rated each of the scenarios using the CCEI. This group was a subset of the faculty who evaluated the instrument for content validity; it also included additional faculty and staff volunteers from the same institutions. (See Table 4 for demographics.)

The responses of each of the 31 raters were compared with those of an expert rater; the agreement percentages are summarized in Table 5. The overall agreement with the expert rater was 79.4 percent. Cronbach's alphas were above .90 and considered highly acceptable (Bland & Altman, 1997): circle video, .975; square video, .974; and triangle video, .979.
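A minimal sketch of the two computations reported above, run on a toy rater-by-item score matrix; the dichotomous scores, the 4 raters, and the 5 items are all invented for illustration and are not the study's data. Cronbach's alpha here is the standard formula, α = k/(k − 1) × (1 − Σ item variances / variance of total scores); the text does not specify how the study oriented the matrix, so treating CCEI items as columns is an assumption.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a matrix with rows = raters, columns = items."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item's scores
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of raters' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def percent_agreement(rater, expert) -> float:
    """Percent of items on which one rater matches the expert rater."""
    return 100 * sum(r == e for r, e in zip(rater, expert)) / len(expert)

# Toy data: 4 raters scoring 5 items as 1 (competent) or 0 (not competent).
scores = np.array([
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 1],
    [1, 1, 1, 1, 1],
])
expert = [1, 1, 0, 1, 1]

print(round(cronbach_alpha(scores), 3))
print([percent_agreement(row, expert) for row in scores])
```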
Kappa statistics, accounting for interobserver variation and “based on the difference between how much agreement is actually present (‘observed agreement’) compared to how much agreement would be expected to be present by chance alone (‘expected agreement’)” (Viera & Garrett, 2005), were calculated for each of the three scenarios. The kappa scores suggest fair to moderate agreement: circle scenario, 0.316; square scenario, 0.443; and triangle scenario, 0.453.
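Following the Viera and Garrett (2005) definition quoted above, kappa = (observed agreement − expected agreement) / (1 − expected agreement), where the expected agreement comes from each rater's marginal category frequencies. A minimal sketch with invented dichotomous ratings (not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b) -> float:
    """Kappa = (observed - expected agreement) / (1 - expected agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected (chance) agreement from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a | freq_b) / n**2
    return (observed - expected) / (1 - expected)

# Invented item-level scores (1 = competent, 0 = not) for one video.
rater  = [1, 1, 0, 1, 1, 0, 1, 0, 1, 1]
expert = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]
print(round(cohens_kappa(rater, expert), 3))  # -> 0.524
```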
When the behaviors were individually considered, the percent agreement for most of the behaviors was 70 percent or higher. The exceptions were “provides evidence-based rationale for interventions” (67.7 percent), “documents clearly, concisely, and accurately” (60.4 percent), and “communicates effectively with intra/interprofessional team” (36.7 percent).
Breakout comparisons were run on the percent of agreement for individual behaviors based on the number of years of teaching experience, level of faculty education, and whether the program was ADN or BSN. No significant differences in percent agreement were found for the comparisons based on teaching experience and level of faculty education, but the comparison by type of program did show some differences (Table 6).
Table 6: Inter-Rater Reliability Testing: Comparison by Program

| Behavior | ADN (%) | BSN (%) | χ² | p value |
|---|---|---|---|---|
| **Assessment** | | | | |
| Obtains pertinent data | 71.8 | 79.6 | 0.77 | .380 |
| Performs follow-up assessments as needed | 78.9 | 94.1 | 4.63 | .032* |
| Assesses the environment | 66.7 | 75.5 | 0.86 | .354 |
| **Communication** | | | | |
| Communicates effectively with intra/interprofessional team | 35.9 | 37.3 | 0.02 | .895 |
| Communicates effectively with patient and significant other | 76.9 | 90.7 | 3.38 | .066 |
| Documents clearly, concisely, and accurately | 61.5 | 59.6 | 0.03 | .853 |
| Responds to abnormal findings appropriately | 92.3 | 96.2 | 0.63 | .426 |
| Promotes professionalism | 82.1 | 85.2 | 0.16 | .685 |
| **Clinical Judgment** | | | | |
| Interprets vital signs | 68.4 | 79.6 | 1.49 | .222 |
| Interprets laboratory results | 69.2 | 92.5 | 8.43 | .004** |
| Interprets subjective/objective data | 76.9 | 92.6 | 4.62 | .032* |
| Prioritizes appropriately | 79.5 | 98.1 | 9.02 | .003** |
| Performs evidence-based interventions | 87.2 | 92.6 | 0.76 | .384 |
| Provides evidence-based rationale for interventions | 69.2 | 66.7 | 0.07 | .794 |
| Evaluates evidence-based interventions and outcomes | 74.4 | 90.7 | 4.49 | .034* |
| Reflects on clinical experience | 82.1 | 86.8 | 0.39 | .532 |
| Delegates appropriately | 74.4 | 82.7 | 0.94 | .333 |
| **Patient Safety** | | | | |
| Uses patient identifiers | 82.1 | 71.7 | 1.32 | .250 |
| Utilizes standardized practices and precautions, including handwashing | 79.5 | 83.3 | 0.22 | .639 |
| Administers medications safely | 65.8 | 85.2 | 4.76 | .029* |
| Manages technology and equipment | 79.5 | 88.9 | 1.57 | .210 |
| Performs procedures correctly | 76.9 | 92.6 | 4.62 | .032* |
| Reflects on potential hazards and errors | 74.4 | 90.7 | 4.49 | .034* |
| **Total** | 74.2 | 83.3 | | |

Note. *Statistically significant difference (p < .05). **Highly statistically significant difference (p < .01).
For most of the individual item ratings, the percent agreements were higher in the BSN programs; overall, the BSN agreement was 83.3 percent, in contrast to 74.2 percent for the ADN programs. For two behaviors (“interprets laboratory results” and “prioritizes appropriately”), the differences between the BSN and ADN agreement percents were highly statistically significant (p < .01). For another six behaviors, the differences were significant (p < .05). For all of the behaviors with significant differences, the BSN agreement percent was higher than the ADN agreement percent.
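The program-type comparisons in Table 6 are 2 × 2 chi-square tests on agree/disagree counts. Below is a minimal sketch for the “prioritizes appropriately” row; the raw counts are back-calculated assumptions from the reported percentages (ADN 79.5 percent, BSN 98.1 percent), since the per-program denominators are not given in the text.

```python
from scipy.stats import chi2_contingency

# 2x2 table: rows = program type, columns = (agreed with expert, disagreed).
# Counts are assumed/back-calculated from Table 6's percentages, not published data.
table = [[31, 8],   # ADN: 31/39 = 79.5% agreement (hypothetical denominator)
         [53, 1]]   # BSN: 53/54 = 98.1% agreement (hypothetical denominator)

chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")  # -> 9.02, 1, .003
```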
Usability
Eight faculty volunteers scheduled to have a clinical group in both the simulation and clinical settings evaluated six statements regarding use of the instrument in both learning environments. A Likert-type scale was used for the ratings (1 = strongly disagree, 4 = strongly agree). The results were positive, with general agreement on all the statements for both the clinical (M = 3.10, SD = 0.25) and simulation (M = 3.25, SD = 0.38) settings (Table 7). One evaluator commented that the instrument was “easy to understand” and “easier to use in simulation than clinical.”

Table 7: Evaluation of the CCEI

| Statement | Clinical M (SD) | Simulation M (SD) |
|---|---|---|
| The quantitative evaluation instrument is useful. | 3.38 (.52) | 3.38 (.52) |
| The quantitative evaluation instrument is comprehensive. | 3.00 (.53) | 3.13 (.64) |
| The quantitative evaluation instrument is easy to understand. | 3.25 (.46) | 3.38 (.52) |
| I will use this instrument in my evaluation of students. | 3.14 (.38) | 3.29 (.49) |
| The instrument will effectively evaluate student performance. | 2.88 (.35) | 3.25 (.71) |
| This instrument will effectively evaluate student learning. | 3.00 (.00) | 3.13 (.35) |
| Total | 3.10 (.23) | 3.25 (.38) |

Note. Likert scale: 1 = strongly disagree, 2 = disagree, 3 = agree, 4 = strongly agree.
DISCUSSION
The CCEI is a 23-item evaluation tool organized into four categories (assessment, communication, clinical judgment, and patient safety). Items were modified from the original C-SEI, incorporating the QSEN competencies (AACN, 2008). When comparing the faculty scores with the scores assessed by an expert rater, 79 percent agreement was obtained across all three videos, a level of agreement that was considered to have good reliability. No differences were seen in percent agreement with the expert rater based on the highest educational degree obtained by faculty, nor when years of teaching experience were taken into consideration, indicating the CCEI can be used by new and seasoned clinical faculty.
Overall, the BSN faculty had a higher percent agreement with the expert rater (a BSN faculty member) than did the ADN faculty; however, this difference was not statistically significant. Therefore, the authors of the current article are left to speculate about why ADN and BSN faculty agreed more closely on the communication items than on the assessment, clinical judgment, and patient safety categories. Many BSN programs teach freestanding physical assessment classes, so faculty in these programs may look for different things when viewing student performance. The differences in scoring on clinical judgment and patient safety may reflect the differences in clinical focus between BSN and ADN programs. More commonly, ADN education focuses on time in the actual hospital environment, while the BSN clinical setting may include community health and other out-of-hospital experiences in a variety of settings. Thus, ADN faculty may expect slightly more expertise in the clinical setting among their graduating students.
The authors of the current article also note that the BSN programs used for this validation were located in the Midwest and the two ADN programs were in the southern United States. It is possible that the authors captured regional clinical differences rather than actual differences in educational expectations for students. The fact that the validation of the tool did not include both ADN and BSN programs from both regions of the country is a limitation of the study.
In addition, the CCEI was developed from AACN core competencies used for baccalaureate programs. It is likely that BSN faculty use the CCEI terms in their program and course objectives and to develop course syllabi and test questions. ADN faculty rated these items as valid for assessing clinical competency, indicating that these concepts are also relevant to ADN programs; however, the actual terminology used to reflect these concepts is likely different for the ADN faculty, which could account for the lower percent agreement when assessing these items.
CONCLUSION
When clinical instructors were trained in the use of the CCEI and then had the opportunity to use the form in both the clinical and simulation settings, the faculty rated the form as easy to use, easy to understand, and able to rate student performance and learning in both clinical environments. The ratings were slightly higher when using the instrument in the simulation environment compared with the clinical setting. Faculty spend the bulk of their clinical time with specific students giving medications or performing treatments, making scoring of the CCEI for those particular areas very easy. In reality, it is rare that clinical faculty would ever see students in the clinical setting long enough, over the course of a day, to realistically score them in all areas of the tool.
The CCEI is a modification of an instrument designed to assess students in the simulation environment; therefore, it was difficult to use a standardized, objective instrument to assess competency in the clinical environment. However, the authors of the current article found the CCEI to be a valid and reliable instrument to assess clinical competency in pre-licensure nursing students in both simulation and traditional clinical environments.
ABOUT THE AUTHORS
Jennifer Hayden, MSN, RN, is the National Simulation Study project director and a research associate, National Council of State Boards of Nursing (NCSBN), Chicago, Illinois. Mary Keegan, MSN, RN, CCRN, a doctoral student at Nova Southeastern University, is coordinator, clinical education, Broward Health, Fort Lauderdale, Florida. Suzan Kardong-Edgren, PhD, RN, ANEF, CHSE, is the Jody De Meyer Endowed Chair in Nursing, Boise State University School of Nursing, Boise, Idaho. Richard A. Smiley, MS, MA, is a statistician, NCSBN. The authors wish to thank Dr. Katie Haerling (Adamson) for her review of this paper. For more information, contact Dr. Suzan Kardong-Edgren at sedgren@boisestate.edu.
KEY WORDS
Creighton Competency Evaluation Instrument (C-CEI) – Creighton Simulation Evaluation Instrument (C-SEI) – Evaluation – Clinical Nursing Education – Reliability – Validity – Simulation Study
REFERENCES
Adamson, K. A. (2011). Assessing the reliability of simulation evaluation instruments used in nursing education: A test of concept study (Doctoral dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 3460357)

Adamson, K., & Kardong-Edgren, S. (2012). A methodology and resources for assessing the reliability of simulation evaluation instruments. Nursing Education Perspectives, 33(5), 334-339. doi:10.5480/1536-5026-33.5.334

Adamson, K. A., Gubrud-Howe, P., Sideras, S., & Lasater, K. (2012). Assessing the reliability, validity, and use of the Lasater clinical judgment rubric: Three approaches. Journal of Nursing Education, 51(2), 66-73.

Adamson, K. A., Parsons, M. E., Hawkins, K., Manz, J. A., Todd, M., & Hercinger, M. (2011). Reliability and internal consistency findings from the C-SEI. Journal of Nursing Education, 50(10), 583-586. doi:10.3928/01484834-20110715-02

American Association of Colleges of Nursing. (1998). The essentials of baccalaureate education for professional nursing practice. Washington, DC: Author.

American Association of Colleges of Nursing. (2008). The essentials of baccalaureate education for professional nursing practice. Washington, DC: Author.

Bland, J. M., & Altman, D. G. (1997). Statistics notes: Cronbach's alpha. British Medical Journal, 314, 572.

Gubrud-Howe, P. (2008). Development of clinical judgment in nursing students: A learning framework to use in designing and implementing simulated learning experiences (Unpublished dissertation). Portland State University, Portland, OR.

Hayden, J. K., Jeffries, P. J., Kardong-Edgren, S., & Spector, N. (2011). The National Simulation Study: Evaluating simulated clinical experiences in nursing education. Unpublished research protocol. Chicago, IL: National Council of State Boards of Nursing.

International Nursing Association for Clinical Simulation & Learning. (2011). Standard I: Terminology. Clinical Simulation in Nursing, 7(4S), S3-S7. doi:10.1016/j.ecns.2011.05.005

Kardong-Edgren, S., Adamson, K. A., & Fitzgerald, C. (2010). A review of currently published evaluation instruments for human patient simulation. Clinical Simulation in Nursing, 6(1), e25-e35. doi:10.1016/j.ecns.2009.08.004

McDonald, M. E. (2007). The nurse educator's guide to assessing learning outcomes (2nd ed.). Sudbury, MA: Jones and Bartlett.

Oermann, M. H., & Gaberson, K. B. (2009). Evaluation and testing in nursing education (3rd ed.). New York, NY: Springer.

QSEN Institute. (n.d.). Pre-licensure KSAs. Retrieved from http://qsen.org/competencies/pre-licensure-ksas/

Sideras, S. (2007). An examination of the construct validity of a clinical judgment evaluation tool in the setting of high fidelity simulation (Unpublished doctoral dissertation). Oregon Health Sciences University, Portland, OR.

Todd, M., Manz, J. A., Hawkins, K. S., Parsons, M. E., & Hercinger, M. (2008). The development of a quantitative evaluation tool for simulations in nursing education. International Journal of Nursing Education Scholarship, 5(1). Retrieved from www.bepress.com/ijnes/vol5/iss1/art41

Viera, A. J., & Garrett, J. M. (2005). Understanding interobserver agreement: The kappa statistic. Family Medicine, 37(5), 360-363.