Integrating usability testing and think-aloud protocol analysis with "near-live" clinical simulations in evaluating clinical decision support

International Journal of Medical Informatics 81(11):761–772, March 2012
DOI: 10.1016/j.ijmedinf.2012.02.009 · Source: PubMed
Alice C. Li a, Joseph L. Kannry a, Andre Kushniruk b, Dillon Chrimes b, Thomas G. McGinn c, Daniel Edonyabo a, Devin M. Mann d,*
a Department of Medicine, Division of General Internal Medicine, Mount Sinai School of Medicine, New York, NY, USA
b School of Health Information Science, University of Victoria, British Columbia, Canada
c Department of Medicine, Hofstra North Shore-LIJ Medical School, Manhasset, NY, USA
d Department of Medicine, Section of Preventive Medicine and Epidemiology, Boston University School of Medicine, Boston, MA, USA
Article info
Article history: Received 30 October 2011; Received in revised form 12 January 2012; Accepted 21 February 2012
Keywords: Usability; Clinical decision support; Clinical prediction rules; Evidence-based medicine; Electronic health records; Clinical simulations

Abstract
Purpose: Usability evaluations can improve the usability and workflow integration of clinical decision support (CDS). Traditional usability testing using scripted scenarios with think-aloud protocol analysis provides a useful but incomplete assessment of how new CDS tools interact with users and clinical workflow. "Near-live" clinical simulations are a newer usability evaluation tool that more closely mimics clinical workflow and allows for a complementary evaluation of CDS usability as well as impact on workflow.
Methods: This study employed two phases of testing a new CDS tool that embedded clinical prediction rules (an evidence-based medicine tool) into primary care workflow within a commercial electronic health record. Phase I applied usability testing involving "think-aloud" protocol analysis of 8 primary care providers encountering several scripted clinical scenarios. Phase II used "near-live" clinical simulations of 8 providers interacting with video clips of standardized trained patient actors enacting the clinical scenario. In both phases, all sessions were audiotaped and had screen-capture software activated for onscreen recordings. Transcripts were coded using qualitative analysis methods.
Results: In Phase I, the impact of the CDS on navigation and workflow was associated with the largest volume of negative comments (accounting for over 90% of user-raised issues), while the overall usability and the content of the CDS were associated with the most positive comments. However, usability had a positive-to-negative comment ratio of only 0.93, reflecting mixed perceptions about the usability of the CDS. In Phase II, the duration of encounters with simulated patients was approximately 12 min, with 71% of the clinical prediction rules being activated after half of the visit had already elapsed. Upon activation, providers accepted the CDS tool pathway in 82% of the times it was offered and completed all of its elements in 53% of all simulation cases. Only 12.2% of encounter time was spent using the CDS tool. Two predominant clinical workflows, accounting for 75% of all case simulations, were identified that characterized the sequence of provider interactions with the CDS. These workflows demonstrated significant variation in the temporal sequence of potential activation of the CDS.
Conclusions: This study successfully combined "think-aloud" protocol analysis with "near-live" clinical simulations in a usability evaluation of a new primary care CDS tool. Each phase of the study provided complementary observations on problems with the new onscreen tool and was used to refine both its usability and workflow integration. Synergistic use of "think-aloud" protocol analysis and "near-live" clinical simulations provides a robust assessment of how CDS tools would interact in live clinical environments and allows for enhanced early redesign to augment clinician utilization. The findings suggest the importance of using complementary testing methods before releasing CDS for live use.

* Corresponding author at: Harrison Ave, Boston, MA, USA. Tel.: +1 617 638 8021. E-mail address: dmann@bu.edu (D.M. Mann).
1. Introduction
Worldwide, healthcare organizations are moving towards implementation of electronic health records (EHRs) and clinical decision support systems (CDSS) to improve the efficiency and safety of healthcare. In the United States, with $19 billion from the American Recovery and Reinvestment Act (ARRA) of 2009, incentives to adopt EHRs into clinical practice are securely in place [1,2]. CDSS are an important part of every EHR system; they computerize information to allow for delivery of clinical decision support (CDS) tools to providers during the clinical decision-making process. CDSS promise to bring evidence-based medicine (EBM) to the point-of-care and guide clinicians in their effort to deliver more efficient and effective healthcare [3]. Ideally, they provide patient-specific recommendations by using individual patient data, a rules-based engine, and a medical knowledge base [4,5]. However, the results to date have been mixed in ambulatory EHRs [6-9]. Given the increasingly chaotic and time-pressed nature of patient visits, it is not surprising that CDSS have had limited impact on the delivery of point-of-care EBM, a reflection of poor provider acceptance [8,10]. Critical factors for effective design of CDSS include integration with provider workflow, anticipation of provider needs, and a need to study and assess the usability of these systems [10-15].
An increasingly important potential solution to improving the adoption of CDSS during patient care is conducting usability testing of CDSS interventions prior to widespread implementation. Formal usability testing has begun to be considered critical to the EHR adoption and implementation lifecycle, and this clearly applies to CDSS [10,16]. Discussions and observations of usability with care providers have provided a large volume of evidence to suggest that optimal system use and outcomes depend on improved usability during the EHR design process [14,17]. Current best practices promote utilization of cognitive approaches to assess human-computer interactions within the EHR system [17,18]. A variety of both summative and formative user-based approaches have been employed to evaluate EHRs, including "think aloud" usability testing, cognitive task analysis, and surveys [19-21]. The "think-aloud" formative usability approach, in which users verbalize their thoughts while performing pre-specified tasks within CDSS, is particularly well-suited for identifying barriers to adoption. It integrates qualitative and quantitative analyses of direct observations of scripted provider-CDSS interactions to identify surface-level usability issues [22]. However, this approach limits the amount of unrestricted interaction providers have with the CDSS and the underlying EHR system. Therefore, some groups have employed studies that measure the time to completion of set tasks as measures of usability and learnability of CDSS [10]. However, even these more unrestricted studies have limited correspondence to live patient encounters.
In this study, we describe an evaluation approach combining think-aloud protocol analysis from usability testing with "near-live" clinical simulations to document and assess provider-CDSS interactions of a newly developed CDSS [23,24]. Through a process involving two usability studies, we attempted to improve the usability of a CDS prototype for two integrated clinical prediction rules (CPRs): the Walsh rule for Streptococcal pharyngitis (Strep rule) [25,26] and the Heckerling rule for Pneumonia [27], within a commercial EHR (EpicCare©). CPRs are well-validated EBM tools that, if used as frontline decisional aids, can help physicians make evidence-based, cost-effective decisions. These rules use objective findings in patient history, physical, and/or labs to help risk-stratify the disease condition and to determine whether further investigative or treatment efforts are necessary. We selected these two clinical prediction rules as they are familiar to healthcare providers and deal with highly prevalent ambulatory care conditions.
This paper describes the two phases of evaluation that we conducted prior to widespread deployment of the integrated clinical prediction rules clinical decision support tool, which we will refer to as the iCPR CDS. Phase I involved usability testing in conjunction with "think-aloud" protocol analysis to assess human-computer interaction as the healthcare providers performed specific tasks following a script for invoking the iCPR CDS [28,29]. Phase II involved a "near-live" clinical simulation to assess how providers interact with the iCPR CDS while interviewing a simulated patient [30]. We hypothesize that both forms of testing provide disparate, informative insights that are critical to the successful development and integration of CDS in EHRs.
2. Methods
The actual design of the iCPR CDS prototype is described in detail in a previous publication [31]. The purpose of the usability phase of the software development cycle was to identify barriers to use prior to the implementation of a randomized controlled trial of iCPR. The two phases of evaluation were conducted in series (see below). Between each phase, a period of analysis and prototype revision was conducted to allow for iterative improvements. All human-computer interactions were captured on a standard clinical workstation running Hypercam® screen recording software.
Built into the CDS is an algorithm that evaluates provider EHR inputs during live patient encounters to assess the clinical relevancy of activating the iCPR CDS for that specific encounter. If the provider enters documentation that matches keywords in the algorithm criteria, the iCPR CDS "triggers". For this study, "triggers" refers to the activation of the iCPR CDS by clinically relevant information inputted into the EHR system by the provider during a patient encounter. If the algorithm matches, an iCPR CDS alert will prompt the provider to use the triggered iCPR CDS (screenshots of iCPR components are available in Appendix A).
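The paper does not reproduce the triggering logic itself. As a rough Python sketch of the keyword-matching behavior described above (the keyword lists are taken from the trigger and non-trigger inputs reported in Section 3.4.2, and the exact-match rule is an assumption for illustration, not the authors' implementation):

```python
# Illustrative sketch only: keyword lists come from the trigger/non-trigger
# inputs reported in Section 3.4.2; the exact-match rule is an assumption.
from typing import Optional

STREP_KEYWORDS = {"sore throat", "strep throat", "pharyngitis"}
PNEUMONIA_KEYWORDS = {"pneumonia", "community acquired pneumonia"}

def check_trigger(diagnosis_entry: str) -> Optional[str]:
    """Return which iCPR CDS, if any, a diagnosis-field entry would trigger."""
    text = diagnosis_entry.strip().lower()
    if text in STREP_KEYWORDS:
        return "strep"
    if text in PNEUMONIA_KEYWORDS:
        return "pneumonia"
    return None  # e.g. "cough" or "pharyngitis, acute" would not match

assert check_trigger("Strep throat") == "strep"
assert check_trigger("cough") is None
```

An exact-match rule of this kind would also reproduce the behavior reported in Section 3.4.2, where "pharyngitis" triggered the tool but "pharyngitis, acute" did not.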
2.1. Phase I: usability testing using "think aloud" protocol analysis
2.1.1. Subjects
The eight subjects who participated in the "think aloud" phase of usability testing were primary care providers practicing at the ambulatory care clinic associated with the hosting academic institution. Subjects were selected from volunteers to form a convenience sample. Inclusion criteria required that subjects had previously used the underlying EHR system in which the CDSS was embedded. However, proficiency with using an EHR in primary care varied, and no subject had prior exposure to the iCPR CDS. Subjects participating in Phase I were excluded from participating in Phase II testing.
2.1.2. Procedure
The usability session was conducted in a typical clinic office setting. Each subject was presented with two hypothetical patient cases: a Pneumonia case (high or low risk) and then a Strep case (high, intermediate, or low risk). For example, the first subject encountered a low-risk Pneumonia case and an intermediate-risk Strep case. The research staff developed interview scripts for these standardized case scenarios with associated navigational instructions. Under instruction from the research staff, subjects followed the scripted navigation to enter patient data, develop a progress note, trigger the iCPR CDS, and provide a treatment plan. Throughout each case, subjects were prompted by the experimenter to "think aloud", or verbalize their thoughts, while working through each of the components of the iCPR CDS (as listed in Table 1) [15,16]. At the end of each interview, the user was asked a series of follow-up questions to elicit general attitudes towards the tool. The sessions were audio recorded and all computer screens during the interaction were captured as movie files using the screen recording software Hypercam®.
2.1.3. Data analysis
The audio recordings of subjects were first transcribed verbatim. A log file was created and the verbatim audio transcripts of providers' verbalizations were linked to the relevant video-recorded movements on the computer screen (e.g. menu selections) for review. Using a subsample of transcripts, coders individually reviewed the transcripts and annotated them for possible categories. Next, these annotated transcripts were reviewed together and potential coding categories were reviewed, compiled, and standardized. All coded categories converged into a unified coding-categories dictionary. Using these refined coding categories, the coders reviewed and annotated each transcript together. The log file was coded for usability and workflow issues, which were identified by the reviewers (in conjunction with viewing the corresponding screen recordings), and categories were assigned to them to describe these issues. In addition, use of specific components of the EHR and CDSS was annotated [16].

Table 1 - iCPR clinical decision support (CDS) component descriptions.

| iCPR CDS components | Description of iCPR CDS components within the electronic health record (EHR) |
| --- | --- |
| ALERT | The first interaction a provider has with the iCPR CDSS is through the acceptance or deferral of Best Practice Alerts (BPA). These alerts are triggered by relevant information providers input into the EHR that activates the algorithm that assesses whether the visit would be iCPR CDSS appropriate (Appendix A) |
| CALCULATOR | This component includes the CALCULATOR holding the clinical prediction rules for Strep or Pneumonia that the provider uses to calculate the risk of the patient having the disease, and the subsequent risk notification alert that recommends the appropriate SMARTSET for ordering (Appendix A) |
| SMARTSET | The risk-stratified bundled ordering set that is associated with the score found through the CALCULATOR component (Appendix A) |
| DOCUMENTATION | Output from the iCPR CDS SMARTSET that adds the findings of the calculator component as an addendum to the progress note or as a completely new progress note |
| PATIENT INSTRUCTION | Output from the iCPR CDS SMARTSET that gives providers prewritten instructions, appropriate for the level of disease severity, for the patient |

Two independent coders read through the annotated transcripts of subject verbalizations and watched the corresponding video for each case. Initial categories were refined by the coders and collapsed, resulting in the final list of coding categories listed in Table 3, which were used to analyze all the transcripts of subject-system interactions. All discrepancies in the coding were resolved by discussion to achieve a consensus. Each coded segment of the transcribed text was categorized for: (a) the iCPR CDS components involved (Table 1), (b) usability/workflow issues as conveyed through providers' verbalizations and actions (Table 3), and (c) the type of commentary provided by the subject (i.e. "positive", "neutral", or "negative" commentary, as judged by the reviewers). Frequencies of codes were tabulated for each subject.
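As a minimal sketch of this tabulation step (the segment structure, field names, and sample rows are assumptions for illustration, not the authors' coding software or study data):

```python
# Minimal sketch of tabulating coded segments; the dict structure and the
# sample rows are assumptions for illustration, not study data.
from collections import Counter

segments = [
    {"subject": 1, "component": "ALERT", "category": "NAVIGATION", "valence": "negative"},
    {"subject": 1, "component": "CALCULATOR", "category": "USEFULNESS", "valence": "positive"},
    # ...one dict per coded transcript segment
]

codes_per_subject = Counter(s["subject"] for s in segments)
valence_by_category = Counter((s["category"], s["valence"]) for s in segments)

def pos_neg_ratio(category: str) -> float:
    """Positive-to-negative commentary ratio for one coding category."""
    pos = valence_by_category[(category, "positive")]
    neg = valence_by_category[(category, "negative")]
    return pos / neg if neg else float("inf")
```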
2.2. Phase II: "near-live" clinical simulations

2.2.1. Subjects
The eight subjects who participated in the "near-live" clinical simulations were primary care providers practicing at the ambulatory care clinic associated with the hosting academic institution. Inclusion criteria were similar to those of Phase I: all subjects were required to have used the EHR system before. Proficiency with utilization of an EHR varied, and no subject had prior exposure to the iCPR CDS.
2.2.2. Procedure
The evaluation session was conducted in a mock clinic setting equipped for remote observation with video cameras in adjacent rooms. Five case-scenario video clips were developed using standardized trained patient actors for Pneumonia (high and low risk) and Strep (high, low, and intermediate risk) cases. Each subject encountered three standardized patient simulations: a Pneumonia case (either high or low risk), a Strep case (either high or low risk), and an intermediate Strep case. For example, the second subject encountered a low-risk Strep, an intermediate-risk Strep, and a low-risk Pneumonia case. The mock clinic room in which the provider was located had a desktop computer-monitor linked to the EHR, a laptop placed close to the provider with the standardized patient-actor interview video, and a video camera for remote viewing by the research staff. Subjects used the desktop monitor and mouse to interact with the EHR system. Prior to the start of each case scenario, subjects were instructed that patient information for each case was available through the laptop as video clips and a folder with patient information. Subjects were able to start, stop, pause, or review the video at their discretion. Subjects were told to start the patient visit once the research staff left the room and were asked to conduct the visit as they would in their own clinic. Subjects received no navigational guidance from the research staff. The sessions were audio recorded and all computer screens during the interaction were captured as movie files using the screen recording software Hypercam®.
2.2.3. Data analysis
Verbatim audio transcripts of subjects' verbalizations were annotated with relevant onscreen movements (e.g. menu selections) to provide an annotated log file of the provider-EHR interaction [29]. Two independent coders reviewed the transcript with the screen recordings to capture the timing of specific actions during each encounter. Coded actions of interest were categorized by type of action ('click', 'accept', 'open', and 'close') and by the EHR component accessed. Time stamps were generated from the raw video of user activities and the recorded sessions were segmented into episodes representing major user activities, e.g. triggering of the CDS, documenting patient histories, etc. The length of time and sequence of the episodes were then calculated and graphed. Time stamps generated for each activity were extracted and compiled for time analysis of the EHR/CDSS components accessed and the sequence of use for each patient case. Individual actions performed which involved similar functions were collapsed into the final workflow components (Table 2). Time to completion, time spent in workflow components, and time to triggering were characterized. Sequences of components accessed were compiled per case to assess workflow. The sequence and amount of time spent within each component were compiled to build a timeline describing workflow during each patient encounter.
Table 2 - Description of clinical encounter workflow components.

| Workflow components | Description of components within the electronic health record (EHR) |
| --- | --- |
| Diagnoses & orders (Dx&Orders) | Time spent in the diagnoses-&-orders field, which allows for writing in diagnoses and new problems as well as ordering medications, labs, and/or procedures |
| iCPR CDS | Time spent using iCPR CDS combined components: ALERT, CALCULATOR, SMARTSET, searching for the CDS |
| Other | Time spent in allergy/medication review, chart review, close encounter, vitals input, watching/interacting with standardized patient video |
| Progress note (PN) | Time spent documenting the current clinical complaint within the Progress Note or elsewhere in the EHR |
| Reason for visit (RFV) | Time spent in the reason-for-visit (RFV) field, which is used to document the chief complaint as well as administrative information |
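As an illustrative sketch of the episode segmentation and time analysis described in Section 2.2.3 (the time-stamped mark format is an assumed input; the sample values echo the EARLY-RFV example reported in Section 3.4.4):

```python
# Sketch of the episode/time analysis; the (mm:ss, component) mark format is
# an assumed input, and the sample marks echo the EARLY-RFV example below.
def to_seconds(ts: str) -> int:
    minutes, seconds = ts.split(":")
    return int(minutes) * 60 + int(seconds)

marks = [("00:00", "Other"), ("01:10", "RFV"), ("02:05", "PN"),
         ("09:58", "iCPR CDS"), ("11:40", "Dx&Orders"), ("13:26", "END")]

# Consecutive marks bound an episode; sum durations per workflow component.
episodes = [(comp, to_seconds(t1) - to_seconds(t0))
            for (t0, comp), (t1, _) in zip(marks, marks[1:])]
total = sum(duration for _, duration in episodes)

time_in_component = {}
for comp, duration in episodes:
    time_in_component[comp] = time_in_component.get(comp, 0) + duration

trigger_time = next(to_seconds(t) for t, comp in marks if comp == "iCPR CDS")
print(f"% of visit elapsed at trigger: {trigger_time / total:.0%}")  # ~74%
```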
3. Results
3.1. Phase I: results
A total of 8 subjects, 4 residents and 4 faculty providers, participated in the Phase I "think-aloud" usability testing. For each subject, an average of 46 codes was identified, with a range of 24-74 codes. There were four cases of high risk of strep, three cases each of low risk and intermediate risk of strep, and five cases each of high and low risk of pneumonia. Overall, there were 366 codes associated with the subjects' verbalizations and actions, with 132 classified as containing positive comments, 57 as neutral comments, and 234 as negative comments (195 for pneumonia and 171 for strep). Raw codes for usability issues were identified during a review of the data by two researchers. During this process, categories were condensed and collapsed to generate a final list of coding categories given and defined in Table 3 (e.g. categories such as USABILITY, NAVIGATION, and WORKFLOW; see the table for category definitions). Neutral codes were excluded from this analysis. Of the seven coding categories in Table 3, subjects had the most commentary (positive and negative) about the categories of USABILITY, CONTENT, and WORKFLOW, representing a combined total of 70% of all comments (30%, 22%, and 18% of all comments, or 108, 80, and 66 comments, respectively). The overall perception of the CDSS had a positive-to-negative commentary ratio of 0.86, favoring the negative.
3.2. Coding category analysis
The categories of NAVIGATION and WORKFLOW were associated with the largest volume of negative comments, with positive-to-negative commentary ratios of 0.18 and 0.39, representing more than 90% negative comments within their respective categories; criticism of the iCPRs was evenly distributed between strep and pneumonia (Fig. 1). In particular for WORKFLOW, a number of subjects indicated a need for increased flexibility to allow for different ways of using the iCPR CDS to complement disparate practice styles and situational contexts. One subject commented that she did not want the iCPR CDS to be triggered "too early" during the patient encounter, such as a trigger based on a complaint entered by a medical assistant or nurse (before the provider sees the patient), but rather wanted to use the iCPR CDS when she decides it is most helpful (a trigger when the provider interacts with the patient). Most subjects supported the triggering of the iCPR CDS during their decision-making processes, but several commented that the tool was most useful after the "decision point", as a check to ensure that nothing important was forgotten and that the best evidence was applied.

Table 3 - Coding categories, with examples.

| Code | Definition | Example coded statement |
| --- | --- | --- |
| USABILITY | Refers to commentary on the perceived effectiveness, efficiency, and "ease- or lack-of-ease of use" of the iCPR CDS | "It's becoming a lot of clicking and reading and you want to do this thing quickly, especially if you have a lot of patients waiting..." |
| VISIBILITY | Refers to commentary on the extent an image, text, or message is noticed or attended to | "I just see that BPA [alert] here. Normally I probably wouldn't see it since I don't usually look here; but, if it had been more prominent, I might have seen it, if it had popped up." |
| WORKFLOW | Refers to commentary on the general order and sequence of tasks and activities involved in a patient encounter | "I think it depends on when you start using the tool because if you use it right from the beginning...you could get distracted and forget to go through other questions maybe...I just think these [order] sets should come when you ask for them, not from the walk-in diagnosis..." |
| CONTENT | Refers to commentary on the content of information provided by the iCPR CDS | "I think the soup issue might be an issue with my patients, we see patients over 50 with hypertension and we wouldn't really tell them to take chicken soup because it is full of sodium." |
| UNDERSTANDABILITY | Refers to commentary on the extent to which the text within the CDS is comprehensible | "Supportive care is weird [text phrasing] because it is saying pneumonia, and the patient does not have it." |
| USEFULNESS | Refers to commentary on the extent the tool (and information provided by it) is perceived as helpful during clinical decision-making and care delivery | "I don't like this one [order set] as much...I realize this is all about evidence-based medicine...[but] I just think there is more in a clinical picture and this thing is pushing you in a direction without taking into account [the full clinical picture]." |
| NAVIGATION | Refers to commentary on the provider's ability to move through the system (i.e. where to go, how to move forward or backward) | "A prompt of some sort would be good. I would need a prompt; I don't know where to go next." |
The categories of USABILITY and CONTENT were associated with the greatest volume of positive commentary. Overall, the positive-to-negative commentary ratio was 0.93 and 0.50 for USABILITY and CONTENT, respectively. The proportion of negative comments relative to all comments was 52% and 67% for the USABILITY and CONTENT categories, respectively. This breakdown was similar between strep and pneumonia.
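These two ways of reporting the same counts are consistent with each other: a positive-to-negative ratio r = P/N implies a negative share of comments of 1/(1 + r), which recovers the proportions quoted above:

```latex
% Negative share implied by a positive-to-negative ratio r = P/N:
\frac{N}{P+N} = \frac{1}{1+r}, \qquad
\frac{1}{1+0.93} \approx 0.52, \qquad
\frac{1}{1+0.50} \approx 0.67 .
```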
With the exception of the USEFULNESS category, which had a positive-to-negative commentary ratio of 1.3, all other categories had a ratio of less than 1, indicating a majority of negative comments. The higher positive-to-negative ratio for USEFULNESS is exemplified by the positive perceptions subjects had of the iCPR CDS CALCULATOR component. Furthermore, many subjects recommended directly embedding the iCPR CDS CALCULATOR into the EHR progress note. While the category of USABILITY was associated with the greatest volume of positive commentary, its positive-to-negative ratio of 0.93 revealed mixed perceptions about the usability of the iCPR CDS. Providers enjoyed the defaulted antibiotics but disliked the overall increased amount of clicking associated with use.
The analysis of user interactions with the Strep and Pneumonia cases indicated a very similar distribution of codes, with the exception of the category of CONTENT, which for the pneumonia case received 14% of all coded commentary as compared to 8% for Strep (Fig. 1). Sub-analysis of the comments associated with the category CONTENT revealed that the pneumonia case received 70% of all negative comments as compared to 30% for strep. Furthermore, subjects commented that they were more familiar with the Strep rule than the pneumonia rule and felt that pneumonia was a more nuanced clinical diagnosis.
3.3. CDS component analysis
Analysis of comments made about specific iCPR CDS components showed that the SMARTSET and ALERT components received the most comments (34% and 21% of all codes, or 123 and 77 codes, respectively) of the five iCPR components (Fig. 1). Of all the iCPR CDS components, providers consistently indicated that they liked the CALCULATOR component, with 20% negative comments and a positive-to-negative commentary ratio of 5. The ALERT component was associated with the most negative comments (81%) compared to any other component, with a positive-to-negative ratio of 0.24; subanalysis showed that 67% of negative ALERT commentary was about pneumonia and only 31% was directed towards strep. Providers raised issues in navigating through the ALERT: finding difficulty in closing the ALERT; advancing to the CALCULATOR; and feeling that the ALERT did not readily convey the function of the iCPR CDS and appeared to assume that a diagnosis was already made before the CALCULATOR. This perspective was augmented in the pneumonia iCPR CDS because of the perception of decreased clinical relevancy of the pneumonia rules as compared to the strep rules.

Fig. 1 - Phase I iCPR clinical decision support (CDS) component and coding categories analysis.
The iCPR component SMARTSET was associated with 67% negative comments, with a positive-to-negative ratio of 0.48; however, unlike the ALERT component, there was no major difference between the number of negative and positive comments between strep and pneumonia. Overall, subjects felt the SMARTSET should have had more defaulted selections, was not readily intuitive for ordering, and was unclear about what contents were included.
The DOCUMENTATION component was also associated with a greater percentage of negative commentary, at 76%, with a ratio of 0.42; strep dominated these negative commentaries (62% strep vs. 38% pneumonia). Subjects commented that the feature that allows for automatic insertion of the calculator results into the progress note was not sufficient for a complete patient encounter. Furthermore, for the strep intermediate iCPR CDS, a design error limited the amount of text that could be inserted into the progress note, preventing the provider from saving the work completed through the CALCULATOR.
3.4. Phase II: results
A total of 8 subjects, 3 residents and 5 faculty providers, participated in the Phase II "near-live" clinical simulations. Prior experience with the EHR ranged from 1 to 4 years, with an average of 2.6 years of use; 5 of the 8 providers reported prior use of other EHRs. Both younger and older providers reported a similar comfort level with using the EHR during a patient interaction.
3.4.1. Total time
Each of the 8 providers engaged in three different video patient scenarios, for a total of 24 case scenarios. As seen in Table 4, on average each encounter took 12:03 min (range 5:11-18:35 min). In 71% of cases (n = 17) the iCPR CDS was triggered, after an average of 51% of the visit had elapsed. Providers accepted the use of the CALCULATOR component in 82% of triggered cases (Table 1). They completed the entire iCPR CDS (accepted the ALERT, used the CALCULATOR, and signed the SMARTSET) in 53% (n = 9) of the triggered cases, spending on average 12.2% of encounter time using the iCPR CDS. Of the providers who triggered the iCPR CDS, 18% (n = 3) decided not to advance to the next part of the iCPR CDS.
3.4.2. Triggering
The most frequent inputs that triggered the CDS in the diagnosis field were "sore throat," "strep throat," and "pharyngitis" for the Strep iCPR CDS, and "pneumonia" and "community acquired pneumonia" for the pneumonia iCPR CDS. Of the top three distinct diagnoses that did not lead to triggering, "cough" was inputted five times, "shortness of breath" four times, and "pharyngitis, acute" twice. The rate of triggering was 75% or higher among high- and intermediate-risk conditions for both strep and pneumonia, while the low-risk scenarios had lower triggering rates (50% for low-risk strep and 25% for low-risk pneumonia).
3.4.3. Workflow
Through the 24 cases, two main workflows that characterized the sequence of user interactions were identified, accounting for 75% (n = 18) of all cases (Table 5). In cases where the iCPR CDS triggered, all subjects followed a sequence going from PROGRESS-NOTE (PN) to Dx&Orders and then to the iCPR CDS. The two major sequences identified differed mostly regarding when the REASON-FOR-VISIT (RFV) field was accessed, and can be characterized as follows: (1) the EARLY-REASON-FOR-VISIT SEQUENCE (EARLY-RFV), which refers to accessing the RFV at the beginning of the encounter; and (2) the LATE-REASON-FOR-VISIT SEQUENCE (LATE-RFV), which refers to using the RFV field at the end of the encounter. Overall, the cases were evenly split between EARLY- and LATE-RFV, with each of these two major sequence categories seeing 37.5% (n = 9) of all cases.
Fig. 2 - Phase II time sequence analysis of observed examples of early- and late-reason-for-visit (RFV) workflows.
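As a small sketch of how the per-case access sequences summarized in Table 5 could be compiled (the case labels and ordered component lists are assumptions for illustration, not study data):

```python
# Sketch of compiling per-case access sequences (cf. Table 5); the ordered
# component lists and case labels are assumptions for illustration.
from collections import Counter

cases = {
    "case01": ["OTHER", "RFV", "PN", "Dx&Orders", "iCPR", "PN", "OTHER"],
    "case02": ["OTHER", "PN", "Dx&Orders", "iCPR", "PN", "RFV"],
    # ...one ordered list of accessed workflow components per encounter
}

def collapse(sequence):
    """Merge consecutive repeats so each episode appears once, in order."""
    out = []
    for comp in sequence:
        if not out or out[-1] != comp:
            out.append(comp)
    return tuple(out)

pattern_counts = Counter(collapse(seq) for seq in cases.values())
for pattern, n in pattern_counts.most_common():
    print(n, " -> ".join(pattern))
```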
3.4.4. Temporal integration
Fig. 2 illustrates the sequence and duration for the two major workflow sequences (for two representative subjects): EARLY-RFV and LATE-RFV. This specific EARLY-RFV example lasted 13:26 min, with the majority of the session (68%) spent documenting the progress note (PN), which is the third activity in the sequence. The subject started the visit in the OTHER category, then entered RFV followed by PN before going to Dx&Orders, where the input of a "strep throat" diagnosis triggered the iCPR CDS after 74% (9:58 min) of the visit had elapsed. In the LATE-RFV example (see Fig. 2), the subject accessed the RFV field at the end of the patient encounter. In this specific example, the visit lasted 5:09 min, with the majority of the session (64%) also spent in the PN, which is the second activity in the sequence. The provider triggered the CDSS by entering "community acquired pneumonia" in the diagnosis field of the EHR after 68% (4:21 min) of the visit had elapsed. Similar to the subject from the EARLY-RFV example, this subject also returned to Dx&Orders at the end of the visit to remove extraneous orders after using the iCPR CDS.
4. Discussion
This study integrated two different evaluation approaches, including a novel "near-live" approach, to improve the design of a new CDSS [12]. Phase I involved usability testing, applying the well-established "think-aloud" protocol analysis method to assess subject-CDSS interactions by having healthcare providers think aloud while using the iCPR CDS to complete scripted tasks [28,29]. Phase II used a simulated "near-live" study approach to assess a more global subject-EHR interaction, characterizing the ability of providers to use the iCPR CDS in an unscripted condition where they interviewed a simulated patient [30]. While there are overlaps between the information provided by both usability techniques, each approach provided distinct insights that improved the overall design and complemented our understanding of provider workflow around a point-of-care CDS.

The two different types of usability analyses gave us a multi-faceted understanding of what physicians expect from decisional supports and how they expect to interact with CDSS during clinical care.

Table 4 - iCPR clinical decision support (CDS) triggering rates and time spent.

| | Cases (% of total) | As % of triggered cases | Average time spent (mm:ss) | Min time spent (mm:ss) | Max time spent (mm:ss) | Average % of visit elapsed when CDS triggered |
| --- | --- | --- | --- | --- | --- | --- |
| Total | 24 (100) | | 12:03 | 05:11 | 18:35 | |
| Not triggered | 7 (29) | | 11:35 | 05:11 | 18:35 | |
| Triggered | 17 (71) | | 12:14 | 06:24 | 18:11 | 51% |
| Triggered and accepted iCPR | 14 (58) | 82% | | | | |
Phase I extracted details of intrinsic weaknesses in the composition of the CDSS and confirmed previously reported usability issues found in earlier studies for the intended use of the iCPR CDS [10-15]. During this analysis, providers reported that the alerts were especially cumbersome, with poor content and navigation that limited the ability of the provider to know how to proceed to the next step; this was particularly prominent with the pneumonia CDS, as indicated by the larger percentage of negative commentary. In addition, analysis from Phase I showed that the bundled order sets (SmartSet) needed adjustment in content, with providers raising concerns about the selection of orders and defaults. As a consequence, the alerts were modified to simplify the wording and instructions to improve navigation. Provider feedback on the SmartSet from Phase I led to numerous changes that included the addition of supportive therapy scripts for direct ordering; prefilled, defaulted, and automatically associated antibiotic ordering; and elimination of the chest X-ray for pneumonia as a line item within the SmartSet, as it generated unanticipated confusion.
Workflow and navigation issues were the main user issues observed in Phase I, but given the artificial nature of the "think-aloud" protocol analysis approach and the lack of data on provider self-directed usage of the CDSS, major modifications to these areas were deferred until the more realistic "near-live" Phase II testing.
Phase II "near-live" clinical simulations focused on understanding where, when, and how providers access the iCPR CDS in dealing with patient cases. Specifically, Phase II characterized the unpredictable aspects of use of the iCPR CDS that were not anticipated by the CDSS designers and also identified potential disruptions caused by the iCPR CDS to "natural" provider workflow. Analysis of the "triggering" of the CDS and provider sequence workflow helped the development team understand previously unidentified barriers to integration of the iCPR CDS with provider workflow. For example, Phase II testing uncovered new workflows where the iCPR CDS did not trigger at all; it should be noted that this issue was not detected by Phase I testing. Unexpected inputs entered by the subjects in Phase II highlighted deficiencies in the iCPR CDS activation algorithm (e.g. initially, ordering point-of-care testing did not trigger the iCPR CDS). In the pneumonia CDS, concerns over alert fatigue led to the eventual exclusion of nonspecific diagnoses (such as "upper respiratory infection" and "cough") whose entry risked over-triggering [6].
Analysis of the workflow indicated that the iCPR CDS was triggered late in the encounters, a workflow sequence that occurred much more frequently than was predicted during design. Only one provider triggered the tool within the first 10% of the visit and actively engaged with it during the first half of a visit. While this was an infrequent workflow, this encounter style was considered to promote a more optimal workflow, and based on these results the study team has since implemented important changes in training and clinic workflow to encourage its use among providers.
As described above, Phase I and Phase II produced different types of results and had different implications for the designers of the iCPR CDS. Phase I uncovered lower-level usability problems which Phase II would not have revealed, while Phase II discovered workflow issues related to how subjects would have naturally interacted with the tool. Taken together, however, both phases of testing led to additive, complementary information and refinement of the iCPR CDS prior to deployment in live settings.
During Phase I, providers did not like the scripted trigger from the RFV; this was clarified in Phase II, in which the main …

Table 5 - Phase II provider workflow sequence analysis.

| Sequence of what the provider accessed | Early-RFV access sequence | Late-RFV access sequence | RFV triggering iCPR sequence | No progress note use sequence | Total |
| --- | --- | --- | --- | --- | --- |
| First | OTHER | OTHER | OTHER | OTHER | |
| Second | RFV | PN | RFV | Dx&Orders | |
| Third | PN | Dx&Orders | PN | (iCPR) | |
| Fourth | Dx&Orders | (iCPR) | (iCPR) | Dx&Orders | |
| Fifth | (iCPR) | PN/OTHER | (Dx&Orders) | | |
| Sixth | PN/Dx&Orders | RFV | (PN) | | |
| Seventh | OTHER | (OTHER) | | | |
| Cases | 9 | 9 | 3 | 3 | 24 |
| % of total cases | 37.5% | 37.5% | 12.5% | 12.5% | 100% |
| Distinct number of providers | 4 | 5 | 1 | 1 | 8 |

( ) some cases did not have this section: e.g. did not trigger iCPR.