Evaluating alert fatigue over time to EHR-based clinical trial alerts: findings from a randomized controlled study

Peter J Embi,¹ Anthony C Leonard²

J Am Med Inform Assoc 2012;19:e145-e148. doi:10.1136/amiajnl-2011-000743
ABSTRACT

Objective: Inadequate participant recruitment is a major problem facing clinical research. Recent studies have demonstrated that electronic health record (EHR)-based, point-of-care, clinical trial alerts (CTA) can improve participant recruitment to certain clinical research studies. Despite their promise, much remains to be learned about the use of CTAs. Our objective was to study whether repeated exposure to such alerts leads to declining user responsiveness and to characterize its extent, if present, to better inform future CTA deployments.

Methods: During a 36-week study period, we systematically documented the response patterns of 178 physician users randomized to receive CTAs for an ongoing clinical trial. Data were collected on: (1) response rates to the CTA; and (2) referral rates per physician, per time unit. Variables of interest were offset by the log of the total number of alerts received by that physician during that time period, in a Poisson regression.

Results: Response rates demonstrated a significant downward trend across time, with response rates decreasing by 2.7% for each advancing time period, a trend significantly different from zero (flat) (p<0.0001). Even after 36 weeks, response rates remained in the 30%-40% range. Subgroup analyses revealed differences between community-based versus university-based physicians (p=0.0489).

Discussion: CTA responsiveness declined gradually over prolonged exposure, although it remained reasonably high even after 36 weeks of exposure. There were also notable differences between community-based and university-based users.

Conclusions: These findings add to the limited literature on this form of EHR-based alert fatigue and should help inform future tailoring, deployment, and further study of CTAs.
BACKGROUND AND SIGNIFICANCE
Clinical trials are essential to the advancement of medicine, and research participant recruitment is critical to successful trial conduct. Unfortunately, difficulties achieving recruitment goals are common, and failure to meet such goals can impede the development and evaluation of new medical therapies.[1, 2] It is well recognized that physicians often play a vital role in the recruitment of participants for certain trials. However, barriers including time constraints, unfamiliarity with available trials, and difficulty referring patients to trials often make it challenging to recruit during routine practice.[3-6] Consequently, most clinicians do not engage in traditional recruitment activities, and recruitment rates suffer.[4, 6]
The increasing availability of electronic health records (EHRs) presents an opportunity to address the issue of inadequate recruitment for clinical trials by leveraging the information and decision support resources often built into such systems. Indeed, recent studies of EHR-based, point-of-care, clinical trial alerts (CTA) have demonstrated that they have the potential to improve recruitment rates when applied to clinical trials.[7-9] Despite their promise, and the fact that they have been well tolerated in recent studies, CTAs, like any point-of-care alert, have the potential for misuse, and further study is needed to better understand and inform their appropriate, widespread use.[10] One important but poorly understood aspect relates to the performance characteristics of such alerts, particularly the issue of clinician responsiveness to alerts over time and the implications of such phenomena for alert design and deployment decisions.
It is well recognized that when clinicians are exposed to too many clinical decision support (CDS) alerts they may eventually stop responding to them. This phenomenon is often called alert fatigue.[11, 12] While definitions vary and empirical evidence as to its cause is limited, alert fatigue is generally thought to result from one or more distinct but closely related factors. One such factor is declining clinician responsiveness to alerts as the number of simultaneous alerts increases. This is thought to be related to issues such as alert irrelevance and cognitive overload, and has also been referred to as alert overload.[13, 14] A second factor relates to declining clinician responsiveness to a particular type of alert as the clinician is repeatedly exposed to that alert over a period of time, gradually becoming fatigued or desensitized to it.[15] Few studies have explored this latter issue of fatigue due to repeated exposure to alerts over time,[16] and it is this latter aspect of alert fatigue that motivated and was the focus of the current study.
Although the purpose of CTAs is to provide decision support for trial recruitment rather than for clinical care, we hypothesized that this phenomenon of alert fatigue over time would be a factor in the usage patterns of CTAs, as also noted for CDS alerts, and would present itself as a gradual reduction in response rates over time. As the nature and extent of this issue and the overall performance characteristics of CTAs are not fully understood, the current study was performed among physician subjects as a planned part of a recently conducted randomized controlled intervention study of a CTA.[17]
¹Departments of Biomedical Informatics and Internal Medicine, College of Medicine, The Ohio State University, Columbus, Ohio, USA
²Department of Family and Community Medicine, College of Medicine, University of Cincinnati, Cincinnati, Ohio, USA

Correspondence to: Dr Peter J Embi, Departments of Biomedical Informatics and Internal Medicine, College of Medicine, The Ohio State University, 3190 Graves Hall, 333 W. 10th Ave, Columbus, OH 43210, USA; peter.embi@osumc.edu

Received 2 December 2011; Accepted 21 March 2012; Published Online First 25 April 2012

This paper is freely available online under the BMJ Journals unlocked scheme; see http://jamia.bmj.com/site/about/unlocked.xhtml
METHODS
We performed a cluster-randomized controlled study of a CTA intervention across three health system environments that share a common, commercial ambulatory EHR (GE Centricity EMR). The study of the CTA and the clinical trial to which it was applied were approved by our institutional review board. The associated clinical trial, involving patients with insulin resistance after a recent stroke event, was registered at ClinicalTrials.gov (NCT00091949).

Subjects involved in this 36-week study of the CTA intervention in 2009 (the first, pre-cross-over phase of a larger study on the impact of this CTA intervention) included 178 physicians who were randomly divided into equal groups within their specialties (ie, neurologists, n=26; family medicine physicians, n=35; general internists, n=46; internal medicine-pediatrics specialists, n=8; and internal medicine house staff, n=63). All neurologists were university-based practitioners, while the other generalist physicians practiced either in university-based or community-based settings. Prior to CTA activation, all physicians were encouraged via traditional means (eg, discussion at staff meetings, email blasts, flyers) to recruit patients to the trial.
Upon CTA activation, intervention physicians seeing eligible patients were presented with on-screen CTAs that suggested they consider and discuss trial recruitment with the patient, and click an on-screen button to send a secure referral message to the trial coordinator if appropriate (figure 1). The physicians involved in this study did not receive any incentives for their participation in these recruitment efforts. It is worth noting that the results of the phase of the underlying CTA intervention study during which this analysis of responses took place revealed a significant 20-fold increase in referrals (p<0.0002) and a nine-fold increase in enrollments (p<0.006).[17] The design of the CTA intervention itself has also previously been reported and involved a minimal amount of novel programming, as the built-in tools and resources of the particular EHR system were used.[18] This was a similar approach to that employed with another popular EHR system, upon which we have previously reported.[19]

[Figure 1: Screen shot of the clinical trial alert used in the randomized controlled trial that was the basis for the current study.]
Additional data were also collected during the same period via direct query of the EHR-fed enterprise data warehouse for the two types of events in which we were interested: (1) responses that indicated interaction with the CTA (ie, taking action by responding to at least one of the questions posed in the alert); and (2) flags (referrals) sent to the study coordinator via positive responses to both questions and subsequent processing of the CTA. In addition, data on the total number of alerts triggered were extracted as the denominator for determination of (1) a response rate and (2) a referral rate per physician, per time unit.
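A minimal sketch of this kind of rate computation is shown below, assuming a hypothetical event log extracted from the data warehouse; the column names (physician_id, patient_id, alert_time, responded, referred) are illustrative, not the study's actual schema, and the deduplication step reflects the exclusion rule described in the next paragraph.

```python
import pandas as pd

# Hypothetical alert-level event log: one row per triggered CTA.
alerts = pd.read_csv("cta_alerts.csv", parse_dates=["alert_time"])

# Keep only the first alert per patient-physician pair, per the exclusion
# rule described in the next paragraph (repeat alerts were not analyzed).
alerts = (alerts.sort_values("alert_time")
                .drop_duplicates(subset=["physician_id", "patient_id"],
                                 keep="first"))

# Bin alerts into 2-week periods over the 36-week study (18 periods, 0-17).
study_start = alerts["alert_time"].min()
alerts["period"] = ((alerts["alert_time"] - study_start).dt.days // 14).clip(upper=17)

# Per physician, per period: alerts received (denominator), responses, referrals.
rates = (alerts.groupby(["physician_id", "period"])
               .agg(n_alerts=("alert_time", "size"),
                    n_responses=("responded", "sum"),
                    n_referrals=("referred", "sum"))
               .reset_index())
rates["response_rate"] = rates["n_responses"] / rates["n_alerts"]
rates["referral_rate"] = rates["n_referrals"] / rates["n_alerts"]
```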
In our analyses, the dependent variables, responses and referrals, were offset by the log of the total number of alerts received by that physician during the same period, in a Poisson regression. Correlations across time were modeled as a spatial power process (equivalent to a first-order autoregressive process). While CTAs could have triggered more than once for a given patient-physician pair, if the initial CTA was ignored and the patient returned during the study period while still eligible for the alert, any such subsequent alerts were not included in our analyses. We used 2-week time periods for our analyses, for a total of 18 time periods over this 36-week study. Results are presented in numeric and graphical form along with the relevant p values to indicate the significance of the findings.
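The sketch below illustrates this style of model, not the authors' actual code (the paper does not name its statistical software): a Poisson regression of per-period response counts with log(alerts received) as the offset and an AR(1)-type working correlation across periods, fitted via GEE in Python's statsmodels. It assumes the hypothetical `rates` table from the previous sketch.

```python
import numpy as np
import statsmodels.api as sm

# Sort so periods are in order within each physician (repeated measures).
rates = rates.sort_values(["physician_id", "period"])

endog = rates["n_responses"]                # responses per physician-period
exog = sm.add_constant(rates[["period"]])   # intercept + linear time trend
offset = np.log(rates["n_alerts"])          # denominator enters as a log offset

model = sm.GEE(
    endog,
    exog,
    groups=rates["physician_id"],                # cluster by physician
    time=rates["period"],
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Autoregressive(),   # AR(1)-style correlation over time
    offset=offset,
)
result = model.fit()
print(result.summary())

# With a log link, exp(beta_period) is the multiplicative change in response
# rate per 2-week period; ~0.973 would correspond to the reported 2.7% decline.
print(np.exp(result.params["period"]))
```

The same model, with referral counts as the dependent variable, covers the referral-rate analysis.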
RESULTS
During the 36-week CTA exposure period, 915 total alerts were triggered for 178 physicians in the intervention arm of the associated randomized controlled study of the CTA. Eight alerts were discarded because they represented second alerts for patients who had earlier received an alert during the study period that was not responded to, leaving 907 eligible alerts for analysis.
During the initial time period, the response rate to CTAs was about 50% among all users but dropped significantly over time, by 2.7% for each advancing 2-week period, and that trend was significantly different from zero (flat) (p<0.0001; figure 2). Notably, there was still a 35% response rate at the 36th week of exposure.

[Figure 2: Physician response rates to clinical trial alerts (CTAs) plotted at 2-week intervals over the 36-week study. The solid line tracks response rates at each time point; the dashed line is the linear regression line through the time points. Response rates declined at a rate of 2.7% per 2-week time period (p<0.0001).]
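As a rough consistency check (our illustration, not a computation from the paper): if the 2.7% per-period decline is read multiplicatively, as the log link of a Poisson model implies, projecting the initial ~50% rate across the 17 period-to-period steps of the 36-week study lands close to the observed 35%.

```python
initial_rate = 0.50          # response rate in the first 2-week period
decline_per_period = 0.027   # reported decline per advancing period
steps = 17                   # period 1 to period 18 spans 17 steps

projected = initial_rate * (1 - decline_per_period) ** steps
print(f"{projected:.1%}")    # ~31.4%, in the vicinity of the observed ~35%
```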
Subgroup analyses of response rate changes over time reveal that there are no significant differences between subspecialists (neurologists in this case) and generalists, with both groups trending downward to a similar extent, whether or not community generalists are included in the analyses. However, there is a greater response rate drop-off among all community-based providers compared with all university-based physicians (p=0.0489). Further, that drop-off is stronger when university-based subspecialists (neurologists) are removed and the analyses are restricted to generalists in both types of settings (p=0.0146).
Referral rates started at about 33% and, although they fluctuated, declined to about 9% by the end of the study period. Of note, generalists made fewer referrals than subspecialists (p<0.0001), and community-based physicians made fewer referrals than university-based physicians (p=0.006). The decline in referral rates over time was more pronounced than the decline in response rates noted above. Specifically, there was a significant 4.9% decrease in referral rates per time period (p=0.0294) (figure 3).

[Figure 3: Physician-generated referral rates using clinical trial alerts (CTAs) plotted at 2-week intervals over the 36-week study. The solid line tracks referral rates at each time point; the dashed line is the linear regression line through the time points. Referral rates declined at a rate of 4.9% per 2-week time period (p=0.0294).]
While absolute referral rates differed between groups, subgroup analyses of physician-generated CTA referral rates revealed no significant differences in rate declines between: (1) subspecialists versus generalists; (2) community-based versus university-based physicians; and (3) community-based versus university-based generalists only (ie, excluding neurologists).
DISCUSSION
The use of EHR-based CTAs has been demonstrated to increase participant recruitment rates to clinical trials, and is a promising approach for overcoming the major problem of inadequate and slow participant recruitment.[7-9, 17] Because such an approach will necessarily be employed in the context of complex and varied clinical care environments, information on the performance characteristics and response patterns among different groups of potential end-users is needed to inform its application and use.
These findings add to our understanding of how such alerts for clinical trials operate in real-world implementations by demonstrating empirically how, and to what extent, the rates of responses to such alerts decline across a variety of settings and end-users. Notably, they reveal, as hypothesized, that responses to point-of-care CTAs decline over time, although not as severely as anticipated, at least with regard to response rates. Indeed, overall response rates to this series of alerts were initially high at 50% and remained reasonably high at 35% even after 36 weeks of exposure, compared to CDS alerts, which tend to have 4%-51% response rates.[12]
While the fall in response rate suggests alert fatigue over time, the fact that a substantial proportion of the alerts were still being responded to at 36 weeks suggests that such a duration of use may still provide benefit. However, the finding that referral rates declined more quickly and more precipitously over time than response rates suggests there might be a point after which use of a CTA might not be worth even the minimal disruption it causes.[10] In addition, the differences seen among community-based versus university-based physicians suggest that future CTA deployments should be tailored to a particular setting (ie, shorter in community-based settings and longer in university-based settings) in order to maximize benefit while avoiding excess fatigue. Additionally, as noted with some CDS alerts, tailoring of the alerts' operating characteristics (eg, increasing specificity such that they trigger less often) might also affect response patterns and ultimately effectiveness, particularly in practice settings or specialties where response rates fall more rapidly.[13, 20]
While the design of this study does not allow for definitive determination of the reasons for the declines noted, the difference between the response rate decline (2.7% per time period) and the referral rate decline (4.9% per time period) might reflect the fact that the act of CTA referral requires more effort than a simple response, and therefore causes more fatigability over time. However, this difference could also suggest the presence of
other factors, such as the possibility that declines in referrals reflect a drop in the available pool of eligible or interested candidates rather than alert fatigue. However, the population of potentially eligible participants (ie, patients with a recent stroke) remained relatively constant during the study, making this less likely. Nevertheless, it is probable that the reasons for the declines were multi-factorial, reflecting the combined influence of alert fatigue and other factors. Additional studies, including qualitative studies to assess physician-user perceptions, are ongoing and should help clarify other reasons for the declines noted.
Comparison of physician response patterns over time, and apparent alert fatigue, with those observed when similar CDS approaches are employed for clinical use would be useful. Unfortunately, data on such changes over time in CDS response rates appear to be lacking in the published literature. As noted above, plentiful circumstantial evidence of this aspect of alert fatigue in many studies reveals less than ideal average rates of response to CDS interventions,[11, 12] with some studies commenting on the common behavior of overriding alerts,[20] and still others addressing changes that can increase average response rates by improving the usability or appropriateness of alerts.[13] However, although this form of alert fatigue over time undoubtedly exists, there has been surprisingly little empirical evidence of it, or data to characterize the nature of the phenomenon. Our study appears to be among the first to empirically demonstrate this aspect of alert fatigue by tracking changes in clinician response to alerts over time. Therefore, we believe it has implications beyond recruitment using CTAs, and that such an approach to measuring responses over time can help advance understanding of alert fatigue in general. We also believe that the methodology employed here could be used to evaluate and refine the design and application of decision support alerts in the future.
Although the randomized study design and the multi-user, multi-environment setting strengthen these findings and advance our understanding of CTA usage, this study has some limitations. These findings were derived from a single study of CTAs employed in a single trial of patients with recent stroke. Whether these findings would differ if the CTA were applied to another type of trial or in different settings remains to be determined. Also, while the CTA approach has been demonstrated to be effective using multiple EHR platforms,[7-9, 17] this study employed a single EHR, and these findings might differ with the use of another EHR. Furthermore, this alert was employed in a setting where other alerts were rarely triggered. Another factor possibly impacting response rates over time is the threshold setting (ie, sensitivity vs specificity) for a given alert. Whether the findings of this study would differ if there were multiple or more frequent alerts is not known, but is possible given that multiple simultaneous alerts are a commonly cited factor leading to alert fatigue, as noted above, and should be studied.
CONCLUSION
Physician response rates to CTAs started and remained relatively high even after a prolonged period of use, although they gradually but significantly declined over time. While overall response rates were lower among generalists than subspecialists, the rates of decline in CTA responses and referrals varied significantly only between university-based versus community-based physicians, and not between generalists versus subspecialists. These data also suggest that alert fatigue over time is likely a factor that must be taken into account when CTAs are employed.

While it is currently unclear how the nature and degree of alert fatigue for CTAs compare to those of other types of CDS alerts, this study has implications for the implementation and management of such alerts. The methodology used here also appears to have implications for studies into the relative impact of alert fatigue across a range of decision support alert interventions. Overall, these findings offer much-needed empirical data about the performance characteristics of CTAs, data that should help inform the tailoring and application of CTAs in real-world environments in order to overcome the major research challenge of improving and accelerating participant recruitment.
Acknowledgments: Preliminary findings from this study were presented in abstract form at the 2011 AMIA Joint Summits on Translational Science. Special thanks go to our collaborators on the associated intervention study: Drs Mark Eckman, Philip Payne, Nancy Elder, Sian Cotton, and Emily Patterson, and Ms Ruth Wise.

Contributors: PJE contributed to the conception, design, and acquisition and interpretation of data, and drafted and revised the manuscript. AL contributed to the design, interpretation of data, and critical revisions to the manuscript. Both authors approved the final version to be published.

Funding: This project was supported by a grant from the National Library of Medicine of the National Institutes of Health, R01-LM009533.

Competing interests: None.

Ethics approval: Ethics approval was granted by the University of Cincinnati Institutional Review Board.

Provenance and peer review: Not commissioned; externally peer reviewed.
REFERENCES
1. Nathan DG, Wilson JD. Clinical research and the NIH: a report card. N Engl J Med 2003;349:1860-5.
2. Campbell EG, Weissman JS, Moy E, et al. Status of clinical research in academic health centers: views from the research leadership. JAMA 2001;286:800-6.
3. Mansour EG. Barriers to clinical trials. Part III: knowledge and attitudes of health care providers. Cancer 1994;74(9 Suppl):2672-5.
4. Siminoff LA, Zhang A, Colabianchi N, et al. Factors that predict the referral of breast cancer patients onto clinical trials by their surgeons and medical oncologists. J Clin Oncol 2000;18:1203-11.
5. Somkin CP, Altschuler A, Ackerson L, et al. Organizational barriers to physician participation in cancer clinical trials. Am J Manag Care 2005;11:413-21.
6. Winn RJ. Obstacles to the accrual of patients to clinical trials in the community setting. Semin Oncol 1994;21(4 Suppl 7):112-17.
7. Embi PJ, Jain A, Clark J, et al. Effect of a clinical trial alert system on physician participation in trial recruitment. Arch Intern Med 2005;165:2272-7.
8. Rollman BL, Fischer GS, Zhu F, et al. Comparison of electronic physician prompts versus waitroom case-finding on clinical trial enrollment. J Gen Intern Med 2008;23:447-50.
9. Grundmeier RW, Swietlik M, Bell LM. Research subject enrollment by primary care pediatricians using an electronic health record. AMIA Annu Symp Proc 2007:289-93.
10. Embi PJ, Jain A, Harris CM. Physicians' perceptions of an electronic health record-based clinical trial alert approach to subject recruitment: a survey. BMC Med Inform Decis Mak 2008;8:13.
11. Ash JS, Sittig DF, Campbell EM, et al. Some unintended consequences of clinical decision support systems. AMIA Annu Symp Proc 2007:26-30.
12. van der Sijs H, Aarts J, Vulto A, et al. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc 2006;13:138-47.
13. Shah NR, Seger AC, Seger DL, et al. Improving acceptance of computerized prescribing alerts in ambulatory care. J Am Med Inform Assoc 2006;13:5-11.
14. Horsky J, Zhang J, Patel VL. To err is not entirely human: complex technology and user cognition. J Biomed Inform 2005;38:264-6.
15. Cash JJ. Alert fatigue. Am J Health Syst Pharm 2009;66:2098-101.
16. Shah A. Alert Fatigue. 2011. http://clinfowiki.org/wiki/index.php/Alert_fatigue (accessed 15 Jan 2012).
17. Embi PJ, Eckman MH, Payne PR, et al. EHR-based clinical trial alert effects on recruitment to a neurology trial across settings: interim analysis of a randomized controlled study. AMIA Summits Transl Sci Proc; March 2010, San Francisco, CA, 2010.
18. Embi PJ, Lieberman MI, Ricciardi TN. Early development of a clinical trial alert system in an EHR used in small practices: toward generalizability. AMIA Spring Congress, Phoenix, AZ, 2006.
19. Embi PJ, Jain A, Clark J, et al. Development of an electronic health record-based Clinical Trial Alert system to enhance recruitment at the point of care. AMIA Annu Symp Proc 2005:231-5.
20. Weingart SN, Toth M, Sands DZ, et al. Physicians' decisions to override computerized drug alerts in primary care. Arch Intern Med 2003;163:2625-31.