Using process elicitation and validation to understand and improve chemotherapy ordering and delivery.

Tufts University School of Medicine, Boston, USA.
The Joint Commission Journal on Quality and Patient Safety 11/2012; 38(11):497-505.
Source: PubMed

ABSTRACT Chemotherapy ordering and administration, in which errors have potentially severe consequences, was quantitatively and qualitatively evaluated. Process formalism (or formal process definition), a technique derived from software engineering, was employed to elicit and rigorously describe the process; validation techniques were then applied to confirm the accuracy of the described process.
The chemotherapy ordering and administration process, including exceptional situations and individuals' recognition of and responses to those situations, was elicited through informal, unstructured interviews with members of an interdisciplinary team. The resulting process description (or process definition), written in a notation developed for software quality assessment, guided process validation, which consisted of direct observations and semistructured interviews to confirm the elicited details for the treatment plan portion of the process.
The overall process definition yielded 467 steps; 207 steps (44%) were dedicated to handling 59 exceptional situations. Validation yielded 82 unique process events (35 new expected but not yet described steps, 16 new exceptional situations, and 31 new steps in response to exceptional situations). Process participants actively altered the process as ambiguities and conflicts were discovered by the elicitation and validation components of the study. Chemotherapy error rates declined significantly during and after the project, which was conducted from October 2007 through August 2008.
Each elicitation method and the subsequent validation discussions contributed uniquely to understanding the chemotherapy treatment plan review process, supporting rapid adoption of changes, improved communication regarding the process, and ensuing error reduction.
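The abstract describes a hierarchical process definition in which some steps exist specifically to detect and handle exceptional situations (467 steps in total, 207 of them devoted to 59 exceptional situations). A minimal sketch of that structure is below; all class, field, and step names are illustrative assumptions, not the authors' actual notation.

```python
# Hypothetical sketch of a hierarchical process definition: steps may carry
# handlers for exceptional situations. Names are illustrative only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Step:
    name: str
    substeps: List["Step"] = field(default_factory=list)
    # exceptional situations this step is written to detect and handle
    handles_exceptions: List[str] = field(default_factory=list)


def count_steps(step: Step) -> int:
    """Total number of steps in the definition, including the root."""
    return 1 + sum(count_steps(s) for s in step.substeps)


def exception_handling_steps(step: Step) -> int:
    """Steps dedicated, at least in part, to handling exceptional situations."""
    own = 1 if step.handles_exceptions else 0
    return own + sum(exception_handling_steps(s) for s in step.substeps)


# A toy fragment of a chemotherapy treatment plan review
plan_review = Step("review treatment plan", substeps=[
    Step("verify patient identity"),
    Step("check ordered dose against protocol",
         handles_exceptions=["dose outside protocol range"]),
    Step("confirm lab values are current",
         handles_exceptions=["labs older than allowed window"]),
])

print(count_steps(plan_review))               # 4
print(exception_handling_steps(plan_review))  # 2
```

Counting steps and exception handlers this way is what makes figures such as "44% of steps dedicated to exceptional situations" directly computable from the elicited definition.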

  •
    ABSTRACT: This article provides a framework for analyzing the performance of three popular root cause analysis tools: the cause-and-effect diagram, the interrelationship diagram, and the current reality tree. The literature confirmed that these tools have the capacity to find root causes with varying degrees of accuracy and quality. The literature, however, lacks a means for selecting the appropriate root cause analysis tool based upon objective performance criteria. Some of the important performance characteristics of root cause analysis tools include the ability to find root causes, causal interdependencies, factor relationships, and cause categories. Root cause analysis tools must also promote focus, stimulate discussion, be readable when complete, and have mechanisms for evaluating the integrity of group findings. This analysis found that each tool has advantages and disadvantages, with varying levels of causal yield and selected causal factor integrity. This framework provides decision makers with the knowledge of root cause analysis performance characteristics so they can better understand the underlying assumptions of a recommended solution.
  •
    ABSTRACT: The authors describe HFMEA (Healthcare Failure Mode and Effect Analysis), a five-step process used to proactively evaluate a health care process, and provide examples of a team's forms and actions regarding prostate-specific antigen testing.
    The Joint Commission journal on quality improvement 06/2002; 28(5):248-67, 209.
  • Conference Paper: Mining specifications.
    ABSTRACT: Program verification is a promising approach to improving program quality, because it can search all possible program executions for specific errors. However, the need to formally describe correct behavior or errors is a major barrier to the widespread adoption of program verification, since programmers historically have been reluctant to write formal specifications. Automating the process of formulating specifications would remove a barrier to program verification and enhance its practicality. This paper describes a machine learning approach to discovering formal specifications of the protocols that code must obey when interacting with an application program interface or abstract data type. Starting from the assumption that a working program is well enough debugged to reveal strong hints of correct protocols, our tool infers a specification by observing program execution and concisely summarizing the frequent interaction patterns as state machines that capture both temporal and data dependences. These state machines can be examined by a programmer to refine the specification and identify errors, and can be used by automatic verification tools to find bugs. Our preliminary experience with the mining tool has been promising: we were able to learn specifications that not only captured the correct protocol but also revealed serious bugs.
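The core idea above, summarizing observed call sequences as a state machine, can be sketched very simply: treat the last call seen as the state and record which calls were observed to follow it. This is an illustrative simplification under assumed trace data, not the paper's actual mining algorithm.

```python
# Simplified specification mining: build a transition map from observed
# API call traces (state = last call seen, edges = observed next calls).
from collections import defaultdict


def mine_protocol(traces):
    """Summarize call traces as a map from each call to its observed successors."""
    transitions = defaultdict(set)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            transitions[a].add(b)
    return {call: sorted(nexts) for call, nexts in transitions.items()}


# Call sequences assumed to come from well-debugged file-handling code
traces = [
    ["open", "read", "read", "close"],
    ["open", "write", "close"],
    ["open", "read", "close"],
]

protocol = mine_protocol(traces)
print(protocol)
# {'open': ['read', 'write'], 'read': ['close', 'read'], 'write': ['close']}
```

A verifier could then flag any program path whose call sequence uses a transition absent from this map (for example, a read after close), which is how a mined specification helps surface bugs.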

